\section{Introduction}
Recent technological developments enable us to purchase various kinds of
items and services via E-commerce systems. The emergence of Internet
applications has had an unprecedented impact on the way we
purchase goods and services. From the data on items and services
available at E-commerce platforms, we may expect that the utilities of agents in
socio-economic systems can be estimated directly.
Such an impact on travel and tourism, specifically on hotel room
reservations, has been considered significant~\citep{Law}. According to
\citet{Pilia}, 40 per cent of hotel reservations were made via the
Internet in 2008, up from 33 per cent in 2007 and 29 per cent in 2006.
Therefore, the coverage of room opportunities on the Internet may be
sufficient to provide statistically significant results and to
conduct a comprehensive analysis based on hotel booking data collected
from Internet booking sites.
From our personal experience, we find that it is becoming more
popular to make hotel reservations via the Internet. When we use a hotel
booking site, we notice that we sometimes find preferable room
opportunities and sometimes do not; that is, hotel availability seems to be
random. We further know that both the date and the place of stay are
important factors determining the availability of room opportunities.
Hence, room availability depends on the calendar (weekdays, weekends,
and holidays) and on the region.
The availability of hotel rooms may indicate future
migration trends of travelers. Therefore, it is worth
accumulating comprehensive data on hotel availability in order to
detect migration within countries.
Migration processes have been intensively studied in the context of
socio-economic dynamics, with particular interest in quantitative
research. Weidlich and Haag proposed a Master equation with transition
probabilities depending on region-dependent and time-dependent utility
and mobility in order to describe the collective tendency of agents'
migration decisions~\citep{Haag:84}.
Since the motivation for migration seems to come from both psychological
and physical factors, understanding the dynamics of migration
is expected to provide useful insights into the inner states of
agents and their collective behavior.
In the present article, we discuss a model to capture the behavior of
consumers at a hotel booking site and investigate the statistics of the
number of available room opportunities from several perspectives.
This article is organized as follows. In Sec. \ref{sec:data}, we give a
brief description of the data collected from a Japanese hotel
booking site. In Sec. \ref{sec:outlook}, we give an overview of the collected
data on room opportunities. In Sec. \ref{sec:model}, we consider a
model to capture room opportunities and derive a finite mixture of Poisson
distributions from binomial processes. In Sec. \ref{sec:estimation}, we
introduce the EM-algorithm to estimate the parameters of the mixture of
Poisson distributions. In Sec. \ref{sec:numerical-study}, we compute
parameter estimates for an artificial data set generated from the mixture
of Poisson distributions. In Sec. \ref{sec:empirical-analysis}, we show the
results of the empirical analysis on the room opportunities and discuss the
relationship between the existence probabilities of opportunities and their
rates. Sec. \ref{sec:conclusion} is devoted to conclusions.
\section{Data description}
\label{sec:data}
In this section, we give a brief explanation of the method used to collect
data on hotel availability. In this study, we used a Web API (Application
Programming Interface) in order to collect the data from a Japanese hotel
booking site named Jalan~\footnote{The data are provided by the Jalan Web
service.}. Jalan is one of the most popular hotel reservation
services in Japan that provide a Web API. The API is a set of interface code
designed to simplify the development of application programs.
The Jalan Web service provides interfaces for both hotel managers
and customers (see Fig. \ref{fig:illustration}). The mechanism of Jalan
is as follows: hotel managers enter information on the
room opportunities offered by their hotels via an Internet interface,
consumers book rooms from the available opportunities via the Jalan Web
site, and third parties can even build their own web services on the Jalan
data by using the Web API.
\begin{figure}[hbt]
\begin{center}
\includegraphics[scale=0.4]{concept.eps}
\end{center}
\caption{A conceptual illustration of the Jalan web service. Hotel
managers enter information on rooms (plans) which will be offered at their
hotels. Customers can search and book rooms from all the available rooms
(plans) via the Jalan web page. }
\label{fig:illustration}
\end{figure}
We collect all the available opportunities appearing on Jalan
for room opportunities in which two adults are able
to stay for one night. The data are sampled daily from the Jalan web site
({\em http://www.jalan.net}). The data on room opportunities
collected through the Jalan Web API are stored as csv files.
The data set contains over 100,000 room opportunities from over
14,000 hotels. In Tab. \ref{tab:data}, we show the contents included in the
data set. Each plan contains the sampled date, stay date, regional
sequential number, hotel identification number, hotel name, postal
address, URL of the hotel web page, geographical position, plan name,
and rate.
Since the data contain regional information, it is possible
for us to analyze the regional dependence of hotel rates. Throughout the
investigation, we regard the number of recorded opportunities (plans) as a
proxy variable for the number of available room stocks.
\begin{table}[h]
\caption{The data format of room opportunities.}
\label{tab:data}
\centering
\begin{tabular}{l}
\hline
\hline
Date of collection \\
Date of Stay \\
Hotel identification number \\
Hotel name \\
Hotel name (Kana) \\
Postal code \\
Address \\
URL \\
Latitude \\
Longitude \\
Opportunity name \\
Meal availability \\
The latest best rate \\
Rate per night \\
\hline
\hline
\end{tabular}
\end{table}
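As an illustration of how such counts can be derived from the stored csv
files, the following Python sketch aggregates the recorded plans per region
and stay date. The file name and column names (e.g.
\texttt{region\_code}, \texttt{date\_of\_stay}) are hypothetical stand-ins
for the fields listed in Tab. \ref{tab:data}, not the actual field names in
the data set.
\begin{verbatim}
import pandas as pd

# Hypothetical file and column names; the real csv files follow the
# format of Tab. 1 (date of collection, date of stay, hotel id, ...).
plans = pd.read_csv("jalan_plans.csv",
                    parse_dates=["date_of_collection", "date_of_stay"])

# Number of recorded opportunities (plans) per region and stay date,
# used as a proxy for the number of available room stocks.
counts = (plans.groupby(["region_code", "date_of_stay"])
               .size()
               .rename("n_opportunities")
               .reset_index())
\end{verbatim}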
For this analysis, we used the data for the period from 24th December 2009 to
4th November 2010. Fig. \ref{fig:map} (top) shows an example
of the spatial distribution of opportunities and representative rates. The
yellow (black) filled squares represent hotel plans costing 50,000 JPY
(1,000 JPY) per night, and the red filled squares represent hotel plans
costing over 50,000 JPY per night. We found that there is a strong
dependence of opportunities on place. Specifically, many hotels are located
around several central cities such as Tokyo, Osaka, Nagoya, and Fukuoka.
Fig. \ref{fig:map} (bottom) shows the probability density distribution of
rates on 15th April 2010 all over Japan. It is found that there are two
peaks, around 10,000 JPY and 20,000 JPY, in the probability density.
\begin{figure}[phbt]
\begin{center}
\includegraphics[scale=0.16]{20100415-20100409-price.eps}
\includegraphics[scale=0.7]{20100415-20100409-price-pdf.eps}
\end{center}
\caption{An example of the distribution of rates under the condition that two adults can
stay at the hotel for one night on 15th April 2010 (top). The
probability density distribution of rates on 15th April 2010 (bottom).
These data were sampled on 9th April 2010. Yellow (black) filled
squares represent hotel plans costing 50,000 JPY (1,000 JPY) per night. Red
filled squares represent hotel plans costing over 50,000 JPY per night. }
\label{fig:map}
\end{figure}
\begin{figure}[hbt]
\begin{center}
\includegraphics[scale=0.9]{totalcount.eps}
\end{center}
\caption{The number of hotels in which two adults can stay for one night for
the period from 24th December 2009 to 4th November 2010.}
\label{fig:number}
\end{figure}
\section{Overview of the data}
\label{sec:outlook}
The number of room opportunities in which two adults can stay is counted
from the recorded csv files throughout the whole sampling
period. Fig. \ref{fig:number} shows the daily number of room
opportunities. From this graph, we found three facts:
\begin{description}
\item{(1)}There exists a weekly fluctuation in the number of available room opportunities.
\item{(2)}There is a strong dependence of the number of available
           opportunities on the Japanese calendar. Namely,
           Saturdays and holidays drive the reservation activities of
           consumers. For example, during the New Year holidays
           (around 12/30-1/1) and the spring holidays (around
           3/20), the time series of the numbers shows large drops.
\item{(3)}The number eventually increases as the date of stay
           approaches. Specifically, it is observed that the number of
           opportunities drastically decreases two days before the date of stay.
\end{description}
Fig. \ref{fig:region} shows the number of available room
opportunities in four regions for the period from 24th December 2009 to
4th November 2010. We calculated the
numbers for 010502 (Otaru), 072005 (Aizu-Kohgen, Yunogami, and
Minami-Aizu), 136812 (Shiragane), and 171408 (Yuzawa). It is found that
their temporal development depends on the region.
\begin{figure}[hbt]
\includegraphics[scale=0.9]{regionaldemand1.eps}
\caption{The daily demand for four regions. It is found that there exists
a regional dependence of their fluctuations.}
\label{fig:region}
\end{figure}
Furthermore, we show the dependence of the averaged rates all over Japan
on calendar dates in Fig. \ref{fig:mean}. It is confirmed that the averaged
rates rapidly decrease during the New Year holidays in 2010, whereas they
rapidly increase during the spring holidays in 2010. This
difference seems to arise from differences in consumers' motivation
structure and price preferences between these holiday seasons.
Fig. \ref{fig:regionalmean} shows the dependence of the averaged rates
in four regions on calendar dates. The tendencies of the averaged rates
differ from one another; the New Year holidays and the
summer vacation season exhibit such differences in particular. This means that
demand-supply situations depend on the region. We therefore need to know the
tendency of the demand-supply situation of each area in a rigorous manner.
\begin{figure}[hbt]
\centering
\includegraphics[scale=0.8]{mean1.eps}
\caption{Time series of the average rates of room
opportunities on stay dates all over Japan. The mean value of the rates is
calculated from all the available room opportunities observed
on each stay date.}
\label{fig:mean}
\end{figure}
\begin{figure}[hbt]
\centering
\includegraphics[scale=0.8]{regionalmean1.eps}
\caption{Time series of the average rates of room
opportunities on stay dates for four regions. The mean value of the rates is
calculated from all the available room opportunities observed
on each stay date.}
\label{fig:regionalmean}
\end{figure}
\section{Model}
\label{sec:model}
Let $N_m$ and $M$ be the total number of potential rooms in area $m$
and the total number of potential consumers, respectively. The total number of
opportunities $N_m$ may be assumed to be constant since Internet
booking has been sufficiently widely accepted and almost all hotels offer
their rooms via the Internet. Ignoring the birth-death process of
consumers, we also assume that $M$ is constant.
We further assume that a Bernoulli random variable
represents the booking decision of a consumer among the $N_m$ kinds of room
opportunities. In order to express the status of rooms within the
observation period (one day), we introduce $M$ Bernoulli random
variables with time-dependent success probability $p_m(t)$,
\begin{equation}
y_{mi}(t) =
\left\{
\begin{array}{lll}
1 & w.p. \quad p_m(t) & \mbox{(the $i$-th consumer holds a reservation)}\\
0 & w.p. \quad 1-p_m(t) & \mbox{(the $i$-th consumer does not hold a reservation)}\\
\end{array}
\right.,
\end{equation}
where $y_{mi}(t) \quad (i=1,\ldots,M)$ represents the status of the $i$-th
consumer with respect to a room at time $t$.
If we assume that $p_m(t)$ is sufficiently small, so that $N_m >
\sum_{i=1}^M y_{mi}(t)$, then the number of available room opportunities at
time $t$ may be proportional to the difference between the total number
of potential rooms and the number of booked rooms at time $t$
\begin{equation}
Y_m(t) \propto N_m - \sum_{i=1}^M y_{mi}(t).
\end{equation}
Namely, we have
\begin{equation}
Z_m(t) = kN_m - Y_m(t) = k\sum_{i=1}^M y_{mi}(t),
\end{equation}
where $k$ is a positive constant.
Assuming further that $y_{m1}(t), \ldots, y_{mM}(t)$ are independently
and identically distributed, we obtain that the number of booked rooms
$\sum_{i=1}^M y_{mi}(t)$ follows a binomial distribution
$\mbox{B}(M,p_m(t))$. Furthermore, assuming $r_m \ll 1$,
$M \gg 1$, and $Mr_m \gg 1$, we can approximate the demand $Z_m(t) = kN_m -
Y_m(t)$ as a Poisson random variable, which follows
\begin{equation}
\mbox{Pr}_{Z}(l=Z_m|r_m(t)) = \frac{\{Mkp_m(t)\}^l}{l!}e^{-\{Mkp_m(t)\}} = \frac{\{Mr_m(t)\}^l}{l!}e^{-\{Mr_m(t)\}},
\label{eq:Pr-Z}
\end{equation}
where we define $kp_m(t)$ as $r_m(t)$.
Since the agents interact with one another, their
psychological atmosphere (mood), which is collectively created by the
agents, influences their decisions. Such a psychological effect may be expressed
as fluctuations of the success probability
$r_m(t)$ at time $t$ of the Bernoulli random variables.
Let us assume that the time-dependent probability
$r_m(t) \quad (0 \leq r_m(t) \leq 1)$ is sampled from a
probability density $F_m(r)$. From Eq. (\ref{eq:Pr-Z}),
the marginal distribution of the Poisson distribution conditioned on
$r_m$, with probability fluctuations described by
$F_m(r_m)$, is given by
\begin{equation}
\mbox{Pr}_{Zm}(l=Z_m) = \int_{0}^{1}F_m(r_m)\frac{(Mr_m)^l}{l!}e^{-Mr_m}dr_m.
\label{eq:dist}
\end{equation}
Since we can compute the demand $Z_m(t)$ from the observed number of
available opportunities, we may estimate the parameters of the distribution
$F_m(r_m)$ from successive observations.
For the sake of simplicity, we further assume that
$r_m(t)$ is sampled from discrete categories $r_{mi}$
with probability $a_{mi}$ ($ 0 \leq r_{mi} \leq 1$; $i =
1, \ldots, K_m; \sum_{i=1}^{K_m} a_{mi} = 1$). These
parameters are expected to describe the motivation structure of consumers
depending on calendar days (weekdays/weekends and special holidays, business
travel/recreation, and so forth). Then, since $F_m(r_m)$
is given by
\begin{equation}
F_m(r_m) = \sum_{i=1}^{K_m}a_{mi} \delta (r_m-r_{mi}),
\label{eq:bi}
\end{equation}
$\mbox{Pr}_{Zm}(l=Z_m)$ is calculated as
\begin{eqnarray}
\nonumber
\mbox{Pr}_{Zm}(l=Z_m) &=& \int_{0}^{1}F_m(r_m)
\frac{(Mr_m)^{l}}{l!}e^{-Mr_m}dr_m, \\
&=& \sum_{i=1}^{K_m} a_{mi} \frac{(Mr_{mi})^{l}}{l!}e^{-Mr_{mi}}.
\label{eq:marginal}
\end{eqnarray}
Hence, Eq. (\ref{eq:marginal}) is a finite mixture of
Poisson distributions.
\section{Estimation procedure by means of the EM algorithm}
\label{sec:estimation}
The construction of estimators for finite mixtures of distributions has
been considered extensively in the estimation literature. Estimation procedures
for Poissonian mixture models have been studied by several
researchers; specifically, moment estimators and maximum likelihood
estimators have been studied intensively.
Moment estimators were applied to a mixture of two normal distributions by
Karl Pearson as early as 1894. Graphical solutions have been given by
\citet{Cassie}, \citet{Harding} and \citet{Bhattacharya}. Rider discusses mixtures of binomial
and mixtures of Poisson distributions in the case of two
distributions~\citep{Rider}.
Hasselblad proposed the maximum likelihood estimator and derived
recursive equations for the parameters~\citep{Hasselblad}. The effectiveness
of the maximum likelihood estimator for mixtures of Poisson
distributions is widely recognized. Dempster discusses the EM-algorithm
for mixtures of distributions in several cases~\citep{Dempster}. By
using the EM-algorithm, we can obtain parameter estimates from mixture data.
Let $z_m(1),\ldots,z_m(T)$ be the demand (the
number of potential room opportunities minus the number of available
room opportunities) computed on each observation day. From these
observation sequences, let us consider a method to estimate the parameters
of Eq. (\ref{eq:marginal}) based on the maximum likelihood method. In
this case, since the log-likelihood function can be written as
\begin{equation}
L_m(a_{m1},\ldots,a_{mK_m},r_{m1},\ldots,r_{mK_m}) = \sum_{s=1}^{T}\log\Bigl(
\sum_{i=1}^{K_m} a_{mi} \frac{(Mr_{mi})^{z_m(s)}}{z_m(s)!}e^{-Mr_{mi}}
\Bigr),
\label{eq:LLF}
\end{equation}
parameter estimates are obtained by the maximization of
the log-likelihood function
$L_m(a_{m1},\ldots,a_{mK_m},r_{m1},\ldots,r_{mK_m})$
\begin{equation}
\{\hat{a}_{m1},\ldots,\hat{a}_{mK_m},\hat{r}_{m1},\ldots,\hat{r}_{mK_m}\}
= \underset{\{a_{mi}\},\{r_{mi}\}}{\mbox{arg max}} \quad L_m(a_{m1},\ldots,a_{mK_m},r_{m1},\ldots,r_{mK_m})
\label{eq:MLE}
\end{equation}
under the constraint $\sum_{i=1}^{K_m}a_{mi}=1$.
The maximum likelihood estimator for the mixture of Poisson distributions
given by Eq. (\ref{eq:MLE}) can be derived by setting the partial
derivatives of Eq. (\ref{eq:LLF}) with respect to each parameter to
zero (see \ref{sec:derivation-EM}). This leads to the
following recursive equations for the parameters:
\begin{eqnarray}
a_{mi}^{(\nu+1)} &=& \frac{1}{T}\sum_{t=1}^T
\frac{a_{mi}^{(\nu)}F_{mi}^{(\nu)}(z_m(t))}{G_m^{(\nu)}(z_m(t))}
\quad (i=1,\ldots,K_m),
\label{eq:a-update}
\\
r_{mi}^{(\nu+1)} &=& \frac{1}{M}\frac{\sum_{t=1}^T
z_m(t) \frac{F_{mi}^{(\nu)}(z_m(t))}{G_m^{(\nu)}(z_m(t))}}{\sum_{t=1}^T
\frac{F_{mi}^{(\nu)}(z_m(t))}{G_m^{(\nu)}(z_m(t))}} \quad (i=1,\ldots,K_m),
\label{eq:q-update}
\end{eqnarray}
where
\begin{eqnarray}
F_{mi}^{(\nu)}(x) &=& \frac{(Mr_{mi}^{(\nu)})^x e^{-Mr_{mi}^{(\nu)}}}{x!}, \\
G_m^{(\nu)}(x) &=&\sum_{i=1}^{K_m} a_{mi}^{(\nu)} F_{mi}^{(\nu)}(x).
\end{eqnarray}
These recursive equations give us a way to estimate the parameters by
starting from an adequate set of initial values. They are also referred to
as the EM-algorithm for the mixture of
Poisson distributions~\citep{Dempster,Liu}.
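As an illustration, a minimal Python sketch of the recursive updates in
Eqs. (\ref{eq:a-update}) and (\ref{eq:q-update}) is given below. The
function name, the initialization, and the fixed number of iterations are
our own choices for the sketch and are not part of the derivation above.
\begin{verbatim}
import numpy as np
from scipy.stats import poisson

def em_poisson_mixture(z, M, K, n_iter=500, seed=0):
    """EM updates for a K-component Poisson mixture with means M*r_i."""
    rng = np.random.default_rng(seed)
    z = np.asarray(z)
    a = np.full(K, 1.0 / K)                                # mixing weights a_i
    r = rng.uniform(z.min() + 1.0, z.max() + 1.0, K) / M   # initial r_i
    for _ in range(n_iter):
        # E-step: responsibilities a_i F_i(z(t)) / G(z(t))
        log_f = poisson.logpmf(z[:, None], M * r[None, :])  # T x K
        log_w = np.log(a)[None, :] + log_f
        log_w -= log_w.max(axis=1, keepdims=True)
        w = np.exp(log_w)
        w /= w.sum(axis=1, keepdims=True)
        # M-step: the a-update and r-update given above
        a = w.mean(axis=0)
        r = (w * z[:, None]).sum(axis=0) / (M * w.sum(axis=0))
    return a, r
\end{verbatim}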
In order to determine the adequate number of categories, we introduce
the Akaike Information Criterion (AIC), which is defined as
\begin{equation}
AIC(K_m) = 4K_m - 2\hat{L}_m,
\end{equation}
where $\hat{L}_m$ is the maximum value of the log-likelihood function
with respect to the $2K_m$ parameters. $\hat{L}_m$
is computed from the log-likelihood with the parameter
estimates obtained from the EM-algorithm,
\begin{equation}
\hat{L}_m = \sum_{s=1}^{T}\log\Bigl(
\sum_{i=1}^{K_m} \hat{a}_{mi}
\frac{(M\hat{r}_{mi})^{z_m(s)}}{z_m(s)!}e^{-M\hat{r}_{mi}}
\Bigr).
\end{equation}
Since it is known that the preferred model should be the one with the
lowest AIC value, we obtain the adequate number of categories $K_m$ as
\begin{equation}
\hat{K_m} = \underset{K_m}{\mbox{arg min}} \quad AIC(K_m).
\end{equation}
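A possible implementation of this model-selection step, assuming the
\texttt{em\_poisson\_mixture} helper sketched above and reusing its imports,
is the following; the candidate range for $K_m$ is an arbitrary choice made
for illustration.
\begin{verbatim}
from scipy.special import logsumexp

def log_likelihood(z, M, a, r):
    log_f = poisson.logpmf(np.asarray(z)[:, None],
                           M * np.asarray(r)[None, :])
    return logsumexp(log_f + np.log(a)[None, :], axis=1).sum()

def select_K(z, M, K_max=20):
    best = None
    for K in range(1, K_max + 1):
        a, r = em_poisson_mixture(z, M, K)
        aic = 4 * K - 2 * log_likelihood(z, M, a, r)
        if best is None or aic < best[0]:
            best = (aic, K, a, r)
    return best          # (lowest AIC, K_hat, a_hat, r_hat)
\end{verbatim}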
Furthermore, we consider a method to determine the underlying Poisson
distribution from which each observation $z_m(s)$ was
sampled. Since the underlying Poisson distribution is one of the Poisson
distributions of the mixture, its local likelihood at
$z_m(s)$ should be the maximum over
all the local likelihoods at $z_m(s)$. Based
on this idea, we propose the following method.
Let $R_{mi}(z) \quad (i=1,\ldots,K_m)$ be the
log-likelihood function of the $i$-th category in area
$m$ with parameter estimate $\hat{r}_{mi}$. From
Eq. (\ref{eq:marginal}), it is defined as
\begin{equation}
R_{mi}(z) = z \log \bigl(M \hat{r}_{mi}\bigr) - M \hat{r}_{mi} - \log\bigl(z!\bigr).
\end{equation}
By finding the maximum log-likelihood value $R_{mi}(z_m(s))$
for $i=1,\ldots,K_m$, we can select the adequate
distribution from which $z_m(s)$ was drawn. Namely, the
adequate category $\hat{i}_s$ for $z_m(s)$ is
given as
\begin{equation}
\hat{i}_s = \underset{i}{\mbox{arg max}} \; R_{mi}(z_m(s)).
\end{equation}
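In code, this selection amounts to an arg max over the component-wise
Poisson log-likelihoods, as in the following sketch (which reuses the
imports of the sketches above and returns 0-based indices, whereas the
text numbers the categories from 1):
\begin{verbatim}
def assign_category(z, M, r_hat):
    """For each observation, the index i maximizing log Pr(z | M*r_hat_i)."""
    log_f = poisson.logpmf(np.asarray(z)[:, None],
                           M * np.asarray(r_hat)[None, :])
    return np.argmax(log_f, axis=1)
\end{verbatim}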
\section{Numerical simulation}
\label{sec:numerical-study}
Before going into the empirical analysis of actual data on room opportunities
with the proposed parameter estimation method, we calculate parameter
estimates for artificial data.
We generate the time series $z(t) \quad (t=1,\ldots,T)$ from a mixture of
Poisson distributions, given by
\begin{equation}
\left\{
\begin{array}{llll}
r(t) &=& r_i \quad w.p. \quad a_i \\
z(t) &\sim& \mbox{Pr}(l=Z(t)|r(t)) = \frac{(Mr(t))^l}{l!}e^{-Mr(t)}
\end{array}
\right.,
\end{equation}
where $K$ is the number of categories and $a_i$ represents the probability
that the $i$-th category appears $(i=1,\ldots,K; \sum_{i=1}^K a_i=1)$.
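A minimal sketch of this generative process, reusing the imports of the
sketches above, is as follows; the seed and helper name are arbitrary.
\begin{verbatim}
def sample_mixture(T, M, a, r, seed=1):
    """Draw T observations: choose category i w.p. a_i, then
    draw z(t) from a Poisson distribution with mean M*r_i."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(a), size=T, p=a)
    return rng.poisson(M * np.asarray(r)[idx]), idx
\end{verbatim}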
We set $K=12$ and $M=100,000,000$. Using the parameters shown in
Tab. \ref{tab:simulation-parameters}, we generated the artificial data
shown in Fig. \ref{fig:numerical-ts}. Next, we estimated the parameters from
$T(=200)$ observations without any prior knowledge of the parameters.
As shown in Fig. \ref{fig:numerical-AIC} (left), the AIC values with
respect to $K$ take their minimum at $\hat{K}=12$. In order to confirm the
adequacy of the parameter estimates, we conduct a Kolmogorov-Smirnov (KS) test
between the artificial data and a sequence of random numbers generated with the
parameter estimates.
Fig. \ref{fig:numerical-AIC} (right) shows the KS statistic at each
$K$. Since at $\hat{K}=12$ the KS statistic is computed as 0.327, which
is less than the 5\% critical value of 1.36, the null hypothesis that these
time series are sampled from the same distribution is not rejected at the
5\% significance level. Tab. \ref{tab:simulation} shows the parameter estimates.
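The KS comparison can be sketched as follows, using the
\texttt{sample\_mixture} helper above; note that \texttt{scipy} reports the
unnormalized statistic $D$ and a $p$-value, so the normalized statistic
comparable to the 5\% critical value of 1.36 has to be computed explicitly.
\begin{verbatim}
from scipy.stats import ks_2samp

def ks_check(z_observed, a_hat, r_hat, M, seed=2):
    z_sim, _ = sample_mixture(len(z_observed), M, a_hat, r_hat, seed=seed)
    stat, pvalue = ks_2samp(z_observed, z_sim)
    n1, n2 = len(z_observed), len(z_sim)
    normalized = stat * np.sqrt(n1 * n2 / (n1 + n2))
    return stat, normalized, pvalue
\end{verbatim}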
Furthermore, we selected the value of $r_i$ for each observation by means
of the proposed method described in Sec. \ref{sec:estimation}. The
parameter estimates can then be expressed as a function of time $t$.
However, we found small differences between the parameter estimates and the
true values, especially when the true values of two parameters are close to
each other and are estimated as the same parameter. Nevertheless, the number
of categories is estimated as $\hat{K}=12$, which coincides with the true
number of categories $K=12$.
After determining the underlying distribution for each observation, we
further computed the estimation errors between the parameter estimates and the
true parameters for each observation. As shown in
Fig. \ref{fig:artificial-estimation}, we confirmed that the
estimation error, defined as $|\hat{r}_i(t)-r_i(t)|$, is less than
$8.0\times 10^{-6}$, and that the relative error, defined as
$|\hat{r}_i(t)-r_i(t)|/r_i(t)$, is less than 0.3 \%. It is confirmed that the
parameter estimates obtained by the EM-algorithm agree with the true
parameter values for the artificial time series.
Hence, it is concluded that discrimination errors between two close
parameters do not play a critical role for the purpose of
parameter identification at each observation.
\begin{figure}[hbt]
\begin{center}
\includegraphics[scale=0.8]{test.eps}
\end{center}
\caption{Examples of time series generated from the Poissonian mixture
model for $K=12$ and $M=100,000,000$.}
\label{fig:numerical-ts}
\end{figure}
\begin{figure}[phbt]
\begin{center}
\includegraphics[scale=0.45]{testaic.eps}
\includegraphics[scale=0.45]{testks.eps}
\end{center}
\caption{The AIC value for the artificial data shown
as a function of the number of categories $K$ (left). The
lowest AIC value, 3803.20, is found at $K=12$. The KS statistic
between the artificial data and data generated with the estimated parameters at each $K$ (right).}
\label{fig:numerical-AIC}
\end{figure}
\begin{figure}[hbt]
\begin{center}
\includegraphics[scale=0.45]{testest.eps}
\includegraphics[scale=0.45]{testesterr.eps}
\end{center}
\caption{The parameter estimate for each observation,
shown as a function of time (left). The relative error between
the true parameter and the estimated one (right).}
\label{fig:artificial-estimation}
\end{figure}
\begin{table}[hbt]
\centering
\caption{Parameters of the Poissonian mixture model to generate
artificial time series. The number of categories is set as $K=12$.}
\label{tab:simulation-parameters}
\centering
\begin{tabular}{llll}
\hline
\hline
$r_{1}$ & 0.000025 & $a_{1}$ & 0.109726 \\
$r_{2}$ & 0.000223 & $a_{2}$ & 0.070612 \\
$r_{3}$ & 0.000280 & $a_{3}$ & 0.073355 \\
$r_{4}$ & 0.000479 & $a_{4}$ & 0.077612 \\
$r_{5}$ & 0.000613 & $a_{5}$ & 0.094848 \\
$r_{6}$ & 0.000652 & $a_{6}$ & 0.073841 \\
$r_{7}$ & 0.001219 & $a_{7}$ & 0.090867 \\
$r_{8}$ & 0.001233 & $a_{8}$ & 0.062191 \\
$r_{9}$ & 0.001295 & $a_{9}$ & 0.077662 \\
$r_{10}$ & 0.001341 & $a_{10}$ & 0.102573 \\
$r_{11}$ & 0.001412 & $a_{11}$ & 0.085892 \\
$r_{12}$ & 0.001570 & $a_{12}$ & 0.080821 \\
\hline
\hline
\end{tabular}
\end{table}
\begin{table}[hbt]
\centering
\caption{Parameter estimates of the Poissonian mixture model obtained by the EM
estimator. The number of categories was estimated as $\hat{K}=12$ and
the corresponding AIC value is $AIC = 3803.20$.}
\label{tab:simulation}
\centering
\begin{tabular}{llll}
\hline
\hline
$r_{1}$ & 0.0000247783 & $a_{1}$ & 0.0900000000 \\
$r_{2}$ & 0.0002229207 & $a_{2}$ & 0.0700000000 \\
$r_{3}$ & 0.0002806173 & $a_{3}$ & 0.0550000000 \\
$r_{4}$ & 0.0004798446 & $a_{4}$ & 0.0650000000 \\
$r_{5}$ & 0.0006137419 & $a_{5}$ & 0.0800000000 \\
$r_{6}$ & 0.0006516237 & $a_{6}$ & 0.0950000000 \\
$r_{7}$ & 0.0012185180 & $a_{7}$ & 0.1041702140 \\
$r_{8}$ & 0.0012324086 & $a_{8}$ & 0.0808297860 \\
$r_{9}$ & 0.0012946718 & $a_{9}$ & 0.0850000000 \\
$r_{10}$ & 0.0013420367 & $a_{10}$ & 0.1050000000 \\
$r_{11}$ & 0.0014120537 & $a_{11}$ & 0.0800000000 \\
$r_{12}$ & 0.0015688622 & $a_{12}$ & 0.0900000000 \\
\hline
\hline
\end{tabular}
\end{table}
\section{Empirical results and discussion}
\label{sec:empirical-analysis}
In this section, we apply the proposed method to
estimate the parameters for actual data. We estimate the parameters
$a_{mi}$ and $r_{mi}$ from the numbers shown in
Fig. \ref{fig:region} for the four regions, using the log-likelihood function
given in Eq. (\ref{eq:LLF}).
Fig. \ref{fig:region} shows the estimated time series of the demand
from 24th December 2009 to 4th November 2010. In order to obtain this demand,
we assume that $N_m=\max_{t}\{{Z_m(t)+10}\}$ and that $M$
is approximately equivalent to the total population of Japan, so that $M
= 1,000,000,000$.
According to the AIC values shown in Fig. \ref{fig:AIC-KS} (left), the
adequate number of categories is estimated as $K=12$ (010502), $K=10$
(072005), $K=5$ (136812), and $K=11$ (171408),
respectively. Fig. \ref{fig:AIC-KS} (right) shows the KS value at each
$K$. It is found that the KS test does not reject the mixture of Poisson
distributions with the estimated parameters at a statistically
significant level. Tab. \ref{tab:AIC-KS} shows the AIC and KS values at the
adequate number of categories for each area.
\begin{table}[h]
\centering
\caption{The AIC, maximum log-likelihood
($ll$), KS value, and $p$-value at the adequate number of categories for each region.}
\label{tab:AIC-KS}
\begin{tabular}{lllllll}
\hline
\hline
regional number & $K$ & $AIC$ & $ll$ & $p$-value & KS value \\
\hline
010502 & 12 & 3558.31 & 1756.15 & 0.807 & 0.532 \\
072005 & 10 & 3009.10 & 1485.55 & 0.187 & 1.088 \\
136812 & 5 & 2572.33 & 1277.17 & 0.107 & 1.245 \\
171408 & 11 & 3695.25 & 1826.62 & 0.465 & 0.850 \\
\hline
\hline
\end{tabular}
\end{table}
Secondly, we examined the relationship between the mean of room rates
and the number of opportunities (left) and that between the mean of room
rates and the existence probabilities $r_{mi}$ (right) for each day.
Fig. \ref{fig:prob-rate} shows the corresponding scatter plots for the
period from 25th December 2009 to 4th November 2010, where each point
represents the relation on one day. The variance of the room rates is
proportional to the existence probability. It is confirmed that the mean
room rate for two adults per night is about 20,000 JPY. This means that
excess supply increases the uncertainty of room rates.
Thirdly, by means of the method to select the underlying distribution
from the Poisson distributions of the mixture, we determined the category
$i$ for each day. As shown in Fig. \ref{fig:estimation-region}, the
probabilities show a strong dependence on the Japanese
calendar.
It is confirmed that there is both a regional and a
temporal dependence of the probabilities. We found that the
probabilities take higher values in each region on holidays and weekends
(Saturdays). It is observed that higher probabilities are maintained in the
winter season at 072005 (Aizu-Kohgen, Yunogami, and Minami-Aizu),
because this place is a winter ski resort area.
Specifically, on holidays and Saturdays, they take smaller
values than on weekdays. Tabs. \ref{tab:empirical-param} and
\ref{tab:empirical} show the parameter estimates and the exact dates included
in each category at 010502 (Otaru), respectively. From these tables we
found that the travel tendency of this region depends on the season.
It is found that many travelers visited and hotel rooms
were actively booked in this area on the dates included in categories 1
to 3, whereas bookings were less active on the dates
included in categories 10 to 12.
The covariations among the numbers of room opportunities in different
regions are important factors in determining the demand-supply
situation all over Japan.
From Tab. \ref{tab:empirical-param}, it is confirmed that in the case of
Otaru the end of October to the beginning of November 2010 was a highly
demanded season. This tendency differs from what the calendar dates alone
would suggest. The relationship between the number of opportunities and the
averaged rates is slightly different from that between the existence
probability and the averaged rates. By using our proposed method we can
compare the differences in consumers' demand between dates. From the
dependence of the averaged prices on the probability $r_i$, we can understand
the preference and motivation structure of consumers for travel and tourism.
\begin{table}[hbt]
\centering
\caption{Parameter estimates of the Poissonian mixture model obtained by the EM
estimator. The number of categories was estimated as $\hat{K}=12$ at 010502.}
\label{tab:empirical-param}
\begin{tabular}{clcl}
\hline
\hline
$r_{1}$ & 0.0000001884 & $a_{1}$ & 0.0195519998 \\
$r_{2}$ & 0.0000005108 & $a_{2}$ & 0.0245098292 \\
$r_{3}$ & 0.0000008236 & $a_{3}$ & 0.0548239374 \\
$r_{4}$ & 0.0000010361 & $a_{4}$ & 0.0332800534 \\
$r_{5}$ & 0.0000010742 & $a_{5}$ & 0.1572987311 \\
$r_{6}$ & 0.0000012821 & $a_{6}$ & 0.2045505810 \\
$r_{7}$ & 0.0000015900 & $a_{7}$ & 0.1402093740 \\
$r_{8}$ & 0.0000019395 & $a_{8}$ & 0.0801500614 \\
$r_{9}$ & 0.0000023878 & $a_{9}$ & 0.0959105486 \\
$r_{10}$ & 0.0000027989 & $a_{10}$ & 0.0544931969 \\
$r_{11}$ & 0.0000032900 & $a_{11}$ & 0.0661506482 \\
$r_{12}$ & 0.0000041136 & $a_{12}$ & 0.0690710390 \\
\hline
\hline
\end{tabular}
\end{table}
\begin{table}[p]
\caption{The dates included in each category for 010502 (Otaru).}
\label{tab:empirical}
{\small
\begin{tabular}{|l|p{12cm}|}
\hline
1 & 2010-10-26,2010-10-27,2010-10-28,2010-11-01,2010-11-03,2010-11-04 \\
\hline
2 & 2010-09-01,2010-09-26,2010-09-30,2010-10-05,2010-10-18,2010-10-25,2010-10-31,2010-11-02\\
\hline
3 & 2010-02-01,2010-02-02,2010-02-03,2010-04-19,2010-04-21,2010-04-22,2010-05-12,2010-05-19,2010-05-23,2010-05-24,2010-05-25,2010-05-30,2010-07-14,2010-07-15,2010-07-15,2010-07-22,2010-07-31,2010-08-31,2010-09-06,2010-10-12,2010-10-12,2010-10-29 \\
\hline
4 & 2010-01-11,2010-01-12,2010-01-15,2010-01-24,2010-02-04,2010-03-15,2010-03-23,2010-04-12,2010-04-13,2010-04-14,2010-04-18,2010-04-25,2010-05-09,2010-05-11,2010-05-13,2010-05-18,2010-05-26,2010-06-01,2010-06-03,2010-06-16,2010-06-30,2010-07-01,2010-07-06,2010-07-26,2010-07-27,2010-08-30,2010-09-15,2010-09-16,2010-09-20,2010-09-28,2010-10-04,2010-10-17,2010-10-21 \\
\hline
5 & 2010-01-06,2010-01-07,2010-01-08,2010-01-13,2010-01-22,2010-01-29,2010-02-22,2010-03-01,2010-03-02,2010-03-03,2010-03-04,2010-03-07,2010-03-08,2010-03-10,2010-03-11,2010-03-16,2010-03-17,2010-03-18,2010-03-22,2010-03-25,2010-03-30,2010-03-31,2010-04-01,2010-04-06,2010-04-07,2010-04-11,2010-04-15,2010-04-16,2010-04-28,2010-05-06,2010-05-07,2010-05-14,2010-05-16,2010-05-22,2010-06-04,2010-06-06,2010-06-07,2010-06-08,2010-06-10,2010-06-11,2010-06-17,2010-06-21,2010-06-22,2010-07-02,2010-07-20,2010-08-03,2010-08-04,2010-08-05,2010-08-17,2010-08-20,2010-08-24,2010-09-07,2010-09-08,2010-09-12,2010-09-21,2010-09-22,2010-10-08,2010-10-20\\
\hline
6 & 2010-01-28,2010-01-30,2010-02-18,2010-02-26,2010-02-28,2010-03-05,2010-03-09,2010-03-12,2010-03-19,2010-03-29,2010-04-04,2010-04-09,2010-04-29,2010-05-05,2010-05-08,2010-05-15,2010-05-17,2010-05-21,2010-05-27,2010-06-02,2010-06-18,2010-06-20,2010-06-23,2010-06-27,2010-06-28,2010-07-09,2010-07-11,2010-07-12,2010-07-29,2010-08-02,2010-08-06,2010-08-19,2010-08-23,2010-08-29,2010-09-05,2010-09-17,2010-09-23,2010-09-27,2010-09-29,2010-10-01,2010-10-02,2010-10-19\\
\hline
7 & 2010-01-04,2010-01-16,2010-01-23,2010-02-09,2010-02-15,2010-02-16,2010-02-19,2010-02-21,2010-02-24,2010-03-14,2010-03-28,2010-04-02,2010-04-03,2010-04-10,2010-04-24,2010-06-09,2010-06-24,2010-07-04,2010-07-16,2010-08-22,2010-09-03,2010-09-10,2010-09-14,2010-09-24,2010-10-22,2010-10-30\\
\hline
8 & 2009-12-24,2010-12-27,2010-12-28,2010-01-03,2010-01-05,2010-01-10,2010-02-07,2010-02-08,2010-02-10,2010-02-17,2010-02-27,2010-03-26,2010-04-05,2010-04-08,2010-06-25,2010-07-08,2010-07-23,2010-07-25,2010-08-01,2010-08-11,2010-08-15,2010-08-16,2010-08-18,2010-08-21,2010-08-25,2010-10-07,2010-10-15,2010-10-16\\
\hline
9 & 2009-12-25,2009-12-26,2009-12-29,2010-01-09,2010-01-21,2010-02-05,2010-02-11,2010-02-14,2010-03-13,2010-04-20,2010-04-30,2010-07-03,2010-07-10,2010-08-08,2010-08-09,2010-08-10,2010-08-12,2010-08-26,2010-08-27 \\
\hline
10 & 2009-12-30,2010-01-01,2010-01-02,2010-01-19,2010-02-12,2010-02-20,2010-03-06,2010-03-21,2010-05-29,2010-06-05,2010-06-12,2010-06-19,2010-07-30,2010-08-07,2010-08-13,2010-08-14,2010-09-04,2010-09-11,2010-10-11,2010-10-23 \\
\hline
11 & 2009-12-31,2010-02-06,2010-02-13,2010-03-20,2010-03-27,2010-05-01,2010-05-02,2010-05-03,2010-05-04,2010-06-26,2010-07-17,2010-07-18,2010-07-24,2010-07-31,2010-08-28,2010-09-18,2010-09-19,2010-09-25,2010-10-09,2010-10-10 \\
\hline
12 & 2010-01-18,2010-01-20,2010-01-25,2010-01-27,2010-04-23,2010-04-26,2010-04-27,2010-05-20,2010-05-28,2010-05-31,2010-06-13,2010-06-14,2010-06-29,2010-07-05,2010-07-13,2010-07-19,2010-07-21,2010-07-28,2010-09-02,2010-09-09,2010-09-13,2010-10-03,2010-10-13,2010-10-14,2010-10-24 \\
\hline
\end{tabular}
}
\end{table}
\begin{figure}[hbt]
\begin{center}
\includegraphics[scale=0.45]{demandaic.eps}
\includegraphics[scale=0.45]{demandks.eps}
\end{center}
\caption{The AIC value (left) and the KS
statistic (right) shown as a function of the number of
categories $K$ for the four areas.}
\label{fig:AIC-KS}
\end{figure}
\begin{figure}[phbt]
\begin{center}
\includegraphics[scale=0.45]{demand_010502map1.eps}
\includegraphics[scale=0.45]{demand_010502map2.eps}
\end{center}
\caption{The relationship between the mean rate per
night and the number of opportunities (left), and that between the mean rate
and the existence probability (right), for the period from 25th December 2009
to 4th November 2010. Each point represents the relation on one observation day.}
\label{fig:prob-rate}
\end{figure}
\begin{figure}[hbt]
\includegraphics[scale=0.85]{demandest.eps}
\caption{The parameter estimates obtained from the number of available
opportunities on each observation date for the four regions.}
\label{fig:estimation-region}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
We analyzed data on room opportunities collected from a Japanese
hotel booking site. We found that there is a strong dependence of the
number of available opportunities on the Japanese calendar.
Firstly, we proposed a model of hotel booking activities based on a
mixture of Poisson distributions with time-dependent intensity. From a
binomial model with a time-dependent success probability, we derived
the mixture of Poisson distributions. Based on the mixture model, we
characterized the number of room opportunities on each day with
different parameters, regarding their differences as reflecting the motivation
structure of consumers, which depends on the Japanese calendar.
Secondly, we proposed a parameter estimation method based on the
EM-algorithm and a method to select the underlying distribution for each
observation from the Poisson distributions of the mixture through
maximization of the local log-likelihood value.
Thirdly, we computed parameter estimates for artificial time series generated
from the mixture of Poisson distributions with the proposed method, and
confirmed that the parameter estimates agree with the true parameter values
at a statistically significant level. We then conducted an empirical
analysis on the room opportunity data. We confirmed that the
relationship between the averaged prices and the existence probabilities of
opportunities is associated with demand-supply
situations. Furthermore, we extracted multiple time series of the
numbers in four regions and found that the migration trends of
travelers seem to depend on the region.
It was found that such large-scale data on hotel opportunities enable us
to see several otherwise invisible properties of travelers' behavior in Japan.
As future work, we need to use higher-resolution data on consumers'
bookings at each hotel to capture demand-supply situations. If we can
use such data, then we will be able to control room rates based
on consumers' preferences. Future emerging technologies will make it
possible to see, or foresee, things which we cannot see at this moment.
\section*{Acknowledgement}
This study was financially supported by the Excellent Young Researcher
Overseas Visiting Program (\# 21-5341) of the Japan Society for the
Promotion of Science (JSPS). The author is very thankful to
Prof. Dr. Thomas Lux for fruitful discussions and kind
suggestions. The author expresses his sincere gratitude to Prof. Dr. Dirk
Helbing for stimulating discussions. This research study was started in
collaboration with Prof. Dr. Dirk Helbing.
\section{Introduction}
As data acquisition methods become more pervasive, sports analytics has received increased interest in contemporary sports, like soccer, basketball and baseball~\cite{DBLP:journals/bigdata/AssuncaoP18}. One common application in sports analytics is valuing player actions and decision-making. For example, Decroos~et~al.~introduce a framework to value soccer players according to how their actions change their team's chance of scoring~\cite{DBLP:conf/kdd/DecroosBHD19}.
Esports, also known as professional video gaming, is one of the fastest growing sports markets in the world. Yet esports has attracted little sports analytics interest. Most analytical work in esports covers multiplayer online battle arena (MOBA) games, such as League of Legends or Defense of the Ancients 2 (``DOTA2"). Accordingly, there exists a dearth of work on Counter-Strike: Global Offensive (CSGO), one of the oldest yet most popular esports. A picking and banning process is common in many esports, whereby some entities are excluded from being played in a particular game. For example, in League of Legends, teams ban a set of characters, and their players each pick a character to play before the game starts. In CSGO, teams typically perform a map selection process where each team takes turns picking and banning maps to play. However, map selection routines are often not based on analytics and data, but rather on players' inclinations at selection time.
Contextual bandits are statistical models that take a context $x$ and return a probability distribution over possible actions $a$, with the objective of maximizing the reward $r$ returned by the action taken. In this paper, we apply a contextual bandit framework to the domain of map selection in CSGO. We use a novel data set of over 25,000 map pick and ban decisions from over 3,500 professional CSGO matches to train three different bandit framings of the problem. We find that teams' choices in the map selection process are suboptimal and do not yield the highest expected win probability.
The paper is structured accordingly. In section~\ref{section:RelatedWork}, we review relevant esports and contextual bandit works. In section~\ref{section:CSMapSelection}, we cover CSGO's map selection process. In section~\ref{section:Modeling}, we introduce our contextual bandit model. In section~\ref{section:Experiments}, we describe our dataset and our evaluation methodology. Section~\ref{section:Results} contains our results. We discuss the benefits of our model, the choices of evaluation metrics and suggest possible areas of future work in section~\ref{section:Discussion} and conclude the paper in section~\ref{section:Conclusion}.
\begin{figure*}
\center{\includegraphics[width=\linewidth]
{Figures/map_picking.png}}
\caption{\label{fig:mappicking} Example map selection process for a best-of-three match. The available map pool is shown above each pick/ban decision. The first team, usually decided by tournament rules, bans a map. The second team then does the same. The two teams then both pick a map, and then both ban a map. In total, there are six decisions, four of which are bans, and two are picks.}
\end{figure*}
\section{Related Work} \label{section:RelatedWork}
Reinforcement learning (RL) techniques are increasingly being applied to sports analytics problems. Liu~et~al.~first used RL in sports to estimate an action-value Q function from millions of NHL plays~\cite{DBLP:conf/ijcai/LiuS18}. They used the learned Q function to value players based on the aggregate value of their actions. Liu~et~al.~also apply mimic learning to make their models more interpretable~\cite{DBLP:conf/pkdd/LiuZS18a}. Sun~et~al.~extend this work by considering a linear model tree~\cite{DBLP:conf/kdd/SunDSL20}. While the previous works heavily focused on ice hockey, Liu~et~al.~also learn an action-value Q function for soccer~\cite{DBLP:journals/datamine/LiuLSK20}. Despite the heavy use of other RL approaches such as Q-learning, contextual bandits have not been as heavily utilized in sports analytics.
This paper applies contextual bandits to the multi-arm map selection process in esports matches for the game CSGO. Contextual bandits are a simplified case of reinforcement learning. In reinforcement learning, an action is chosen based on the context (or state) and a reward is observed, and this process is repeated for many rounds. Rewards are not observed for actions not chosen. In the contextual bandit case, the contexts of different rounds are independent. \cite{tewari:context_bandit} provides a thorough review of contextual bandits, tracing the concept back to \cite{woodroofe:context_bandit} and the term back to \cite{langford:context_bandit}. Many approaches have been explored for learning policies in the contextual bandit setting. \cite{williams:rl} introduced gradient approaches in the reinforcement learning setting, and \cite{sutton_bartow:rl} applied the approach to the specific case of contextual bandits. Comparing proposed policies often requires off-policy evaluation: estimating the value of a policy from data that was generated by a different policy (the ``logging policy"). This paper utilizes two off-policy evaluation approaches: the self-normalized importance-weighted estimator \cite{swaminathan:sn-iw} and the direct method of regression imputation \cite{Dud_k_2014}. To our knowledge, ban actions have never been modeled in the bandit setting.
Esports have mostly attracted sports analytics interest in the form of win prediction and player valuation. Numerous efforts have been made to predict win probabilities in popular esports games such as CSGO and DOTA2. \cite{yang:match_pred} and \cite{hodge:match_pred} first use logistic regression and ensemble methods to predict win probabilities in DOTA2, a popular MOBA game. \cite{makarov:csgo} first predicted CSGO win probabilities using logistic regression, however their data only included less than 200 games. \cite{xenopoulos:csgo} expand on previous CSGO work by introducing a data parser and an XGBoost based win probability model for CSGO. They also value players based on how their actions change their team's chance of winning a round. \cite{bednarek:csgo} value players by clustering death locations.
Map selection is a process largely unique to CSGO and has not been well studied, but is loosely related to another esports process unique to MOBA games: hero selection. In DOTA2, for example, players from opposing teams alternate choosing from over one hundred heroes, with full knowledge of previous hero selections. \cite{yang:match_pred} and \cite{song:dota} use the selected heroes as features to predict win probability, but do not recommend hero selections or explicitly model the selection process. More relevant is the hero selection recommendation engine of \cite{conley:dota}, which uses logistic regression and K-nearest neighbors to rank available heroes based on estimated win probability; they do not, however, consider historical team or player context.
\section{Counter-Strike Map Selection}\label{section:CSMapSelection}
Counter-Strike is a popular esport that first came out in 2000, and CSGO is the latest version. The game mechanics have largely stayed the same since the first version of the game. Before a CSGO match starts, two teams go through the map selection process to decide which maps the teams will play for that match. A map is a virtual world where CSGO takes place. Typically, matches are structured as a best-of-three, meaning the team that wins two out of three maps wins the match. A team wins a map by winning rounds, which are won by completing objectives.
The collection of available maps in the map selection process is called the \textit{map pool}. Typically, there are seven maps to choose from in the map pool. Although the maps rarely change, a new map may be introduced and replace an existing map. Our data contains map selections using the following map pool: \texttt{dust2}, \texttt{train}, \texttt{mirage}, \texttt{inferno}, \texttt{nuke}, \texttt{overpass}, \texttt{vertigo}. The map selection process is exemplified in Figure~\ref{fig:mappicking}. First, team A \textit{bans} a map. This means that the teams will not play the map in the match. The team that goes first in the map selection process is usually higher seeded, or determined through tournament rules. Next, team B will ban a map. The teams then will each \textit{pick} a map that they wish to play in the match. Next, team A will ban one more map. At this point, team B will ban one of the two remaining maps, and the map not yet picked or banned is called the \textit{decider}.
Professional teams may sometimes have what is referred to as a \textit{permaban} -- a map that they will always ban with their first ban. For example, some teams may choose to ban the same map in over 75\% of their matches. From interviews with four CSGO teams ranked in the top 30, two of which are in the top 10, teams choose their maps based on a variety of factors. Some teams try to choose maps on which they have a historically high win percentage, or maps on which their opponents have low win percentages. Other teams may also choose maps purely based on their recent performances in practice matches.
\section{Bandit Model for CSGO Map Selection} \label{section:Modeling}
In order to model the map selection process, we elected to use a \textit{k}-armed contextual bandit. This was a clear choice: the actions taken by teams only yield a single shared reality, where we cannot observe the counterfactual of different choices. The bandit model enables us to approximate the counterfactual reality and frame this problem as a counterfactual learning problem.
In particular, we used the context from teams' previous matches, as well as information available at the time of selection, such as which maps were still in the selection pool. There are two kinds of actions: picks and bans, which must be handled differently. The reward is whether the chosen map is won by the choosing team, as well as a more granular version of this in which we include the margin of victory.
\subsection{Context and Actions}
Our initial choice for the context given a particular round $t$ in the map-picking process was a one-hot encoding for the available maps in that particular round, such that the bandit would learn to not pick the map if it was not available. To give the bandit more information about the teams that were deciding for that particular match, we implemented two historical win percentages, the first being the team's historical match win percentage, and the second being the team's historical map win percentage for each map. The first percentage is utilized to indicate team strength compared to other teams, and the second the team's overall ability to play well on each map. We applied Laplace smoothing to the initial percentages for numerical stability, using the formula
\begin{equation}
\text{Win\%} = \dfrac{\text{Wins} + 5}{\text{Matches} + 10}.
\end{equation}
Both win percentages were stored in the context vector for both the deciding team and the opponent team alongside the available maps. For both picks and bans, the given \textit{context} is the same as described above, and the corresponding \textit{action} would be the map picked or banned by the deciding team.
\subsection{Rewards}
\subsubsection{Picks}
Due to the nature of the map-picking process, where the decider is a forced pick, we chose to remove the rewards from all final map picks, as it would not make sense to reward either team for a forced choice. As a result, only the first two picks from each map selection process were given a reward. Rewards for map-picking were implemented with two different methods. Our first method utilized a simple 0-1 reward (``0/1"), where if the deciding team won on the map they had picked, they would be rewarded with an overall reward of $1$ for that action, or $0$ otherwise. Our second method rewarded the deciding team based on the margin of rounds won (``MoR") in the best-of-30 rounds on the decided map. The reward function for deciding team $i$ and an opponent team $j$ is given below:
\begin{equation}
R_{i,j}= \frac{\text{Rounds won by $i$} - \text{Rounds won by $j$}}{\text{Total number of Rounds on map}}
\end{equation}
The round proportion rewards were implemented as a more granular method to compare team performance on each map.
\subsubsection{Bans}
Since there are no data on how a deciding team would perform on a banned map, we chose to reward bans based on the deciding team's overall performance in the match, where if the deciding team won the match, they would be rewarded for choosing not to play on the banned map with an overall reward of $1$, or, if they lost, a reward of $-1$. In addition, we implemented an exponentially decreasing reward over the ban process, where earlier bans have higher rewards. Later bans have fewer available choices: restricting the action space means a team may be forced to make a choice they do not want, and so we de-emphasize the later choices. The ban reward function for team $i$ playing in match $t$ is given below:
\begin{equation}
R_{i,t}(n) =
\begin{cases}
1 \cdot \frac{1}{2^n} & \text{if team $i$ won match $t$} \\
-1 \cdot \frac{1}{2^n}& \text{if team $i$ lost match $t$} \\
\end{cases}
\end{equation}
where $n$ is the $n$th ban in the map picking process. In our case, $n \in \{1,2,3,4\}$, as there are always four bans in the map picking process for CSGO.
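The two reward definitions can be written compactly as below (a minimal sketch; the function names are ours):
\begin{verbatim}
def pick_reward_mor(rounds_won, rounds_lost):
    """Margin-of-rounds (MoR) pick reward defined above."""
    return (rounds_won - rounds_lost) / (rounds_won + rounds_lost)

def ban_reward(won_match, n):
    """Exponentially decayed ban reward defined above; n in {1, 2, 3, 4}."""
    return (1.0 if won_match else -1.0) / 2 ** n
\end{verbatim}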
\subsection{Policy Gradient Learning}
The most straightforward way to train a bandit is via policy gradient learning \cite{sutton_bartow:rl}. For our policy class, we use a multinomial logistic regression parameterized by weights $\theta$ and an action-context mapping function $\phi(x, a)$, with the softmax function to transform the affinity of the bandit for each action into a probability:
\begin{equation}
\pi(a|x) = \dfrac{\exp(\theta^{T} \phi(x, a))}{\sum_{i=1}^k \exp(\theta^{T} \phi(x, i))}
\end{equation}
The policy gradient approach trains the model via SGD \cite{sutton_bartow:rl}, enabling both online and episodic learning. In particular, the optimization maximizes the expected reward for the bandit, using the update function
\begin{table*}[]
\centering
\begin{tabular}{@{}lrrrr@{}}
\toprule
& \multicolumn{1}{c}{Picks (0/1)} & \multicolumn{1}{c}{Picks (MoR)} & \multicolumn{1}{c}{Bans (0/1)} &
\multicolumn{1}{c}{Bans (MoR)} \\ \midrule
Uniform policy (split) & 0.568/0.541 & 0.568/0.541 & -0.018/-0.003 & -0.018/-0.003 \\
Logging policy & 0.549/0.549 & 0.549/0.549 & -0.014/-0.014 & -0.014/-0.014 \\
SplitBandit & 0.587/0.554 & \textbf{0.659/0.528} & -0.016/0.004 & -0.016/0.004 \\
ComboBandit & \textbf{0.640/0.528} & 0.613/0.573 & \textbf{0.021/0.003} & \textbf{0.036/-0.015} \\
EpisodicBandit & 0.568/0.551 & 0.561/0.547 & 0.013/0.006 & 0.013/0.006 \\ \bottomrule
\end{tabular}
\caption{Expected reward for each policy type under four different evaluations. The best policy parameters were found via grid search and the policy was optimized with policy gradient. Both the SN-IW (left) and DM (right) evaluation methods are presented, except for Logging policy where the on-policy value is presented. Every model tested outperforms or matches the baseline uniform policy, with the best overall model being the bandit trained on both picks and bans. Comparisons between the uniform and logging policy indicate teams choose their bans well, but their picks poorly.}
\label{table:mainresults}
\end{table*}
\begin{equation}
\theta_{t+1} \leftarrow \theta + \eta R_t(A_t) \nabla_{\theta} \log \pi_{\theta_t}(A_t|X_t)
\end{equation}
with $\pi$ defined above and the gradient
\begin{equation}
\resizebox{.91\linewidth}{!}{$\nabla_{\theta} \log \pi(a|x) = \phi(x,a) - \dfrac{\sum_{i=1}^k \phi(x, i) \exp(\theta^T \phi(x, i))}{\sum_{i=1}^k \exp(\theta^T \phi(x, i))}.$}
\end{equation}
In the context of picks, we can use online learning to iteratively update the parameters $\theta$. For bans, however, we do not observe a reward at the time the choices are made; as a result, we used episodic learning, where an episode is an entire match.
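The following Python sketch illustrates one possible implementation of the softmax policy and the update above; realizing $\phi(x, a)$ as a per-arm block of the context features is an assumption we make for the sketch rather than a detail specified in the text.
\begin{verbatim}
import numpy as np

def make_block_phi(n_features, n_arms):
    """phi(x, a): place the context x in the feature block of arm a."""
    def phi(x, a):
        out = np.zeros(n_features * n_arms)
        out[a * n_features:(a + 1) * n_features] = x
        return out
    return phi

def softmax_policy(theta, phi, x, n_arms):
    scores = np.array([theta @ phi(x, a) for a in range(n_arms)])
    scores -= scores.max()                       # numerical stability
    p = np.exp(scores)
    return p / p.sum()

def policy_gradient_step(theta, phi, x, a, reward, lr, n_arms):
    """One update: theta <- theta + lr * R * grad log pi(a | x)."""
    pi = softmax_policy(theta, phi, x, n_arms)
    expected_phi = sum(pi[i] * phi(x, i) for i in range(n_arms))
    return theta + lr * reward * (phi(x, a) - expected_phi)
\end{verbatim}
For picks this step is applied online after each observed map result; for bans, the same step is applied once per match (episode) with the decayed ban reward described above.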
\section{Experiments}\label{section:Experiments}
\subsection{Data}
We obtained our data from HLTV.org, a popular CSGO fan site. The site contains scores, statistics and map selections for most professional matches. We use matches from April 2020 to March 2021. In total, this consisted of 628 teams that played a total of 6283 matches, summing to 13154 games. We only consider best-of-three matches, which are by far the most popular match format. We focus on games played with the most common seven-map pool of \texttt{dust2}, \texttt{inferno}, \texttt{mirage}, \texttt{nuke}, \texttt{overpass}, \texttt{train}, \texttt{vertigo}. In addition, we also remove teams such that, in the final dataset, each team has played at least 25 games, or approximately 10 matches, with another team in the dataset. This leaves us with 165 teams, playing a total of 3595 matches, summing to 8753 games. The resulting dataset was split into an 80-20 train-test split by matches for bandit learning and evaluation.
\subsection{Evaluation}
We use two typical off-policy evaluation methods, the \textit{direct method} (``DM") \cite{Dud_k_2014} and the self-normalized importance-weighted estimator (``SN-IW") \cite{swaminathan:sn-iw}. We also present the mean reward observed as a baseline.
The goal of the direct method is to estimate the reward function $r(x, a)$ that returns the reward for any given action $a$ for the context $x$. We estimate the reward function by using an importance-weighted ridge regression for each action. We use the self-normalized importance-weighted estimator with no modifications.
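For concreteness, the self-normalized estimator can be computed as in the sketch below; the logging propensities passed in are assumed to be available (e.g., estimated from logged pick and ban frequencies), which is an assumption of the sketch rather than a statement about our pipeline.
\begin{verbatim}
import numpy as np

def snips_value(rewards, logging_probs, target_probs):
    """Self-normalized importance-weighted value of a target policy,
    estimated from logged (context, action, reward) triples."""
    w = np.asarray(target_probs) / np.asarray(logging_probs)
    return np.sum(w * np.asarray(rewards)) / np.sum(w)
\end{verbatim}
For the direct method, the fitted per-action regression $\hat{r}(x, a)$ is instead averaged under the target policy, $\frac{1}{n}\sum_{j}\sum_{a}\pi(a|x_j)\hat{r}(x_j, a)$.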
Value estimates are presented for four different reward and model training settings:
\begin{itemize}
\item Picks(0/1): Expected pick reward for models trained with 0/1 rewards
\item Picks(MoR): Expected pick reward for models trained with MoR rewards
\item Bans(0/1): Expected ban reward for models trained with 0/1 rewards
\item Bans(MoR): Expected ban reward for models trained with MoR rewards
\end{itemize}
\subsection{Variety of Policies}
We experimented with three different varieties of contextual bandits: \texttt{SplitBandit}, \texttt{ComboBandit}, and \texttt{EpisodicBandit}.
\texttt{SplitBandit} is composed of two individual, simple contextual bandits, each with a $\theta$ parameter size of $(\texttt{n\_features} \cdot \texttt{n\_arms})$. The first contextual bandit is trained on the picks via online learning. The second contextual bandit is trained on the bans in an episodic fashion.
\texttt{ComboBandit} is a single model also trained on the picks via online learning and on the bans via episodic learning with a $\theta$ parameter size of $(\texttt{n\_features} \cdot \texttt{n\_arms})$. \texttt{ComboBandit} learns a single set of parameters that define a policy for both picks and bans. The ban policy is derived from the pick policy:
\begin{equation}
\pi_B(a|X) = \frac{1-\pi_P(a|X)}{\sum_{\alpha \in A}1-\pi_P(\alpha|X)}
\end{equation}
for pick policy $\pi_P$ and ban policy $\pi_B$ over actions $A$ and context $X$.
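A small sketch (ours) of this derivation; maps that the pick policy favours receive correspondingly little ban probability:
\begin{verbatim}
import numpy as np

def ban_policy_from_pick(pick_probs):
    """pick_probs: vector pi_P(a|X) over the currently available maps."""
    pick_probs = np.asarray(pick_probs, dtype=float)
    unnorm = 1.0 - pick_probs
    return unnorm / unnorm.sum()

# a pick policy concentrated on one map spreads its bans over the others
print(ban_policy_from_pick([0.7, 0.1, 0.1, 0.05, 0.05]))
# -> approximately [0.075, 0.225, 0.225, 0.2375, 0.2375]
\end{verbatim}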
\texttt{EpisodicBandit} is similarly a single model, but it is trained on both the picks and bans simultaneously via episodic learning with a $\theta$ parameter size of $(2 \cdot \texttt{n\_features} \cdot \texttt{n\_arms})$. We expected this model to perform similarly to \texttt{SplitBandit}: its episodic gradients are averaged over whole matches and are therefore less noisy than gradients computed from individual datapoints, which offsets the quicker adaptability of the online updates.
\section{Results} \label{section:Results}
Our main results are summarized in table~\ref{table:mainresults}. Considering the self-normalized estimator, the best model for picks was \texttt{SplitBandit} trained on proportional rewards, while the best model for bans was \texttt{ComboBandit} trained on proportional rewards. The uniform policy performs better than the logging policy for the picks in our dataset but worse for bans, which indicates teams' picks might be overconfident, whereas their bans are chosen more carefully.
\begin{figure}[t]
{\includegraphics[width=9cm]
{Figures/pick_value_over_time.png}}
\caption{\label{fig:value_over_time} Picks(0/1) value on the test set for \texttt{ComboBandit} and Uniform policy, evaluated every 100 rounds over 3 epochs of training. The bandit quickly surpasses the uniform policy's performance and plateaus around an expected reward value of approximately $0.64$.}
\end{figure}
\texttt{ComboBandit} substantially outperforms all other policies. We believe this is because its training includes the additional data from both picks and bans instead of using only one of the two categories for training a given parameter, which yields better gradient estimates and hence a better optimization. \texttt{EpisodicBandit} is also trained on both picks and bans, but each of its parameters depends on only one of the two subsets of data, so it does not share this advantage. The learning curve in Figure~\ref{fig:value_over_time} shows that \texttt{ComboBandit} surpasses the uniform policy benchmark after only a few training rounds, continuing to improve over 3 epochs of training.
\begin{figure}[t]
{\includegraphics[width=9cm]
{Figures/policy_ex.png}}
\caption{\label{fig:policy_comparison} The best model's probability distribution for pick 4 in a match between \textit{TIGER} and \textit{Beyond}. \textit{TIGER}, the deciding team, chose \texttt{Nuke} and lost the map, later going on to win map \texttt{Overpass}, which was \texttt{ComboBandit}'s suggestion.}
\end{figure}
Figure~\ref{fig:policy_comparison} shows an example of \texttt{ComboBandit}'s policy. In this match, team \textit{TIGER} chose to play on the map \texttt{Nuke}, which they later lost. \texttt{ComboBandit} suggested instead to play on \texttt{Overpass}, with 71\% probability. In the same match, \texttt{Overpass} was chosen as the decider and \textit{TIGER} won that map, indicating that the bandit model's policy distribution was more valuable than the team's intuition on map choice.
\section{Discussion} \label{section:Discussion}
The results indicate that teams using our chosen policy instead of their traditional map-picking process can increase their expected win probability by 9 to 11 percentage points, depending on the policy used. This is a substantial advantage in a best-of-3 match, since the model could confer that added win probability to all three map choices. The ban choice can be improved as well by using our model. The logging policy yields an expected reward of approximately $-0.014$, which indicates that bans have a slight negative effect on match win probability, whereas our best model's expected reward for bans is $0.036$, thus increasing match win probability by approximately 5 percentage points after a ban choice. For two evenly matched teams, using our bandit for both pick and ban decisions translates to an expected overall match win probability of 69.8\% instead of 50\% for the team that uses the model, a substantial advantage.
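One way to arrive at a figure of this order (this back-of-the-envelope reconstruction is ours and is only meant to illustrate the magnitude) is to treat the three maps of a match as independent with per-map win probability $p$, giving
\begin{displaymath}
P(\text{win match}) = p^{3} + 3p^{2}(1-p) = p^{2}(3-2p),
\end{displaymath}
which equals $0.5$ for $p=0.5$ and approximately $0.698$ for $p \approx 0.635$, i.e., a $0.5$ baseline plus roughly the pick and ban improvements reported above.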
The choice of evaluation metric is particularly important in examining the results. Using the direct method instead of the self-normalized estimator, we reach drastically different conclusions about which model to use, with the best overall model being \texttt{EpisodicBandit}. In our experiments, we used ridge regressions for our regression imputation. This is clearly a suboptimal model for this estimation, since the context features of win probabilities are bounded: there is a non-linear relationship between the context and the rewards. This is a big limitation of our experiments: we instead relied on the importance-weighted estimator, which is known to be imprecise in estimating policies far from the logging policy.
Future work in this area will be concentrated on examining better choices for evaluation metrics, as well as expanding the contextual features further by adding, for example, player turnover, team-based Elo metrics or rankings, or examining recent performances, such as win percentage in the last 10 matches. The rewards can also be expanded by using not only margin of rounds won per map, but also the margin of players alive per map at the end of a round. Additionally, different framings for the bandit can be considered, such as creating a ranking of which maps are best to choose instead of the model selecting a single map for the user.
\section{Conclusion} \label{section:Conclusion}
We modeled the map selection process in Counter-Strike: Global Offensive as a bandit, framing the problem in several different ways. Our key contributions are (1) the introduction of bandits and simple reinforcement learning models to esports and CSGO in particular, and (2) novel ways of implementing negative choices in bandits, for which we explicitly choose not to observe their rewards. We find that our model shows that teams are making sub-optimal map selections.
\section*{Acknowledgments}
This work was partially supported by: NSF awards CNS-1229185, CCF-1533564, CNS-1544753, CNS-1730396, CNS-1828576, CNS-1626098. We additionally thank David Rosenberg.
\section{Introduction}
\label{sec:intro}
Nowadays, a large amount of performance data of professional football (soccer) players is routinely collected. The analysis of such data is of great commercial interest. Here we cluster complex player performance data with mixed type variables from the 2014-15 season of eight European major leagues.
Sports have embraced statistics in assisting player recruitment and playing strategies. Different statistical methodologies have been applied to various types of sports data. Cluster analysis has been used for aggregating similar types of players in several applications. \cite{ogles2003typology} suggested that by using cluster analysis (Ward's method), marathon runners can be categorised into five groups in terms of their motives for running. \cite{gaudreau2004different} examined coping strategies used by groups of athletes based on a hierarchical cluster analysis using Ward's method. \cite{wang2009intra} observed coaching behaviour among basketball players, and showed that three distinct groups could be identified by using an agglomerative hierarchical clustering method. \cite{Yingying2010ModelingAP} applied different clustering techniques to athletes' physiological data, and proposed a new hierarchical clustering approach. \cite{Kosmidis2015} used NBA players' data to form groups of players in terms of their performance using copula-based finite mixture models. \cite{DuttaYurkoVentura} adopted model based clustering for data of defensive NFL players.
There is also connected work on football data. \cite{bialkowski2014identifying} adopted k-means clustering and minimum entropy data partitioning to identify a team's structure. \cite{feuerhake2016recognition} used the Levenshtein distance and then k-means and DBSCAN clustering to analyse sequences of movements in a soccer game. \cite{hobbs2018quantifying} applied spatio-temporal trajectory clustering that could automatically identify counter-attacks and counter-pressing without requiring unreliable human annotations. \cite{decroos2020player} created a ``player vector'' that characterizes a player's playing style using methods such as clustering and nearest neighbour.
A key contribution of the present work is the assessment of the quality of different clusterings, which allows us to select from a wide range of clustering solutions for the analysed data set coming from different clustering approaches and numbers of clusters. \cite{hennig2015true,hennig2015clustering31} have argued that there is no single ``true'' clustering for a given data set, and that the quality of different clusterings depends on the requirements of the specific application, and in particular on what characteristics make a clustering desirable for how the clusters are later used and interpreted. Different uses can be imagined for clusterings of football players according to performance data, and we aim at measuring clustering quality with such uses in mind. We propose two different such measurements for different aims of clustering. The first one is to give a rough representation of the structure in the data in terms of a low number of clusters corresponding to easily interpretable types of players. This can be used for example to analyse team compositions and positioning in terms of these clusters, and to relate it to success. The second one is to have small clusters of very similar players that can be used for finding potential replacements for a player, and to analyse similarities between teams on a finer scale. The second aim requires a much larger number of clusters than the first one. Arguably, none of the existing standard methods for determining the number of clusters in the literature (see Section \ref{sec:indexes}) is reliable when comparing very small (around 4, say) with very large (more than 100) numbers of clusters based on the data alone. In fact, on most data sets, these will not directly compete. Rather it depends on the clustering aim whether a rather small or a rather large number of clusters is required.
We will take the approach proposed by \cite{hennig2017cluster} and elaborated in \cite{akhanli2020comparing}, which is based on a set of indexes that are meant to measure different desirable features of a clustering in a separate manner, and then the user can select indexes and weights according to the requirements of the application in order to define a composite index. This requires a calibration scheme that makes the values of the different indexes comparable, so that their weights can be interpreted in terms of the relative importance of the respective characteristic. Although we analyse data from the 2014-15 season, the composite indexes resulting from this approach are applicable to other data sets of a similar kind.
Another important ingredient of our clusterings is a suitable dissimilarity measure between players. This involves a number of nontrivial choices, as the data are of mixed type (there are categorical position variables, counts, ratios, and compositional variables, as well as variables with very skewed distributions that require transformation and other ways of re-expression). A suitable dissimilarity measure for football player performance data was proposed in \cite{akhanli2017some} with the intention to use it for mapping the players by means of multidimensional scaling (MDS) \citep{BorGro12} and dissimilarity-based clustering. Some details that were not covered in \cite{akhanli2017some} are explained here.
In Section~\ref{sec:datadissimilarity} the data set is introduced and the dissimilarity measure is defined. Section~\ref{sec:cmethods} lists the cluster analysis methods that have been used. Section~\ref{sec:aggregation_indexes} introduces various indexes for cluster validation from the literature, and the indexes used for individual aspects of clustering quality along with the calibration and weighting scheme according to \cite{akhanli2020comparing}. Section~\ref{sec:valresults} applies these ideas to the football players data set. This includes a discussion of the weights to be chosen, which involves a survey among football experts regarding whether specific players should be clustered together in order to justify one of the weighting schemes. Section~\ref{sec:conclusion} concludes the paper.
\subsection{General notation}
\label{sec:general_notation}
Given a data set, i.e., a set of distinguishable objects $\mathcal{X}=\left\{ x_{1}, x_{2}, \ldots, x_{n} \right\}$, the aim of cluster analysis is to group them into subsets of $\mathcal{X}$. A clustering is denoted by $\mathcal{C}=\left\{ C_{1}, C_{2}, \ldots, C_{K} \right\}$, $C_k\subseteq \mathcal{X},$ with cluster size $n_{k}= |C_{k}|,\ k=1,\ldots,K$. We require $\mathcal{C}$ to be a partition, i.e., $k \neq g \Rightarrow C_{k} \cap C_{g} = \emptyset$ and $\bigcup_{k=1}^{K} C_{k} = \mathcal{X}$. Clusters are assumed to be crisp rather than fuzzy, i.e., an object is either a full member of a cluster or not a member of this cluster at all. An alternative way to write $x_{i}\in C_k$ is $l_i=k$, i.e., $l_i\in\{1,\ldots,K\}$ is the cluster label of $x_{i}$.
The approach presented here is defined for general dissimilarity data. A dissimilarity is a function $d : \mathcal{X}^{2} \rightarrow \mathbb{R}_{0}^{+}$ so that $d(x_{i}, x_{j}) = d(x_{j}, x_{i}) \geq 0$ and $d(x_{i}, x_{i}) = 0$ for $x_{i}, x_{j} \in \mathcal{X}$. Many dissimilarities are distances, i.e., they also fulfill the triangle inequality, but this is not necessarily required here.
\section{Football players dataset and dissimilarity construction}
\label{sec:datadissimilarity}
The data set analysed here contains 1501 football players characterized by 107 variables. It was obtained from the website \url{www.whoscored.com}. Data refer to the 2014-2015 football season in 8 major leagues (England, Spain, Italy, Germany, France, Russia, Netherlands, Turkey). The original data set had 3003 players, namely those who had appeared in at least one game during the season. Goalkeepers have completely different characteristics from outfield players and were therefore excluded from the analysis. Because data about players who did not play very often are less reliable, and because the methods that we apply are computer intensive, we analysed the 1501 (about 50\%) players who played most (at least 1403 minutes, or 37\% of a maximum of 3711 minutes). Variables are of mixed type, containing binary, count and continuous information. The variables can be grouped as follows:
\begin{itemize}
\item \textbf{Team and league variables}: League and team ranking score based on the information on UEFA website, and team points from the ranking table of each league,
\item \textbf{Position variables}: 11 variables indicating possible positions on which a player can play and has played,
\item \textbf{Characteristic variables}: Age, height, weight,
\item \textbf{Appearance variables}: Number of appearances of teams and players, and players number of minutes played,
\item \textbf{Top level count variables}: Interceptions, fouls, offsides, clearances, unsuccessful touches, dispossessions, cards, etc.
\item \textbf{Lower level count variables}: Subdivision of some top level count variables as shown in Table~\ref{tab:lowerlevel}
\end{itemize}
\begin{table}[h]
\renewcommand{\arraystretch}{1.25}
\caption{Top and lower level count variables \label{tab:lowerlevel}}
\begin{tabular}{p{2cm} p{9.5cm}}
\thickhline
\textit{TOP LEVEL} & \textit{LOWER LEVEL} \\
\thickhline
\rowcolor{lightgray}
&\textit{Zone:} Out of box, six yard box, penalty area \\
\rowcolor{lightgray}
&\textit{Situation:} Open play, counter, set piece, penalty taken \\
\rowcolor{lightgray}
&\textit{Body part:} Left foot, right foot, header, other \\
\rowcolor{lightgray}
\multirow{-4}{*}{\textbf{SHOT}} &\textit{Accuracy:} On target, off target, blocked \\
\multirow{3}{*}{\textbf{GOAL}}
& \textit{Zone:} Out of box, six yard box, penalty area \\
& \textit{Situation:} Open play, counter, set piece, penalty taken \\
& \textit{Body part:} Left foot, right foot, header, other \\
\rowcolor{lightgray}
& \textit{Length:} AccLP, InAccLP, AccSP, InAccSP \\
\rowcolor{lightgray}
\multirow{-2}{*}{\textbf{PASS}} & \textit{Type:} AccCr, InAccCr, AccCrn, InAccCrn, AccFrk, InAccFrk \\
\multirow{2}{*}{\textbf{KEY PASS}}
& \textit{Length:} Long, short \\
& \textit{Type:} Cross, corner, free kick, through ball, throw-in, other \\
\rowcolor{lightgray}
\textbf{ASSIST} & \textit{Type:} Cross, corner, free kick, through ball, throw-in, other \\
\textbf{BLOCK} & Pass blocked, cross blocked, shot blocked \\
\rowcolor{lightgray}
\textbf{TACKLE} & Tackles, dribble past \\
\textbf{AERIAL} & Aerial won, aerial lost \\
\rowcolor{lightgray}
\textbf{DRIBBLE} & Dribble won, dribble lost \\
\thickhline
\multicolumn{2}{l}{\scriptsize{*Acc: Accurate, *InAcc: Inaccurate}} \\
\multicolumn{2}{l}{\scriptsize{*LP: Long pass, *SP: Short pass, *Cr: Cross, *Crn: Corner, *Frk: Free kick}}\\
\end{tabular}
\label{tab:data}
\end{table}
In order to appropriately take into account the information content in the different variables, \cite{akhanli2017some} constructed a dissimilarity measure between players, which we review here (the choice of $c$ in Section \ref{subsec:trans} was not explained there). See that paper for more details including missing value treatment. The construction process had five stages:
\begin{enumerate}
\item \textbf{Representation:} Re-defining variables in order to represent the relevant information in the variables appropriately;
\item \textbf{transformation} of variables, where the impact of variables on the resulting dissimilarity is appropriately formalised in a nonlinear manner;
\item \textbf{standardisation} in order to make within-variable variations comparable between variables;
\item \textbf{weighting} to take into account that not all variables have the same importance;
\item \textbf{aggregation:} Defining a dissimilarity putting together the information from the different variables; the first four stages need to be informed by the method of aggregation.
\end{enumerate}
Data should be processed in such a way that the resulting dissimilarity between observations matches how dissimilarity is interpreted in the application of interest, see \cite{HenHau06,hennig2015clustering31}. The resulting dissimilarities between observations may strongly depend on transformation, standardisation, etc., which makes variable pre-processing very important.
\subsection{Representation}
\label{subsec:repre}
Counts of actions such as shots, blocks etc. should be used relative to the period of time the player played. A game of football lasts for 90 minutes, so we represent the counts as ``per 90 minutes'', i.e., divided by the minutes played and multiplied by 90. We will still refer to these variables as ``count variables'' despite them technically not being counts anymore in this way.
Regarding count variables at different levels such as shots overall, shots per zone, shot accuracy, there is essentially different information in (a) the overall number and (b) the distribution over sub-categories. Therefore the top level counts are kept (per 90 minutes), whereas the lower level counts are expressed as proportions of the overall counts. Some counts in sub-categories can be interpreted as successes of actions counted by other variables. For example there is accuracy information for passes, and goals are successful shots. In these cases, success rates are used (i.e., goals from the six yard box are expressed as success percentage of shots from the six yard box). In some cases both success rates and sub-category proportions are of interest in their own right, in which case they are both kept, see Table \ref{tab:repre} for an overview. Note that later variables are aggregated in such a way that redundant information (such as keeping all sub-category proportions despite them adding up to 1 and therefore losing a degree of freedom) does not cause mathematical problems, although this should be taken into account when weighting the variables, see Section \ref{subsec:weight}.
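For illustration, the following sketch (ours; the helper functions are not part of the original data processing pipeline) expresses the three re-expressions just described:
\begin{verbatim}
import numpy as np

def per_90(count, minutes_played):
    # top level counts as rates per 90 minutes
    return count / minutes_played * 90.0

def composition_proportions(sub_counts):
    """sub_counts: (n_players, n_categories) counts of one composition."""
    sub_counts = np.asarray(sub_counts, dtype=float)
    totals = sub_counts.sum(axis=1, keepdims=True)
    with np.errstate(invalid="ignore", divide="ignore"):
        return sub_counts / totals      # rows with total 0 become missing

def success_rate(successes, attempts):
    # e.g. goals from the six yard box divided by shots from the six yard box
    successes = np.asarray(successes, dtype=float)
    attempts = np.asarray(attempts, dtype=float)
    with np.errstate(invalid="ignore", divide="ignore"):
        return successes / attempts

print(per_90(24, 1800))                 # 24 shots in 1800 minutes -> 1.2
\end{verbatim}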
\begin{table}[h]
\renewcommand{\arraystretch}{1.25}
\caption{Representation of lower level count variables \label{tab:rep} }
\begin{tabular}{p{3.5cm}p{2.5cm}p{5.5cm}}
\hline
\textbf{Variables} & \textbf{Proportional total} & \textbf{Success rate} \\
\textbf{(Include sub-categories)} & \textbf{(standardised by)} & \textbf{(standardised by)} \\
\hline
Block & Total Blocks & \ding{56} \\
Tackle, Aerial, Dribble & \ding{56} & Total tackles, total aerials, and total dribbles \\
Shot (4 sub-categories) & Total shots & \ding{56} \\
Goal (4 sub-categories) & Total goals & Shot count in different sub-categories, and total shots for overall success rate \\
Pass (2 sub-categories) & Total passes & Pass count in different sub-categories, and total passes for overall success rate \\
Key pass (2 sub-categories) & Total key passes & \ding{56} \\
Assist & Total assists & Key pass count in different sub-categories, and total assists for overall success rate \\
\hline
\end{tabular}
\label{tab:repre}
\end{table}
\subsection{Transformation}
\label{subsec:trans}
The top level count variables have more or less skew distributions; for example, many players, particularly defenders, shoot very rarely during a game, and a few forward players may be responsible for the majority of shots. On the other hand, most blocks come from a few defenders, whereas most players block rarely.
This means that there may be large absolute differences between players that shoot or block often, whereas differences at the low end will be small; but from the point of view of interpretation, the dissimilarity between two players with large but fairly different numbers of blocks and shots is not that large, compared with the difference between, for example, a player who never shoots and one who occasionally but rarely shoots. Most of these variables $x$ have therefore been transformed by $y=\log(x+c)$, where the constant $c$ (or no transformation) has been chosen depending on the variable in question by taking into account data from the previous season. The transformation was chosen in order to make the differences between the two years as stable as possible over the range of $x$, following the rationale that in this way the amount of ``random variation'' is near constant everywhere on the value range. More precisely, a regression was run, where the response was the absolute value of the player-wise transformed count difference between the two seasons, and the explanatory variable was the weighted mean (by minutes played) of the two transformed count values. $c$ was then chosen so that the regression slope is as close to zero as possible (see \cite{akhanlithesis} for more details and issues regarding matching player data from the two seasons).
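The following sketch (our reconstruction for illustration; the procedure in \cite{akhanlithesis} may differ in details) chooses $c$ by a grid search over candidate offsets, picking the value for which the regression slope is closest to zero:
\begin{verbatim}
import numpy as np

def slope_for_c(x1, x2, w1, w2, c):
    """x1, x2: counts of the same players in the two seasons;
    w1, w2: minutes played, used as weights."""
    x1, x2, w1, w2 = (np.asarray(v, dtype=float) for v in (x1, x2, w1, w2))
    y1, y2 = np.log(x1 + c), np.log(x2 + c)
    level = (w1 * y1 + w2 * y2) / (w1 + w2)   # minutes-weighted mean level
    change = np.abs(y1 - y2)                  # between-season "random variation"
    return np.polyfit(level, change, 1)[0]    # least-squares slope

def choose_c(x1, x2, w1, w2, grid=np.logspace(-3, 1, 50)):
    slopes = [abs(slope_for_c(x1, x2, w1, w2, c)) for c in grid]
    return grid[int(np.argmin(slopes))]
\end{verbatim}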
\subsection{Standardisation}
\label{subsec:stand}
The general principle of aggregation of variables will be to sum up weighted variable-wise dissimilarities (see Section \ref{sec:aggre}), which for standard continuous variables amounts to computing the $L_1$ (Manhattan) distance. Accordingly, variables are standardised by the average absolute distance from the median. For the lower level percentages, we standardise by dividing by the pooled average $L_1$ distance from the median. We pool this over all categories belonging to the same composition of lower level variables. This means that all category variables of the same composition are standardised by the same value, regardless of their individual relative variances. The reason for this is that a certain difference in percentages between two players has comparable meaning between the categories, which does not depend on the individual variance of the category variable (see \cite{akhanli2017some} for a discussion of the treatment of compositional variables).
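A minimal sketch (ours, for illustration) of this standardisation:
\begin{verbatim}
import numpy as np

def avg_abs_dev_from_median(x):
    x = np.asarray(x, dtype=float)
    return np.nanmean(np.abs(x - np.nanmedian(x)))

def standardise(x):
    # individual (transformed) quantitative variable
    return np.asarray(x, dtype=float) / avg_abs_dev_from_median(x)

def pooled_scale(composition):
    """composition: (n_players, n_categories) proportions of one lower level
    block; all categories share this single scaling factor."""
    comp = np.asarray(composition, dtype=float)
    return np.nanmean(np.abs(comp - np.nanmedian(comp, axis=0)))
\end{verbatim}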
\subsection{Weighting}
\label{subsec:weight}
An aspect of variable weighting here is that in case that there are one or more lower level compositions of a top level variable, the top level variable is transformed and standardised individually, whereas the categories of the lower level percentage composition are standardised together. This reflects the fact that the top level count and the lower level distribution represent distinct aspects of a player's characteristics, and on this basis we assign the same weight to the top level variable as to the whole vector of compositional variables, e.g., a weight of one for transformed shot counts is matched by a weight of $1/3$ for each of the zone variables ``out of the box'', ``six yard box'', ``penalty area''. Implicitly this deals with the linear dependence of these variables (as they add to one); their overall weight is fixed and would not change if the information were represented by fewer variables removing linear dependence.
In case that a top level count variable is zero for a player, the percentage variables are missing. In this situation, for overall dissimilarity computation between such a player and another player, the composition variables are assigned weight zero and the weight that is normally on a top level variable and its low level variables combined is assigned to the top level variable.
\subsection{Aggregation of variables}
\label{sec:aggre}
There are different types of variables in this data set which we treat as different groups of variables. There are therefore two levels of aggregation, namely aggregation within a group, and aggregation of the groups. Group-wise dissimilarities $d_k$ are aggregated as follows:
\begin{equation}
\label{eq:dist_agg}
d_{fin}(\mathbf{x}, \mathbf{y}) = \sum_{k=1}^{3} \frac{w_{k}\, d_{k}(\mathbf{x}, \mathbf{y})}{s_{k}},
\end{equation}
\noindent where $w_{k}$ is the weight of group $k$, and $s_{k}$ is the standard deviation of the vector of all dissimilarities $d_k$ from group $k$. $w_k$ is chosen proportionally to the number of variables in the $k^{th}$ group. Note that there is another layer of weighting and standardising here on top of what was discussed in Sections \ref{subsec:stand} and \ref{subsec:weight}. This was done in order to allow for a clear interpretation of weights and measures of variability; it would have been much more difficult to standardise and weight individual variables of different types against each other. (\ref{eq:dist_agg}) takes inspiration from the Gower coefficient for mixed type data \citep{Gow71}, although Gower did not treat groups of variables and advocated range standardisation, which may be too dominated by outliers.
For quantitative variables (characteristics, appearances, top and lower level count variables), (\ref{eq:dist_agg}) with $d_k$ chosen as absolute value of the differences amounts to the $L_1$ (Manhattan) distance. These variables therefore do not have to be grouped.
The league ranking scores and the team points from the ranking table of each league based on the 2014-2015 football season are aggregated to a single joint dissimilarity by adding standardised differences on both variables in such a way that a top team in a lower rated league is treated as similar to a lower ranked team in a higher rated league.
The position variables can take values 0 or 1 for the presence, over the season, of the player on 11 different possible positions on the pitch. These are aggregated to a single dissimilarity using the geco coefficient for presence-absence data with geographical location, taking into account geographical distances, as proposed in \cite{HenHau06}, using a suitable standardised Euclidean distance between positions, see Table \ref{tab:dist_pos2}.
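A small sketch (ours) of the aggregation in (\ref{eq:dist_agg}), assuming the group-wise dissimilarities $d_k$ are given as matrices:
\begin{verbatim}
import numpy as np

def aggregate_dissimilarities(group_dissims, weights):
    """group_dissims: list of (n, n) dissimilarity matrices d_k;
    weights: w_k, here proportional to the number of variables per group."""
    total = np.zeros_like(np.asarray(group_dissims[0], dtype=float))
    for d_k, w_k in zip(group_dissims, weights):
        d_k = np.asarray(d_k, dtype=float)
        iu = np.triu_indices_from(d_k, k=1)
        s_k = np.std(d_k[iu])            # sd of all pairwise dissimilarities
        total += w_k * d_k / s_k
    return total
\end{verbatim}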
\begin{table}
\scriptsize
\centering
\caption{Distances between each position. Here the values are obtained by using Euclidean geometry}
\begin{tabular}{l|ccc c ccc ccc c}
$\mathbf{d_{R}(a,b)}$ & DC & DL & DR & DMC & MC & ML & MR & AMC & AML & AMR & FW \\
\hline
DC & $0$ & $1$ & $1$ & $1$ & $2$ & $\sqrt{5}$ & $\sqrt{5}$ & $3$ & $\sqrt{10}$ & $\sqrt{10}$ & $4$ \\
DL & $1$ & $0$ & $1$ & $\sqrt{2}$ & $\sqrt{5}$ & $2$ & $\sqrt{5}$ & $\sqrt{10}$ & $3$ & $\sqrt{10}$ & $\sqrt{17}$\\
DR & $1$ & $1$ & $0$ & $\sqrt{2}$ & $\sqrt{5}$ & $\sqrt{5}$ & $2$ & $\sqrt{10}$ & $\sqrt{10}$ & $3$ & $\sqrt{17}$\\
DMC & $1$ & $\sqrt{2}$ & $\sqrt{2}$ & $0$ & $1$ & $\sqrt{2}$ & $\sqrt{2}$ & $2$ & $\sqrt{5}$ & $\sqrt{5}$ & $3$ \\
MC & $2$ & $\sqrt{5}$ & $\sqrt{5}$ & $1$ & $0$ & $1$ & $1$ & $1$ & $\sqrt{2}$ & $\sqrt{2}$ & $2$ \\
ML & $\sqrt{5}$ & $2$ & $\sqrt{5}$ & $\sqrt{2}$ & $1$ & $0$ & $1$ & $\sqrt{2}$ & $1$ & $\sqrt{2}$ & $\sqrt{5}$ \\
MR & $\sqrt{5}$ & $\sqrt{5}$ & $2$ & $\sqrt{2}$ & $1$ & $1$ & $0$ & $\sqrt{2}$ & $\sqrt{2}$ & $1$ & $\sqrt{5}$ \\
AMC & $3$ & $\sqrt{10}$ & $\sqrt{10}$ & $2$ & $1$ & $\sqrt{2}$ & $\sqrt{2}$ & $0$ & $1$ & $1$ & $1$ \\
AML & $\sqrt{10}$ & $3$ & $\sqrt{10}$ & $\sqrt{5}$ & $\sqrt{2}$ & $1$ & $\sqrt{2}$ & $1$ & $0$ & $1$ & $\sqrt{2}$\\
AMR & $\sqrt{10}$ & $\sqrt{10}$ & $3$ & $\sqrt{5}$ & $\sqrt{2}$ & $\sqrt{2}$ & $1$ & $1$ & $1$ & $0$ & $\sqrt{2}$\\
FW & $4$ & $\sqrt{17}$ & $\sqrt{17}$ & $3$ & $2$ & $\sqrt{5}$ & $\sqrt{5}$ & $1$ & $\sqrt{2}$ & $\sqrt{2}$ & $0$\\
\end{tabular}
\label{tab:dist_pos2}
\end{table}
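For illustration, the following simplified variant in the spirit of the geco coefficient (the exact definition in \cite{HenHau06}, in particular the transformation applied to the distances, differs) computes a position dissimilarity from the 0/1 position vectors and the distance matrix in Table~\ref{tab:dist_pos2}:
\begin{verbatim}
import numpy as np

def position_dissimilarity(pos_a, pos_b, D):
    """pos_a, pos_b: 0/1 vectors over the 11 positions;
    D: 11 x 11 matrix of distances between positions."""
    D = np.asarray(D, dtype=float)
    a = np.flatnonzero(pos_a)
    b = np.flatnonzero(pos_b)
    if len(a) == 0 or len(b) == 0:
        return np.nan                    # no position information
    d_ab = np.mean([D[i, b].min() for i in a])   # nearest position of b for each a
    d_ba = np.mean([D[j, a].min() for j in b])
    return 0.5 * (d_ab + d_ba)
\end{verbatim}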
\section{Clustering methods}
\label{sec:cmethods}
Clustering has been carried out by standard dissimilarity-based clustering methods with the aim of finding the best clusterings by comparing all clusterings using a composite cluster validity index based on indexes measuring different aspects of clustering, see Section \ref{sec:aggregation_indexes}.
The following six clustering algorithms (all of which unless otherwise stated are described in \citet{kaufman2009finding}) were used, all with standard R-implementations and default settings:
\begin{itemize}
\item Partitioning Around Medoids (PAM),
\item single linkage,
\item average linkage,
\item complete linkage,
\item Ward's method (this was originally defined for Euclidean data but can be generalised to general dissimilarities, see \citet{MurLeg14}),
\item spectral clustering (\cite{NgJoWe01}).
\end{itemize}
\section{A composite cluster validity index based on indexes measuring different aspects of clustering}
\label{sec:aggregation_indexes}
\subsection{Cluster validity indexes}
\label{sec:indexes}
In order to choose a clustering method and number of clusters for clustering the players, we will follow the concept of aggregation of calibrated cluster validity indexes as introduced in \cite{hennig2017cluster} and elaborated in \cite{akhanli2020comparing}.
A large number of cluster validity indexes are proposed in the literature, for example the Average Silhouette Width (ASW) \citep{kaufman2009finding}, the Calinski-Harabasz index (CH) \citep{calinski1974dendrite}, the Dunn index \citep{dunn1974well}, a Clustering Validity Index Based on Nearest Neighbours (CVNN) \citep{liu2013understanding}, and Hubert's $\Gamma$ \citep{hubert1976quadratic}. All these indexes attempt to summarise the quality of a clustering as a single number. They are normally optimised in order to find the best clustering out of several clusterings. Mostly the set of compared clusterings is computed from the same clustering method but with different numbers of clusters. Clusterings computed by different methods can also be compared in this way, but this is done much less often, and some indexes are closer connected to specific clustering methods than others (e.g., optimising CH for a fixed number of clusters is equivalent to $k$-means). See \cite{AGMPP12} for a comparative simulation study, and \cite{HVH15} for more indexes and discussion. The indexes are usually presented as attempts to solve the problem of finding the uniquely best clustering on a data set. Occasionally the ASW is also used to assess a clustering's validity without systematic optimisation. Alternatively, stability under resampling has been suggested as a criterion for measuring the quality of a clustering (\cite{tibshirani2005cluster, fang2012selection}). Further approaches to choose the number of clusters are more closely related to specific clustering methods and their objective functions, such as the gap statistic \citep{TiWaHa01}. In model-based clustering, information criteria such as the BIC are popular \citep{BCMR19}. As the indexes above, these are also usually interpreted as stand-alone measures of the clustering quality.
As argued in \cite{hennig2015clustering31,hennig2015true}, there are various aspects of clusterings that can be of interest, such as separation between clusters, within-cluster homogeneity in the sense of small within-cluster dissimilarities or homogeneous distributional shapes, representation of clusters by their centroids, stability under resampling, and entropy. In many situations two or more of these aspects are in conflict; for example single linkage clustering will emphasise between-cluster separation disregarding within-cluster homogeneity, whereas complete linkage will try to keep within-cluster dissimilarities uniformly small disregarding separation. In different applications, different aspects of clustering are of main interest, and there can be different legitimate clusterings on the same data set depending on which characteristics are required. For example, different biological species need to be genetically separated, whereas within-cluster homogeneity is often more important than separation for example when colouring a map for highlighting clusters of similar regions according to criteria such as economic growth, severity of a pandemic, or avalanche risk.
The chosen clustering then needs to depend on a user specification of relevant features of the clustering. The traditional literature on validity indexes gives little guidance in this respect; where such indexes are introduced, authors tend to argue that their new index is the best over a wide range of situations, and comparative studies such as \cite{AGMPP12} normally focus on the ability of the indexes to recover a given ``true'' clustering. The approach taken here is different. It is based on defining indexes that separately measure different aspects of clustering quality that might be of interest, and the user can then aggregate the indexes, potentially involving weights, in order to find a clustering that fulfills the specific requirements of a given application.
In the following we will first define indexes that measure various characteristics of a clustering that are potentially of interest for the clustering of football players, and then we will propose how they can be aggregated in order to define an overall index that can be used to assess clusterings and select an optimal one.
\subsection{Measurement of individual aspects of clustering quality}
\label{subsubsec:aspect_cquality}
\cite{hennig2017cluster} and \cite{akhanli2020comparing} defined several indexes that measure desirable characteristics of a clustering (and contain more details than given below). Not all of these are relevant for clustering football players. We will define the indexes that are later used in the present work, and then give reasons why further indexes have not been involved.
\begin{description}
\item[Average within-cluster dissimilarities:] This index formalises within-cluster homogeneity in the sense that observations in the same cluster should all be similar. This is an essential requirement for useful clusters of football players.
\begin{displaymath}
I_{ave.within}(\mathcal{C}) = \frac{1}{n} \sum_{k=1}^{K} \frac{1}{n_k-1}\sum_{x_{i} \neq x_{j} \in C_{k}} d(x_{i},x_{j}).
\end{displaymath}
A smaller value indicates better clustering quality.
\item[Separation index:] Objects in different clusters should be different from each other. This is to some extent guaranteed if the within-cluster dissimilarities are low (as then the larger dissimilarities tend to be between clusters), but usually, on top of this, separation is desirable, meaning that there is some kind of gap between the clusters. The idea is that clusters should not just result from arbitrarily partitioning a uniformly or otherwise homogeneously distributed set of observations. There is no guarantee that there is meaningful separation between clusters in the set of football players, but if such separation exists between subsets, these are good cluster candidates. Separation refers to dissimilarities between observations that are at the border of clusters, and closer to other clusters than the interior points of clusters. Therefore, separation measurement is based on the observations that have smallest dissimilarities to points in other clusters.
For every object $x_{i} \in C_{k}$, $i = 1, \ldots, n$, $k \in {1, \ldots, K}$, let $d_{k:i} = \min_{x_{j} \notin C_{k}} d(x_{i},x_{j})$. Let $d_{k:(1)} \leq \ldots \leq d_{k:(n_{k})}$ be the values of $d_{k:i}$ for $x_{i} \in C_{k}$ ordered from the smallest to the largest, and let $[pn_{k}]$ be the largest integer $\leq pn_{k}$. Then, the separation index with the parameter $p$ is defined as
\begin{displaymath}
I_{sep}(\mathcal{C};p) = \frac{1}{\sum_{k=1}^{K} [pn_{k}]} \sum_{k=1}^{K} \sum_{i=1}^{[pn_{k}]} d_{k:(i)},
\end{displaymath}
Larger values are better. The proportion $p$ is a tuning parameter specifying what percentage of points should contribute to the ``cluster border''. We suggest $p=0.1$ as default.
\item[Representation of dissimilarity structure by the clustering:] A clustering can be seen as a parsimonious representation of the overall dissimilarities. In fact, a clustering of football players can be used as a simplification of the dissimilarity structure by focusing on players in the same cluster rather than using the exact dissimilarities to consider more or less similar players. The quality of a clustering as representation of the dissimilarity structure can be measured by several versions of the family of indexes known as Hubert's $\Gamma$ introduced by \cite{hubert1976quadratic}. The version that can be most easily computed for a data set of the given size is based on the Pearson sample correlation $\rho$. It interprets the ``clustering induced dissimilarity'' $\mathbf{c} = vec([c_{ij}]_{i<j})$, where $c_{ij} = \mathbf{1}(l_{i} \neq l_{j})$, i.e. the indicator whether $x_i$ and $x_j$ are in different clusters, as a ``fit'' of the given data dissimilarity $\mathbf{d} = vec\left([d(x_{i}, x_{j})]_{i<j}\right)$, and measures its quality as
\begin{displaymath}
I_{Pearson \Gamma}(\mathcal{C}) = \rho(\mathbf{d}, \mathbf{c}).
\end{displaymath}
This index has been used on its own to measure clustering quality, but we use it as measuring a specific aspect of clustering quality. Large values are good.
\item[Entropy:] Although not normally seen as primary aim of clustering, in some applications very small clusters are not very useful, and cluster sizes should optimally be close to uniform. This is measured by the well known ``entropy'' \cite{shannon1948mathematical}:
\begin{displaymath}
I_{entropy}(\mathcal{C}) = - \sum_{k=1}^{K} \frac{n_{k}}{n} \log(\frac{n_{k}}{n}).
\end{displaymath}
Large values are good. For the clustering of football players, we aim at a high entropy, as too large clusters will not differentiate sufficiently between players, and very small clusters (with just one or two players, say) are hardly informative for the overall structure of the data.
\item[Stability:] Clusterings are often interpreted as meaningful if they can be generalised as stable substantive patterns. Stability means that they can be replicated on different data sets of the same kind. Without requiring that new independent data are available, this can be assessed by resampling methods such as cross-validation and bootstrap.
It is probably not of much interest to interpret the given set of football players as a random sample representing some underlying true substantially meaningful clusters that would also be reproduced by different players. However, it is relevant to study the stability of the clustering of football players under resampling, as such stability means that whether certain players tend to be clustered together does not depend strongly on which other players are in the sample, which is essential for interpreting the clusters as meaningful.
Two approaches from the literature have been used for clustering stability measurement in \cite{akhanli2020comparing}, namely the prediction strength \cite{tibshirani2005cluster}, and a bootstrap-based method (called ``Bootstab'' here) by
\citet{fang2012selection}. We focus on the latter below. In the original paper this (as well as the prediction strength) was proposed for assessing clustering quality and making decisions such as regarding the number of clusters on their own, but this is problematic. Whereas it makes sense to require a good clustering to be stable, it cannot be ruled out that an undesirable clustering is also stable. We therefore involve Bootstab as measuring just one of several desirable clustering characteristics.
$B$ times two bootstrap samples are drawn from the data with replacement. Let $X_{[1]},\ X_{[2]}$ be the two bootstrap samples in the $b$th bootstrap iteration. For $t=1, 2,$ let $L_{b}^{(t)} = \left( l_{1b}^{(t)}, \ldots, l_{nb}^{(t)} \right)$ be based on the clustering of $X_{[t]}$. This means that for points $x_i$ that are resampled as members of $X_{[t]}$, $l_{ib}^{(t)}$ is just the cluster membership indicator, whereas for points $x_i$ not resampled as members of $X_{[t]}$, $l_{ib}^{(t)}$ indicates the cluster on $X_{[t]}$ to which $x_i$ is classified using a suitable supervised classification method (we use the methods listed in \cite{akhanli2020comparing}, extending the original proposal in \cite{fang2012selection}). The Bootstab index is
\begin{displaymath}
I_{Bootstab}(\mathcal{C}) = \frac{1}{B} \sum_{b=1}^{B} \left\{ \frac{1}{n^2} \sum_{i,i'} \left|f_{ii^{'}b}^{(1)} - f_{ii^{'}b}^{(2)}\right| \right\},
\end{displaymath}
\noindent where for $t=1,2$,
\begin{displaymath}
f_{ii^{'}b}^{(t)} = \mathbf{1} \left( l_{i'b}^{(t)}= l_{ib}^{(t)} \right),
\end{displaymath}
\noindent indicating whether $x_i$ and $x_{i'}$ are in or classified to the same cluster based on the clustering of $X_{[t]}$. $I_{Bootstab}$ is the average proportion of pairs that have a different ``co-membership'' status in the clusterings based on the two bootstrap samples. Small values of $I_{Bootstab}$ are better.
\end{description}
The following indexes from \cite{hennig2017cluster} are not involved here, because they seem rather irrelevant to potential uses of clusters of football players: representation of clusters by centroids; small within-cluster gaps; clusters corresponding to density modes; uniform or normal distributional shape of clusters.
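For concreteness, the following sketch (ours; implementations of such indexes are also available in the R package \texttt{fpc}) computes the four simpler indexes from a dissimilarity matrix \texttt{D} and an integer label vector; $I_{Bootstab}$ is omitted as it requires the full resampling and classification scheme.
\begin{verbatim}
import numpy as np
# D: (n, n) numpy array; labels: numpy integer array with values 0, ..., K-1

def ave_within(D, labels):
    total = 0.0
    for k in np.unique(labels):
        idx = np.flatnonzero(labels == k)
        if len(idx) > 1:                 # sum over ordered pairs x_i != x_j
            total += D[np.ix_(idx, idx)].sum() / (len(idx) - 1)
    return total / len(labels)

def separation_index(D, labels, p=0.1):
    border = []
    for k in np.unique(labels):
        idx = np.flatnonzero(labels == k)
        rest = np.flatnonzero(labels != k)
        d_min = D[np.ix_(idx, rest)].min(axis=1)   # d_{k:i}
        m = int(p * len(idx))                      # [p n_k]
        border.extend(np.sort(d_min)[:m])
    return float(np.mean(border))

def pearson_gamma(D, labels):
    iu = np.triu_indices_from(D, k=1)
    c = (labels[:, None] != labels[None, :])[iu].astype(float)
    return float(np.corrcoef(D[iu], c)[0, 1])

def entropy(labels):
    p = np.bincount(labels) / len(labels)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())
\end{verbatim}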
\subsection{Aggregation and calibration of indexes}
\label{subsec:aggregation_indexes}
Following \cite{akhanli2020comparing}, indexes measuring different desirable aspects of a clustering are aggregated computing a weighted mean. For selected indexes $I^*_{1}, \ldots, I^*_{s}$ with weights $w_{1}, \ldots, w_{s} > 0$:
\begin{equation}
\mathcal{A}(\mathcal{C}) = \frac{\sum_{j=1}^{s} w_{j} I^*_{j}(\mathcal{C})}{\sum_{j=1}^{s} w_{j}}.
\label{eq:aggregation_indexes}
\end{equation}
The weights are used to up- or down-weight indexes that are more or less important than the others for the aim of clustering in the situation at hand. This assumes that all involved indexes are calibrated so that their values are comparable and that they point in the same direction, e.g., that large values are better for all of them. The latter can be achieved easily by multiplying those indexes that are better for smaller values by $-1$.
The following approach is used to make the values of the different indexes comparable. We generate a large number $m$
of random clusterings $\mathcal{C}_{R1},\ldots,\mathcal{C}_{Rm}$ on the data. On top of these there are $q$ clusterings produced by regular clustering methods as listed in Section \ref{sec:cmethods}, denoted by ${\mathcal C}_1,\ldots,\mathcal{C}_q$. For given data set $\mathcal{X}$ and index $I$, the clusterings are used to standardise $I$:
\begin{eqnarray*}
m(I,\mathcal{X})&=&\frac{1}{m+q}\left(\sum_{i=1}^m I(\mathcal{C}_{Ri})+ \sum_{i=1}^q I(\mathcal{C}_{i})\right),\\
s^2(I,\mathcal{X})&=& \frac{1}{m+q-1}\left(\sum_{i=1}^m \left[I(\mathcal{C}_{Ri})-
m(I,\mathcal{X})\right]^2+ \sum_{i=1}^q \left[I(\mathcal{C}_{i})-m(I,\mathcal{X})\right]^2\right),\\
I^*(\mathcal{C}_{i})&=&\frac{I(\mathcal{C}_i)-m(I,\mathcal{X})}{s(I,\mathcal{X})},\
i=1,\ldots,q.
\end{eqnarray*}
$I^*$ is therefore scaled so that its values can be interpreted as expressing the quality compared to what the collection of clusterings $\mathcal{C}_{R1},\ldots,\mathcal{C}_{Rm},{\mathcal C}_1,\ldots,\mathcal{C}_q$ achieves on the same data set. The approach depends on the definition of the random clusterings. These should generate enough random variation in order to work as a tool for calibration, but they also need to be reasonable as clusterings, because if all random clusterings are several standard deviations away from the clusterings provided by the standard clustering methods, the exact distance may not be very meaningful.
Four different algorithms are used for generating the random clusterings, ``random $K$-centroids'', ``random nearest neighbour'', ``random farthest neighbour'', and ``random average distances'', for details see \cite{akhanli2020comparing}.
Assume that we are interested in numbers of clusters $K\in\{2,\ldots,K_{max}\}$, and that all clustering methods of interest are applied for all these numbers of clusters. Section \ref{sec:cmethods} lists six clustering methods, and there are four approaches to generate random clusterings. Therefore we compare $q=6(K_{max}-1)$ clusterings from the methods and $m=4B(K_{max}-1)$ random clusterings, where $B=100$ is the number of random clusterings generated by each approach for each $K$.
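As a small illustration, the calibration and the aggregation in (\ref{eq:aggregation_indexes}) can be sketched as follows (our own code, not the implementation used for the results); the two calibration options described below only differ in which clusterings enter the pool:
\begin{verbatim}
import numpy as np

def calibrate(raw_values, larger_is_better=True):
    """raw_values: one index evaluated on all q + m clusterings in the pool."""
    raw = np.asarray(raw_values, dtype=float)
    z = (raw - raw.mean()) / raw.std(ddof=1)   # m(I, X) and s(I, X)
    return z if larger_is_better else -z       # flip sign so larger is better

def composite(calibrated, weights):
    """calibrated: list of I*_j vectors (one per index); weights: w_j."""
    W = np.asarray(weights, dtype=float)
    return (W[:, None] * np.vstack(calibrated)).sum(axis=0) / W.sum()
\end{verbatim}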
Two different ways to calibrate the index values have been proposed in \cite{akhanli2020comparing}:
\begin{description}
\item[C1:] All index values can be calibrated involving clusterings with all numbers of clusters.
\item[C2:] Index values for a given number of clusters $k$ can be calibrated involving only clusterings with $k$ clusters.
\end{description}
In order to understand the implications of these possibilities it is important to note that some of the indexes defined in Section \ref{subsubsec:aspect_cquality} will systematically favour either larger or smaller numbers of clusters. For example, a large number of clusters will make it easier for $I_{ave.within}$ to achieve better values, whereas a smaller number of clusters will make it easier for $I_{sep}$ to achieve better values. Option C1 will not correct potential biases of the collection of involved indexes in favour of larger or smaller numbers of clusters. It is the method of choice if any tendency in favour of larger or smaller numbers of clusters implied by the involved indexes is desired, which is the case if the indexes have been chosen to reflect desirable characteristics of the clusterings regardless of the number of clusters. Option C2 employs the involved indexes relative to the number of clusters, and will favour a clustering that stands out on its specific number of clusters, even if not in absolute terms. When using option C1, the choice of the number of clusters is more directly determined by the chosen indexes, whereas calibration according to option C2 will remove systematic tendencies of the indexes when choosing the number of clusters, and can therefore be seen as a more data driven choice.
\section{Application to the football player data}
\label{sec:valresults}
The clustering methods listed in Section \ref{sec:cmethods} will be applied to the football player data set using a range of numbers of clusters. The quality of the resulting clusterings is measured and compared according to the composite cluster validity index $\mathcal{A}$ as defined in (\ref{eq:aggregation_indexes}). The involved indexes are $I^*_1=I^*_{ave.within}, I^*_2=I^*_{sep}, I^*_3=I^*_{Pearson \Gamma}, I^*_4=I^*_{entropy}, I^*_5=I^*_{Bootstab}$, see Section \ref{subsubsec:aspect_cquality}, where the upper star index means that indexes are calibrated, see Section \ref{subsec:aggregation_indexes}.
Corresponding to the two different aims of clustering as outlined in Section \ref{sec:intro}, two different sets of weights $w_1,\ldots,w_5$ will be used.
\subsection{A data driven composite index}
The first clustering is computed with the aim of giving a raw representation of inherent grouping structure in the data. For this aim we choose calibration strategy C2 from Section \ref{subsec:aggregation_indexes}. A first intuitive choice of weights, given that the five involved indexes all formalise different desirable features of the clustering, would be $w_1=w_2=w_3=w_4=w_5=1$ (W1). Experience with the working of the indexes suggests that $I^*_{sep}$ has a tendency to favour clusterings that isolate small groups or even one point clusters of observations. It even tends to yield better values if the remainder of the observations is left together (as splitting them up will produce weaker separated clusters). Although a certain amount of separation is desirable, it is advisable to downweight $I^*_{sep}$, as it would otherwise go too strongly against the requirements of small within-cluster distances and entropy, which are more important. Similarity of the players in the same cluster is a more elementary feature for interpreting the clusters, and the clustering should differentiate players properly, which would not be the case if their sizes are too imbalanced. For this reason we settle for $\mathcal{A}_{1}(\mathcal{C})$ defined by $w_2=\frac{1}{2},\ w_1=w_3=w_4=w_5=1$ (W2). The optimal clustering, the five cluster solution of Ward's method, is in fact the same for W1 and W2, but the next best clusterings are different, and the best clusterings stick out quite clearly using $\mathcal{A}_{1}(\mathcal{C})$, see Figure \ref{fig:a1} and Table \ref{tab:football_data_validitiy_index_comparison} (note that the listed values of $\mathcal{A}_{1}(\mathcal{C})$ and $\mathcal{A}_{2}(\mathcal{C})$ as defined below can be interpreted in terms of the standard deviations per involved index compared to the set of clusterings used for calibration).
\begin{figure}[tbp]
\centering
\includegraphics[width=0.48\textwidth]{composite_index_a1.jpg}
\includegraphics[width=0.48\textwidth]{composite_index_a1_2_20.jpg}
\caption{Results for football data with calibration index $\mathcal{A}_{1}(C)=I_{ave.wit} + 0.5I_{sep.index} + I_{Pearson \Gamma} + I_{entropy} + I_{Bootstab}$. Left side: full range of the number of clusters; right side: number of clusters in the range $[2:20]$.}
\label{fig:a1}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=1\textwidth]{ward_5.jpg}
\caption{Multidimensional scaling representation of the data with Ward clustering, $K=5$.}
\label{fig:ward_mds_pam_51}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=1\textwidth]{football_mds_ward_5.jpg}
\caption{Multidimensional scaling representation of the data with Ward clustering, $K=5$, with location of some well known players.}
\label{fig:ward_mds_pam_52}
\end{figure}
A visualisation of the clustering using MDS is given in Figures~\ref{fig:ward_mds_pam_51} and \ref{fig:ward_mds_pam_52}. Going through the clusters from left to right in the MDS plot, which corresponds to going from defensive to offensive players, cluster 3 mainly contains centre backs (DC), cluster 2 mainly contains full backs (DR or DL), cluster 1 mainly involves midfielders (M), cluster 4 has attacking midfielders (AM), and cluster 5 mainly contains forwards (FW). Table~\ref{tab:top_level_summary} in the Appendix gives comprehensive cluster-wise statistical summaries for the top level performance variables. Cluster 3 is characterised by strong values in defensive features such as interceptions, clearances, aerial duels and long passes; somewhat surprisingly, these players also take the most free kicks. Cluster 2 players are on average strongest in blocks, and good at crosses compared with the other more defensive clusters; they are weakest at scoring goals. Players in cluster 1 are on average the strongest in tackles and short passes; otherwise their values lie between those of the two more defensive clusters and the two more offensive clusters 4 and 5. Players in cluster 4 support the goalscorers, who are mainly in cluster 5. Cluster 4 players have the most dribbles, crosses, key passes, assists and fouls won, and take the most corners, but they are also dispossessed most often. Cluster 5 leads regarding shots and goals, but these players also commit the most fouls, are caught offside most often, have the most unsuccessful touches, and have clearly the lowest values regarding passes.
The clusters are strongly aligned with the players' positions, but they are not totally dominated by them. For instance, cluster 1 mainly contains defensive midfielders, but some players have different positions, such as Banega. Although he is usually deployed as a central midfielder, he is well capable of playing as an attacking one. Banega was engaged as a defensive midfielder at Boca Juniors, but his technical skills, such as dribbling ability, quick feet, vision and accurate passing, enabled him to play as an attacking midfielder \citep{everbanega1}. His background and playing style placed him in cluster 1. Another example is Carrick, who is a midfielder, but whose style of play relies on defensive qualities such as tackling, stamina and physical attributes \citep{michealcarrick}. These playing characteristics put him into cluster 3, which mainly contains central defenders. Samuel Eto'o is a forward and could as such be expected in cluster 5, but his playing style rather fits cluster 4, which mostly includes attacking midfielders. During Inter's 2009–10 treble-winning season, Eto'o played an important role in the squad and was utilised as a winger or even as an attacking midfielder on the left flank in Mourinho's 4–2–3–1 formation, where he was primarily required to help his team creatively and defensively with his link-up play and work-rate off the ball, which frequently saw him tracking back \citep{samueletoo}.
\subsection{A composite index for smaller clusters based on expert assessments}
The second clustering is computed with the aim of having smaller homogeneous clusters that unite players with very similar characteristics. These can be used by managers for finding players that have a very similar profile to a given player, and for characterising the team composition at a finer scale. Larger numbers of clusters become computationally cumbersome for assessing stability and for the resampling scheme introduced in Section \ref{subsec:aggregation_indexes}. For this reason the maximum investigated number of clusters is 150; we assume that clusters with 10 players on average deliver a fine enough partition. In fact very small clusters with, say, 1-3 players, may not be very useful for the given aim, or only for very exceptional players.
In order to find a suitable weighting for a composite index we conducted a survey of 13 football experts. The idea of the survey was to have several questions, in which alternatives are offered to group a small set of famous players. The experts were then asked to rank these groupings according to plausibility. The groupings were chosen in order to distinguish between different candidate clusterings with between 100 and 150 clusters from the methods listed in Section \ref{sec:cmethods} (single linkage and spectral clustering were not involved due to obvious unsuitability, in line with their low value on the resulting composite index).
More precisely, the multiple choices in each question correspond to different clustering solutions. For the players selected for the survey, these groupings do not change over ranges of numbers of clusters; e.g., for PAM with $K \in \{100,\ldots,113\}$, see Table~\ref{tab:survey_res_cluster_selections}. The respondents answered each question by ranking the offered groupings in order of plausibility from 1 to the number of choices in that question. The questions are presented in the Appendix.
We collaborated with the İstanbul Başakşehir football club. The survey questions were put to 13 football experts including the head coach, the assistant coaches, the football analysts and the scouts of this club, as well as some journalists who are experienced with European football.
For the ranking responses of the survey questions we assigned scores to each rank in each question. The score assignment was made in a balanced way because each question has a different number of possible choices; Table~\ref{tab:point_assignment} shows the assignment of the scores. The idea behind the scoring system is that a question with five choices gives more differentiated information, so the score difference between the first rank and the last rank is bigger than for questions with fewer choices. Conversely, the difference between the first and second rank should be bigger for a lower number of choices: with five choices the quality of the best two is more likely assessed as similar, since both are ranked ahead of further choices, whereas with two choices this is not the case. Apart from these considerations, as we were interested in the comparison between all choices by the experts rather than focusing on their favourites, score differences between adjacent ranks have been chosen as constant given the same number of choices in the question.
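As an illustration, the following sketch (Python; the question layout and rankings are hypothetical, only the score table follows the score assignment below) shows how per-question rankings are turned into sum scores per selection:
\begin{verbatim}
# Balanced score table, indexed by the number of choices in a question.
SCORES = {5: [30, 24, 18, 12, 6], 3: [30, 20, 10], 2: [30, 15]}

def add_question(totals, choice_to_selection, ranking):
    """ranking[i]: rank (1 = most plausible) the expert gave to choice i;
    choice_to_selection[i]: the clustering selection behind choice i."""
    table = SCORES[len(ranking)]
    for i, rank in enumerate(ranking):
        sel = choice_to_selection[i]
        totals[sel] = totals.get(sel, 0) + table[rank - 1]

totals = {}
# Hypothetical 3-choice question backed by Selections 1, 5 and 7:
add_question(totals, [1, 5, 7], ranking=[2, 1, 3])
# Hypothetical 2-choice question backed by Selections 4 and 8:
add_question(totals, [4, 8], ranking=[1, 2])
print(totals)  # {1: 20, 5: 30, 7: 10, 4: 30, 8: 15}
\end{verbatim}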
\begin{table}[htbp]
\caption{Clustering selections with the clustering algorithms and their number of clusters range }
\begin{tabular}{l | ll}
\thickhline
\textbf{Selections} & \textbf{Clustering Algorithms} & \textbf{Number of clusters range} \\
\hline
Selection 1 & PAM & $K \sim [100:113]$ \\
Selection 2 & PAM & $K \sim [114:118]$ \\
Selection 3 & PAM & $K \sim [119:129, 134:136, 147:150]$ \\
Selection 4 & PAM & $K \sim [130:133, 137:146]$ \\
Selection 5 & Ward's method & $K \sim [100:147]$ \\
Selection 6 & Ward's method & $K \sim [148:150]$ \\
Selection 7 & Complete linkage & $K \sim [100:150]$ \\
Selection 8 & Average linkage & $K \sim [100:150]$ \\
\thickhline
\end{tabular}
\label{tab:survey_res_cluster_selections}
\end{table}
\begin{table}[htbp]
\caption{Score assignment for the survey questions}
\begin{tabular}{c | c c c c c}
\thickhline
\textbf{The selection of multiple choices} & \textbf{1. Rank} & \textbf{2. Rank} & \textbf{3. Rank} & \textbf{4. Rank} & \textbf{5. Rank} \\
\hline
5 choices & 30 & 24 & 18 & 12 & 6 \\
3 choices & 30 & 20 & 10 & - & - \\
2 choices & 30 & 15 & - & - & - \\
\thickhline
\end{tabular}
\label{tab:point_assignment}
\end{table}
\begin{table}[tbp]
\caption{Total scores of the seven survey questions for different clustering selections from each of the 13 football experts.}
\begin{tabular}{l | cccccccc}
\thickhline
\multirow{2}{*}{Respondents} & \multicolumn{8}{c}{\underline{Selection}} \\
& 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\
\hline
Head coach & 138 & 138 & 162 & 162 & 148 & 160 & 109 & 125 \\
Assistant coach - 1 & 138 & 138 & 144 & 144 & 144 & 166 & 109 & 137 \\
Assistant coach - 2 & 125 & 115 & 127 & 137 & 109 & 121 & 136 & 134 \\
Goalkeeping coach & 148 & 118 & 130 & 160 & 152 & 176 & 109 & 125 \\
Individual performance coach & 166 & 136 & 148 & 178 & 146 & 152 & 109 & 119 \\
Physical performance coach & 159 & 149 & 119 & 129 & 125 & 137 & 116 & 168 \\
Football Analyst & 132 & 132 & 144 & 144 & 166 & 154 & 123 & 139 \\
Chief Scout & 176 & 166 & 166 & 176 & 134 & 128 & 117 & 155 \\
Scout - 1 & 144 & 144 & 150 & 150 & 154 & 148 & 99 & 97 \\
Scout - 2 & 113 & 143 & 155 & 125 & 133 & 145 & 142 & 168 \\
Scout - 3 & 148 & 118 & 100 & 130 & 132 & 126 & 115 & 129 \\
Journalist - 1 & 129 & 149 & 161 & 141 & 95 & 123 & 150 & 156 \\
Journalist - 2 & 154 & 134 & 116 & 166 & 136 & 160 & 117 & 145 \\
\hline
TOTAL & 1870 & 1780 & 1822 & 1942 & 1774 & 1896 & 1531 & 1797 \\
\thickhline
\end{tabular}
\label{tab:survey_res}
\end{table}
Table~\ref{tab:survey_res} shows the result of the survey based on the responses from each expert. It shows substantial variation between the experts. As a validation, we conducted a test of the null hypothesis $H_0$ of randomness of the experts' assessments. The $H_0$ was that the experts assigned ranks to the alternative choices randomly and independently of each other. The test statistic was the resulting variance of the sum scores of the eight selections listed in Table \ref{tab:survey_res_cluster_selections}. In case that there is some agreement among the experts about better and worse selections, the variance of the sum scores should be large, as higher ratings will concentrate on the selections agreed as better, and lower ratings will concentrate on the selections agreed as worse. The test is therefore one-sided. The distribution of the test statistic under $H_0$ was approximated by a Monte Carlo simulation of 2000 data sets
\citep{Marriott79}, in which for each expert random rankings for all the survey questions were drawn independently. This yielded $p=0.048$, just about significant at the 5\% level. Although not particularly convincing, this at least indicates some agreement between the experts.
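A minimal sketch of this randomness test (Python): random rankings are drawn per expert and question, converted into sum scores per selection with the same scoring scheme, and the variance of these sum scores is compared with the observed variance of the TOTAL row of Table~\ref{tab:survey_res}. The question layout passed in (number of choices and mapping of choices to selection indices) is a placeholder for the actual survey layout given in the Appendix.
\begin{verbatim}
import random
import statistics

SCORES = {5: [30, 24, 18, 12, 6], 3: [30, 20, 10], 2: [30, 15]}

def simulated_variance(questions, n_experts, n_selections):
    """questions: list of (n_choices, choice_to_selection) per question,
    with selections indexed 0..n_selections-1."""
    totals = [0.0] * n_selections
    for _ in range(n_experts):
        for n_choices, choice_to_sel in questions:
            ranks = list(range(1, n_choices + 1))
            random.shuffle(ranks)  # H0: random, independent rankings
            for choice, rank in enumerate(ranks):
                totals[choice_to_sel[choice]] += SCORES[n_choices][rank - 1]
    return statistics.pvariance(totals)

def randomness_p_value(observed_var, questions, n_experts=13,
                       n_selections=8, n_sim=2000):
    sims = [simulated_variance(questions, n_experts, n_selections)
            for _ in range(n_sim)]
    # one-sided: a large variance of the sum scores indicates agreement
    return sum(s >= observed_var for s in sims) / n_sim
\end{verbatim}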
According to the survey, the clusterings of Selection 4 are best, but due to the considerable disagreement between the experts and the limited coverage of the overall clusterings by the survey questions, we use the survey result in a different way rather than just taking Selection 4 as optimal. Instead, we choose a weighting for a composite index $\mathcal{A}_{2}$ that optimises the Spearman correlation between the value of $\mathcal{A}_{2}(\mathcal{C})$, for each selection maximised over the clusterings in that selection, and the selection's sum scores from the survey as listed in the last line of Table \ref{tab:survey_res}. We believe that the resulting composite index represents the experts' assessments better than just picking a clustering from Selection 4, particularly if applied to future data of the same kind, because it allows us to generalise the assessments beyond the limited set of players used in the survey questions.
Although we did not run a formal optimisation, the best value of 0.524 that we found experimentally was achieved for $w_1=w_2=w_3=0,\ w_4=0.5,\ w_5=1$. $I^*_{Bootstab}$ is the only index to favour PAM solutions with large $K$, and these are generally ranked highly by the sum scores, so it is clear that $w_5$, the weight for $I^*_{Bootstab}$, must be high. In fact, using $I^*_{Bootstab}$ alone achieves the same Spearman correlation value of 0.524, but if $I^*_{Bootstab}$ is used on its own, useless single linkage solutions with 2 and 3 clusters are rated as better than the best PAM solutions with $K>100$, whereas the composite index with $w_4=0.5$ makes the latter optimal over the whole range of $K$. Spearman rather than Pearson correlation was used, because the Pearson correlation is dominated too strongly by the outlyingly bad rating for Selection 7. The majority of indexes, including all indexes proposed in the literature for stand-alone use presented in Table~\ref{tab:football_data_validitiy_index_comparison} (which includes the best results found by $\mathcal{A}_{2}(\mathcal{C})$), yield negative Spearman correlations with the experts' sum scores; entropy on its own achieves a value of 0.214.
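Although the weighting above was found by informal experimentation, a systematic search could be sketched as follows (Python with NumPy/SciPy; the index matrix, the selection membership, and the expert sum scores are placeholders for the quantities described in the text). On the data analysed here, such a search should recover the reported optimum of 0.524 at $w_4=0.5,\ w_5=1$.
\begin{verbatim}
import itertools
import numpy as np
from scipy.stats import spearmanr

def search_weights(index_values, selections, expert_scores, grid=(0, 0.5, 1)):
    """index_values: (n_clusterings x 5) matrix of calibrated index values;
    selections: list of integer index arrays, one per survey selection,
    giving the clusterings that belong to it; expert_scores: sum score per
    selection (last line of the survey result table)."""
    best_rho, best_w = -np.inf, None
    for w in itertools.product(grid, repeat=5):
        if not any(w):
            continue
        composite = index_values @ np.array(w)   # composite index values
        per_selection = [composite[idx].max() for idx in selections]
        rho, _ = spearmanr(per_selection, expert_scores)
        if rho > best_rho:
            best_rho, best_w = rho, w
    return best_rho, best_w
\end{verbatim}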
\begin{figure}[tbp]
\centering
\includegraphics[width=0.48\textwidth]{composite_index_a2.jpg}
\includegraphics[width=0.48\textwidth]{composite_index_a2_100_150.jpg}
\caption{Results for football data with the calibration index $\mathcal{A}_2(\mathcal{C})=0.5I_{entropy} + I_{Bootstab}$. Left side: full range of the number of clusters; right side: number of clusters in the range $[100:150]$.}
\label{fig:a2}
\end{figure}
According to $\mathcal{A}_{2}(\mathcal{C})$ with weights as above, the best clustering is PAM with $K=150$ from Selection 3. This has an ARI of 0.924 when compared with the PAM solution with $K=146$, which belongs to Selection 4, optimal according to the expert's sum score, so these clusterings are very similar (this is the highest ARI value among the ARIs between the best two clusterings of any two Selections).
\begin{figure}[tbp]
\centering
\includegraphics[width=1\textwidth]{football_mds_pam_150.jpg}
\caption{MDS plot of the football players data with three complete clusters of the PAM solution with $K=150$.}
\label{fig:football_mds_pam_150}
\end{figure}
Interpreting all 150 clusters is infeasible here, so we focus on just three clusters, see Figure~\ref{fig:football_mds_pam_150}.
The most obvious result is that some of the most well known forward players (Messi, Ronaldo, Neymar and Robben) are grouped in one cluster, no. 127. In Figure~\ref{fig:football_mds_pam_150}, these players are well distanced from the other players. They stand out especially in attacking features, such as shot, goal, dribble, key pass, but are also, atypically for general forward players, strong at short passes, see Table~\ref{tab:top_level_summary} in the Appendix. The PAM objective function allows grouping them together despite a considerable within-cluster variance, which is better in terms of entropy than isolating them individually as ``outliers'', as happened in some other clusterings with large $K$.
Cluster 12 has typical central defenders who are skilled in variables such as clearance and aerial duels, while the players in cluster 11 are strikers who are well characterised by seemingly more negative aspects such as offsides, dispossession and bad control. Regarding positive characteristics, they are strong regarding shots and goals, but not as strong as cluster 127. Compared with cluster 127, they are stronger in aerial duels and clearances, but despite well reputed players being in this cluster, it can be clearly seen that they are not as outstanding as those in cluster 127.
Finding the optimal clustering at the largest considered number of clusters $K=150$ suggests that even better results may be achieved at even larger $K$. Ultimately we do not believe that any single clustering, particularly at such fine granularity, can be justified as the objectively best one. $K=150$ is probably large enough in practice, but in principle, accepting a high computational burden, the methodology can be extended to larger $K$.
\subsection{Other indexes}
\label{sec:clustering_comparison}
On top of the results of $\mathcal{A}_1(\mathcal{C})$ and $\mathcal{A}_2(\mathcal{C})$, Table~\ref{tab:football_data_validitiy_index_comparison} also shows the best clusterings according to some validity indexes from the literature that are meant to measure the general quality of a clustering, as mentioned in Section \ref{sec:indexes}. The $K=2$ solutions for single linkage and spectral clustering marked as optimal by ASW, CH, Pearson$\Gamma$, and Bootstab, contain a very small cluster with outstanding players and do not differentiate between the vast majority of players. The complete linkage solution that is optimal according to Dunn's index belongs to Selection 7 that comes out worst in the survey of football experts, see Table \ref{tab:survey_res}. CVNN (run with tuning parameter $\kappa=10$, see \cite{liu2013understanding}) achieves best results for Ward's method with $K=4$ and $K=5$, which is reasonably in line with our $\mathcal{A}_1(\mathcal{C})$.
\begin{table}[tbp]
\tiny
\caption{Clustering validity index results for the football players data; note that for Bootstab and CVNN smaller values are better.}
\begin{tabular}{l|ccccc}
\multirow{2}{*}{\textbf{Validity Index}} & \multicolumn{5}{c}{\textbf{\underline{Best clusterings in order ($K$) with validity index values}}}\\
& \textbf{First} & \textbf{Second} & \textbf{Third}& \textbf{Fourth} & \textbf{Fifth} \\[0.25em]
\hline
$\mathcal{A}_{1}(\mathcal{C})$ & $Ward \,(5)$ & $Ward \,(6)$ & $PAM \,(6)$ & $PAM \,(5)$ & $Ward \,(4)$ \\
& 1.386 & 1.336 & 1.216 & 1.172 & 1.081 \\[0.25em]
$\mathcal{A}_{2}(\mathcal{C})$ & $PAM \,(150)$ & $PAM \,(149)$ & $PAM \,(148)$ & $PAM \,(147)$ & $PAM \,(146)$ \\
& 1.025 & 1.021 & 1.020 & 1.019 & 1.017 \\[0.25em]
\hline
$ASW$ & $Spectral \,(2)$ & $Average \,(2)$ & $Ward \,(2)$ & $PAM \,(2)$ & $Complete \,(2)$ \\
& $0.345$ & $0.344$ & $0.342$ & $0.340$ & $0.340$ \\[0.25em]
$CH$ & $Spectral \,(2)$ & $PAM \,(2)$ & $Complete \,(2)$ & $Average \,(2)$ & $Ward \,(2)$ \\
& $1038$ & $1027$ & $1013$ & $1006$ & $967$ \\[0.25em]
$Dunn$ & $Complete \,(145)$ & $Complete \,(144)$ & $Complete \,(143)$ & $Complete \,(142)$ & $Complete \,(141)$ \\
& $0.371$ & $0.371$ & $0.371$ & $0.370$ & $0.368$ \\[0.25em]
$Pearson \, \Gamma$ & $Spectral \,(2)$ & $Average \,(2)$ & $Ward \,(2)$ & $Average \,(4)$ & $Complete \,(2)$ \\
& $0.695$ & $0.693$ & $0.693$ & $0.692$ & $0.687$ \\[0.25em]
$CVNN$ & $Ward \,(4)$ & $Ward \,(5)$ & $PAM \,(4)$ & $Ward \,(3)$ & $PAM \,(5)$ \\
& $0.935$ & $0.965$ & $0.976$ & $0.988$ & $1.034$ \\[0.25em]
$Bootstab$ & $Single \,(2)$ & $Single \,(3)$ & $Single \,(4)$ & $Single \,(5)$ & $PAM \,(150)$ \\[0.25em]
& $0.0011$ & $0.0021$ & $0.0025$ & $0.0039$ & $0.0039$ \\[0.25em]
\end{tabular}
\label{tab:football_data_validitiy_index_comparison}
\end{table}
\section{Conclusion}
\label{sec:conclusion}
We computed two different clusterings of football player performance data from the 2014-15 season. We believe that the considerations presented here are worthwhile also for analysing new data, in particular regarding dissimilarity construction, measuring desirable characteristics of a clustering, and using such measurement to select a specific clustering. Results from the approach taken here look more convincing than the assessments given by existing indexes from the literature that attempt to quantify clustering quality in a one-dimensional manner. The index combination from calibrated average within-cluster dissimilarities, Pearson-$\Gamma$, entropy, Bootstab stability, and (with half the weight) separation may generally be good for balancing within-cluster homogeneity and ``natural'' separation as far as it occurs in the data in situations where for interpretative reasons useful clusters should have roughly the same size. The focus of this combination is a bit stronger on within-cluster homogeneity than on separation. Chances are that natural variation between human beings implies that athletes' performance data will not normally be characterised by strong separation between different groups, particularly not if such groups are not very homogeneous. The involvement of stability should make sure that the found clusters are not spurious.
The second combination of indexes used here, Bootstab with full weight and entropy with half weight, was motivated by best agreement with football experts' assessments based on the specific data set analysed here. One may wonder whether this is a good combination also for different data for finding a clustering on a finer scale, i.e., with more and smaller clusters. Entropy is in all likelihood important for the use of such a clustering; endemic occurrence of clusters with one or two players should be avoided. Stability is certainly desirable in itself; it is also strongly correlated (0.629, over all involved clusterings) with low average within-cluster dissimilarities, so it carries some information on within-cluster homogeneity, too. Strong between-cluster separation in absolute terms can hardly be expected with such a large number of clusters; these clusterings have a pragmatic use rather than referring to essential underlying differences between them. Although it is conceivable that this index combination works well also for new data that are similar in some sense, a wider investigation into which characteristics of clusterings correspond to expert assessments of their use and plausibility would surely be of interest.
The proposed methodology is implemented in the function clusterbenchstats in the R-package fpc \citep{fpc}.
\subsection*{Acknowledgments}
We are very thankful to İstanbul Başakşehir Football Club for giving us the opportunity to conduct this survey and for providing us with a network of other football experts, such as journalists.
\subsection*{Funding}
The work of the second author was supported by EPSRC grant EP/K033972/1.
\section{Introduction}\label{sec:intro}
\begin{figure}[bt!]
\centering
\includegraphics[width=\linewidth]{img/annotation_examples.pdf}
\caption{\acrshort{dataset_provider}: Pass annotations from a \textcolor{set1_red}{data provider} vs. an \textcolor{set1_blue}{expert}. \acrshort{dataset_soccer} and \acrshort{dataset_handball}: Example annotations from experienced annotators~(\textcolor{set1_red}{red}, \textcolor{set1_blue}{blue}, \textcolor{set1_green}{green}) using our proposed taxonomy: Despite uncertainties regarding the concrete event type, the annotated timestamp often aligns. The mapping back to shared characteristics such as the motoric skill (e.g., ball release), leads to higher levels of agreement.
}
\label{fig:example_annotations}
\end{figure}
Events play an important role for the (automatic) interpretation of complex invasion games like soccer, handball, hockey, or basketball.
Over the last years, three fundamental perspectives emerged with regard to the analysis of sports games, which all value different characteristics of the respective sports:
(1)~The \emph{sports science} domain demands semantically precise descriptions of the individual developments to analyze success factors~\cite{lamas2014invasion}.
(2)~The \emph{machine learning} community aims to find automatic solutions for specific tasks~(often supervised).
(3)~\emph{Practitioners}, i.e., coaches or analysts, show little interest in the description of the sport since they are rather interested in the immediate impact of specific modifications of training or tactics.
While the general objective to understand and exploit the underlying concepts in the sports is common to all perspectives, synergistic effects are barely observed~\cite{rein2016big}.
Descriptive statistics such as possession or shot frequency rely on events that occur on the pitch.
However, collecting semantic and (spatio-)~temporal properties for events during matches is non-trivial, highly dependent on the underlying definitions, and is, in the case of (accurate)~manual annotations, very time-consuming and expensive~\cite{pappalardo2019public}.
While it is a common practice of data providers~\cite{wyscout, opta, stats} for (certain)~matches in professional sport to delegate the annotation of events to human annotators,
various approaches have been suggested to automate the process.
In this respect, the automatic detection of (spatio-)~temporal events has been addressed for (broadcast)~video data~\cite{giancola2018soccernet, giancola2021temporally, sarkar2019generation, vats2020event, sanford2020group, sorano2020automatic, hu2020hfnet, yu2019soccer, jiang2016automatic, liu2017soccer, tomei2021rms, karimi2021soccer, mahaseni2021spotting} and positional data~\cite{sanford2020group, xie2020passvizor, khaustov2020recognizing, chacoma2020modeling, richly2016recognizing, richly2017utilizing, morra2020slicing}.
The \textit{temporal event} localization is the task of predicting a semantic label of an event and assigning its start and end time, commonly approached in the domain of video understanding~\cite{caba2015activitynet}.
Despite a general success in other domains~\cite{lin2019bmn, feichtenhofer2019slowfast, caba2015activitynet, nguyen2018weakly}, it has already been observed that this definition can lead to ambiguous boundaries~\cite{sigurdsson2017actions}.
Sports events can also be characterized by a single representative time stamp~(\emph{event spotting}~\cite{giancola2018soccernet}) and recently there has been success in spotting \textit{low-level} events~\cite{giancola2021temporally, deliege2020soccernet, cioppa2020context} in soccer videos such as goals and cards.
In contrast, these data acquisition approaches lack more complex, ambiguous, and more frequent events like passes or dribblings that are not covered by existing publicly available~(video) datasets~\cite{feng2020sset, deliege2020soccernet}. Indeed, some definitions of \textit{high-level} events in soccer are provided in the literature~\cite{kim2019attacking, fernandes2019design}, but there is no global annotation scheme or even taxonomy that covers various events that can be evaluated with few meaningful metrics.
Although there are related events in other invasion games such as handball, neither a set of \textit{low-level} and \textit{high-level} events nor a taxonomy are defined in this domain.
A shared property for both tasks~(spotting and localization with start and end), regardless of the underlying event complexity, event property (temporal, spatial, or semantic), or data modality~(video or positional data), is the need for labeled event datasets to train and especially to evaluate machine learning approaches.
It is common to integrate~\cite{sanford2020group, fernandez2020soccermap} private event data from data-providers~(e.g., from \cite{wyscout, opta, stats}) of unknown~\cite{liu2013reliability} or moderate~(Figure~\ref{fig:example_annotations}~\acrshort{dataset_provider} as an example) quality.
In summary, we observe a lack of a common consensus for the majority of events in the sport.
Neither precise definitions of individual events nor the temporal annotation or evaluation process are consistent.
Publicly available datasets are uni-modal, focus on soccer, and often consider only a small subset of events that does not reflect the entire match.
These inconsistencies make it difficult for all three aforementioned perspectives to assess the performance of automatic systems and to identify state-of-the-art approaches for the real-world task of fine-grained and ball-centered event spotting from multimodal data sources.
In this paper, we target the aforementioned problems and present several contributions: 1) We propose a unified taxonomy for \textit{low-level} and \textit{high-level} ball-centered events in invasion games and refine it exemplarily to the specific requirements of soccer and handball. This is practicable as most invasion games involve various shared motoric tasks~(e.g., a ball catch), which are fundamental to describe semantic concepts~(involving intention and context from the game).
Hence, it incorporates various base events relating to \textit{game status}, \textit{ball possession}, \textit{ball release}, and \textit{ball reception}.
2) We release two multimodal benchmark datasets~(video and audio data for soccer~(\acrshort{dataset_soccer}), synchronized video, audio, and positional data for handball~(\acrshort{dataset_handball})) with gold-standard event annotations for a total of 125 minutes of playing time per dataset.
These datasets contain frame-accurate manual annotations by domain experts performed on the videos based on the proposed taxonomy~(see Figure~\ref{fig:example_annotations}).
In addition, appropriate metrics suitable for both benchmarking and useful interpretation of the results are reported.
Experiments on the human performance show the strengths of the \textit{hierarchical} structure, the successful applicability to two invasion games, and reveal the expected performance of automatic models for certain events.
With the increasing complexity of an event~(generally deeper in the \textit{hierarchy}), ambiguous and differing subjective judgments in the annotation process increase.
A case study demonstrates that the annotations from data providers should be reviewed carefully depending on the application.
3) Lastly, an \emph{I3D}~\cite{carreira2017quo} model for video chunk classification is adapted to the spotting task using a sliding window and non-maximum suppression, and applied as a baseline.
The remainder of this paper is organized as follows.
In Section~\ref{sec:rw}, existing definitions for several events and publicly available datasets are reviewed. The proposed universal taxonomy is presented in Section~\ref{sec:taxonomy}.
Section~\ref{sec:datasets} contains a description of the creation of the datasets along with the definition of evaluation metrics, while Section~\ref{sec:experiments} evaluates the proposed taxonomy, datasets, and baseline concerning annotation quality and uncertainty of specific events.
Section~\ref{sec:conclusion} concludes the paper and outlines areas of future work.
\section{Related Work}\label{sec:rw}
We discuss related work on events in invasion games~(Section~\ref{rw:event_types}) and review existing datasets~(Section~\ref{rw:datasets}).
\subsection{Events Covered in Various Invasion Games}\label{rw:event_types}
Common movement patterns have been identified in the analysis of spatio-temporal data~\cite{dodge2008towards} such as concurrence or coincidence.
While these concepts are generally applicable to invasion games,
our taxonomy and datasets focus on single actions of individuals~(players), which do not require a complex description of~(team)~movement patterns.
For the sport of handball, studies on the description of game situations are rare.
However, the influence of commonly understood concepts, such as shots and rebounds has been investigated~\cite{burger2013analysis}.
In contrast, for soccer, the description of specific game situations has been approached. \citet{kim2019attacking} focus on the attacking process in soccer.
\citet{fernandes2019design} introduce an observational instrument for defensive possessions. The detailed annotation scheme includes 14 criteria with 106 categories
and achieved sufficient agreement in expert studies. However, the obtained semantic description and subjective rating of defensive possessions largely differ from our fine-grained objective approach.
A common practice for soccer matches in top-flight leagues is to (manually) capture \textit{event data}~\cite{opta, pappalardo2019public}.
The acquired data describe the on-ball events on the pitch in terms of soccer-specific events with individual attributes.
While, in general, the inter-annotator agreement for
this kind of data has been validated~\cite{liu2013reliability}, especially the \textit{high-level} descriptions of events are prone to errors.
\citet{deliege2020soccernet} consider 17 well-defined categories which describe meta events, on-ball events, and semantic events during a soccer match. However, due to the focus of understanding a holistic video rather than a played soccer match, only 4 of the 17 event types describe on-ball actions, while more complex events, i.e., passes, are not considered.
\citet{sanford2020group} spot \textit{passes}, \textit{shots}, and \textit{receptions} in soccer using both positional and video data. However, no information regarding definitions and labels is provided.
\subsection{Datasets}\label{rw:datasets}
To the best of our knowledge, there is no publicly available real-world dataset including positional data, video, and corresponding events, not to mention shared events across several sports.
The majority of datasets for event detection rely on video data and an individual sport domain.
In this context, \emph{SoccerNetV2}~\cite{deliege2020soccernet, giancola2018soccernet} was released, which is a large-scale action spotting dataset for soccer videos.
However, the focus is on spotting general and rarely occurring events such as \textit{goals}, \textit{shots}, or cards.
\emph{SoccerDB}~\cite{jiang2020soccerdb} and \emph{SSET}~\cite{feng2020sset} cover a similar set of general events. Even though they relate the events to well-defined soccer rules, they only annotate temporal boundaries.
\citet{pappalardo2019public} present a large event dataset, but it lacks definitions of individual events or any other data such as associated videos.
For basketball, \citet{Ramanathan_2016_CVPR} generated a dataset comprising five types of \textit{shots}, their related outcome (successful), and the \emph{steal event} by using Amazon Mechanical Turk. Here, the annotators were asked to identify the end-point of these events since the definition of the start-point is not clear.
The \emph{SoccER} dataset~\cite{morra2020soccer} contains synthetically generated data~(positional data, video, and events) from a game engine.
The volleyball dataset~\cite{ibrahim2016hierarchical} contains short clips with eight group activity labels such as right set or right spike where the center frame of each clip is annotated with per-player actions like standing or blocking.
To summarize Section~\ref{rw:event_types} and~\ref{rw:datasets}, many studies consider only a subset of relevant~(\textit{low} and \textit{high-level}) events to describe a match.
The quality of both unavailable and available datasets is limited due to missing general definitions~(even spotting vs. duration) apart from well-defined~(per rule) events.
\section{General Taxonomy Design}\label{sec:taxonomy}
\begin{figure*}[tbh]
\centering
\includegraphics[width=\textwidth]{img/general_taxonomy_with_features.pdf}
\caption{Base taxonomy for invasion games and example refinements for soccer and handball. Starting with basic motoric \emph{individual ball events}, the finer the hierarchy level, the more semantic and contextual information is required.}
\label{fig:taxonomy}
\end{figure*}
In this section, we construct a unified taxonomy for invasion games that can be refined for individual sports and requirements~(Figure~\ref{fig:taxonomy}).
Initially, targeted sports and background from a sports science perspective are presented in Section~\ref{subsec:sports}. Preliminaries and requirements for our general taxonomy are listed in Section~\ref{subsec:characteristics}.
Finally, the proposed taxonomy, including concrete event types and design decisions, is addressed in Section~\ref{subsec:categories}.
\subsection{Targeted Sports \& Background}\label{subsec:sports}
Sports games share common characteristics and can be categorized in groups~(\emph{family resemblances})~\cite{Wittgenstein1999}.
Based on that idea,~\cite{Read1997, Hughes2002} structured sports games into three families: (1)~Net and wall games, which are score dependent (e.g., tennis, squash, volleyball), (2)~striking/fielding games, which are innings dependent (e.g., cricket, baseball), and (3)~invasion games, which are time-dependent (e.g., soccer, handball, basketball, rugby).
In this paper, we focus on the latter.
Invasion games all share a variation of the same objective: to send or carry an object (e.g., ball, frisbee, puck) to a specific target (e.g., in-goal or basket) and prevent the opposing team from reaching the same goal~\cite{Read1997}. The team that reaches that goal more often in a given time wins. Hence, we argue that the structure of our taxonomy can be applied to all invasion games with a sport-specific refinement of the base events. Please note that we refer to the object in the remainder of this work as a ball for clarity.
Basic motor skills required in all invasion games involve controlled receiving of, traveling with, and sending of the ball~\cite{Roth2015}, as well as intercepting the ball and challenging the player in possession~\cite{Read1997}.
Although different invasion games use different handling techniques, they all share the ball as an underlying characteristic.
Thus, we find that ball events are central for describing invasion games. Moreover, since complex sport-science-specific events such as counterattacks, possession play, tactical fouls, or any group activities like pressing are rather sport-specific, we focus on on-ball events in this paper and leave non-on-ball events to future work.
\subsection{Characteristics \& Unification of Perspective}\label{subsec:characteristics}
We iteratively design the base taxonomy for invasion games to meet certain standards. To provide insights into this process, the following section details the underlying objectives.
\paragraph{Characteristics}
For the design of a unified taxonomy for invasion games, we view specific characteristics as favorable.
~(1)~A \textit{hierarchical} architecture, in general, is a prerequisite for a clear, holistic structure. We aim to incorporate a format that represents a broad (general) description of events at the highest level and increases in degree of detail when moving downwards in the \textit{hierarchy}. This enables, for instance, an uncomplicated integration of individual annotations with varying degrees of detail, as different annotated events (e.g., \textit{shot} and \textit{pass}) can fall back on their common property (here \emph{intentional ball release}) during evaluation.
However, please note that there exists no cross-relation in the degree of detail between different paths~(colors in Figure~\ref{fig:taxonomy}). Events from the same \textit{hierarchical} level may obtain different degrees of detail when they belong to different paths.
~(2)~We target our taxonomy to be \textit{minimal} and \textit{non-redundant} since these characteristics require individual categories to be well-defined and clearly distinguishable from others. In this context, a specific event in the match should not relate to more than one annotation category to support a clear, unambiguous description of the match.
~(3)~The taxonomy needs to enable an \emph{exact} description of the match. While the previously discussed \textit{minimal}, \textit{non-redundant} design is generally important, an excessive focus on these properties may disallow the description of the \textit{exact} developments in a match. Thus, any neglecting or aggregation of individual categories is carefully considered in the design of the taxonomy.
~(4)~Finally, we aim for a \textit{modular, expandable} taxonomy. This allows for a detailed examination of specific sports and concepts while still ensuring a globally valid annotation that is comparable (and compatible) with annotations regarding different sports and concepts.
\paragraph{Unification of Perspectives}
The targeted invasion games can generally be perceived from a variety of different perspectives. A mathematical view corresponds to a description of moving objects (players and the ball) with occasional stoppage and object resets (set-pieces reset the ball). On the other hand, a sport-scientist view interprets more complex concepts such as the mechanics of different actions or the semantics of specific situations of play.
To unify these perceptions into a global concept, different approaches such as the \emph{SportsML} language~\cite{SportsML} or \emph{SPADL}~\cite{decroos2019actions} previously targeted a universal description of the match. However, given that the formats originate from a journalist perspective~\cite{SportsML} or provide an integration tool~\cite{decroos2019actions} for data from event providers~(see Section~\ref{exp:data_prov_quality}), they do not pursue the definition of precise annotation guidelines.
In contrast, we aim to provide a universal and \textit{hierarchical} base taxonomy that can be utilized by different groups and communities for the targeted invasion games.
\subsection{Annotation Categories}\label{subsec:categories}
The iteratively derived base taxonomy for invasion games is illustrated in Figure~\ref{fig:taxonomy}. Its
categories and attributes comply with the previously discussed characteristics~(see Section~\ref{subsec:characteristics}) and are outlined in this section~(see Appendix for more detailed definitions).
\subsubsection{Game Status Changing Event}
We initialize the first path in our \textit{base taxonomy} such that it corresponds with the most elemental properties of invasion games. Thus, we avoid integrating any semantic information (tactics, mechanics) and regard the so-called, \textit{game status} which follows fixed game rules~\cite{IFAB, IHF}.
The \textit{game status} provides a deterministic division of any point in the match into either active (running) or inactive (paused) play.
In the sense of a \textit{minimal} taxonomy, we find that an \textit{exact} description of the current \textit{game status} is implicitly included by annotating only those events which cause changes to the current \textit{game status}~(see yellow fields in Figure~\ref{fig:taxonomy}).
Moreover, in all targeted invasion games, a shift of the \textit{game status} from active to inactive only occurs along with a rule-based \textit{referee's decision} (foul, ball moving out-of-bounds, game end, or sport-specific stoppage of play), while a shift from inactive to active only occurs along with a \textit{static-ball action} (game start, ball in field, after foul, or sport-specific resumption of play). Thus, we discriminate between these two specifications in the path and maintain this \textit{hierarchical} structure.
\subsubsection{Ball Possession}
The following paths in our taxonomy comprise additional semantic context to enable a more detailed assessment of individual actions and situations. In this regard, we consider the concept of \textit{possession} (see purple field in Figure~\ref{fig:taxonomy}) as defined by~\citet{link2017individual}. Albeit generally not included in the set of rules of all targeted invasion games (exceptions, i.e., for basketball), the assignment of a team's \textit{possession} is a common practice, and its importance is indicated, for instance, by the large focus of the sports science community~\cite{camerino2012dynamics, casal2017possession, jones2004possession, lago2010game}.
Similar to the \textit{game status}, we only consider the changes to the \textit{possession} with respect to a \textit{minimal} design.
\subsubsection{Individual Ball Events}
Related to the concept of individual ball \textit{possession} are \textit{individual ball events}, defined as events within the sphere of an individual \textit{possession}~\cite{link2017individual}~(see green fields in Figure~\ref{fig:taxonomy}).
Along with the definition for an individual \textit{possession}, \citet{link2017individual} define individual \textit{ball control} as a concept requiring a certain amount of motoric skill.
This also involves a specific start and end time for \textit{ball control} which already enables a more precise examination of \textit{individual ball event}.
At the start point of individual \textit{possession}, the respective player gains (some degree of) \textit{ball control}. We refer to this moment as a \textit{ball reception}, describing the motoric skill of gaining \textit{ball control}.
Analogously, at the endpoint of \textit{possession}, the respective player loses \textit{ball control}. We specify this situation as a \textit{ball release}, independent of the related intention or underlying cause. Please note that for a case where a player only takes one touch during an individual \textit{ball control}, we only consider the \textit{ball release} as relevant for the description.
For the time span between \textit{ball reception} and \textit{ball release}, in general, the semantic concept of a \textit{dribbling} applies. However, various definitions for (different types of) \textit{dribbling} depend on external factors such as the sport, the context, and the perspective. As this semantic ambiguity prevents an \textit{exact} annotation, we do not list \textit{dribbling} as a separate category in the taxonomy but refer this concept to sport-specific refinements.
At this point, we utilized the two concepts \textit{game status} and \textit{possession} in invasion games to design the initial \textit{hierarchical} levels for a total of three different paths within the taxonomy~(yellow boxes, purple boxes, and two highest \textit{hierarchical} levels of the green boxes). Accordingly, the required amount of semantic information for the presented levels is within these two concepts. Moreover, since an assessment of these concepts requires low semantic information, we think that the current representation is well-suited for providing annotations in close proximity to the previously presented mathematical perspective on the match.
However, we aim for a step-wise approach towards the previously presented sport-scientist perspective for the subsequent \textit{hierarchical} levels.
Therefore, we increase the amount of semantic interpretation by regarding additional concepts, i.e., the overall context within a situation or the intention of players.
To this end, we distinguish between two different subcategories for a \textit{ball release}: \textit{intentional} or \textit{unintentional ball release}.
Regarding an \textit{unintentional ball release}, we generally decide between the categories \textit{successful interference}~(from an opposing player) and \textit{self-induced}~(describing a loss of \textit{ball control} without direct influence of an opponent).
\newpage
In contrast, for an \textit{intentional ball release}, we further assess the underlying intention or objective of a respective event. We discriminate between a \textit{pass}, including the intention that a teammate receives the released ball, and a \textit{shot}, related to an intention directed towards the target.
In some rare cases, the assessment of this intention may be subjective and difficult to determine. However, specific rules in invasion games require such assessment, i.e., in soccer, the goalkeeper is not allowed to pick up a released ball from a teammate when it is "\emph{deliberately}" kicked towards him~\cite{IFAB}.
Please note that we define \textit{individual ball events} as mutually exclusive, i.e., only one event from that path can occur at a specific timestamp. However, a single point in time may generally include multiple events from different paths in the \textit{taxonomy}.
Examples for this, in particular, are set-pieces. Here, we annotated a \textit{static-ball event} such as \emph{ball in field}, indicating the previously discussed shift of the \textit{game status}, and an \textit{individual ball event} (e.g., a \textit{pass}) describing the concrete execution. This is necessary as the type of \textit{static-ball event} does not definitely determine the type of \textit{individual ball event}~(i.e., a free-throw in basketball can theoretically be played as a pass from the rim). Nevertheless, since each set piece involves some sort of \textit{ball release}~(per definition of rule, a set-piece permits a double contact of the executing player), an \textit{exact} annotation of set-pieces is provided by the implicit link of simultaneous~(or neighboring) \textit{ball release} and \textit{static ball events}.
\subsubsection{Attributes}
A global method to add semantic information to the annotation is provided by defining specific \textit{attributes} for certain events.
While not representing a specific path in the \textit{base taxonomy}, an \textit{attribute} is defined as a name or description like \emph{pixel location} of the event in the video~(Figure~\ref{fig:taxonomy} upper-right) and thus provides additional information to the respective event.
When an \textit{attribute} is defined for an event at a certain \textit{hierarchical} level, it is valid for all child events in lower levels.
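For machine-readable annotation guidelines, the taxonomy can be encoded as a small tree in which attributes propagate to child events. The following Python sketch is only illustrative; apart from the \emph{pixel location} example above, the attribute names are hypothetical.
\begin{verbatim}
from dataclasses import dataclass, field

@dataclass
class EventType:
    name: str
    attributes: set = field(default_factory=set)  # valid for all child events
    children: list = field(default_factory=list)

    def add(self, child):
        child.attributes |= self.attributes       # attributes are inherited
        self.children.append(child)
        return child

# Abbreviated "individual ball events" path of the base taxonomy:
ball_event = EventType("individual ball event", {"pixel location"})
reception = ball_event.add(EventType("ball reception"))
release = ball_event.add(EventType("ball release"))
intentional = release.add(EventType("intentional ball release"))
unintentional = release.add(EventType("unintentional ball release"))
intentional.add(EventType("pass", {"receiver"}))  # hypothetical attribute
intentional.add(EventType("shot"))
unintentional.add(EventType("successful interference"))
unintentional.add(EventType("self-induced"))
\end{verbatim}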
\section{Events in Invasion Games Dataset}\label{sec:datasets}
\input{tables/dataset_stats}
The following Section~\ref{sec:dataset_description} describes our multimodal (video, audio, and positional data) and multi-domain (handball and soccer) dataset for ball-centered event spotting~(\acrshort{dataset}).
In Section~\ref{sec:metrics}, appropriate metrics for benchmarking are introduced.
\subsection{Data Source \& Description}\label{sec:dataset_description}
To allow a large amount of data diversity as well as a complete description of a match, we regard longer sequences from different matches and stadiums.
We select 5 sequences à 5 minutes from 5 matches resulting in 125 minutes of raw data, respectively, for handball and soccer.
\paragraph{Data Source}
For the handball subset, referred as \acrshort{dataset_handball}, synchronized video and positional data from the first German league from 2019 are kindly provided by the Deutsche Handball Liga and Kinexon\footnote{\url{https://kinexon.com/}} and contain HD~($1280 \times 720$ pixels) videos at 30\,fps and positional data for all players in 20\,Hz.
The videos include unedited recordings of the game from the main camera~(i.e., no replays, close-ups, overlays, etc.).
Some events are more easily identified from positional data, while other events that require visual features can be extracted from video, making \acrshort{dataset_handball} interesting for multimodal event detection.
For the soccer dataset, referred to as \acrshort{dataset_soccer}, we collect several publicly available broadcast recordings of matches from the FIFA World Cup (2014, 2018) at 25\,fps; due to licensing limitations, positional data are not available.
The utilized broadcast videos naturally introduce characteristic difficulties for annotation: \acrshort{dataset_soccer} includes varying camera angles, long replays, and situations where not all players are visible in the current view, making it challenging to capture all events over a longer sequence.
All events are included that are either visible or can be directly inferred from contextual information~(e.g., the timestamp of a \emph{ball release} that is not visible due to a cut from a close-up to the main camera). Events during \emph{replays} are an exception, as these do not reflect the actual time of the game.
\paragraph{Annotation Process \& Dataset Properties}\label{sec:dataset:annotation_process}
To obtain the dataset annotations, the general task is to spot the events in the lowest \textit{hierarchy} level since the parent events~(from higher \textit{hierarchy} levels) are implicitly annotated. Therefore, the taxonomy~(Section~\ref{sec:taxonomy}) and a concrete annotation guideline including definitions for each event, examples, and general hints~(see Appendix) were used.
An example for \emph{unintentional ball release - self-induced} is the following situation: A player releases the ball without a directly involved opponent, e.g., the player slips/stumbles or has no reaction time for a controlled \textit{ball release}, for instance, after an \textit{intercepted/blocked pass} or \textit{shot} event. Timestamp: on \textit{ball release}.
We hired nine annotators~(sports scientists, sports students, video analysts).
Due to the complexity of soccer, four of them annotated each sequence of the \acrshort{dataset_soccer} test set.
Note that two of the five matches are designated as the test set~(\acrshort{dataset_soccer}-T, \acrshort{dataset_handball}-T), respectively.
In addition, one inexperienced person without a background in soccer annotated the \acrshort{dataset_soccer}-T.
For \acrshort{dataset_handball}, three experienced annotators also processed each sequence of the test set.
An experienced annotator has labeled the remaining data~(e.g., reserved for training).
The annotation time for a single video clip is about 30 minutes for both datasets.
The number of events given the entire dataset and one expert annotation is presented in Table~\ref{tab:stats} and we refer to Section~\ref{exp:aggreement} for the assessment of the annotation quality.
Figure~\ref{fig:example_annotations} shows two sequences with annotations from several persons as an example.
For each dataset, we assess the human performance~(see Section~\ref{exp:aggreement}) for each individual annotator. The annotation with the highest results is chosen for release and as reference in the evaluation of the baseline~(see Section~\ref{exp:baseline}).
\subsection{Metrics}\label{sec:metrics}
For events with a duration, it is generally accepted~\cite{lin2019bmn, caba2015activitynet} to measure the \acrfull{temporal_iou}.
For a pair of annotations, this metric is computed by dividing their intersection~(overlap) by their union. The \acrshort{temporal_iou} for multiple annotations is given by the ratio of the aggregated intersection and union.
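For two~(aggregated) sets of annotated time intervals $A$ and $B$, this reads
\[
\mathrm{tIoU}(A, B) = \frac{|A \cap B|}{|A \cup B|},
\]
where $|\cdot|$ denotes the total temporal length of the respective interval set.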
Regarding events with a fixed timestamp, a comparison between annotations is introduced in terms of a temporal tolerance. Thereupon, given a predicted event, a potentially corresponding ground-truth event of the respective class is counted as a true positive if and only if it falls within a tolerance area~(in seconds or frames).
Yet, the definition of corresponding events from two different annotations is non-trivial~\cite{sanford2020group, deliege2020soccernet, giancola2018soccernet}, especially for annotations with different numbers of annotated events.
A common method to circumvent this task is to introduce a simplification step using a \acrfull{nnm} which considers events with the same description on the respective \textit{hierarchy} level.
After defining true positive, false positive, and false negative, this enables the computation of the \acrfull{temporal_ap}~(given by the average over multiple temporal tolerance areas~\cite{deliege2020soccernet, giancola2018soccernet}) or precision and recall for a fixed temporal tolerance area~\cite{sanford2020group}.
However, as the \acrshort{nnm} generally allows many-to-one mappings, a positive bias is associated with it. For instance, when multiple events from a prediction are assigned to the same ground-truth event~(e.g., \textit{shot}), they might all be counted as true positives~(if within the tolerance area), whereas the mismatch in the number of events is not further punished. This bias is particularly problematic for automatic solutions that rely on~(unbiased) objectives for training and evaluation.
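To make this concrete, a minimal sketch of tolerance-based evaluation with \acrshort{nnm} for a single event class (Python; timestamps in seconds); the many-to-one behaviour is visible in the true-positive count:
\begin{verbatim}
def nnm_precision_recall(predicted, ground_truth, tolerance):
    """predicted, ground_truth: timestamps (seconds) of one event class.
    Every prediction with some ground-truth event within the tolerance is
    a true positive -- i.e., many-to-one mappings are not penalised."""
    tp = sum(any(abs(p - g) <= tolerance for g in ground_truth)
             for p in predicted)
    fp = len(predicted) - tp
    missed = sum(not any(abs(p - g) <= tolerance for p in predicted)
                 for g in ground_truth)
    precision = tp / len(predicted) if predicted else 0.0
    recall = ((len(ground_truth) - missed) / len(ground_truth)
              if ground_truth else 0.0)
    return precision, recall
\end{verbatim}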
Therefore, \citet{sanford2020group} apply a \acrfull{nms} which only allows for a single prediction within a respective \acrshort{nms} window. While this presents a first step, the decision on the (hyper-parameter) \acrshort{nms} window length can be problematic. When chosen too large, the \acrshort{nms} does not allow for a correct prediction of temporally close events. In contrast, when chosen too small, the \acrshort{nms} only partially accounts for the issue at hand. Moreover, the lack of objectivity draws a hyper-parameter tuning, e.g., a grid search, towards favoring smaller window lengths for \acrshort{nms}.
To avoid these issues, we propose an \emph{additional} method to establish a one-to-one mapping for corresponding events from two annotations~(with possibly different numbers of events).
In theory, this mapping can only be established if the number of event types between the annotations is equal. However, in practice, this requirement is rarely fulfilled for the whole match. Moreover, even when fulfilled, possibly additional and missing events might cancel each other out.
Based on this, a division of the match into independent~(comparable) segments is a reasonable pre-processing step. Thus, we define a \textit{sequence} as the time of an active match between two \textit{game status-changing events}~(objectively determined by the set of rules~\cite{IFAB, IHF}). Then, (i)~we count the number of \textit{sequences} in two \textit{annotations} to verify that no \textit{game status changing events} were missed~(and adopt \textit{game status changing events} that were missed), (ii)~count the number of annotated events of the same category within a \textit{sequence}, and (iii)~assign the corresponding events relative to the order of occurrence within the \textit{sequence} only if the number of annotations matches.
If this number does not match, we recommend to either separately consider the \textit{sequence} or to fully discard the included \textit{annotations}.
In analogy to \acrshort{nnm}, we refer to this method as \acrfull{scm}.
Please note that, relative to the degree of detail within the compared \textit{annotations}, the contained additional information (for example player identities) can be used to increase the degree of detail in \acrshort{scm}.
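A sketch of the \acrshort{scm} assignment for one event category (Python; events are given as timestamps already grouped into the game-status \textit{sequences}, and an equal number of sequences in both annotations is assumed, i.e., step~(i) has already been carried out):
\begin{verbatim}
def scm_pairs(pred_sequences, gt_sequences):
    """pred_sequences, gt_sequences: one list of chronologically sorted
    timestamps per game-status sequence, for a single event category.
    Returns one-to-one pairs from consistent sequences and the share of
    ground-truth events lying in such sequences."""
    pairs, consistent, total = [], 0, 0
    for pred, gt in zip(pred_sequences, gt_sequences):
        total += len(gt)
        if len(pred) == len(gt):            # step (ii): counts match
            pairs.extend(zip(pred, gt))     # step (iii): match by order
            consistent += len(gt)
    share = consistent / total if total else 1.0
    return pairs, share
\end{verbatim}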
\section{Experiments}\label{sec:experiments}
\input{tables/tiou}
\input{tables/eval_complete}
We assess the quality of our proposed dataset by measuring the expected human performance~(Section~\ref{exp:aggreement}) and present a baseline classifier that only utilizes visual features~(Section~\ref{exp:baseline}).
The
quality of annotations from an official data provider is evaluated in Section~\ref{exp:data_prov_quality}.
\subsection{Assessment of Human Performance}\label{exp:aggreement}
Although we aim to provide definitions for the annotated events that are as clear as possible, the complex nature of invasion games might lead to uncertain decisions during the annotation process.
According to common practice, we assess the annotation quality and, hence, expected performance of automatic solutions by measuring the average human performance on several evaluation metrics~(Section~\ref{sec:metrics}).
In this respect, one annotator is treated as a predictor and compared to each other annotator, respectively, considered as reference.
Consequently, the average over all reference annotators represents the individual performance of one annotator while the average across all individual performances corresponds to the average human performance. We report the average performance for experienced annotators for \acrshort{dataset_handball}-T and \acrshort{dataset_soccer}-T while we additionally assess the generality of our taxonomy by comparing the individual performance of domain experts and an inexperienced annotator for \acrshort{dataset_soccer}-T.
For events with a duration (\emph{game status}, \emph{possession}), we report the \acrshort{temporal_iou}.
To evaluate the event spotting task, a sufficient assessment of human performance requires a multitude of metrics. Similar to~\citet{sanford2020group}, we report the precision and recall by applying the \acrshort{nnm} for individual events at different levels of our proposed \textit{hierarchy}.
We define strict but meaningful tolerance areas for each event to support the general interpretability of the results.
Additionally, we apply the \acrshort{scm} where we compensate for a possible varying number of sequences by adopting the sequence borders in case of a possible mismatch.
We report precision and recall for events from consistent sequences along with the percentage of events from consistent sequences. The Appendix provides a detailed overview of each individual annotator performance.
\paragraph{Results \& Findings}
The overall results for events with a duration~(Table~\ref{tab:tiou}) and events with a timestamp~(Table~\ref{tab:aggreement_nn_cm}) indicate a general agreement for the discussed concepts.
Moreover, the minor discrepancies in the performance of the experienced and the inexperienced annotator for \acrshort{dataset_soccer}-T also indicate that a sufficient annotation of our base taxonomy does generally not require expert knowledge. This observation shows the low amount of semantic interpretation included in our proposed taxonomy. Please note that due to the asymmetry in the comparison (one inexperienced annotator as prediction and four experienced annotators as reference), for this case, the precision and recall differ in Table~\ref{tab:aggreement_nn_cm}.
In Table~\ref{tab:tiou}, the agreement for \textit{game status} in soccer is significantly higher than the agreement in \textit{possession}. For handball, while the results for \textit{possession} are comparable to soccer, the agreement for \textit{game status} is significantly lower. This likely originates from the rather fluent transitions between active and inactive play which complicate a clear recognition of \textit{game status change events} in handball.
In contrast, general similarities in the annotations for \acrshort{dataset_soccer}-T and \acrshort{dataset_handball}-T can be found in the agreement for individual ball events~(Table~\ref{tab:aggreement_nn_cm}). Apart from the previously discussed differences in the ambiguity of the \textit{game status}, reflected in inferior agreement of \textit{game status change events}, similar trends are observable in both sports~(limitations, i.e., for infrequent events in handball such as \textit{unintentional ball release} or \textit{successful interference}).
For both datasets, the \textit{hierarchical} structure positively influences the results where the highest level shows a high overall agreement which decreases when descending in the \textit{hierarchy}. This relates to the similarly increasing level of included semantic information~(see Section~\ref{subsec:characteristics}) complicating the annotation. However, this general observation does not translate to each particular event in the taxonomy.
The results for \acrshort{scm} provide a valuable extension to the informative value of \acrshort{nnm}, i.e., to detect the positive bias~(Section~\ref{sec:metrics}). For instance, the \textit{successful pass} for \acrshort{dataset_handball}-T shows a general high agreement. However, a positive bias in this metric can be recognized regarding the comparatively low amount of sequence-consistent events~(in brackets). These differences are probably caused by the high frequency of \textit{successful passes} in handball and the connected issues with assignment, detailed in Section~\ref{sec:metrics}.
Typical misclassifications are often related to the assignment of intention. For ambiguous situations~(see Figure~\ref{fig:example_annotations}), this assignment can be difficult and largely depends on the outcome. For instance, if a played ball lands close to a teammate, the situation will rather be annotated as an \textit{intentional ball release}. However, this does not comply with the concept of intention, which needs to be judged at the moment of execution. Yet, due to the complex nature of invasion games, even the player who played the ball might not give a definite answer.
A different type of error is the temporal mismatch~(such as delays). While generally excluded from the annotation, a common source of these temporal differences is cuts, replays, or close-ups in the video data. Since we aim to include the majority of events whenever the action on the pitch can be derived from the general situation~(i.e., a replay only overlaps with a small fraction of an event), differing event times are a common source of error. This is especially relevant for \textit{game status change events} where cuts and replays commonly occur.
\subsection{Vision-based Baseline}\label{exp:baseline}
To present some potential outputs of an automated classifier model, we create a baseline that only uses visual features from the raw input video to spot events.
Due to the lack of existing end-to-end solutions for event spotting and the high density of events~(approximately one event per second in \acrshort{dataset_handball}), we follow common practice, where first a classifier is trained on short clips and then a sliding window is applied to produce frame-wise output~(e.g., feature vectors or class probabilities).
We follow \cite{sanford2020group} and directly apply \acrshort{nms} to the predicted class probabilities to suppress several positive predictions around the same event.
\subsubsection{Setup for Video Chunk Classification}
For the model, we choose an Inflated 3D ConvNet~\emph{I3D}~\cite{carreira2017quo, NonLocal2018} with a \emph{ResNet-50} as backbone which is pre-trained on \emph{Kinetics400}~\cite{kay2017kinetics}.
We select three classes~(\emph{reception}, \emph{successful pass}, and \emph{shot}) plus a background class. We train one model for \acrshort{dataset_handball} on the entire~(spatial) visual content with a fixed input dimension of $T \times 3 \times 256 \times 456$.
Short clips~($T=32$ frames), centered around the annotation, are created to cover temporal context. For the background event, all remaining clips are taken with a stride of 15 frames.
The temporal resolution is halved during training~(i.e., to 15\,fps for \acrshort{dataset_handball}).
For remaining details we refer to the Appendix.
The model with the lowest validation loss is selected for the event spotting task.
\subsubsection{Evaluating the Event Spotting Task}
We collect all predicted probabilities at each frame using a sliding window and apply \acrshort{nms} on validated event-specific filter lengths~$w^e_\text{nms}$.
As several events can occur at the same time, for each event~$e$ a confidence threshold~$\tau_e$ is estimated.
Both hyper-parameters are optimized for each event on the $F_1$ score with \acrshort{nnm} using a grid search on the training dataset. We use the same search space as \citet{sanford2020group}.
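For illustration, the following minimal Python sketch shows how frame-wise class probabilities could be converted into spotted events via thresholding and \acrshort{nms}; the variable names (\texttt{probs}, \texttt{w\_nms}, \texttt{tau}) are placeholders and the snippet is not the implementation used to produce the reported results.
\begin{verbatim}
import numpy as np

def spot_events(probs, w_nms, tau):
    """Greedy 1-D NMS: spot events of one class from per-frame scores.

    probs : 1-D array of per-frame confidences for one event class
    w_nms : suppression window length in frames (validated per event)
    tau   : confidence threshold for this event class
    """
    probs = probs.astype(float).copy()
    spots = []
    while True:
        idx = int(np.argmax(probs))
        conf = probs[idx]
        if conf < tau:
            break
        spots.append((idx, float(conf)))
        # suppress neighbouring frames around the selected peak
        lo = max(0, idx - w_nms // 2)
        hi = min(len(probs), idx + w_nms // 2 + 1)
        probs[lo:hi] = -np.inf
    return spots
\end{verbatim}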
Results are reported in Table~\ref{tab:aggreement_nn_cm} where precision and recall are calculated considering the expert annotation with the highest human performance as ground-truth.
Despite the limited amount of training data, the baseline demonstrates that our proposed datasets are suitable for benchmarking on-ball events.
We qualitatively observe that an excessive number of positive predictions in spite of \acrshort{nms} causes bad performance using \acrshort{scm}, which is only partly visible when using \acrshort{nnm}.
This confirms the need for the proposed metric and identifies the error cases of the baseline.
The model achieves sufficiently robust recognition performance for temporally centered ground-truth events, but it also predicts the actual event with high confidence when a ground-truth event in the sliding window is not centered.
We refer to future work~(1)~to improve the visual model for instance with hard-negative sample mining, or temporal pooling~\cite{giancola2021temporally} and~(2)~for the usage of multimodal data~(e.g., \cite{vanderplaetse2020improved}).
\subsection{Annotation Quality of Data Providers}\label{exp:data_prov_quality}
As previously discussed, annotations (in soccer) are frequently obtained from data providers that are not bound to fulfill any requirements or to meet a common gold standard.
To this end, we explore the quality of a data provider on the exemplary \acrfull{dataset_provider} which contains four matches of a first European soccer league from the 2014/2015 season. Here, we avoid an examination of semantically complex events like \textit{successful interference}
and instead examine the \textit{successful pass}, \textit{shot}, and \textit{game status change events}, for which we find the largest compliance with the data-provider event catalog~\cite{liu2013reliability}. To obtain a reference, we instruct a domain expert to acquire frame-wise accurate annotations by watching unedited recordings of the matches.
Similar to the previous experiments, we compute precision and recall while we account for differences in the number of total annotated events by application of \acrshort{scm}~(with specific consideration of passing player identities for passes). The results are given in Table~\ref{tab:aggreement_nn_cm} and a representative example is displayed in Figure~\ref{fig:example_annotations}.
We observe a low agreement between the precise expert annotation and the data-provider annotation~(compared to the results for~\acrshort{dataset_soccer}). While slightly more \textit{successful pass} events are consistent due to the consideration of player identities, the agreement for \acrshort{scm} is also poor.
This is caused by a general imprecision in the data-provider annotation, which likely originates from the real-time manual annotation that data providers demand. The human annotators are instructed to collect temporal (and spatial) characteristics of specific events while simultaneously deciding on a corresponding event type from a rather large \textit{high-level} event catalog~\cite{liu2013reliability}.
These results reveal the need for exact definitions and annotation guidelines and emphasize the value of automatic solutions.
With this exploratory experiment, we intend to show that the quality of the provided annotations should be taken into account depending on the targeted application.
Of course, we cannot draw conclusions about the quality of other data and seasons based on this case study.
\section{Conclusions}\label{sec:conclusion}
In this paper, we have addressed the real-world task of fine-grained event detection and spotting in invasion games.
While prior work already presented automatic methods for the detection of individual sport-specific events with a focus on soccer, it lacked objective event definitions and a complete description for invasion games.
Despite the wide range of examined events, their complexity, and ambiguity, the data quality had not been investigated, making the assessment of automatic approaches difficult. Moreover, current evaluation metrics are inconsistent.
Therefore, we have contributed a \textit{hierarchical} taxonomy that enables a \textit{minimal} and \textit{objective} annotation and is \textit{modularly expandable} to fit the needs of various invasion games.
In addition, we released two multimodal datasets with gold standard event annotations~(soccer and handball).
Extensive evaluations have validated the taxonomy as well as the quality of our two benchmark datasets, while a comparison with data-provider annotations revealed advantages in annotation quality.
The results have shown that high agreement can be achieved even without domain knowledge. In addition, the \textit{hierarchical} approach demonstrates that (semantically) complex events can be propagated to a shared parent event to increase agreement.
With the presented taxonomy, datasets, and baseline, we create a foundation for the design and the benchmarking of upcoming automatic approaches for the spotting of on-ball events.
Also, other domains that work with video, positional, and event data, could benefit from the taxonomy and the datasets introduced in this paper.
In the future, we plan to integrate non-on-ball events into the taxonomy and to exploit \textit{hierarchical} information and attention to the ball position during training of a deep model.
\section*{Acknowledgement}
This project has received funding from the German Federal Ministry of Education and Research (BMBF -- Bundesministerium für Bildung und Forschung) under 01IS20021A, 01IS20021B, and 01IS20021C.
This research was supported by a grant from the German Research Council (DFG, Deutsche Forschungsgemeinschaft) to DM (grant ME~2678/30.1).
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
The international rankings for both male and female tennis players are based on a rolling $52$-week, cumulative system, where ranking points are earned from players' performances at tournaments. However, due to the limited observation window, such a ranking system is not sufficient if one would like to compare dominant players over a long period (say $10$ years), as players peak at different times. The ranking points that players accrue depend only on the stage of the tournaments reached by him or her. Unlike the well-studied Elo rating system for chess~\cite{EloBook}, an opponent's ranking is not taken into account, i.e., one will not be awarded bonus points by defeating a top player. Furthermore, the current ranking system does not take into account the players' performances under different conditions (e.g., surface type of courts). We propose a statistical model to ameliorate the above-mentioned shortcomings by (i) understanding the relative ranking of players over a longitudinal period and (ii) discovering the existence of any latent variables that influence players' performances.
The statistical model we propose is an amalgamation of two well-studied models in the ranking and dictionary learning literatures, namely, the {\em Bradley-Terry-Luce} (BTL) model~\cite{bradleyterry, Luce} for ranking a population of items (in this case, tennis players) based on pairwise comparisons and {\em nonnegative matrix factorization} (NMF)~\cite{LS99,CichockiBook}. The BTL model posits that given a pair of players $(i,j)$ from a population of players $\{1,\ldots, N\}$, the probability that the pairwise comparison ``$i$ beats $j$'' is true is given by
\begin{equation}
\Pr(i \;\text{beats}\; j) = \frac{\lambda_{i}}{\lambda_{i}+\lambda_{j}} .\label{eqn:btl}
\end{equation}
Thus, $\lambda_i\in\mathbb{R}_+ :=[0,\infty)$ can be interpreted as the {\em skill level} of player $i$. The row vector ${\bm \lambda}=(\lambda_1,\ldots, \lambda_N)\in\mathbb{R}_+^{1\times N}$ thus parametrizes the BTL model. Other more general ranking models are discussed in \cite{MardenBook} but the BTL model suffices as the outcomes of tennis matches are binary.
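For concreteness, the BTL win probability in \eqref{eqn:btl} can be evaluated with a one-line function; the following Python snippet is purely illustrative and the skill values are made up.
\begin{verbatim}
def btl_win_prob(skill_i, skill_j):
    """Probability that player i beats player j under the BTL model."""
    return skill_i / (skill_i + skill_j)

# A player with skill 2.0 beats a player with skill 1.0
# with probability 2/3.
assert abs(btl_win_prob(2.0, 1.0) - 2.0 / 3.0) < 1e-12
\end{verbatim}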
NMF consists in the following problem. Given a nonnegative matrix $\mathbf{\Lambda} \in\mathbb{R}_+^{M\times N}$, one would like to find two matrices $\mathbf{W}\in\mathbb{R}_+^{M\times K}$ and $\mathbf{H}\in\mathbb{R}_+^{K\times N}$ such that their product $\mathbf{W}\mathbf{H}$ serves as a good low-rank approximation to $\mathbf{\Lambda}$. NMF is a linear dimensionality reduction technique that has seen a surge in popularity since the seminal papers by Lee and Seung~\cite{LS99,leeseung2000}. Due to the non-subtractive nature of the decomposition, constituent parts of objects can be extracted from complicated datasets. The matrix $\mathbf{W}$, known as the {\em dictionary matrix}, contains in its columns the parts, and the matrix $\mathbf{H}$, known as the {\em coefficient matrix}, contains in its rows activation coefficients that encode how much of each part is present in the columns of the data matrix $ \mathbf{\Lambda}$. NMF has also been used successfully to uncover latent variables with specific interpretations in various applications, including audio signal processing~\cite{fevotte2009}, text mining analysis \cite{berry2005}, and even analyzing soccer players' playing style~\cite{geerts2018}. We combine this framework with the BTL model to perform a {\em sports analytics} task on top tennis players.
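As a minimal illustration of NMF itself (independent of the BTL likelihood introduced later), a standard Frobenius-norm factorization can be obtained with scikit-learn; the matrix below is random and only serves to show the shapes of $\mathbf{W}$ and $\mathbf{H}$.
\begin{verbatim}
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
Lam = rng.random((14, 20))          # nonnegative M x N data matrix

model = NMF(n_components=2, init="random", random_state=0, max_iter=500)
W = model.fit_transform(Lam)        # M x K dictionary matrix
H = model.components_               # K x N coefficient matrix

print(np.linalg.norm(Lam - W @ H))  # reconstruction error
\end{verbatim}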
\begin{figure}[t]
\centering
\setlength{\unitlength}{.43mm}
\begin{picture}(300, 80)
\put(0,10){\tiny tournaments}
\put(12,18){\small $M$}
\put(30,18){\line(1,0){4}}
\put(34,0){\line(0,1){36}}
\put(34,36){\line(1,0){3}}
\put(34,0){\line(1,0){3}}
\put(50,32){\tiny Wimbledon}
\put(36,26){\tiny Australian Open}
\put(46,20){\tiny French Open}
\put(78,15){.}
\put(78,13){.}
\put(78,11){.}
\put(82,0){\line(0,1){36}}
\put(82,36){\line(1,0){70}}
\put(82,0){\line(1,0){70}}
\put(152,0){\line(0,1){36}}
\put(113,15){\small $\mathbf{\Lambda}$}
\put(83,42){\line(1,0){69}}
\put(83,42){\line(0,-1){3}}
\put(152,42){\line(0,-1){3}}
\put(118,42){\line(0,1){4}}
\put(118,56){\small $N$}
\put(112,50){\tiny players}
\put(83,44){\rotatebox{90}{\tiny Rafal Nadal}}
\put(88,44){\rotatebox{90}{\tiny Novak Djokovic}}
\put(93,44){\rotatebox{90}{\tiny Roger Federer}}
\put(100,44){.}
\put(102,44){.}
\put(104,44){.}
\put(155, 15){\small $\approx$}
\put(165, 15){\small $M$}
\put(175, 0){\line(0,1){36}}
\put(175, 0){\line(1,0){20}}
\put(175,36){\line(1,0){20}}
\put(195, 0){\line(0,1){36}}
\put(181,39){\small $K$}
\put(180,15){\small $\mathbf{W}$}
\multiput(204,0)(5,0){14}{\line(1,0){2}}
\multiput(204,0)(0,4.5){8}{\line(0,1){2}}
\multiput(204,36)(5,0){14}{\line(1,0){2}}
\multiput(274,0)(0,4.5){8}{\line(0,1){2}}
\put(234, 68){\small $N$}
\put(204, 44){\line(1,0){70}}
\put(204, 44){\line(0,1){20}}
\put(204, 64){\line(1,0){70}}
\put(274, 44){\line(0,1){20}}
\put(193, 52){\small $K$}
\put(233, 52){\small $\mathbf{H}$}
\end{picture}
\caption{The BTL-NMF Model}
\label{fig:nmf_btl}
\end{figure}
\subsection{Main Contributions}
\paragraph{Model:} In this paper, we amalgamate the aforementioned models to rank tennis players and uncover latent factors that influence their performances. We propose a hybrid {\em BTL-NMF} model (see Fig.~\ref{fig:nmf_btl}) in which there are $M$ different skill vectors $\bm{\lambda}_m, m \in \{1,\ldots, M\}$, each representing players' relative skill levels in various tournaments indexed by $m$. These row vectors are stacked into an $M\times N$ matrix $\mathbf{\Lambda}$ which is the given input matrix in an NMF model.
\paragraph{Algorithms and Theory:} We develop computationally efficient and numerically stable majorization-minimization (MM)-based algorithms \cite{hunterLange2004} to obtain a decomposition of $\mathbf{\Lambda}$ into $\mathbf{W}$ and $\mathbf{H}$ that maximizes the likelihood of the data. Furthermore, by using ideas from~\cite{zhao2017unified,Raz}, we prove that not only is the objective function monotonically non-decreasing along iterations, additionally, every limit point of the sequence of iterates of the dictionary and coefficient matrices is a {\em stationary point} of the objective function.
\paragraph{Experiments:} We collected rich datasets of pairwise outcomes of $N=20$ top male and female players and $M=14$ (or $M=16$) top tournaments over $10$ years. Based on these datasets, our algorithm yielded factor matrices $\mathbf{W}$ and $\mathbf{H}$ that allowed us to draw interesting conclusions about the existence of latent variable(s) and relative rankings of dominant players over the past $10$ years. In particular, we conclude that male players' performances are influenced, to a large extent, by the surface of the court. In other words, the surface turns out to be the pertinent latent variable for male players. This effect is, however, less pronounced for female players. Interestingly, we are also able to validate via our model, datasets, and algorithm that Nadal is undoubtedly the ``King of Clay''; Federer, a precise and accurate server, is dominant on grass (a non-clay surface other than hard court) as evidenced by his winning of Wimbledon on multiple occasions; and Djokovic is a more ``balanced'' top player regardless of surface. Conditioned on playing on a clay court, the probability that Nadal beats Djokovic is larger than $1/2$. Even though the results for the women are less pronounced, our model and longitudinal dataset confirms objectively that S.~Williams, Sharapova, and Azarenka (in this order) are consistently the top three players over the past $10$ years. Such results (e.g., that Sharapova is so consistent that she is second best over the past $10$ years) are not directly deducible from official rankings because these rankings are essentially instantaneous as they are based on a rolling $52$-week cumulative system.
\subsection{Related Work}
Most of the works that incorporate latent factors in statistical ranking models (e.g., the BTL model) make use of mixture models. See, for example, \cite{Oh2014,NiharSimpleRobust, Suh17}. While such models are able to take into account the fact that subpopulations within a large population possess different skill sets, it is difficult to make sense of what the underlying latent variable is. In contrast, by merging the BTL model with the NMF framework---the latter encouraging the extraction of {\em parts} of complex objects---we are able to observe latent features in the learned dictionary matrix $\mathbf{W}$ (see Table~\ref{tab:temp}) and hence to extract the semantic meaning of latent variables. In our particular application, it is the surface type of the court for male tennis players. See Sec.~\ref{sec:comp} where we also show that our solution is more stable and robust (in a sense to be made precise) than that of the mixture-BTL model.
The paper most closely related to the present one is~\cite{ding2015} in which a topic modelling approach was used for ranking. However, unlike our work in which continuous-valued skill levels in $\mathbf{\Lambda}$ are inferred, {\em permutations} (i.e., discrete objects) and their corresponding mixture weights were learned. We opine that our model and results provide a more {\em nuanced} and {\em quantitative} view of the relative skill levels between players under different latent conditions.
\subsection{Paper Outline}
In Sec.~\ref{sec:framework}, we discuss the problem setup, the statistical model, and its associated likelihood function. In Sec.~\ref{sec:alg}, we derive efficient MM-based algorithms to maximize the likelihood. In Sec.~\ref{sec:expt}, we discuss numerical results of extensive experiments on real-world tennis datasets. We conclude our discussion in Sec.~\ref{sec:con}.
\section{Problem Setup, Statistical Model, and Likelihood }\label{sec:framework}
\subsection{Problem Definition and Model} \label{PD}
Given $N$ players and $M$ tournaments over a fixed number of years (in our case, this is $10$), we consider a dataset $\mathcal{D} := \big\{ b_{ij}^{(m)} \in\{0,1,2,\ldots\} : (i,j) \in \mathcal{P}_{m} \big\}_{m=1}^M$, where $\mathcal{P}_{m}$ denotes the set of pairs of players that have played each other at least once in tournament $m$, and $b_{ij}^{(m)}$ is the number of times that player $i$ has beaten player $j$ in tournament $m$ over the fixed number of years.
To model the skill levels of each player, we consider a nonnegative matrix $\mathbf{\Lambda}$ of dimensions $M \times N$. The $(m,i)^{\text{th}}$ element $[\mathbf{\Lambda}]_{mi}$ represents the skill level of player $i$ in tournament $m$. Our goal is to design an algorithm to find a factorization of $\mathbf{\Lambda}$ into two nonnegative matrices $\mathbf{W} \in\mathbb{R}_+^{M\times K}$ and $\mathbf{H}\in\mathbb{R}_+^{K\times N}$ such that the likelihood of $\mathcal{D}$ under the BTL model in~\eqref{eqn:btl} is maximized; this is the so-called {\em maximum likelihood framework}. Here $K\le\min\{M,N\}$ is a small integer so the factorization is low-rank. In Sec.~\ref{sec:norm}, we discuss different strategies to normalize $\mathbf{W}$ and $\mathbf{H}$ so that they are easily interpretable, e.g., as probabilities.
Roughly speaking, the eventual interpretation of $\mathbf{W}$ and $\mathbf{H}$ is as follows. Each column of the dictionary matrix $\mathbf{W}$ encodes the ``likelihood'' that a certain tournament $m \in \{1,\ldots, M\}$ belongs to a certain latent class (e.g., type of surface). Each row of the coefficient matrix encodes the player's skill level in a tournament of a certain latent class. For example, referring to Fig.~\ref{fig:nmf_btl}, if the latent classes indeed correspond to surface types, the $(1,1)$ entry of $\mathbf{W}$ could represent the likelihood that Wimbledon is a tournament that is played on clay. The $(1,1)$ entry of $\mathbf{H}$ could represent Nadal's skill level on clay.
\subsection{Likelihood of the BTL-NMF Model}
According to the BTL model and the notations above, the probability that player $i$ beats player $j$ in tournament $m$ is
\begin{equation*}
\Pr(i \;\text{beats}\; j \; \text{in tournament}\; m) = \frac{[\mathbf{\Lambda}]_{mi}}{[\mathbf{\Lambda}]_{mi} + [\mathbf{\Lambda}]_{mj}}.
\end{equation*}
We expect that $\mathbf{\Lambda}$ is close to a low-rank matrix as the number of latent factors governing players' skill levels is small. We would like to exploit the ``mutual information'' or ``correlation'' between tournaments of similar characteristics to
find a factorization of $\mathbf{\Lambda}$. If $\mathbf{\Lambda}$ were unstructured, we could solve $M$ independent, tournament-specific problems to learn $(\bm{\lambda}_1,\ldots,\bm{\lambda}_M)$. We replace $\mathbf{\Lambda}$ by $\mathbf{W}\mathbf{H}$ and the {\em likelihood} over all games in all tournaments (i.e., of the dataset $\mathcal{D}$), assuming conditional independence across tournaments and games, is
\begin{equation*}
p(\mathcal{D}|\mathbf{W},\mathbf{H})=\prod\limits_{m=1}^{M}\prod\limits_{(i,j) \in \mathcal{P}_{m}} \bigg( \frac{[\mathbf{W}\mathbf{H}]_{mi}}{[\mathbf{W}\mathbf{H}]_{mi} + [\mathbf{W}\mathbf{H}]_{mj}}\bigg)^{b_{ij}^{(m)}}.
\end{equation*}
It is often more tractable to minimize the {\em negative log-likelihood}. In the sequel, we regard this as our objective function which can be expressed as
\begin{align}
& f( \mathbf{W},\mathbf{H}) := -\log p(\mathcal{D}|\mathbf{W},\mathbf{H}) \nonumber\\
&\;\;= \sum\limits_{m=1}^{M}\sum\limits_{(i,j) \in \mathcal{P}_{m}} b_{ij}^{(m)}\Big[ -\log\big([\mathbf{W}\mathbf{H}]_{mi}\big) +\log\big([\mathbf{W}\mathbf{H}]_{mi} + [\mathbf{W}\mathbf{H}]_{mj}\big) \Big]. \label{eqn:neg_ll}
\end{align}
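For reference, the objective $f(\mathbf{W},\mathbf{H})$ above can be evaluated numerically as follows; this Python sketch assumes the dataset $\mathcal{D}$ is stored as a dictionary mapping each tournament index $m$ to the counts $b_{ij}^{(m)}$, which is one possible representation and not necessarily the one used in our implementation.
\begin{verbatim}
import numpy as np

def neg_log_likelihood(W, H, data):
    """Negative log-likelihood of the BTL-NMF model.

    W    : (M, K) nonnegative dictionary matrix
    H    : (K, N) nonnegative coefficient matrix
    data : dict mapping tournament m to a dict {(i, j): b_ij^(m)}
    """
    Lam = W @ H
    nll = 0.0
    for m, games in data.items():
        for (i, j), b in games.items():
            if b > 0:
                nll += b * (-np.log(Lam[m, i])
                            + np.log(Lam[m, i] + Lam[m, j]))
    return nll
\end{verbatim}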
\section{Algorithms and Theoretical Guarantees} \label{sec:alg}
In this section, we describe the algorithm to optimize \eqref{eqn:neg_ll}, together with accompanying theoretical guarantees. We also discuss how we ameliorate numerical problems while maintaining the desirable guarantees of the algorithm.
\subsection{Majorization-Minimization (MM) Algorithm} \label{sec:MM}
We now describe how we use an MM algorithm~\cite{hunterLange2004} to optimize~\eqref{eqn:neg_ll}. The MM framework iteratively solves the problem of minimizing a certain function $f(x)$, but its utility is most evident when the direct optimization of $f(x)$ is difficult. One proposes an {\em auxiliary function} or {\em majorizer} $u(x,x')$ that satisfies the following two properties: (i) $f(x) = u(x,x) ,\forall\, x$ and (ii) $f(x)\le u(x,x'),\forall\, x,x'$ (majorization). In addition, for a fixed value of $x'$, the minimization of $u(\cdot, x')$ is assumed to be tractable (e.g., there exists a closed-form solution for $x^*=\argmin_x u(x,x')$). Then, one adopts an iterative approach to find a sequence $\{ x^{(l)} \}_{l =1}^\infty$. One observes that if $x^{ (l+1) } = \argmin_{x} u(x,x^{(l )})$ is a minimizer at iteration $l+1$, then
\begin{equation}
f(x^{(l+1)}) \stackrel{ \text{(ii)} }{\le} u(x^{(l+1)} , x^{(l)}) \le u(x^{(l )} , x^{(l)}) \stackrel{\text{(i)}}{=} f(x^{(l)}).\label{eqn:mm}
\end{equation}
Hence, if such an auxiliary function $u(x,x')$ can be found, it is guaranteed that the sequence of iterates results in a sequence of non-increasing objective values.
Applying MM to our model is slightly more involved as we are trying to find {\em two} nonnegative matrices $\mathbf{W} $ and $\mathbf{H}$. Borrowing ideas from the use of MM in NMF problems (see for example the works~\cite{fevotte2011algorithms,tan2013automatic}), the procedure first updates $\mathbf{W}$ by keeping $\mathbf{H}$ fixed, then updates $\mathbf{H}$ by keeping $\mathbf{W}$ fixed to its previously updated value. We will describe, in the following, how to optimize the original objective in \eqref{eqn:neg_ll} with respect to $\mathbf{W}$ with fixed $\mathbf{H}$ as the other optimization proceeds in an almost\footnote{The updates for $\mathbf{W}$ and $\mathbf{H}$ are not completely symmetric because the data is in the form of a $3$-way tensor $\{b_{ij}^{(m)}\}$; this is also apparent in the objective in~\eqref{eqn:neg_ll} and the updates in~\eqref{eqn:ori_update}.} symmetric fashion since $\mathbf{\Lambda}^T=\mathbf{H}^T\mathbf{W}^T$. As mentioned above, the MM algorithm requires us to construct an auxiliary function $u_{1}(\mathbf{W},\tilde{\mathbf{W}}|\mathbf{H})$ that majorizes $-\log p(\mathcal{D}|\mathbf{W}, \mathbf{H})$.
The difficulty in optimizing the original objective function in \eqref{eqn:neg_ll} is twofold. The first concerns the coupling of the two terms $[\mathbf{W}\mathbf{H}]_{mi}$ and $[\mathbf{W}\mathbf{H}]_{mj}$ inside the logarithm function. We resolve this using a technique introduced by Hunter in~\cite{hunter2004mm}. It is known that for any concave function $f$, its first-order Taylor approximation overestimates it, i.e., $f(y) \leq f(x) + \nabla f(x)^{T}(y-x)$. Since the logarithm function is concave, we have the inequality $\log y \leq \log x + \frac{1}{x}(y-x)$ which is an equality when $x=y$. These two properties mean that the following is a majorizer of the term $\log ([\mathbf{W}\mathbf{H}]_{mi} + [\mathbf{W}\mathbf{H}]_{mj} )$ in~\eqref{eqn:neg_ll}:
\begin{equation*}
\log \big([\mathbf{W}^{(l)}\mathbf{H}]_{mi} + [\mathbf{W}^{(l)}\mathbf{H}]_{mj}\big) + \frac{[\mathbf{W}\mathbf{H}]_{mi} + [\mathbf{W}\mathbf{H}]_{mj}}{[\mathbf{W}^{(l)}\mathbf{H}]_{mi} + [\mathbf{W}^{(l)}\mathbf{H}]_{mj}} -1.
\end{equation*}
The second difficulty in optimizing~\eqref{eqn:neg_ll} concerns the term $\log ([\mathbf{W}\mathbf{H}]_{mi})=\log ( \sum_{k}w_{mk}h_{ki})$. By introducing the terms $\gamma_{mki}^{(l)} :=w_{mk}^{(l)}h_{ki}/[\mathbf{W}^{(l)}\mathbf{H}]_{mi}$ for $ k \in\{1,\ldots, K\}$ (which have the property that $\sum_k\gamma_{mki}^{(l)}=1$) to the sum in $\log ( \sum_{k}w_{mk}h_{ki})$ as was done by F\'evotte and Idier in~\cite[Theorem~1]{fevotte2011algorithms}, and using the convexity of $-\log x$ and Jensen's inequality, we obtain the following majorizer of the term $-\log ([\mathbf{W}\mathbf{H}]_{mi})$ in~\eqref{eqn:neg_ll}:
\begin{equation*}
-\sum\limits_{k}\frac{w_{mk}^{(l)}h_{ki}}{[\mathbf{W}^{(l)}\mathbf{H}]_{mi}}\log \bigg(\frac{w_{mk}}{w_{mk}^{(l)}}[\mathbf{W}^{(l)}\mathbf{H}]_{mi}\bigg).
\end{equation*}
The same procedure can be applied to find an auxiliary function $u_{2}(\mathbf{H},\tilde{\mathbf{H}}|\mathbf{W})$ for the optimization for $\mathbf{H}$. Minimization of the two auxiliary functions with respect to $\mathbf{W}$ and $\mathbf{H}$ leads to the following MM updates:
{
\small
\begin{subequations} \label{eqn:ori_update}
\begin{align}
\tilde{w}_{mk}^{(l+1)} &\leftarrow \ddfrac{\sum\limits_{(i,j) \in \mathcal{P}_{m}} b_{ij}^{(m)}\frac{w_{mk}^{(l)}h_{ki}^{(l)}}{[\mathbf{W}^{(l)}\mathbf{H}^{(l)}]_{mi}}}{\sum\limits_{(i,j) \in \mathcal{P}_{m}} b_{ij}^{(m)}\frac{h_{ki}^{(l)}+h_{kj}^{(l)}}{[\mathbf{W}^{(l)}\mathbf{H}^{(l)}]_{mi}+[\mathbf{W}^{(l)}\mathbf{H}^{(l)}]_{mj}}} \label{eqn:updateW},\\
\tilde{h}_{ki}^{(l+1)} &\leftarrow \ddfrac{\sum\limits_{m} \sum\limits_{j \neq i:(i,j) \in \mathcal{P}_{m}} b_{ij}^{(m)}\frac{w_{mk}^{(l+1)}h_{ki}^{(l)}}{[\mathbf{W}^{(l+1)}\mathbf{H}^{(l)}]_{mi}}}{\sum\limits_{m}\sum\limits_{j \neq i: (i,j) \in \mathcal{P}_{m}} \big( b_{ij}^{(m)} + b_{ji}^{(m)} \big)\frac{w_{mk}^{(l+1)}}{[\mathbf{W}^{(l+1)}\mathbf{H}^{(l)}]_{mi} + [\mathbf{W}^{(l+1)}\mathbf{H}^{(l)}]_{mj}}}\label{eqn:updateH} .
\end{align}
\end{subequations}
}%
Note that since we first update $\mathbf{W}$, $\mathbf{H}$ is given and fixed which means that it is indexed by the previous iteration $l$; as for the update of $\mathbf{H}$, the newly calculated $\mathbf{W}$ at iteration $l+1$ will be used.
\subsection{Resolution of Numerical Problems}\label{sec:num}
While the above updates guarantee that the objective function does not increase, numerical problems may arise in the implementation of~\eqref{eqn:ori_update}. Indeed, it is possible that $[\mathbf{W}\mathbf{H}]_{mi}$ becomes extremely close to zero for some $(m,i)$.
To prevent such numerical problems from arising, our strategy is to add a small number $\epsilon>0$ to every element of $\mathbf{H}$ in~\eqref{eqn:neg_ll}. The intuitive explanation that justifies this is that we believe that each player has some default skill level in every type of tournament. By modifying $\mathbf{H}$ to $\mathbf{H}+\epsilon\mathds{1}$, where $\mathds{1}$ is the $K \times N$ all-ones matrix, we have the following new objective function:
\begin{align}
f_\epsilon(\mathbf{W},\mathbf{H}) &:=
\sum\limits_{m=1}^{M}\sum\limits_{(i,j) \in \mathcal{P}_{m}} b_{ij}^{(m)} \Big[ - \log\big([\mathbf{W}(\mathbf{H}+\epsilon\mathds{1})]_{mi}\big)\nonumber\\
&\qquad\qquad + \log\big([\mathbf{W}(\mathbf{H}+\epsilon\mathds{1})]_{mi} + [\mathbf{W}(\mathbf{H}+\epsilon\mathds{1})]_{mj}\big) \Big]\label{eqn:new_ll_func} .
\end{align}
Note that $f_0(\mathbf{W},\mathbf{H})=f(\mathbf{W},\mathbf{H})$, defined in \eqref{eqn:neg_ll}. Using the same ideas involving MM to optimize $f(\mathbf{W},\mathbf{H})$ as in Sec.~\ref{sec:MM}, we can find new auxiliary functions, denoted similarly as $u_{1}(\mathbf{W},\tilde{\mathbf{W}}|\mathbf{H})$ and $u_{2}( \mathbf{H} , \tilde{\mathbf{H}}|\mathbf{W})$, leading to the following updates
{
\small
\begin{subequations} \label{eqn:new_update}
\begin{align}
\tilde{w}_{mk}^{(l+1)} &\leftarrow \ddfrac{\sum\limits_{(i,j) \in\mathcal{P}_m} b_{ij}^{(m)} \frac{w_{mk}^{(l)}(h_{ki}^{(l)}+\epsilon)}{[\mathbf{W}^{(l)}(\mathbf{H}^{(l)}+\epsilon\mathds{1})]_{mi}}}{\sum\limits_{(i,j)\in\mathcal{P}_m} b_{ij}^{(m)} \frac{h_{ki}^{(l)} + h_{kj}^{(l)} + 2\epsilon}{[\mathbf{W}^{(l)}(\mathbf{H}^{(l)}+\epsilon\mathds{1})]_{mi} + [\mathbf{W}^{(l)}(\mathbf{H}^{(l)}+\epsilon\mathds{1})]_{mj}}}\label{eqn:updateW_new},\\
\tilde{h}_{ki}^{(l+1)} &\leftarrow\ddfrac{\sum\limits_{m}\sum\limits_{j \neq i: (i,j) \in \mathcal{P}_{m}} b_{ij}^{(m)} \frac{w_{mk}^{(l+1)}(h_{ki}^{(l)}+\epsilon)}{[\mathbf{W}^{(l+1)}(\mathbf{H}^{(l)}+\epsilon\mathds{1})]_{mi}}}{\sum\limits_{m}\sum\limits_{j \neq i: (i,j) \in \mathcal{P}_{m}} \frac{(b_{ij}^{(m)} + b_{ji}^{(m)})w_{mk}^{(l+1)}}{[\mathbf{W}^{(l+1)}(\mathbf{H}^{(l)}+\epsilon\mathds{1})]_{mi} + [\mathbf{W}^{(l+1)}(\mathbf{H}^{(l)}+\epsilon\mathds{1})]_{mj}}} - \epsilon.\label{eqn:updateH_new}
\end{align} \end{subequations}
}%
Notice that although this solution successfully prevents division by zero (or small numbers) during the iterative process, for the new update of $\mathbf{H}$, it is possible that $\tilde{h}_{ki}^{(l+1)}$ becomes negative because of the subtraction by $\epsilon$ in \eqref{eqn:updateH_new}. To ensure $h_{ki}$ is nonnegative as required by the nonnegativity of NMF, we set
\begin{equation}
\tilde{h}_{ki}^{(l+1)} \leftarrow \max \big\{\tilde{h}_{ki}^{(l+1)}, 0\big\}. \label{eqn:trunc}
\end{equation}
After this truncation operation, it is, however, unclear whether the likelihood function is non-decreasing, as we have altered the vanilla MM procedure.
We now prove that $f_\epsilon$ in \eqref{eqn:new_ll_func} is non-increasing as the iteration count increases. Suppose that at the $(l+1)^{\text{st}}$ iteration for $\tilde{\mathbf{H}}^{(l+1)}$, truncation to zero only occurs for the $(k,i)^{\text{th}}$ element and all other elements stay unchanged, meaning $\tilde{h}_{ki}^{(l+1)} = 0$ and $\tilde{h}_{k', i'}^{(l+1)} = \tilde{h}_{k' ,i ' }^{(l)}$ for all $(k',i') \neq (k, i)$. We would like to show that $f_\epsilon(\mathbf{W},\tilde{\mathbf{H}}^{(l+1)}) \leq f_\epsilon( \mathbf{W},\tilde{\mathbf{H}}^{(l)})$. It suffices to show $u_{2}(\tilde{\mathbf{H}}^{(l+1)}, \tilde{\mathbf{H}}^{(l)}|\mathbf{W}) \leq f_\epsilon(\mathbf{W},\tilde{\mathbf{H}}^{(l)})$, because if this is true, we have the following inequality
\begin{equation}
f_\epsilon(\mathbf{W},\tilde{\mathbf{H}}^{(l+1)})
\leq u_{2}(\tilde{\mathbf{H}}^{(l+1)}, \tilde{\mathbf{H}}^{(l)}|\mathbf{W}) \leq f_\epsilon(\mathbf{W},\tilde{\mathbf{H}}^{(l)}), \label{eqn:follow_ineq}
\end{equation}
where the first inequality holds as $u_2$ is an auxiliary function for $\mathbf{H}$. The truncation is invoked only when the update in~\eqref{eqn:updateH_new} becomes negative, i.e., when
\begin{equation*}
\frac{\sum\limits_{m}\sum\limits_{j \neq i:(i,j) \in \mathcal{P}_{m}} b_{ij}^{(m)} \frac{w_{mk}^{(l+1)}(h_{ki}^{(l)}+\epsilon)}{[\mathbf{W}^{(l+1)}(\mathbf{H}^{(l)}+\epsilon\mathds{1})]_{mi}}}{\sum\limits_{m}\sum\limits_{j \neq i:(i,j) \in \mathcal{P}_{m}}\frac{ (b_{ij}^{(m)} + b_{ji}^{(m)}) w_{mk}^{(l+1)}}{[\mathbf{W}^{(l+1)}(\mathbf{H}^{(l)}+\epsilon\mathds{1})]_{mi} + [\mathbf{W}^{(l+1)}(\mathbf{H}^{(l)}+\epsilon\mathds{1})]_{mj}}} \leq \epsilon.
\end{equation*}
Using this inequality and performing some straightforward but tedious algebra as shown in Sec. S-1 in the supplementary material~\cite{xia2019}, we can justify the second inequality in~\eqref{eqn:follow_ineq} as follows
{\small\begin{align*}
&f_\epsilon(\mathbf{W},\tilde{\mathbf{H}}^{(l)}) - u_{2}(\tilde{\mathbf{H}}^{(l+1)}, \tilde{\mathbf{H}}^{(l)}|\mathbf{W}) \nonumber\\
&\geq \sum\limits_{m}\sum\limits_{j \neq i :(i,j) \in \mathcal{P}_{m}} \frac{(b_{ij}^{(m)}+b_{ji}^{(m)})w_{mk}}{[\mathbf{W}(\mathbf{H}^{(l)}\!+\!\epsilon\mathds{1})]_{mi} \!+\! [\mathbf{W}(\mathbf{H}^{(l)}\!+\!\epsilon\mathds{1})]_{mj}} \bigg[h_{ki}^{(l)} \!-\!\epsilon \log\Big(\frac{h_{ki}^{(l)}\!+\!\epsilon}{\epsilon}\Big) \bigg] \!\geq\! 0.
\end{align*}}%
The last inequality follows because $b_{ij}^{(m)}$, $\mathbf{W}$ and $\mathbf{H}^{(l)}$ are nonnegative, and $h_{ki}^{(l)}-\epsilon \log(\frac{h_{ki}^{(l)}+\epsilon}{\epsilon}) \geq 0$ since $x \geq \log(x+1)$ for all $x \geq 0$ with equality at $x=0$. Hence, the likelihood is non-decreasing during the MM update even though we included an additional operation that truncates $\tilde{h}_{ki}^{(l+1)}<0$ to zero in \eqref{eqn:trunc}.
\subsection{Normalization} \label{sec:norm}
It is well-known that NMF is not unique in the general case, and it is characterized by scale and permutation indeterminacies~\cite{CichockiBook}. For the problem at hand, for the learned $\mathbf{W}$ and $\mathbf{H}$ matrices to be interpretable as ``skill levels'' with respect to different latent variables, it is imperative that we consider {\em normalizing} them appropriately after every MM iteration in \eqref{eqn:new_update}. However, there are different ways to normalize the entries in the matrices and one has to ensure that after normalization, the likelihood of the model stays unchanged. This is tantamount to keeping the ratio $\frac{[\mathbf{W}(\mathbf{H}+\epsilon\mathds{1}) ]_{mi}}{[\mathbf{W}(\mathbf{H}+\epsilon\mathds{1})]_{mi}+[\mathbf{W}(\mathbf{H}+\epsilon\mathds{1})]_{mj}}$ unchanged for all $(m,i,j)$. The key observations here are twofold: First, concerning $\mathbf{H}$, since terms indexed by $(m,i)$ and $(m,j)$ appear in the denominator but only $(m,i)$ appears in the numerator, we can normalize over all elements of $\mathbf{H}$ to keep this fraction unchanged. Second, concerning $\mathbf{W}$, since only terms indexed by $m$ appear both in the numerator and denominator, we can normalize either rows or columns as we will show in the following.
\subsubsection{Row Normalization of $\mathbf{W}$ and Global Normalization of $\mathbf{H}$}\label{sec:row_norm}
Define the row sums of $\mathbf{W}$ as $r_{m} := \sum_{k}\tilde{w}_{mk}$ and let $\alpha := \frac{\sum_{k,i}\tilde{h}_{ki}+KN\epsilon}{1+KN\epsilon}$. Now consider the following operations:
\begin{equation*}
w_{mk} \leftarrow \frac{\tilde{w}_{mk}}{r_{m}},\quad\mbox{and}\quad h_{ki}\leftarrow \frac{\tilde{h}_{ki}+(1-\alpha)\epsilon}{\alpha}.
\end{equation*}
The above update to obtain $h_{ki}$ may result in it being negative; however, the truncation operation in~\eqref{eqn:trunc} ensures that $h_{ki}$ is eventually nonnegative.\footnote{One might be tempted to normalize $\mathbf{H}+\epsilon\mathds{1}\in\mathbb{R}_{+}^{K\times N}$. This, however, does not resolve numerical issues (analogous to division by zero in~\eqref{eqn:ori_update}) as some entries of $\mathbf{H}+\epsilon\mathds{1}$ may be zero. } See also the update to obtain $\tilde{h}_{ki}^{(l+1)}$ in Algorithm~\ref{alg:btl_nmf}.
The operations above keep the likelihood unchanged and achieve the desired row normalization of $\mathbf{W}$ since
{
\small
\begin{align*}
& \frac{\sum_{k} \tilde{w}_{mk} (\tilde{h}_{ki} + \epsilon)}{\sum_{k} \tilde{w}_{mk} (\tilde{h}_{ki} + \epsilon) + \sum_{k} \tilde{w}_{mk} (\tilde{h}_{kj} + \epsilon)}
= \frac{\sum_{k} \frac{\tilde{w}_{mk}}{r_{m}} (\tilde{h}_{ki} + \epsilon)}{\sum_{k} \frac{\tilde{w}_{mk}}{r_{m}} (\tilde{h}_{ki} + \epsilon) + \sum_{k} \frac{\tilde{w}_{mk}}{r_{m}} (\tilde{h}_{kj} + \epsilon)}\\
&= \frac{\sum_{k} w_{mk} \frac{(\tilde{h}_{ki} + \epsilon)}{\alpha}}{\sum_{k} w_{mk} \frac{(\tilde{h}_{ki} + \epsilon)}{\alpha} + \sum_{k} w_{mk} \frac{(\tilde{h}_{kj} + \epsilon)}{\alpha}} = \frac{\sum_{k} w_{mk} (h_{ki} + \epsilon)}{\sum_{k} w_{mk} (h_{ki} + \epsilon) + \sum_{k} w_{mk} (h_{kj} + \epsilon)}.
\end{align*}
}%
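A minimal Python sketch of this renormalization step is given below (illustrative only; the resulting entries of $\mathbf{H}$ may be negative and are eventually truncated as in~\eqref{eqn:trunc}).
\begin{verbatim}
import numpy as np

def row_normalize(W, H, eps):
    """Row normalization of W with the compensating rescaling of H."""
    r = W.sum(axis=1, keepdims=True)                  # row sums r_m
    K, N = H.shape
    alpha = (H.sum() + K * N * eps) / (1.0 + K * N * eps)
    W_norm = W / r
    H_norm = (H + (1.0 - alpha) * eps) / alpha        # may be negative
    return W_norm, H_norm
\end{verbatim}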
\begin{algorithm}[t]
\caption{MM Alg.\ for BTL-NMF model with column normalization of~$\mathbf{W}$}
\begin{algorithmic}
\STATE \textbf{Input:} $M$ tournaments; $N$ players; number of times player $i$ beats player $j$ in tournament $m$ in dataset
$\mathcal{D} = \big\{ b_{ij}^{(m)}: i,j \in \{1,...,N\}, m \in \{1,...,M\}\big\}$
\STATE \textbf{Init:} Fix $K \in\mathbb{N}$, $\epsilon>0$, $\tau>0$ and initialize $\mathbf{W}^{(0)} \in \mathbb{R}_{++}^{M \times K}, \mathbf{H}^{(0)} \in \mathbb{R}_{++}^{K \times N}$.
\WHILE {diff $\geq \tau>0$}
\STATE
\begin{enumerate}[(1)]
\item \textbf{Update} $\forall m \in \{1,...,M\}, \forall k \in \{1,...,K\}, \forall i \in \{1,...,N\}$\\
$\tilde{w}_{mk}^{(l+1)} = \frac{\sum\limits_{ i,j } b_{ij}^{(m)} \frac{w_{mk}^{(l)}(h_{ki}^{(l)}+\epsilon)}{[\mathbf{W}^{(l)}(\mathbf{H}^{(l)}+\epsilon\mathds{1})]_{mi}}}{\sum\limits_{ i,j } b_{ij}^{(m)} \frac{h_{ki}^{(l)} + h_{kj}^{(l)} + 2\epsilon}{[\mathbf{W}^{(l)}(\mathbf{H}^{(l)}+\epsilon\mathds{1})]_{mi} + [\mathbf{W}^{(l)}(\mathbf{H}^{(l)}+\epsilon\mathds{1})]_{mj}}}$\\
$\tilde{h}_{ki}^{(l+1)} = \max \Bigg\{ \frac{\sum\limits_{m}\sum\limits_{j \neq i} b_{ij}^{(m)} \frac{w_{mk}^{(l+1)}(h_{ki}^{(l)}+\epsilon)}{[\mathbf{W}^{(l+1)}(\mathbf{H}^{(l)}+\epsilon\mathds{1})]_{mi}}}{\sum\limits_{m}\sum\limits_{j \neq i} \frac{(b_{ij}^{(m)} + b_{ji}^{(m)}) w_{mk}^{(l+1)}}{[\mathbf{W}^{(l+1)}(\mathbf{H}^{(l)}+\epsilon\mathds{1})]_{mi} + [\mathbf{W}^{(l+1)}(\mathbf{H}^{(l)}+\epsilon\mathds{1})]_{mj}}} - \epsilon,0\Bigg\}$
\item \textbf{Normalize} $\forall\, m \in \{1,...,M\}, \forall\, k \in \{1,...,K\}, \forall\, i \in \{1,...,N\}$\\
$w_{mk}^{(l+1)} \leftarrow \frac{\tilde{w}_{mk}^{(l+1)}}{\sum\limits_{m}\tilde{w}_{mk}^{(l+1)}}$; \
$\hat{h}_{ki}^{(l+1)} \leftarrow \tilde{h}_{ki}^{(l+1)}\sum\limits_{m}\tilde{w}_{mk}^{(l+1)} + \epsilon\Big(\sum\limits_{m}\tilde{w}_{mk}^{(l+1)}-1\Big)$\\
Calculate $\beta = \frac{\sum_{k,i}\hat{h}_{ki}^{(l+1)}+KN\epsilon}{1+KN\epsilon}$, $h_{ki}^{(l+1)} \leftarrow \frac{\hat{h}_{ki}^{(l+1)}+(1-\beta)\epsilon}{\beta}$
\item diff $\leftarrow \max\Big\{\max\limits_{ m,k }\big|w_{mk}^{(l+1)}-w_{mk}^{(l)}\big|,\max\limits_{ k,i }\big|h_{ki}^{(l+1)}-h_{ki}^{(l)}\big|\Big\}$
\end{enumerate}
\ENDWHILE
\RETURN $(\mathbf{W}, \mathbf{H})$ that forms a local maximizer of the likelihood $ p(\mathcal{D}|\mathbf{W},\mathbf{H})$
\end{algorithmic}
\label{alg:btl_nmf}
\end{algorithm}
\subsubsection{Column Normalization of $\mathbf{W}$ and Global Normalization of $\mathbf{H}$}\label{sec:col_norm}
Define the column sums of $\mathbf{W}$ as $c_{k} := \sum_{m}\tilde{w}_{mk}$ and let $\beta := \frac{\sum_{k,i}\hat{h}_{ki}+KN\epsilon}{1+KN\epsilon}$. Now consider the following operations:
\begin{equation*}
w_{mk} \leftarrow \frac{\tilde{w}_{mk}}{c_{k}},\quad \hat{h}_{ki}\leftarrow \tilde{h}_{ki}c_{k} + \epsilon(c_{k}-1),\quad\mbox{and}\quad h_{ki}\leftarrow \frac{\hat{h}_{ki}+(1-\beta)\epsilon}{\beta}.
\end{equation*}
This would keep the likelihood unchanged and achieve the desired column normalization of $\mathbf{W}$ since
{\small \begin{align*}
& \frac{\sum_{k} \tilde{w}_{mk} (\tilde{h}_{ki} + \epsilon)}{\sum_{k} \tilde{w}_{mk} (\tilde{h}_{ki} + \epsilon) + \sum_{k} \tilde{w}_{mk} (\tilde{h}_{kj} + \epsilon)}
= \frac{\sum_{k} \frac{\tilde{w}_{mk}}{c_{k}} (\tilde{h}_{ki} + \epsilon)c_{k}}{\sum_{k} \frac{\tilde{w}_{mk}}{c_{k}} (\tilde{h}_{ki} + \epsilon)c_{k} + \sum_{k} \frac{\tilde{w}_{mk}}{c_k} (\tilde{h}_{kj} + \epsilon)c_{k}}\\
&= \frac{\sum_{k} w_{mk} \frac{(\hat{h}_{ki} + \epsilon)}{\beta}}{\sum_{k} w_{mk} \frac{(\hat{h}_{ki} + \epsilon)}{\beta} + \sum_{k} w_{mk} \frac{(\hat{h}_{kj} + \epsilon)}{\beta}} = \frac{\sum_{k} w_{mk} (h_{ki} + \epsilon)}{\sum_{k} w_{mk} (h_{ki} + \epsilon) + \sum_{k} w_{mk} (h_{kj} + \epsilon)}.\vspace{-.1in}
\end{align*}}
\noindent Using this normalization strategy, it is easy to verify that all the entries of $\mathbf{\Lambda} = \mathbf{W}\mathbf{H}$ sum to one.\footnote{We have $\sum_{m}\sum_{i}[\mathbf{\Lambda}]_{mi} = \sum_{m}\sum_{i}\sum_{k}w_{mk}h_{ki} = \sum_{i}\sum_{k}h_{ki}\!\!\sum_{m}\!\!w_{mk} = \sum_{k,i}\!\!h_{ki}= 1$.} This allows us to interpret the entries of $\mathbf{\Lambda}$ as ``conditional probabilities''.
\subsection{Summary of Algorithm}
Algorithm \ref{alg:btl_nmf} presents pseudo-code for optimizing \eqref{eqn:new_ll_func} with columns of $\mathbf{W}$ normalized. The algorithm when the rows of $\mathbf{W}$ are normalized is similar; we replace the normalization step with the procedure outlined in Sec.~\ref{sec:row_norm}. In summary, we have proved that the sequence of iterates $\{ (\mathbf{W}^{(l )} , \mathbf{H}^{(l )} ) \}_{l =1}^\infty$ results in the sequence of objective functions $\{f_\epsilon(\mathbf{W}^{(l )} , \mathbf{H}^{(l )} ) \}_{l =1}^\infty$ being non-increasing. Furthermore, if $\epsilon>0$, numerical problems do not arise and with the normalization as described in Sec.~\ref{sec:col_norm},
the entries in $\mathbf{\Lambda}$ can be interpreted as ``conditional probabilities'' as we will further illustrate in Sec.~\ref{sec:men}.
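To make the pseudo-code concrete, a single iteration of Algorithm~\ref{alg:btl_nmf} can be sketched in Python as follows. This is a simplified illustration: it assumes the data is stored as a dense array \texttt{b} of shape $(M,N,N)$ with \texttt{b[m,i,j]}$=b_{ij}^{(m)}$, it assumes every tournament contains at least one recorded game, and the loops are written for clarity rather than speed; it is not the exact implementation in our repository~\cite{xia2019}.
\begin{verbatim}
import numpy as np

def mm_iteration(W, H, b, eps=1e-300):
    """One MM iteration with column normalization of W."""
    M, K = W.shape
    _, N = H.shape
    He = H + eps                       # H + eps * 1
    Lam = W @ He                       # [W (H + eps 1)]

    # --- update W ---
    W_new = np.zeros_like(W)
    for m in range(M):
        num, den = np.zeros(K), np.zeros(K)
        for i in range(N):
            for j in range(N):
                if b[m, i, j] > 0:
                    num += b[m, i, j] * W[m] * He[:, i] / Lam[m, i]
                    den += b[m, i, j] * (He[:, i] + He[:, j]) \
                           / (Lam[m, i] + Lam[m, j])
        W_new[m] = num / den
    Lam = W_new @ He                   # recompute with the updated W

    # --- update H with truncation to zero ---
    H_new = np.zeros_like(H)
    for k in range(K):
        for i in range(N):
            num = den = 0.0
            for m in range(M):
                for j in range(N):
                    if j == i:
                        continue
                    if b[m, i, j] > 0:
                        num += b[m, i, j] * W_new[m, k] * He[k, i] / Lam[m, i]
                    if b[m, i, j] > 0 or b[m, j, i] > 0:
                        den += (b[m, i, j] + b[m, j, i]) * W_new[m, k] \
                               / (Lam[m, i] + Lam[m, j])
            H_new[k, i] = max(num / den - eps, 0.0) if den > 0 else 0.0

    # --- column normalization of W, global normalization of H ---
    c = W_new.sum(axis=0)              # column sums of W
    W_new = W_new / c
    H_hat = H_new * c[:, None] + eps * (c[:, None] - 1.0)
    beta = (H_hat.sum() + K * N * eps) / (1.0 + K * N * eps)
    H_new = (H_hat + (1.0 - beta) * eps) / beta
    return W_new, H_new
\end{verbatim}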
\subsection{Convergence of Matrices $\{(\mathbf{W}^{(l )}, \mathbf{H}^{(l )})\}_{l=1}^\infty$ to Stationary Points}
While we have proved that the sequence of objectives $\{f_\epsilon(\mathbf{W}^{(l )} , \mathbf{H}^{(l )} ) \}_{l =1}^\infty$ is non-increasing (and hence it converges because it is bounded), it is not clear as to whether the sequence of {\em iterates} generated by the algorithm $\{ (\mathbf{W}^{(l )} , \mathbf{H}^{(l )} ) \}_{l =1}^\infty$ converges and if so to what. We define the {\em marginal functions} $f_{1,\epsilon}(\mathbf{W} | \overline{\mathbf{H}}) := f_\epsilon(\mathbf{W}, \overline{\mathbf{H}})$ and $f_{2,\epsilon} (\mathbf{H}| \overline{\mathbf{W}}) := f_\epsilon(\overline{\mathbf{W}}, \mathbf{H})$. For any function $g:\mathcal{D}\to\mathbb{R}$, we let $g'( x; d) := \liminf_{\lambda\downarrow 0} (g( x+\lambda d)-g(x))/\lambda$ be the {\em directional derivative} of $g$ at point $x$ in direction $d$. We say that $(\overline{\mathbf{W}}, \overline{\mathbf{H}})$ is a {\em stationary point} of the minimization problem
\begin{equation}
\min_{ \mathbf{W} \in\mathbb{R}_{+}^{M\times K}, \mathbf{H} \in\mathbb{R}_{+}^{K\times N}} f_\epsilon (\mathbf{W},\mathbf{H}) \label{eqn:min}
\end{equation}
if the following two conditions hold:
\begin{alignat*}{2}
f_{1,\epsilon}'(\overline{\mathbf{W}};\mathbf{W} - \overline{\mathbf{W}} | \overline{\mathbf{H}}) &\ge 0, \qquad\forall\,& \mathbf{W}&\,\in\mathbb{R}_+^{M\times K},\\
f_{2,\epsilon}'(\overline{\mathbf{H}};\mathbf{H} - \overline{\mathbf{H}} | \overline{\mathbf{W}}) &\ge 0, \qquad\forall\,& \mathbf{H}&\in\mathbb{R}_+^{K\times N}.
\end{alignat*}
This definition generalizes the usual notion of a stationary point when the function is differentiable and the domain is unconstrained (i.e., $\overline{x}$ is a stationary point if $\nabla f (\overline{x})=0$). However, in our NMF setting, the matrices are constrained to be nonnegative, hence the need for this generalized definition.
If the matrices are initialized to some $\mathbf{W}^{(0)}$ and $\mathbf{H}^{(0)}$ that are (strictly) positive and $\epsilon>0$, then we have the following desirable property.
\begin{theorem} \label{thm:conv}
If $\mathbf{W}$ and $ \mathbf{H}$ are initialized to have positive entries (i.e., $\mathbf{W}^{(0)} \in \mathbb{R}^{M \times K}_{++} = (0,\infty)^{M \times K}$ and $\mathbf{H}^{(0)} \in \mathbb{R}^{K \times N}_{++}$) and $\epsilon>0$, then every limit point of $\{(\mathbf{W}^{(l )}, \mathbf{H}^{(l )})\}_{l=1}^{\infty}$ generated by Algorithm \ref{alg:btl_nmf} is a stationary point of~\eqref{eqn:min}.
\end{theorem}
Thus, apart from ensuring that there are no numerical errors, another reason why we incorporate $\epsilon>0$ in the modified objective function in~\eqref{eqn:new_ll_func} is because a stronger convergence guarantee can be ensured. The proof of this theorem, provided in Sec. S-2 of~\cite{xia2019}, follows along the lines of the main result in Zhao and Tan~\cite{zhao2017unified}, which itself hinges on the convergence analysis of block successive minimization methods provided by Razaviyayn, Hong, and Luo~\cite{Raz}. In essence, we need to verify that $f_{1,\epsilon}$ and $f_{2,\epsilon}$ together with their auxiliary functions $u_1$ and $u_2$ satisfy the five regularity conditions in Definition 3 of~\cite{zhao2017unified}. However, there are some important differences vis-\`a-vis~\cite{zhao2017unified} (e.g., analysis of the normalization step in Algorithm~\ref{alg:btl_nmf}) which we describe in detail in Remark~1 of~\cite{xia2019}.
\section{Numerical Experiments and Discussion}\label{sec:expt}
In this section, we describe how the datasets are collected and provide interesting and insightful interpretations of the numerical results. All datasets and code can be found at the following GitHub repository~\cite{xia2019}.
\begin{table}[t]
\centering
\caption{Partial men's dataset for the French Open}\label{tab:partial}
\small
\begin{tabular}{|c||c|c|c|c|}
\hline
Against & R. Nadal & N. Djokovic & R. Federer & A. Murray \\
\hline
R. Nadal &0&5&3&2\\
\hline
N. Djokovic & 1&0&1&2\\
\hline
R. Federer & 0&1&0&0\\
\hline
A. Murray & 0&0&0&0\\
\hline
\end{tabular}
\end{table}
\begin{table}[t]
\centering
\caption{Sparsity of datasets $\{ b_{ij}^{(m)} \}$}\label{tab:sparsity}
\begin{tabular}{|c||c|c|c|c|}
\hline
& \multicolumn{2}{c|}{ \textbf{Male}} &\multicolumn{2}{c|}{ \textbf{Female}}\\
\hline
\textbf{Total Entries} & \multicolumn{2}{c|}{$14\times 20\times 20=5600$} & \multicolumn{2}{c|}{$16\times 20\times 20=6400$}\\
\hline
& \small Number & \small Percentage & \small Number & \small Percentage\\
\hline
\textbf{Non-zero} & 1024 & 18.30\% & 788 & 12.31\%\\
\hline
\textbf{Zeros on the diagonal} & 280 & 5.00\% & 320 & 5.00\%\\
\hline
\textbf{Missing data} & 3478 & 62.10\% & 4598 & 71.84\%\\
\hline
\textbf{True zeros} & 818 & 14.60\% & 694 & 10.85\%\\
\hline
\end{tabular}
\end{table}
\subsection{Details on the Datasets Collected}
The Association of Tennis Professionals (ATP) is the main governing body for male tennis players. The official ATP website contains records of all matches played on tour. The tournaments of the ATP tour belong to different categories; these include the four Grand Slams, the ATP Masters 1000, etc. The points obtained by the players, which ultimately determine their ATP rankings and qualification for entry and seeding in subsequent tournaments, depend on the categories of the tournaments in which they participate or win.
We selected the most important $M=14$ tournaments for the men's dataset, i.e., tournaments that yield the most ranking points, which include the four Grand Slams, the ATP World Tour Finals and nine ATP Masters 1000 events, listed in the first column of Table~\ref{tab:temp}. After determining the tournaments, we selected $N=20$ players. We wish to have as many matches as possible between each pair of players, so that the matrices $\{b_{ij}^{(m)}\}, m\in\{1,\ldots,M\}$ would not be too sparse and the algorithm would thus have more data to learn from. We chose players who have the highest amount of participation in the $M=14$ tournaments from $2008$ to $2017$ and who also played the largest number of matches in the same period. These players are listed in the first column of Table~\ref{tab:continue}.
For each tournament $m$, we collected an $N\times N$ matrix $\{b_{ij}^{(m)}\}$, where $b_{ij}^{(m)}$ denotes the number of times player $i$ beat player $j$ in tournament $m$. A submatrix consisting of the statistics of matches played between Nadal, Djokovic, Federer, and Murray at the French Open is shown in Table~\ref{tab:partial}. We see that over the $10$ years, Nadal beat Djokovic three times and Djokovic beat Nadal once at the French Open.
The governing body for women's tennis is the Women's Tennis Association (WTA) instead of the ATP. As such, we collected data from the WTA website. The selection of tournaments and players is similar to that for the men. The tournaments selected include the four Grand Slams, the WTA Finals, four WTA Premier Mandatory tournaments, and five Premier 5 tournaments. However, the first ``Premier 5'' tournament of the season is held either in Dubai or Doha, and the last such tournament was held in Tokyo between 2009 and 2013 before being replaced by Wuhan. We decided to treat these events as four distinct tournaments held in Dubai, Doha, Tokyo and Wuhan. Hence, the number of tournaments chosen for the women is $M=16$.
After collecting the data, we checked the sparsity level of the dataset $\mathcal{D}=\{b_{ij}^{(m)}\}$. The zeros in $\mathcal{D}$ can be categorized into three different classes.
\begin{enumerate}\itemsep0em
\item (Zeros on the diagonal) By convention, $b_{ii}^{(m)} = 0$ for all $(i,m)$;
\item (Missing data) By convention, if players $i$ and $j$ have never played each other in tournament $m$, then $b_{ij}^{(m)} = b_{ji}^{(m)} = 0$;
\item (True zeros) If player $i$ has played with player $j$ in tournament $m$ but lost every such match, then $b_{ij}^{(m)} =0$ and $b_{ji}^{(m)}>0$.
\end{enumerate}
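The breakdown in Table~\ref{tab:sparsity} can be reproduced from the raw counts with a simple script; the sketch below assumes the dense $(M,N,N)$ array representation of $\{b_{ij}^{(m)}\}$ purely for illustration.
\begin{verbatim}
import numpy as np

def zero_breakdown(b):
    """Count the three classes of zeros in a (M, N, N) array b."""
    M, N, _ = b.shape
    diag = M * N                       # zeros on the diagonal, by convention
    missing = true_zeros = nonzero = 0
    for m in range(M):
        for i in range(N):
            for j in range(N):
                if i == j:
                    continue
                if b[m, i, j] > 0:
                    nonzero += 1
                elif b[m, j, i] > 0:
                    true_zeros += 1    # i met j in tournament m but never won
                else:
                    missing += 1       # the pair never met in tournament m
    return {"non-zero": nonzero, "diagonal": diag,
            "missing": missing, "true zeros": true_zeros}
\end{verbatim}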
The distributions over the three types of zeros and the non-zero entries for male and female players are presented in Table~\ref{tab:sparsity}. We see that there is more missing data in the women's dataset. This is because there has been a small set of dominant male players (e.g., Nadal, Djokovic, Federer) over the past $10$ years but the same is not true for female players.
For the women, this means that the matches in the past ten years were played by a more diverse set of players, resulting in the number of matches between the top $N=20$ players being smaller compared to the top $N=20$ men, even though we have selected the same number of top players.
\subsection{Running of the Algorithm}
\newcommand*{\MinNumber}{0.0}%
\newcommand*{\RowMaxNumber}{1.0}%
\newcommand*{\ColOneMaxNumber}{1.34E-01}%
\newcommand*{\ColTwoMaxNumber}{1.50E-01}%
\newcommand{\ApplyGradient}[1]{%
\pgfmathsetmacro{\PercentColor}{70.0*(#1)/\RowMaxNumber} %
\hspace{-0.33em}\colorbox{red!\PercentColor!white}{#1}
}
\newcommand{\ColOneApplyGradient}[1]{%
\pgfmathsetmacro{\PercentColor}{70.0*(#1)/\ColOneMaxNumber} %
\hspace{-0.33em}\colorbox{orange!\PercentColor!white}{#1}
}
\newcommand{\ColTwoApplyGradient}[1]{%
\pgfmathsetmacro{\PercentColor}{70.0*(#1)/\ColTwoMaxNumber} %
\hspace{-0.33em}\colorbox{orange!\PercentColor!white}{#1}
}
\newcolumntype{R}{>{\collectcell\ApplyGradient}c<{\endcollectcell}}%
\newcolumntype{C}{>{\collectcell\ColOneApplyGradient}c<{\endcollectcell}}%
\newcolumntype{T}{>{\collectcell\ColTwoApplyGradient}c<{\endcollectcell}}%
\begin{table}[t]
\small
\centering
\caption{Learned dictionary matrix $\mathbf{W}$ for the men's dataset}
\label{tab:temp}
\begin{tabular}{|c|*{1}{R}|*{1}{R}||*{1}{C}|*{1}{T}|}
\hline
Tournaments & \multicolumn{2}{c||}{ Row Normalization } & \multicolumn{2}{c|}{Column Normalization} \\
\hline
Australian Open & 5.77E-01 &4.23E-01 & 1.15E-01 & 7.66E-02\\
Indian Wells Masters & 6.52E-01 &3.48E-01 &1.34E-01 &6.50E-02\\
Miami Open & 5.27E-01 &4.73E-01 &4.95E-02 &4.02E-02\\
\cellcolor{gray!40}Monte-Carlo Masters & 1.68E-01 &8.32E-01 &2.24E-02 &1.01E-01\\
\cellcolor{gray!40}Madrid Open & 3.02E-01 &6.98E-01 &6.43E-02 &1.34E-01\\
\cellcolor{gray!40}Italian Open & 0.00E-00 &1.00E-00 &1.82E-104 &1.36E-01\\
\cellcolor{gray!40}French Open & 3.44E-01 &6.56E-01 &8.66E-02 &1.50E-01\\
Wimbledon & 6.43E-01 &3.57E-01 & 6.73E-02 &3.38E-02\\
Canadian Open & 1.00E-00 &0.00E-00 &1.28E-01 &1.78E-152\\
Cincinnati Masters & 5.23E-01 &4.77E-01 &1.13E-01 &9.36E-02\\
US Open & 5.07E-01 &4.93E-01 &4.62E-02 &4.06E-02\\
Shanghai Masters & 7.16E-01 &2.84E-01 &1.13E-01 &4.07E-02\\
Paris Masters & 1.68E-01 &8.32E-01 &1.29E-02 &5.76E-02\\
ATP World Tour Finals & 5.72E-01 &4.28E-01 &4.59E-02 &3.11E-02\\
\hline
\end{tabular}
\end{table}
The number of latent variables is expected to be small and we set $K$ to be~$2$ or~$3$. We only present results for $K=2$ in the main paper; the results corresponding to Tables~\ref{tab:temp} to~\ref{tab:2} for $K=3$ are displayed in Tables S-1 to S-4 in the supplementary material~\cite{xia2019}. We also set $\epsilon=10^{-300}$ which is close to the smallest positive value in the Python environment. The algorithm terminates when the difference of every element of $\mathbf{W}$ and $\mathbf{H}$ between successive iterations is less than $\tau=10^{-6}$. We checked that the $\epsilon$-modified algorithm in Sec.~\ref{sec:num} results in non-decreasing likelihoods. See Fig. S-1 in the supplementary material~\cite{xia2019}.
Since~\eqref{eqn:new_ll_func} is non-convex, the MM algorithm can be trapped in local minima. Hence, we considered $150$ different random initializations for $\mathbf{W}^{(0)}$ and $\mathbf{H}^{(0)}$ and analyzed the result that gave the maximum likelihood among the $150$ trials. Histograms of the negative log-likelihoods are shown in Figs.~\ref{fig:sub}(a) and \ref{fig:sub}(b) for $K=2$ and $K=3$ respectively. We observe that the optimal value of the log-likelihood for $K=3$ is higher than that of $K=2$ since the former model is richer. We also observe that the $\mathbf{W}$'s and $\mathbf{H}$'s produced over the $150$ runs are roughly the same up to permutation of rows and columns, i.e., our solution is {\em stable} and {\em robust} (cf.\ Theorem~\ref{thm:conv} and Sec.~\ref{sec:comp}).
\begin{figure}[t]
\subfloat[$K=2$ \label{fig:sub1}]{
\includegraphics[width=0.475\linewidth]{p1.png}
}
\hfill
\subfloat[$K=3$ \label{fig:sub2}]{
\includegraphics[width=0.475\linewidth]{p2.png}
}
\caption{Histogram of negative log-likelihood in the $150$ trials}
\label{fig:sub}
\end{figure}
\subsection{Results for Men Players}\label{sec:men}
\newcommand*{\PlayerMaxNumber}{1.56E-01}%
\newcommand{\PlayerApplyGradient}[1]{%
\pgfmathsetmacro{\PercentColor}{70.0*(#1)/\PlayerMaxNumber} %
\hspace{-0.33em}\colorbox{blue!\PercentColor!white}{#1}
}
\newcolumntype{V}{>{\collectcell\PlayerApplyGradient}c<{\endcollectcell}}%
\begin{table}[t]
\small
\centering
\caption{Learned transpose $\mathbf{H}^T$ of the coefficient matrix for the men's dataset} \label{tab:continue}
\begin{tabular}{|c|*{2}{V|}c|}
\hline
Players & \multicolumn{2}{c|}{matrix $\mathbf{H}^{T}$} & Total Matches \\
\hline
Novak Djokovic& 1.20E-01& 9.98E-02& 283\\
Rafael Nadal& 2.48E-02& 1.55E-01& 241 \\
Roger Federer& 1.15E-01& 2.34E-02& 229\\
Andy Murray& 7.57E-02& 8.43E-03& 209\\
Tomas Berdych& 0.00E-00& 3.02E-02& 154\\
David Ferrer& 6.26E-40& 3.27E-02& 147\\
Stan Wawrinka& 2.93E-55& 4.08E-02& 141\\
Jo-Wilfried Tsonga& 3.36E-02& 2.71E-03& 121\\
Richard Gasquet& 5.49E-03& 1.41E-02& 102\\
Juan Martin del Potro& 2.90E-02& 1.43E-02& 101\\
Marin Cilic& 2.12E-02& 0.00E-00& 100\\
Fernando Verdasco& 1.36E-02& 8.79E-03& 96\\
Kei Nishikori& 7.07E-03& 2.54E-02& 94\\
Gilles Simon& 1.32E-02& 4.59E-03& 83\\
Milos Raonic& 1.45E-02& 7.25E-03& 78\\
Philipp Kohlschreiber& 2.18E-06& 5.35E-03& 76\\
John Isner& 2.70E-03& 1.43E-02& 78\\
Feliciano Lopez& 1.43E-02& 3.31E-03& 75\\
Gael Monfils& 3.86E-21& 1.33E-02& 70\\
Nicolas Almagro& 6.48E-03& 6.33E-06& 60\\
\hline
\end{tabular}
\end{table}
The learned dictionary matrix $\mathbf{W}$ is shown in Table~\ref{tab:temp}.
In the ``Tournaments'' column, those tournaments whose surface types are known to be clay are highlighted in gray. For ease of visualization, higher values are shaded darker. If the rows of $\mathbf{W}$ are normalized, we observe that for clay tournaments, the value in the second column is always larger than that in the first, and vice versa. The only exception is the Paris Masters.\footnote{This may be attributed to its position in the seasonal calendar. The Paris Masters is the last tournament before the ATP World Tour Finals. Top players often choose to skip this tournament to prepare for the ATP World Tour Finals which is more prestigious. This has led to some surprising results, e.g., David Ferrer, a strong clay player, won the Paris Masters in 2012 (even though the Paris Masters is a hard court indoor tournament).} Since the row sums are equal to $1$, we can interpret the values in the first and second columns of a fixed row as the probabilities that a particular tournament is played on a non-clay or clay surface respectively. If the columns of $\mathbf{W}$ are normalized, it is observed that the tournaments with the highest values in the second column are exactly the four tournaments played on clay.
From $\mathbf{W}$, we learn that surface type---in particular, whether or not a tournament is played on clay---is a germane latent variable that influences the performances of men players.
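For concreteness, the two normalizations used above can be computed as in the following R sketch, with a generic nonnegative matrix standing in for the learned $\mathbf{W}$.
\begin{verbatim}
# Row and column normalization of a nonnegative dictionary matrix W.
W <- matrix(runif(14 * 2), nrow = 14, ncol = 2) # stand-in for the learned W (M = 14, K = 2)

W_row <- W / rowSums(W)                # rows sum to 1: P(surface type | tournament)
W_col <- sweep(W, 2, colSums(W), "/")  # columns sum to 1: distribution over tournaments
stopifnot(all(abs(rowSums(W_row) - 1) < 1e-12),
          all(abs(colSums(W_col) - 1) < 1e-12))
\end{verbatim}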
\newcommand*{\AllMaxNumber}{2.95E-02}%
\newcommand{\AllPlayerApplyGradient}[1]{%
\pgfmathsetmacro{\PercentColor}{70.0*(#1)/\AllMaxNumber} %
\hspace{-0.33em}\colorbox{red!\PercentColor!white}{#1}
}
\newcolumntype{Q}{>{\collectcell\AllPlayerApplyGradient}c<{\endcollectcell}}%
\begin{sidewaystable}
\smallskip
\caption{Learned $\mathbf{\Lambda} = \mathbf{W}\mathbf{H}$ matrix for first $10$ men players} \label{tab:1}
\begin{tabular}{|c||*{10}{Q|}}
\hline
Tournament & \multicolumn{1}{p{1.5cm}|}{\tiny Novak Djokovic} & \multicolumn{1}{p{1.5cm}|}{\tiny Rafael Nadal} & \multicolumn{1}{p{1.5cm}|}{\tiny Roger Federer} & \multicolumn{1}{p{1.5cm}|}{\tiny Andy Murray} & \multicolumn{1}{p{1.5cm}|}{\tiny Tomas Berdych} & \multicolumn{1}{p{1.5cm}|}{\tiny David Ferrer} & \multicolumn{1}{p{1.5cm}|}{\tiny Stan Wawrinka} & \multicolumn{1}{p{1.5cm}|}{\tiny Jo-Wilfried Tsonga} & \multicolumn{1}{p{1.5cm}|}{\tiny Richard Gasquet} & \multicolumn{1}{p{1.5cm}|}{\tiny Juan Martin del Potro}\\
\hline
Australian Open & 2.16E-02 & 1.54E-02 & 1.47E-02 & 9.13E-03 & 2.47E-03 & 2.67E-03 & 3.34E-03 & 3.97E-03 & 1.77E-03 & 4.41E-03 \\
Indian Wells Masters & 2.29E-02 & 1.42E-02 & 1.68E-02 & 1.06E-02 & 2.13E-03 & 2.30E-03 & 2.88E-03 & 4.63E-03 & 1.72E-03 & 4.84E-03 \\
Miami Open & 2.95E-02 & 2.30E-02 & 1.90E-02 & 1.17E-02 & 3.80E-03 & 4.12E-03 & 5.15E-03 & 5.07E-03 & 2.55E-03 & 5.89E-03 \\
\cellcolor{gray!40}Monte-Carlo Masters & 1.19E-02 & 1.53E-02 & 4.46E-03 & 2.27E-03 & 2.90E-03 & 3.14E-03 & 3.92E-03 & 9.12E-04 & 1.46E-03 & 1.94E-03 \\
\cellcolor{gray!40}Madrid Open & 1.38E-02 & 1.51E-02 & 6.63E-03 & 3.75E-03 & 2.75E-03 & 2.97E-03 & 3.72E-03 & 1.57E-03 & 1.50E-03 & 2.45E-03 \\
\cellcolor{gray!40}Italian Open & 1.19E-02 & 1.84E-02 & 2.78E-03 & 1.00E-03 & 3.59E-03 & 3.89E-03 & 4.87E-03 & 3.23E-04 & 1.68E-03 & 1.71E-03 \\
\cellcolor{gray!40}French Open & 1.39E-02 & 1.43E-02 & 7.12E-03 & 4.11E-03 & 2.57E-03 & 2.79E-03 & 3.48E-03 & 1.74E-03 & 1.45E-03 & 2.52E-03 \\
Wimbledon & 2.63E-02 & 1.66E-02 & 1.91E-02 & 1.20E-02 & 2.50E-03 & 2.71E-03 & 3.39E-03 & 5.27E-03 & 2.00E-03 & 5.54E-03 \\
Canadian Open & 1.16E-02 & 2.40E-03 & 1.11E-02 & 7.32E-03 & 0.00E+00 & 1.26E-39 & 2.42E-51 & 3.25E-03 & 5.31E-04 & 2.81E-03 \\
Cincinnati Masters & 1.82E-02 & 1.43E-02 & 1.17E-02 & 7.17E-03 & 2.36E-03 & 2.56E-03 & 3.20E-03 & 3.10E-03 & 1.58E-03 & 3.62E-03 \\
US Open & 1.17E-02 & 9.42E-03 & 7.38E-03 & 4.51E-03 & 1.58E-03 & 1.71E-03 & 2.13E-03 & 1.95E-03 & 1.03E-03 & 2.31E-03 \\
Shanghai Masters & 8.12E-03 & 4.38E-03 & 6.29E-03 & 4.01E-03 & 6.09E-04 & 6.59E-04 & 8.24E-04 & 1.76E-03 & 5.64E-04 & 1.76E-03 \\
Paris Masters & 7.29E-03 & 9.37E-03 & 2.73E-03 & 1.39E-03 & 1.77E-03 & 1.92E-03 & 2.40E-03 & 5.58E-04 & 8.94E-04 & 1.19E-03 \\
ATP World Tour Finals & 1.13E-02 & 8.13E-03 & 7.63E-03 & 4.74E-03 & 1.31E-03 & 1.41E-03 & 1.77E-03 & 2.06E-03 & 9.29E-04 & 2.30E-03 \\
\hline
\end{tabular}
\bigskip\bigskip
\caption{Learned $\mathbf{\Lambda} = \mathbf{W}\mathbf{H}$ matrix for last $10$ men players} \label{tab:2}
\begin{tabular}{|c||*{10}{Q|}}
\hline
Tournament & \multicolumn{1}{p{1.5cm}|}{\tiny Marin Cilic} & \multicolumn{1}{p{1.5cm}|}{\tiny Fernando Verdasco} & \multicolumn{1}{p{1.5cm}|}{\tiny Kei Nishikori}& \multicolumn{1}{p{1.5cm}|}{\tiny Gilles Simon} & \multicolumn{1}{p{1.5cm}|}{\tiny Milos Raonic} & \multicolumn{1}{p{1.5cm}|}{\tiny Philipp Kohlschreiber} & \multicolumn{1}{p{1.5cm}|}{\tiny John Isner} & \multicolumn{1}{p{1.5cm}|}{\tiny Feliciano Lopez} & \multicolumn{1}{p{1.5cm}|}{\tiny Gael Monfils} & \multicolumn{1}{p{1.5cm}|}{\tiny Nicolas Almagro}\\
\hline
Australian Open & 2.36E-03 & 2.24E-03 & 2.87E-03 & 1.84E-03 & 2.21E-03 & 4.38E-04 & 1.47E-03 & 1.86E-03 & 1.09E-03 & 7.23E-04 \\
Indian Wells Masters & 2.79E-03 & 2.42E-03 & 2.72E-03 & 2.06E-03 & 2.42E-03 & 3.77E-04 & 1.37E-03 & 2.12E-03 & 9.39E-04 & 8.56E-04 \\
Miami Open & 2.98E-03 & 3.02E-03 & 4.20E-03 & 2.43E-03 & 2.95E-03 & 6.75E-04 & 2.18E-03 & 2.43E-03 & 1.68E-03 & 9.12E-04 \\
\cellcolor{gray!40}Monte-Carlo Masters & 4.10E-04 & 1.11E-03 & 2.58E-03 & 6.96E-04 & 9.77E-04 & 5.14E-04 & 1.43E-03 & 5.95E-04 & 1.28E-03 & 1.26E-04 \\
\cellcolor{gray!40}Madrid Open & 8.34E-04 & 1.34E-03 & 2.59E-03 & 9.37E-04 & 1.23E-03 & 4.87E-04 & 1.41E-03 & 8.64E-04 & 1.21E-03 & 2.56E-04 \\
\cellcolor{gray!40}Italian Open & 0.00E+00 & 1.05E-03 & 3.03E-03 & 5.47E-04 & 8.63E-04 & 6.38E-04 & 1.71E-03 & 3.95E-04 & 1.59E-03 & 7.68E-07 \\
\cellcolor{gray!40}French Open & 9.48E-04 & 1.36E-03 & 2.49E-03 & 9.82E-04 & 1.27E-03 & 4.57E-04 & 1.34E-03 & 9.22E-04 & 1.14E-03 & 2.91E-04 \\
Wimbledon & 3.17E-03 & 2.77E-03 & 3.17E-03 & 2.36E-03 & 2.77E-03 & 4.45E-04 & 1.59E-03 & 2.42E-03 & 1.11E-03 & 9.72E-04 \\
Canadian Open & 2.05E-03 & 1.32E-03 & 6.84E-04 & 1.27E-03 & 1.40E-03 & 2.26E-07 & 2.62E-04 & 1.38E-03 & 2.46E-19 & 6.27E-04 \\
Cincinnati Masters & 1.82E-03 & 1.86E-03 & 2.60E-03 & 1.49E-03 & 1.81E-03 & 4.20E-04 & 1.35E-03 & 1.49E-03 & 1.04E-03 & 5.58E-04 \\
US Open & 1.14E-03 & 1.19E-03 & 1.71E-03 & 9.49E-04 & 1.16E-03 & 2.80E-04 & 8.94E-04 & 9.42E-04 & 6.97E-04 & 3.49E-04 \\
Shanghai Masters & 1.08E-03 & 8.69E-04 & 8.72E-04 & 7.62E-04 & 8.82E-04 & 1.08E-04 & 4.26E-04 & 7.92E-04 & 2.69E-04 & 3.29E-04 \\
Paris Masters & 2.51E-04 & 6.78E-04 & 1.58E-03 & 4.26E-04 & 5.97E-04 & 3.14E-04 & 8.72E-04 & 3.64E-04 & 7.82E-04 & 7.73E-05 \\
ATP World Tour Finals & 1.22E-03 & 1.17E-03 & 1.51E-03 & 9.61E-04 & 1.15E-03 & 2.32E-04 & 7.76E-04 & 9.70E-04 & 5.77E-04 & 3.75E-04 \\
\hline
\end{tabular}
\end{sidewaystable}
Table~\ref{tab:continue} displays the transpose of $\mathbf{H}$, whose elements sum to one. Thus, if column $k\in\{1,2\}$ represents the surface type, we can treat $h_{ki}$ as the skill of player $i$ conditioned on him playing on surface type $k$. We may regard the first and second columns of $\mathbf{H}^T$ as the skill levels of players on non-clay and clay surfaces, respectively. We observe that Nadal, nicknamed the ``King of Clay'', is the best player on clay among the $N=20$ players, and as an individual, he is also much more skilful on clay compared to non-clay surfaces. Djokovic, the first man in the ``Open era'' to hold all four Grand Slams on three different surfaces (hard court, clay and grass) at the same time (from Wimbledon 2015 to the French Open 2016, a feat also known as the Nole Slam), is a more balanced top player, as his skill levels are high in both columns of $\mathbf{H}^T$.
Federer has won the most titles on grass and, as expected, his skill level in the first column is indeed much higher than that in the second. As for Murray, the $\mathbf{H}^T$ matrix also reflects his relative weakness on clay. Wawrinka, a player who is known to favor clay, has a skill level in the second column that is much higher than that in the first. The last column of Table~\ref{tab:continue} lists the total number of matches that each player participated in (within our dataset). We verified that the skill levels in $\mathbf{H}^T$ for each player are not strongly correlated with the number of matches considered in the dataset. For example, although Berdych appears in more matches than Ferrer, his scores are not higher than Ferrer's. Thus our algorithm and conclusions are not skewed towards the availability of data.
The learned skill matrix $\mathbf{\Lambda} = \mathbf{W}\mathbf{H}$, computed with column normalization of $\mathbf{W}$, is presented in Tables~\ref{tab:1} and~\ref{tab:2}. As mentioned in Sec.~\ref{PD}, $[\mathbf{\Lambda}]_{mi}$ denotes the skill level of player $i$ in tournament $m$. We observe that Nadal's skill levels are higher than Djokovic's only for the French Open, Madrid Open, Monte-Carlo Masters, Paris Masters and Italian Open, which, with the exception of the Paris Masters, are the tournaments played on clay. As for Federer, his skill level is highest for Wimbledon, which happens to be the only tournament on grass and where he is known to hold the best record of the ``Open era''. Furthermore, if we consider Wawrinka, the five tournaments in which his skill levels are the highest include the four clay tournaments. These observations again show that our model has learned interesting latent variables from $\mathbf{W}$. It has also learned players' skills on different types of surfaces and tournaments from $\mathbf{H}$ and $\mathbf{\Lambda}$, respectively.
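The quantities discussed in this subsection are assembled from the learned factors as in the following R sketch; generic matrices are used as stand-ins for $\mathbf{W}$ and $\mathbf{H}$, and the column normalization of $\mathbf{W}$ follows the convention above.
\begin{verbatim}
# Skill matrix Lambda = W H and per-player summaries.
M <- 14; K <- 2; N <- 20                         # tournaments, latent surfaces, players
W <- matrix(runif(M * K), M, K)
H <- matrix(runif(K * N), K, N); H <- H / sum(H) # entries of H sum to one

W_col   <- sweep(W, 2, colSums(W), "/")          # column-normalized dictionary
Lambda  <- W_col %*% H                           # [Lambda]_mi: skill of player i in tournament m
overall <- colSums(H)                            # row sums of t(H): overall skill per player
ranking <- order(overall, decreasing = TRUE)     # ranking[1] indexes the highest-rated player
\end{verbatim}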
\subsection{Results for Women Players}
\newcommand*{\WRowMaxNumber}{1.0}%
\newcommand*{\WColOneMaxNumber}{1.41E-01}%
\newcommand*{\WColTwoMaxNumber}{1.57E-01}%
\newcommand{\WApplyGradient}[1]{%
\pgfmathsetmacro{\PercentColor}{70.0*(#1)/\WRowMaxNumber} %
\hspace{-0.33em}\colorbox{red!\PercentColor!white}{#1}
}
\newcommand{\WColOneApplyGradient}[1]{%
\pgfmathsetmacro{\PercentColor}{70.0*(#1)/\WColOneMaxNumber} %
\hspace{-0.33em}\colorbox{orange!\PercentColor!white}{#1}
}
\newcommand{\WColTwoApplyGradient}[1]{%
\pgfmathsetmacro{\PercentColor}{70.0*(#1)/\WColTwoMaxNumber} %
\hspace{-0.33em}\colorbox{orange!\PercentColor!white}{#1}
}
\newcolumntype{W}{>{\collectcell\WApplyGradient}c<{\endcollectcell}}%
\newcolumntype{X}{>{\collectcell\WColOneApplyGradient}c<{\endcollectcell}}%
\newcolumntype{U}{>{\collectcell\WColTwoApplyGradient}c<{\endcollectcell}}%
\begin{table}[t]
\small
\centering
\caption{Learned dictionary matrix $\mathbf{W}$ for the women's dataset}
\label{tab:resultwwomen}
\begin{tabular}{|c|*{1}{W}|*{1}{W}||*{1}{X}|*{1}{U}|}
\hline
Tournaments & \multicolumn{2}{c||}{ {Row Normalization}} & \multicolumn{2}{c|}{{Column Normalization}} \\
\hline
Australian Open & 1.00E-00 &3.74E-26& 1.28E-01 &3.58E-23\\
Qatar Open & 6.05E-01 &3.95E-01 &1.05E-01 &4.94E-02\\
Dubai Tennis Championships & 1.00E-00& 1.42E-43 &9.47E-02 &3.96E-39\\
Indian Wells Open &5.64E-01 &4.36E-01& 8.12E-02 &4.51E-02 \\
Miami Open & 5.86E-01 &4.14E-01 &7.47E-02 &3.79E-02\\
\cellcolor{gray!40}Madrid Open & 5.02E-01 &4.98E-01 &6.02E-02 &4.29E-02\\
\cellcolor{gray!40}Italian Open & 3.61E-01& 6.39E-01 &5.22E-02 &6.63E-02\\
\cellcolor{gray!40}French Open & 1.84E-01 &8.16E-01 &2.85E-02 &9.04E-02\\
Wimbledon & 1.86E-01 &8.14E-01 &3.93E-02 &1.24E-01\\
Canadian Open & 4.59E-01 &5.41E-01 &5.81E-02 &4.92E-02\\
Cincinnati Open & 9.70E-132 &1.00E-00 &5.20E-123 &1.36E-01\\
US Open & 6.12E-01 &3.88E-01 &8.04E-02 &3.66E-02\\
Pan Pacific Open & 1.72E-43 &1.00E-00 &7.82E-33 &1.57E-01\\
Wuhan Open & 1.00E-00 &6.87E-67&1.41E-01 &1.60E-61\\
China Open & 2.26E-01 &7.74E-01 &4.67E-02 &1.15E-01\\
WTA Finals & 1.17E-01& 8.83E-01 &9.30E-03 &5.03E-02\\
\hline
\end{tabular}
\end{table}
\newcommand*{\WPlayerMaxNumber}{1.44E-01}%
\newcommand{\WPlayerApplyGradient}[1]{%
\pgfmathsetmacro{\PercentColor}{70.0*(#1)/\WPlayerMaxNumber} %
\hspace{-0.33em}\colorbox{blue!\PercentColor!white}{#1}
}
\newcolumntype{Y}{>{\collectcell\WPlayerApplyGradient}c<{\endcollectcell}}%
\begin{table}[t]
\small
\centering
\caption{Learned transpose $\mathbf{H}^T$ of coefficient matrix for the women's dataset}
\label{tab:resulthwomen}
\begin{tabular}{|c|*{2}{Y|}c|}
\hline
Players & \multicolumn{2}{c|}{matrix $\mathbf{H}^{T}$} & Total Matches \\
\hline
Serena Williams& 5.93E-02& 1.44E-01& 130\\
Agnieszka Radwanska& 2.39E-02& 2.15E-02& 126\\
Victoria Azarenka& 7.04E-02& 1.47E-02& 121\\
Caroline Wozniacki& 3.03E-02& 2.43E-02& 115\\
Maria Sharapova& 8.38E-03 &8.05E-02& 112\\
Simona Halep& 1.50E-02 &3.12E-02 &107\\
Petra Kvitova& 2.39E-02 &3.42E-02& 99\\
Angelique Kerber& 6.81E-03& 3.02E-02& 96\\
Samantha Stosur& 4.15E-04& 3.76E-02& 95\\
Ana Ivanovic& 9.55E-03 &2.60E-02& 85\\
Jelena Jankovic& 1.17E-03& 2.14E-02& 79\\
Anastasia Pavlyuchenkova& 6.91E-03 &1.33E-02& 79\\
Carla Suarez Navarro& 3.51E-02& 5.19E-06& 75\\
Dominika Cibulkova& 2.97E-02& 1.04E-02& 74\\
Lucie Safarova& 0.00E+00 &3.16E-02 &69\\
Elina Svitolina& 5.03E-03 &1.99E-02& 59\\
Sara Errani& 7.99E-04 &2.69E-02 &58\\
Karolina Pliskova& 9.92E-03 &2.36E-02& 57\\
Roberta Vinci& 4.14E-02 &0.00E+00 &53\\
Marion Bartoli& 1.45E-02 &1.68E-02 &39\\
\hline
\end{tabular}
\end{table}
We performed the same experiment for the women players, except that we now consider $M=16$ tournaments. The factor matrices $\mathbf{W}$ and $\mathbf{H}$ (the latter in transposed form) are presented in Tables~\ref{tab:resultwwomen} and~\ref{tab:resulthwomen}, respectively.
It can be seen from $\mathbf{W}$ that, unlike for the men players, the surface type is not a pertinent latent variable, since there is no correlation between the values in the columns and the surface type. We suspect that the skill levels of top women players are not as heavily influenced by the surface type compared to the men. However, the tournaments in Table \ref{tab:resultwwomen} are listed in chronological order, and we notice a slight correlation between the values in the columns and the time of the tournament (first half or second half of the year). Any latent variable would naturally be less pronounced, due to the sparser dataset for women players (cf.\ Table~\ref{tab:sparsity}). A somewhat interesting observation is that the values in $\mathbf{W}$ obtained using the row normalization and the column normalization methods are similar. This indicates that the latent variables, if any, learned by the two methods are the same, which is a reassuring conclusion.
By computing the sums of the skill levels for each female player (i.e., the row sums of $\mathbf{H}^T$), we see that S.~Williams is the most skilful among the $20$ players over the past $10$ years. She is followed by Sharapova and Azarenka. As a matter of fact, S.~Williams and Azarenka have been year-end number one $4$ times and once, respectively, over the period $2008$ to~$2017$. Even though Sharapova was never at the top at the end of any season (she was, however, ranked number one several times, most recently in 2012), she was consistent enough over this period that the model and the longitudinal dataset allow us to conclude that she is ranked second. In fact, she is known for her unusual longevity at the top of the women's game. She started her tennis career very young and won her first Grand Slam at the age of $17$. Finally, the model groups S.~Williams, Sharapova, and Stosur together, while Azarenka, Navarro, and Vinci are in another group. We believe that there may be some similarities between players who are clustered in the same group. The $\mathbf{\Lambda}$ matrix for women players can be found in Tables S-5 and S-6 in the supplementary material~\cite{xia2019}.
\newcommand*{\BPlayerMaxNumber}{1.35E-01}%
\newcommand*{\BOPlayerMaxNumber}{2.14E-01}%
\newcommand{\BPlayerApplyGradient}[1]{%
\pgfmathsetmacro{\PercentColor}{70.0*(#1)/\BPlayerMaxNumber} %
\hspace{-0.33em}\colorbox{red!\PercentColor!white}{#1}
}
\newcolumntype{D}{>{\collectcell\BPlayerApplyGradient}c<{\endcollectcell}}%
\newcommand{\BOPlayerApplyGradient}[1]{%
\pgfmathsetmacro{\PercentColor}{70.0*(#1)/\BOPlayerMaxNumber} %
\hspace{-0.33em}\colorbox{orange!\PercentColor!white}{#1}
}
\newcolumntype{I}{>{\collectcell\BOPlayerApplyGradient}c<{\endcollectcell}}%
\begin{table}[t]
\small
\centering
\caption{Learned $\bm{\lambda}$'s for the BTL ($K=1$) and mixture-BTL ($K=2$) models}
\label{tab:mixturebtl}
\begin{tabular}{|c|*{1}{I}|*{1}{D}|*{1}{D}|}
\hline
\textbf{Players} & \multicolumn{1}{c|}{$K=1$} & \multicolumn{2}{c|}{$K=2$} \\
\hline
Novak Djokovic & 2.14E-01 & 7.14E-02 & 1.33E-01 \\
Rafael Nadal & 1.79E-01 & 1.00E-01 & 4.62E-02 \\
Roger Federer & 1.31E-01 & 1.35E-01 & 1.33E-02 \\
Andy Murray & 7.79E-02 & 6.82E-02 & 4.36E-03 \\
Tomas Berdych & 3.09E-02 & 5.26E-02 & 2.85E-04 \\
David Ferrer & 3.72E-02 & 1.79E-02 & 4.28E-03 \\
Stan Wawrinka & 4.32E-02 & 2.49E-02 & 4.10E-03 \\
Jo-Wilfried Tsonga & 2.98E-02 & 3.12E-12 & 1.08E-01 \\
Richard Gasquet & 2.34E-02 & 1.67E-03 & 2.97E-03 \\
Juan Martin del Potro & 4.75E-02 & 8.54E-05 & 4.85E-02 \\
Marin Cilic & 1.86E-02 & 3.37E-05 & 2.35E-03 \\
Fernando Verdasco & 2.24E-02 & 5.78E-02 & 8.00E-09 \\
Kei Nishikori & 3.43E-02 & 5.37E-08 & 3.58E-02 \\
Gilles Simon & 1.90E-02 & 7.65E-05 & 5.16E-03 \\
Milos Raonic & 2.33E-02 & 2.61E-04 & 6.07E-03 \\
Philipp Kohlschreiber & 7.12E-03 & 1.78E-25 & 3.55E-03 \\
John Isner & 1.84E-02 & 2.99E-02 & 1.75E-08 \\
Feliciano Lopez & 1.89E-02 & 1.35E-02 & 3.10E-04 \\
Gael Monfils & 1.66E-02 & 5.38E-10 & 6.53E-03 \\
Nicolas Almagro & 7.24E-03 & 1.27E-15 & 1.33E-03 \\
\hline
\textbf{Mixture weights} & \multicolumn{1}{c|}{1.00E+00} & \multicolumn{1}{c|}{4.72E-01} & \multicolumn{1}{c|}{5.28E-01} \\
\hline
\textbf{Log-likelihoods} & \multicolumn{1}{c|}{-682.13} & \multicolumn{2}{c|}{-657.56} \\
\hline
\end{tabular}
\end{table}
\subsection{Comparison to BTL and mixture-BTL} \label{sec:comp}
Finally, we compared our approach to the BTL and mixture-BTL~\cite{Oh2014,NiharSimpleRobust} approaches for the male players. To learn these models, we aggregated our dataset $\{ b_{ij}^{(m)}\}$ into a single matrix $\{b_{ij} = \sum_m b_{ij}^{ (m) }\}$. For the BTL model, we maximized the likelihood to find the optimal parameters. For the mixture-BTL model with $K=2$ components, we ran an Expectation-Maximization (EM) algorithm~\cite{Demp} to find approximately-optimal values of the parameters and the mixture weights. Note that the BTL model corresponds to a mixture-BTL model with $K=1$.
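The aggregation step and a maximum-likelihood fit of the plain BTL model can be sketched in R as follows; the pairwise-count array is synthetic, and \texttt{optim} is used here only for illustration rather than the EM routine required for the mixture model.
\begin{verbatim}
# Aggregate per-tournament counts b[i,j,m] into b[i,j] and fit BTL by maximum likelihood.
N <- 20; M <- 14
set.seed(1)
b_arr <- array(rpois(N * N * M, 0.2), dim = c(N, N, M)) # synthetic: i beats j in tournament m
b <- apply(b_arr, c(1, 2), sum)                         # b[i,j] = sum over m of b[i,j,m]
diag(b) <- 0

btl_nll <- function(theta) {       # negative log-likelihood with skills lambda = exp(theta)
  lam <- exp(theta); nll <- 0
  for (i in 1:N) for (j in 1:N)
    if (i != j && b[i, j] > 0) nll <- nll - b[i, j] * log(lam[i] / (lam[i] + lam[j]))
  nll
}
fit <- optim(rep(0, N), btl_nll, method = "BFGS")
lambda_hat <- exp(fit$par) / sum(exp(fit$par))          # normalized BTL skill estimates
\end{verbatim}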
The learned skill vectors are shown in Table~\ref{tab:mixturebtl}. Since EM is susceptible to being trapped in local optima and is sensitive to initialization, we ran it $100$ times and reported the solution with likelihood that is close to the highest one.\footnote{The solution with the highest likelihood is shown in Trial 2 of Table S-7 but it appears that the solution there is degenerate.} The solution for mixture-BTL is not stable; other solutions with likelihoods that are very close to the maximum one result in significantly different parameter values. Two other solutions with similar likelihoods are shown in Table S-7 in the supplementary material~\cite{xia2019}. As can be seen, some of the solutions are far from representative of the true skill levels of the players (e.g., in Trial 2 of Table S-7, Tsonga has a very high score in the first column and the skills of the other players are all very small in comparison) and they are vastly different from one another. This is in stark contrast to our BTL-NMF model and algorithm in which Theorem~\ref{thm:conv} states that the limit of $\{ (\mathbf{W}^{(l)}, \mathbf{H}^{(l)})\}_{l=1}^\infty$ is a stationary point of~\eqref{eqn:min}. We numerically verified that the BTL-NMF solution is stable, i.e., different runs yield $(\mathbf{W},\mathbf{H})$ pairs that are approximately equal up to permutation of rows and columns.\footnote{Note, however, that stationary points are not necessarily equivalent up to permutation or rescaling.} As seen from Table~\ref{tab:mixturebtl}, for mixture-BTL, neither tournament-specific information nor semantic meanings of latent variables can be gleaned from the parameter vectors. The results of BTL are reasonable and expected but also lack tournament-specific information.
\section{Conclusion and Future Work}\label{sec:con}
We proposed a ranking model combining the BTL model with the NMF framework as in Fig.~\ref{fig:nmf_btl}. We derived MM-based algorithms to maximize the likelihood of the data. To ensure numerical stability, we ``regularized'' the MM algorithm and proved that desirable properties, such as monotonicity of the objective and convergence of the iterates to stationary points, hold. We drew interesting conclusions based on longitudinal datasets for top male and female players. A latent variable in the form of the court surface was also uncovered in a principled manner. We compared our approach to the mixture-BTL approach~\cite{Oh2014,NiharSimpleRobust} and concluded that the former is advantageous in various aspects (e.g., stability of the solution, interpretability of latent variables).
In the future, we plan to run our algorithm on a larger longitudinal dataset consisting of pairwise comparison data from more years (e.g., the past $50$ years) to learn, for example, who is the ``best-of-all-time'' male or female player. In addition, it would be desirable to understand if there is a natural Bayesian interpretation~\cite{tan2013automatic,CaronDoucet} of the $\epsilon$-modified objective function in~\eqref{eqn:new_ll_func}.
\paragraph{Acknowledgements} This work was supported by a Singapore Ministry of Education Tier 2 grant (R-263-000-C83-112), a Singapore National Research Foundation (NRF) Fellowship (R-263-000-D02-281), and by the European Research Council (ERC FACTORY-CoG-6681839).
\bibliographystyle{unsrt}
\section*{NOTICE}
This is the author's version of a work that was accepted for publication in the \textit{Italian Journal of Applied Statistics}. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication.
\section{Introduction} \label{sec:intro}
Traditionally, algorithms for ranking sports teams and predicting sporting outcomes utilize either the observed margin of victory (MOV) \citep{henderson75} or the binary win/loss information \citep{mease,football}, along with potential covariates such as game location (home, away, neutral).
In contrast, we jointly model either MOV or win/loss along with a separate game-level response, which is shown to improve predictions under certain model specifications. We present a set of non-nested generalized linear mixed models to jointly model the MOV or win/loss along with a game outcome, such as penalty yards, number of penalties, shots on goal, or turnover margin.
Multiple response distributions are necessary to model the variety of sporting outcomes, and these distributions are available in the presented model and in the accompanying R package. For example, the normal distribution is most suitable for modeling the score in a high-scoring sport such as basketball, whereas a Poisson model may be more appropriate for scores in hockey or soccer (football).
In this paper, we explore the benefits of modeling these responses jointly, assuming that they are conditionally independent given correlated, response-specific random team effects. For some responses, the joint models produce significantly improved median log-loss and absolute residuals for cross-validation predictions. Furthermore, the joint model provides the ability to test for significant relationships between high-level hierarchical effects (e.g. random team effects), since significant predictors for outcomes at the game level may not be important at the team level. We have published our R \citep{R} code for these models on CRAN (\url{http://cran.r-project.org/}) via the package mvglmmRank: the appendix provides a demonstration of the package. The data used to produce the results in this paper are made available at \burl{github.com/10-01/NCAA-Football-Analytics}.
Previous works have also considered the joint modeling of team ratings and outcome prediction.
\citet{annis} present a two-stage, hierarchical ``hybrid ranking system'' that can be considered an average of win/loss and point-scoring models, focusing on the prediction of NCAA football rankings.
In stage 1, the win/loss indicator is modeled. In stage 2, the scores are predicted conditioned on the win/loss outcome in stage 1. Each team is modeled with an offensive (fixed) effect and a defensive (fixed) effect. Model estimation relies on generalized estimating equations (GEE). The win/loss indicators are modeled by comparing the ``merit'' of each team, which is defined as the sum of the offensive and defensive ratings. \citet{baio} use a similar point-scoring model (in a Bayesian framework), fitting separate ``attack'' and ``defense'' values for each team.
Several other papers have considered modeling separate offense and defense effects \citep{karlis,baio,ruiz}.
While \citet{annis} use the sum of a team's offensive and defensive effects to represent their winning propensity in a logistic regression, we build upon the Poisson-binary model proposed by \citet{karlcgs} and fit a separate win-propensity random effect for each team. This effect is correlated with, rather than determined by, the offensive and defensive effects from the point-scoring (yards-recorded, etc) model. These three team-level effects are modeled as random effects in a multivariate generalized linear mixed model (GLMM). This allows us to measure and compare the relationships between offensive/defensive ability (with respect to a variety of responses) and winning propensity. The binary win/loss indicators are jointly modeled with the team scores or other responses by allowing the random team effects in the models for each response to be correlated. Assuming a normal distribution for the team effects imposes a form of regularization, allowing the binary model to be fit in the presence of undefeated or winless teams \citep{football}.
When a binary win/loss indicator is jointly modeled, an underlying latent trait or ``win-propensity'' rating is assigned to each team.
These ratings are fit simultaneously with two game-level response-propensity ratings: offensive and defensive.
The model presented allows for potential correlation between all three ratings by fitting them as random effects with a multivariate normal distribution.
To illustrate, this paper examines how the joint modeling of win/loss indicators and four different game-level responses in American football (yards per play, sacks, fumbles, and score; see section \ref{sec:joint})
may lead to an improvement in the cross-validation predictions of both responses relative to the traditional models.
The results indicate that a higher correlation between the win/loss response and the game-level response leads to improved cross-validated predictions.
Section~\ref{sec:model} describes a set of multivariate generalized linear mixed models for predicting game outcomes, and section~\ref{sec:computation} describes the computational approach used by the mvglmmRank package. Section~\ref{sec:ncaaf} compares cross-validation prediction accuracy of several game-level responses across three American college football seasons. Section~\ref{sec:ncaapred} evaluates the performance of the joint model across nineteen college basketball (NCAA) tournaments. Appendix~\ref{sec:bb} provides a demonstration of the implementation of the joint model in the mvglmmRank R package.
\section{The Model}\label{sec:model}
When modeling $n$ games, let $r_i$ be a binary indicator for the outcome of the $i$-th game for $i=1,\ldots,n$, taking the value 1 with a home team ``win'' and 0 with a visiting team ``win'', where ``win'' can be defined to be outscoring the opponent, receiving fewer penalties than the opponent, etc. A neutral-site indicator is used to indicate that the home team was designated arbitrarily. The home-win indicators are concatenated into the vector $\boldsymbol{r}=(r_1,\ldots,r_n)^{\prime}$. We will use $y_{ih}$ to denote the score (or penalties, yards-per-play, etc.) of the home team in the $i$-th game, and $y_{ia}$ to denote the score of the away team, letting $\boldsymbol{y}_i=(y_{ih},y_{ia})^{\prime}$. The scores are concatenated into the vector $\boldsymbol{y}=(\boldsymbol{y}_1^{\prime},\ldots,\boldsymbol{y}_n^{\prime})^{\prime}$. We will assume separate parametric models for $\boldsymbol{y}$ and $\boldsymbol{r}$; however, these models will be related by allowing correlation between the random team effects present in each model.
Suppose we wish to model the outcome of a game between the home team, H, and the away team, A. We assume that each team may be described by three potentially related characteristics: their offensive rating ($\textrm{b}^o$), defensive rating ($\textrm{b}^d$), and a rating ($\textrm{b}^w$) that quantifies their winning propensity. Heuristically, we want to find the ratings that satisfy
\begin{align*}
\te{E}[y_{ih}]=&f_1(\textrm{b}^o_h-\textrm{b}^d_a)\\
\te{E}[y_{ia}]=&f_1(\textrm{b}^o_a-\textrm{b}^d_h)\\
P(r_i=1)=&f_2\left(\textrm{b}^w_h-\textrm{b}^w_a\right)
\end{align*}
for some functions $f_1$ and $f_2$. To do this, we will specify the functions $f_1$ and $f_2$, the assumed distribution of $\boldsymbol{y}$ conditional on the offensive and defensive ratings, the assumed distribution of $\boldsymbol{r}$ conditional on the win propensity ratings, and the assumed distribution of (and relationship between) the ratings. Due to the binary nature of $r_i$, $f_2$ will necessarily be a nonlinear function. The offense and defense ratings for each team are calculated while controlling for the quality of opponent, implicitly considering strength of schedule as in \citet{harville77}, \citet{annis}, and \citet{football}. By contrast, raw offensive and defensive totals inflate the ranking of teams that play a set of easy opponents and penalize those that play a difficult schedule.
We model the offensive, defensive, and win propensity ratings of the $j$-th team for $j=1,\ldots,p$ with random effects $\textrm{b}_j^o$, $\textrm{b}_j^d$, and $\textrm{b}_j^w$ assuming $\textbf{\textrm{b}}_j=(\textrm{b}_j^o,\textrm{b}_j^d,\textrm{b}_j^w)^{\prime}\sim N_3(\bds{0},\boldsymbol{G}^*)$, where $\boldsymbol{G}^*$ is an unstructured covariance matrix and $p$ represents the number of teams being ranked. In addition, $\textbf{\textrm{b}}\sim N(\bds{0},\boldsymbol{G})$, where $\textbf{\textrm{b}}=(\textbf{\textrm{b}}_1^{\prime},\ldots,\textbf{\textrm{b}}_p^{\prime})^{\prime}$ and $\boldsymbol{G}$ is block diagonal with $p$ copies of $\boldsymbol{G}^*$. We allow $\boldsymbol{y}|\textbf{\textrm{b}}$ to follow either a normal or a Poisson distribution. While we use $\boldsymbol{y}|\textbf{\textrm{b}}$ as a notational convenience, we do not condition $\boldsymbol{y}$ on $\textbf{\textrm{b}}^w$. Likewise, we will use $\boldsymbol{r}|\textbf{\textrm{b}}$ when we may more explicitly write $\boldsymbol{r}|\textbf{\textrm{b}}^w$.
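As a small illustration, the covariance matrix $\boldsymbol{G}$ of the stacked team effects and a simulated draw of $\textbf{\textrm{b}}$ can be produced in R as follows; the matrix $\boldsymbol{G}^*$ shown is an arbitrary, hypothetical value, and the within-team ordering is (offense, defense, win propensity).
\begin{verbatim}
# Build G = block diag(G*,...,G*) for p teams and simulate b = (b_1', ..., b_p')'.
p <- 5                                    # number of teams (small, for illustration)
Gstar <- matrix(c(1.0, 0.2, 0.5,
                  0.2, 0.8, 0.4,
                  0.5, 0.4, 1.2), 3, 3)   # arbitrary unstructured 3 x 3 covariance
G <- diag(p) %x% Gstar                    # Kronecker product: 3p x 3p block-diagonal G
b <- as.vector(t(MASS::mvrnorm(p, mu = rep(0, 3), Sigma = Gstar)))
cov2cor(Gstar)                            # correlations among offense, defense, win propensity
\end{verbatim}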
\subsection{Bivariate Normal Outcomes}\label{sec:bivariate}
We may assume a bivariate normal distribution for the outcomes (e.g. scores) of the $i$-th game $\boldsymbol{y}_i|\textbf{\textrm{b}}\sim N_2(\boldsymbol{X}_i\boldsymbol{\beta}+\boldsymbol{Z}_i\textbf{\textrm{b}},\boldsymbol{R}^*)$. In the error covariance matrix, $\boldsymbol{R}^*$, we model the potential intra-game correlation between the responses of opposing teams: the (1,1) term models the conditional variance of the home team responses, the (2,2) term models the conditional variance of the away team responses, and the (1,2)=(2,1) term models the conditional covariance of the home and away team responses. $\boldsymbol{y}|\textbf{\textrm{b}}\sim N_{2n}(\boldsymbol{X}\boldsymbol{\beta}+\boldsymbol{Z}\textbf{\textrm{b}},\boldsymbol{R})$, where $\boldsymbol{R}$ is block diagonal with $n$ copies of $\boldsymbol{R}^*$, and $\boldsymbol{X}$ and $\boldsymbol{Z}$ are the concatenation of the $\boldsymbol{X}_i$ and $\boldsymbol{Z}_i$, which are defined below. $\boldsymbol{\beta}$ may be used to model any fixed effect covariates, though we only consider a parsimonious model with a mean and a home field effect, that is, $\boldsymbol{\beta}=(\beta_h,\beta_a,\beta_n)^{\prime}$ where $\beta_h$ is the mean home response, $\beta_a$ is the mean away response, and $\beta_n$ is the mean neutral site response. The design matrix $\boldsymbol{X}_i$ is a $2\times 3 $ matrix with an indicator for the ``home'' team in the first row and for the ``away'' team in the second row. If the home and away teams were designated arbitrarily for a neutral site game, then \[\boldsymbol{X}_i=\left(\begin{array}{ccc} 0&0&1\\0&0&1 \end{array}\right).\] The error terms of the arbitrarily designated teams are still modeled with the corresponding ``home'' and ``away'' components of $\boldsymbol{R}^*$, but the relative infrequency of neutral site games in most applications minimizes any impact this may have. Even if every game in the data set is a neutral site game, $\widehat{\boldsymbol{R}^*}$ will still be unbiased (since the selection of the ``home'' team is randomized), though inefficient (since two parameters are being used to estimate the same quantity, halving the sample size used to estimate each parameter). In such situations, the two diagonal components of $\boldsymbol{R}^*$ should be constrained to be equal.
$\boldsymbol{Z}_i$ is a $2 \times 3p$ matrix that indicates which teams competed in game $i$. If team $k$ visits team $l$ in game $i$, then in its first row, $\boldsymbol{Z}_i$ contains a 1 in the position corresponding to the position of the offensive effect of team $l$, $\textrm{b}^o_l$, in $\textbf{\textrm{b}}$, and a $-1$ in the position corresponding to the position of the defensive effect, $\textrm{b}^d_k$, of team $k$. In its second row, $\boldsymbol{Z}_i$ contains a 1 in the position corresponding to the position of the offensive effect of team $k$, $\textrm{b}^o_k$, in $\textbf{\textrm{b}}$, and a $-1$ in the position corresponding to the position of the defensive effect, $\textrm{b}^d_l$, of team $l$. This is a multiple membership design \citep{browne01} since each game belongs to multiple levels of the same random effect. As a result, $\boldsymbol{Z}$ does not have a patterned structure and may not be factored for more efficient optimization, as it could be with nested designs. The likelihood function for the scores under the normally distributed model is
\footnotesize
\begin{align}\label{eq:normal}
f(\boldsymbol{y}|\textbf{\textrm{b}})=&\prod_{i=1}^n \left[(2\pi)^{-1}|\boldsymbol{R}^*|^{-1/2}\te{exp}\left\{-\frac{1}{2}(\boldsymbol{y}_{i}-\boldsymbol{X}_{i}\boldsymbol{\beta}-\boldsymbol{Z}_{i}\textbf{\textrm{b}})^{\prime}{\boldsymbol{R}^*}^{-1}(\boldsymbol{y}_{i}-\boldsymbol{X}_{i}\boldsymbol{\beta}-\boldsymbol{Z}_{i}\textbf{\textrm{b}})\right\}\right].
\end{align}
\normalsize
This is a generalization of the mixed model proposed by \citet{harville77} for rating American football teams.
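A minimal R sketch of the design rows for a single game, using the within-team effect ordering $(\textrm{b}^o_j,\textrm{b}^d_j,\textrm{b}^w_j)$ defined above, is given below; it is intended only to illustrate how $\boldsymbol{X}_i$ and $\boldsymbol{Z}_i$ are populated when team $k$ visits team $l$.
\begin{verbatim}
# Design rows for game i in which team k visits team l (non-neutral site).
p <- 10; k <- 3; l <- 7                  # illustrative team indices
off <- function(j) 3 * (j - 1) + 1       # position of b^o_j in b
def <- function(j) 3 * (j - 1) + 2       # position of b^d_j in b

Xi <- rbind(c(1, 0, 0),                  # home response -> home mean beta_h
            c(0, 1, 0))                  # away response -> away mean beta_a

Zi <- matrix(0, nrow = 2, ncol = 3 * p)
Zi[1, off(l)] <-  1; Zi[1, def(k)] <- -1 # home score: home offense minus away defense
Zi[2, off(k)] <-  1; Zi[2, def(l)] <- -1 # away score: away offense minus home defense
\end{verbatim}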
\subsection{Two Poisson Outcomes}
We may alternatively assume a Poisson distribution for the conditional responses (e.g. scores, turnovers). When modeling $\boldsymbol{y}|\textbf{\textrm{b}}$ using a GLMM with a Poisson distribution and the canonical log link, it is not possible to model the intra-game correlation with an error covariance matrix since the variance of a Poisson distribution is determined by its mean. Instead, we may optionally add an additional game-level random effect, $\textrm{a}_i$, and thus an additional variance component, $\sigma^2_g$, to $\boldsymbol{G}$. In this case, we recast $\textbf{\textrm{b}}$ as $\textbf{\textrm{b}}=(\textbf{\textrm{b}}_1,\ldots,\textbf{\textrm{b}}_p,\textrm{a}_1,\ldots,\textrm{a}_n)^{\prime}$ and $\boldsymbol{G}=\te{block diag}(\boldsymbol{G}^*,\ldots,\boldsymbol{G}^*,\sigma^2_g I_n)$.
\begin{align*}
y_{i*}|\textbf{\textrm{b}}&\sim \te{Poisson}(\mu_{i*})\\
\log(\mu_{i*})&=\boldsymbol{X}_{i*}\boldsymbol{\beta}+\boldsymbol{Z}_{i*}\textbf{\textrm{b}}
\end{align*}
where $*$ may be replaced by $h$ or $a$. Regardless of whether or not the game-level effect is included, the likelihood function may be written as
\begin{equation}\label{eq:Poisson}
f(\boldsymbol{y}|\textbf{\textrm{b}})=\prod_{i=1}^n \prod_{*\in\{a,h\}} \left[\frac{1}{y_{i*}!}\te{ exp}\left\{y_{i*}(\boldsymbol{X}_{i*}\boldsymbol{\beta}+\boldsymbol{Z}_{i*}\textbf{\textrm{b}})\right\}\te{exp}\left\{-\te{exp}\left[\boldsymbol{X}_{i*}\boldsymbol{\beta}+\boldsymbol{Z}_{i*}\textbf{\textrm{b}}\right]\right\}\right].
\end{equation}
For high-scoring sports such as basketball, the Poisson distribution is well approximated by the normal. However, the option to fit Poisson scores will remain useful when modeling low-scoring sports (e.g. soccer, baseball, hockey) or low-count outcomes such as number of penalties, as discussed in section~\ref{sec:ncaaf}.
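For illustration, the Poisson model with a game-level effect can be simulated for a single game as in the following sketch; all parameter values are hypothetical.
\begin{verbatim}
# Simulate home and away Poisson responses for one game with a log link.
beta  <- c(h = log(28), a = log(24), n = log(26)) # hypothetical mean parameters (log scale)
b_off <- c(home = 0.15, away = -0.05)             # offensive effects of the two teams
b_def <- c(home = 0.10, away = -0.08)             # defensive effects of the two teams
a_i   <- rnorm(1, 0, 0.05)                        # shared game-level random effect

mu_home <- exp(beta["h"] + b_off["home"] - b_def["away"] + a_i)
mu_away <- exp(beta["a"] + b_off["away"] - b_def["home"] + a_i)
y <- rpois(2, c(mu_home, mu_away))                # simulated (home, away) responses
\end{verbatim}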
\subsection{Binary Outcomes}
Rather than modeling the team scores resulting from each contest, we may model the binary win/loss indicator for the ``home'' team. Predictions for future outcomes are presented as the probability of Team H defeating Team A, as opposed to the score predictions for each team that are available when modeling the scores directly. \citet{football} considers multiple formulations of a multiple membership generalized linear mixed model for the binary outcome indicators: we will focus on one of those. Letting $\pi_i=P(r_i=1)$, we model the probability of a home win with a GLMM assuming a Bernoulli conditional distribution and use a probit link,
\begin{align*}
r_i|\textbf{\textrm{b}}&\sim \te{Bin}(1,\pi_i)\\
\Phi^{-1}(\pi_i)&=W_i\alpha+\boldsymbol{S}_i\textbf{\textrm{b}}
\end{align*}
where $\Phi$ denotes the normal cumulative distribution function. Ties are handled by awarding a win (and thus a loss) to each team.
The home-field effect is measured by $\alpha$, with covariate vector $\boldsymbol{W}$ whose element $W_i$ takes the value 0 if the $i$-th game was played at a neutral site and 1 otherwise. The design matrix $\boldsymbol{S}$ for the random effects contains rows $\boldsymbol{S}_i$ that indicate which teams competed in game $i$. If team $k$ visits team $l$ in game $i$, then $\boldsymbol{S}_i$ is a vector of zeros with a $1$ in the component corresponding to the position of $\textrm{b}_l^w$ in $\textbf{\textrm{b}}$ and a $-1$ in the component corresponding to $\textrm{b}_k^w$. Note that $\boldsymbol{r}$ is conditioned only on $\textbf{\textrm{b}}^w$, and not on $(\textbf{\textrm{b}}^o,\textbf{\textrm{b}}^d)$. Pragmatically, all of the components in the columns of $\boldsymbol{S}$ corresponding to the positions of $\textbf{\textrm{b}}^o$ and $\textbf{\textrm{b}}^d$ in $\textbf{\textrm{b}}$ are 0. The likelihood function for the binary indicators is
\begin{equation}\label{eq:binary}
f(\boldsymbol{r}|\textbf{\textrm{b}})=\prod_{i=1}^n \left[\Phi\left\{\left(-1\right)^{1-r_i}\left[W_i\alpha+\boldsymbol{S}_i\textbf{\textrm{b}}\right]\right\}\right].
\end{equation}
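Given fitted win-propensity effects, the probit model yields the home-win probability directly, as in this short sketch with hypothetical values.
\begin{verbatim}
# Probability that the home team wins under the probit model.
alpha <- 0.18                       # hypothetical home-field effect
b_w_home <- 0.40; b_w_away <- 0.25  # hypothetical win-propensity effects
W_i <- 1                            # 0 for a neutral-site game
p_home_win <- pnorm(W_i * alpha + b_w_home - b_w_away)
\end{verbatim}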
\subsection{The Joint Model}\label{sec:joint}
Traditionally, team ratings have been obtained by maximizing only one of the likelihoods (\ref{eq:normal}), (\ref{eq:Poisson}), or (\ref{eq:binary}). \citet{karlcgs} propose the joint Poisson-binary model for team scores and game outcomes, focusing on the derivation of computational details, which are summarized in the next section. In this paper, we consider more general applications to other game-level responses. The joint likelihood function
\begin{align}\label{eq:joint}
L(\boldsymbol{\beta},\boldsymbol{G},\boldsymbol{R})&=\idotsint f(\boldsymbol{y}|\textbf{\textrm{b}}) f(\boldsymbol{r}|\textbf{\textrm{b}}) f(\textbf{\textrm{b}}) \mathrm{d} \textbf{\textrm{b}}
\end{align}
combines (\ref{eq:binary}) with a choice of either (\ref{eq:normal}) or (\ref{eq:Poisson}), where $f(\textbf{\textrm{b}})$ is the density of $\textbf{\textrm{b}}\sim N(\bds{0},\boldsymbol{G})$; maximizing (\ref{eq:joint}) fits both response models simultaneously.
The key feature of the joint model is the pair of off-diagonal covariance terms between $(\textbf{\textrm{b}}^o,\textbf{\textrm{b}}^d)^{\prime}$ and $\textbf{\textrm{b}}^w$ in the $\boldsymbol{G}$ matrix. If these covariance terms were constrained to 0 then the resulting model fit would be equivalent to that obtained by modeling the two responses independently.
Thus, the joint model contains the individual normal/Poisson and binary models as special cases: the additional flexibility afforded by Model (\ref{eq:joint}) may lead to improved predictions for both responses when team win-propensities are correlated with their offensive and defensive capabilities. A similar normal-binary correlated random effects model was employed by \citet{karlcpm} to jointly model student test scores in a value-added model with binary attendance indicators, exploring sensitivity to the assumption that data were missing at random.
In addition to fitting each of the response types described in the previous subsections individually, the mvglmmRank package offers options to fit the joint normal-binary and Poisson-binary models.
Just as the individual score and outcome models may make opposite predictions about the game outcome, the joint model will occasionally predict a team to outscore its opponent in the score model while also predicting less than a 50\% chance of that team winning. This is a result of modeling a distinct win-propensity effect for each team rather than constraining it to be equal to the sum of the offensive and defensive ratings, as done by \citet{annis}. The benefit of this approach is that the relative strength of the defense/win-propensity and offense/win-propensity correlations may be compared. The outcomes predicted by the binary component of the joint model focus on the observed win/loss outcomes while allowing the team win-propensity ratings to be influenced by the team offensive and defensive ratings. On the other hand, the outcomes predicted by the score component (checking which team has a larger predicted score) give a relatively larger weight to the observed scores, making the predictions susceptible to teams running up the score on weak opponents \citep{harville03}. As demonstrated in section~\ref{sec:ncaapred}, the joint model tends to produce improved probability estimates over those produced by the binary model.
\section{Computation}\label{sec:computation}
The marginal likelihoods associated with (\ref{eq:Poisson}) and (\ref{eq:binary}), as well as the joint likelihood (\ref{eq:joint}), contain intractable integrals because the random effects enter the model through a nonlinear link function. Furthermore, the high-dimensional integral over the random effects may not be factored as a product of one-dimensional integrals. Such a factorization occurs in longitudinal models involving nested random effects. However, the multiple membership random effects structure of our model results in a likelihood that may not be factored. It is possible to fit multiple membership models in SAS, using the EFFECT statement of PROC GLIMMIX. \citet{football} provides code for fitting the binary model in GLIMMIX. There are, however, advantages to using custom-written software instead. Building the model fitting routine into an R package makes the models available to readers who do not have access to SAS.
Secondly, GLIMMIX does not currently account for the sparse structure of the random effects design matrices, resulting in exponentially higher memory and computational costs than are required when that structure is accounted for \citep{karlem}. Thirdly, the EM algorithm may be used to provide stable estimation in the presence of a near-singular $\bds{G}$ matrix \citep{karlem,karlcgs}, whereas GLIMMIX relies on a Newton-Raphson routine that tends to step outside of the parameter space in such situations.
Finally, we are able to use more accurate approximations than the default pseudo-likelihood approximation \citep{wolfinger93} of GLIMMIX, including first-order and fully exponential Laplace approximations \citep{tierney89,karlcgs}. (GLIMMIX is capable of using the first-order Laplace approximation, but we have not had success using it with the EFFECT statement). In line with the theory and simulations presented by \citet{karlcgs}, 17 of the 18 basketball tournaments in section~\ref{sec:ncaapred} are modeled more accurately in the binary model as fully exponential corrections are applied to the random effects vector. In those same seasons, the predictions show further improvement with the addition of fully exponential corrections to the random effects covariance matrix.
\citet{karlcgs} describe the estimation of multiple response generalized linear mixed models with non-nested random effects structures and derive the computational steps required to estimate the Poisson-binary model with an EM algorithm. The models presented here are special cases of that class of models. The exact maximum likelihood estimates are obtained for the normal model when team scores are modeled alone. The mvglmmRank package implements these methods without requiring end-user knowledge of the estimation routine.
Section~\ref{sec:bb} demonstrates the use of the package in the context of modeling college football yards-per-game with home-win indicators.
The mvglmmRank package reports the Hessian of the parameter estimates. The inverse of this matrix is an estimate for the asymptotic covariance matrix of the parameter estimates, and it ought to be positive-definite \citep{demidenko}. A singular Hessian suggests that the model is empirically underidentified with the current data set \citep{rabe01}. This can be caused by a solution on the boundary of the parameter space (e.g. zero variance components, linear dependence among the random effects), by multicollinearity among the fixed effects, convergence at a saddle point, or by too loose a convergence criterion. \citet{rabe01} recommend checking the condition number (the square root of the ratio of the largest to the smallest eigenvalue) of the Hessian. However, the Hessian is sensitive to the scaling of the responses, while the correlation matrix of the inverse Hessian (if it exists) is invariant. As such, we prefer to check the condition number of this correlation matrix. While the joint model for scores and win/loss outcomes for the data set presented by \citet{karlcgs} does not show signs of empirical underidentification, this model does show such signs for other data sets when modeling scores and win/loss outcomes. While the model parameters are unstable in the presence of empirical underidentification, the predictions produced by the model remain useful as evidenced by improvement in cross validation error rates, a point discussed in section~\ref{sec:ncaaf}. Joint models for other responses (e.g. fumbles) with the win/loss indicators do not typically show symptoms of underidentification.
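The diagnostic described above can be computed as in the following sketch; a hypothetical positive-definite matrix stands in for the Hessian reported by the package.
\begin{verbatim}
# Condition number of the correlation matrix of the inverse Hessian.
set.seed(1)
A <- matrix(rnorm(36), 6, 6)
hess <- crossprod(A) + diag(6)   # stand-in Hessian (positive definite)
acov <- solve(hess)              # estimated asymptotic covariance of the parameter estimates
corr <- cov2cor(acov)            # scale-invariant version
ev <- eigen(corr, symmetric = TRUE, only.values = TRUE)$values
cond <- sqrt(max(ev) / min(ev))  # large values suggest empirical underidentification
\end{verbatim}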
\section{American College Football Outcomes}\label{sec:ncaaf}
This section considers several different game-level outcomes from the 2005--2013 American College Football seasons. The models presented here are fit independently across each of the nine seasons.
The data were originally furnished under an open source license from \url{cfbstats.com}, and are now maintained at \url{https://github.com/10-01/NCAA-Football-Analytics}. As mentioned in section \ref{sec:intro}, the standard outcomes, margin of victory (MOV) \citep{henderson75} and binary win/loss information \citep{mease,football}, are used along with potential covariates such as game location (home, away, neutral).
To illustrate the joint model, we use three recorded game-level responses (sacks, yards per play, and fumbles) in addition to the game scores.
For those unfamiliar with American football, a ``sack'' is recorded when a defensive player tackles the opposing quarterback, who receives the ball to begin a play, before he is able to make a positive move forward toward the goal.
A sack is credited to the defense and reflects positively on a team's defensive ability.
Sacks are relatively infrequent: the leading American football teams average about 3 sacks per game.
``Yards per play'' is the average number of yards gained by the offense per play, regardless of the type of play (e.g. run or pass).
A football field is 100 yards long, and a team has 4 attempts at a time to move the football 10 yards down the field.
A higher value of yards per play indicates a greater offensive ability.
A ``fumble'' is recorded when a player loses possession of the ball on the ground and either team is able to recover it.
A ``fumble lost'' indicates a turnover to the other team.
This paper uses ``fumbles'' rather than ``fumbles lost'' in order to demonstrate the behavior of the joint model when a weakly correlated or irrelevant response is used.
We have made the processed data for each season available.
To compare the effectiveness of the various models, the predictions for the home-win indicator for game $i$ are scored against the actual game outcomes using a log-loss function:
\begin{equation}
\te{log-loss}_{i}=-y_i\log\left(\hat{y}_i\right)-\left(1-y_i\right)\log\left(1-\hat{y}_i\right)
\end{equation}
where $\hat{y}_i$ is the predicted probability of a home-team win in game $i$, $y_i$ is the outcome of game $i$ (taking the value 1 with a home-team victory and 0 otherwise). A smaller value of log-loss represents a more accurate prediction.
Using 10-fold cross-validation for each of the seasons, we compare the log-loss of predictions from a traditional binary model for home-win indicators to those from the proposed binary-normal model (jointly modeling home-win and yards-per-play) and to those from two binary-Poisson models (home-win and sacks, home-win and fumbles) using a sign test. The sign test allows us to measure whether a significant ($\alpha=0.05$ in this section) majority of games experience improved prediction under an alternative joint model. Likewise, a sign test is used to compare the absolute residuals for score, fumble, and sack predictions to measure improvement due to the joint modeling of the binary outcome with these responses. Pragmatically, a significant sign test on the median difference between the log-losses from two models indicates that wagers based on the preferred model would be expected to perform significantly better when equal wagers are placed for all games.
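The log-loss comparison and the sign test can be carried out as in the following R sketch, in which the hold-out win probabilities from two competing models and the observed home-win indicators are simulated purely for illustration.
\begin{verbatim}
# Log-loss for two sets of hold-out predictions and a paired sign test.
log_loss <- function(y, p) -y * log(p) - (1 - y) * log(1 - p)

set.seed(1)
y    <- rbinom(800, 1, 0.55)                                # illustrative home-win indicators
p_B  <- pmin(pmax(0.55 + rnorm(800, 0, 0.15), 0.01), 0.99)  # predictions from model B
p_NB <- pmin(pmax(0.55 + rnorm(800, 0, 0.14), 0.01), 0.99)  # predictions from model NB

d <- log_loss(y, p_B) - log_loss(y, p_NB)  # positive: model NB predicted the game better
binom.test(sum(d > 0), sum(d != 0), p = 0.5,
           alternative = "greater")        # sign test on the median difference
\end{verbatim}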
As discussed in section~\ref{sec:computation}, there are some cases in which the Hessian of the model parameters is not positive-definite at convergence. This can indicate instability in the parameter estimates; however, the predictions resulting from these models are still useful, as demonstrated by the improved performance on (hold-out) test data. This is a generalization of the behavior of linear regression models in the presence of multicollinearity.
In this section, we refer to the (bivariate) normal-binary model as NB. PB0 refers to the Poisson-binary model with no game-level random effect, while PB1 indicates the Poisson-binary model with a game-level random effect. B, N, P0, and P1 refer to the individual binary, normal, Poisson with no game effect, and Poisson with a game effect models, respectively. We report whether there is a significant difference between the home and away mean values from the individual models N, P0, and P1 (the home-field effect is significant in all years for the home-win outcomes in model B). These results are interesting since the multiple membership models account for the quality of the opponents that these values were recorded against. The contrasts between the home and away parameters in the mean vector are tested using the estimated Hessian.
This section does not account for the multiple comparisons that are performed when declaring significance across seasons; however, the significance of each comparison is indicated in the tables. Using the tables, it is informative to compare the optimal model identified across seasons. For example, the sacks/home-win model shows improved predictions for sacks over the individual model for sacks in each of the nine seasons, even though only two of those improvements are significant. We would expect to see a preference for the individual model in cross-validation if the jointly modeled response were irrelevant, and a uniform distribution on the resulting p-values, as is the case in Table~\ref{tab:fumbles} for the fumble models.
\subsection{Yards per Play and Outcomes}
When jointly modelling the yards per play and outcome, the joint normal-binary (NB) model provides significantly better predictions for Win/Loss outcomes than the individual binary model (B) in all years (see Table~\ref{tab:ypp}). There is slightly weaker evidence of improvement in the fit of the yards per play in the joint model over the individual normal model: comparing the absolute residuals from each model, there is a significant preference (via the sign test) for the joint model in all but one of the nine years (2006). In all nine seasons presented, there is a significant game location effect: home teams record more yards per play than visiting teams (p-value for all years $<0.0001$). There is a weak correlation between yards per play recorded by opponents within a game, ranging from 0.05 to 0.15.
\begin{table}[htbp]
\centering
\caption{Yards per play (YPP) and binary home-win indicators are modelled both individually (N and B respectively) and jointly (NB).
``Best'' Model for YPP indicates which model, N or NB, provided the best YPP prediction measured by the minimum absolute residual for the majority of games in each year.
``Best'' Model for W/L indicates which model produces the best prediction, B or NB, measured by log-loss on the predicted win-probabilities from each model for the majority of games in each year. *Indicates a significant preference over comparison model(s). }
\begin{tabular}{lll}\\
\toprule
&``Best'' & ``Best'' \\
& Model & Model \\
Year & for YPP & for W/L \\
\midrule
2005 & NB & NB* \\
2006 & N & NB* \\
2007 & NB* & NB* \\
2008 & NB & NB* \\
2009 & NB & NB* \\
2010 & NB* & NB* \\
2011 & NB & NB* \\
2012 & NB & NB* \\
2013 & NB*& NB* \\
\bottomrule
\end{tabular}%
\label{tab:ypp}%
\end{table}%
\subsection{Sacks and Outcomes}
The joint model PB0 (Poisson-binary with no game-level random effect) for sacks and home-win indicators significantly outperforms the individual model B with respect to log-loss for the home-win indicators in each year (see Table~\ref{tab:sacks}). Likewise, PB0 outperforms the individual sack model P0 in each year (significantly so in two years). While the joint modeling of sacks and outcomes improves the predictions of both responses, the inclusion of a game-level effect in the sack model (PB1) leads to worse predictions in each year (significant in all years for log-loss for the outcomes and in three years for the absolute residuals of the sacks). This indicates that there is no intra-game correlation in the number of sacks recorded by opponents. There was a larger frequency of sacks recorded by the home team in each year (significant in four of the nine years: 2007, 2008, 2009, 2011).
\begin{table}[htbp]
\centering
\caption{Sacks and binary home-win indicators are modelled both individually with a Poisson model (P0 and B respectively) and jointly (PB0 or PB1).
``Best'' Model for Sacks indicates which model, P0, PB0, or PB1, provided the best sack prediction measured by the minimum absolute residual for the majority of games in each year.
``Best'' Model for W/L indicates which model produces the best prediction, B, PB0, or PB1, measured by log-loss on the predicted win-probabilities from each model for the majority of games in each year. *Indicates a significant preference over comparison model(s).
}
\begin{tabular}{lll}\\
\toprule
&``Best'' & ``Best'' \\
& Model & Model \\
Year & for Sacks & for W/L \\
\midrule
2005 & PB0 & PB0* \\
2006 & PB0 & PB0* \\
2007 & PB0 & PB0* \\
2008 & PB0 & PB0* \\
2009 & PB0 & PB0* \\
2010 &PB0* & PB0* \\
2011 & PB0 & PB0* \\
2012 &PB0* & PB0* \\
2013 & PB0 & PB0* \\
\bottomrule
\end{tabular}%
\label{tab:sacks}%
\end{table}%
\subsection{Fumbles and Outcomes}\label{sec:fumb.out}
Joint modeling of fumbles per game along with the game outcome did not lead to significant differences in log-loss for the outcome predictions in any season, nor did it provide any improvement in the predictive accuracy for the number of fumbles (see Table~\ref{tab:fumbles}). Furthermore, model P0 outperformed model P1 in every season (significantly so in three seasons), suggesting that there is not a substantial correlation between the number of fumbles recorded by opponents within a game. The home-field effect is not significant in any of the years for P0. This suggests that there is not a tendency for teams to fumble more or less often while traveling.
\begin{table}[htbp]
\centering
\caption{Fumbles and binary home-win indicators are modelled both individually with a Poisson model (P0 or P1 and B respectively) and jointly (PB0 or PB1).
``Best'' Model for Fumbles indicates which model, P0, P1, PB0, or PB1, provided the best fumble prediction measured by the minimum absolute residual for the majority of games in each year.
``Best'' Model for W/L indicates which model produces the best prediction, B, PB0, or PB1, measured by log-loss on the predicted win-probabilities from each model for the majority of games in each year. *Indicates a significant preference over comparison model(s).
}
\begin{tabular}{lll}\\
\toprule
&``Best'' & ``Best'' \\
& Model & Model \\
Year & for Fumbles & for W/L \\
\midrule
2005 & P0 & PB0 \\
2006 & PB0 & PB0 \\
2007 & P0 & PB0 \\
2008 & PB0 & B \\
2009 & PB0* & PB0* \\
2010 &PB0 & PB0 \\
2011 & PB0 & PB0 \\
2012 &PB0 & B \\
2013 & PB0 & PB0 \\
\bottomrule
\end{tabular}%
\label{tab:fumbles}%
\end{table}%
By contrast, a logistic regression on the home-win indicators against the number of home fumbles and the number of away fumbles indicates that these are significant predictors for whether the home team will win. Likewise, the home-win indicators significantly improve predictions for the number of home and away fumbles in a Poisson regression. This provides a good contrast between jointly modeling two responses and including one of the responses as a factor in a model for the other: the former searches for a correlation between latent team effects from each of the responses, while the latter considers only relationships between the responses on an observation-by-observation basis. In other words, the joint model considers relationships between higher levels in the hierarchy of the models.
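To make this contrast concrete, the following sketch fits the two observation-level regressions just described. The data, column names, and coefficients are simulated placeholders (this is not the paper's data set, and the joint mixed-model fits themselves require specialized software), so the sketch is illustrative only.
\begin{verbatim}
# Illustrative sketch only: simulated data with hypothetical column names,
# not the paper's data set or its joint mixed-model fits.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_games = 500
df = pd.DataFrame({
    "home_fumbles": rng.poisson(1.2, n_games),
    "away_fumbles": rng.poisson(1.3, n_games),
})
# Simulate home-win indicators that depend on the fumble counts.
linpred = 0.2 - 0.4 * df["home_fumbles"] + 0.4 * df["away_fumbles"]
df["home_win"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-linpred)))

# Observation-level logistic regression: home win on home and away fumbles.
logit_fit = smf.logit("home_win ~ home_fumbles + away_fumbles", data=df).fit(disp=False)
print(logit_fit.summary())

# Observation-level Poisson regression: home fumbles on the home-win indicator.
pois_fit = smf.poisson("home_fumbles ~ home_win", data=df).fit(disp=False)
print(pois_fit.summary())
\end{verbatim}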
\subsection{Scores and Outcomes}
In each year, model PB1 significantly outperforms the predictions for the home-win indicators from both model B and model NB (see Table~\ref{tab:score}). This observation comes in spite of the fact that the estimated Hessian was nearly singular in each year, due to the nearly linear relationship between team win-propensities, team offensive (score) ratings, and team defensive (score) ratings. In this case, the conditional model of \citet{annis} provides a more accurate framework for the data generation process, accounting for the deterministic relationship between the two responses. Nevertheless, this situation highlights the utility of jointly modeling responses in general. Despite the parameter instability (via the inflated standard errors resulting from the near-singular Hessian) of the joint model in the extreme case of modeling scores with the home-win indicator, the home-win predictions still show improvement over those from model B. In fact, in each year the score/home-win model significantly outperforms the yards-per-play/home-win model, which in turn outperforms the sacks/home-win model with respect to log-loss for the home-win predictions.
\begin{table}[htbp]
\centering
\caption{Scores and binary home-win indicators are modelled both individually (with a Poisson model P0 or P1 and a binary model B, respectively) and jointly (PB0 or PB1).
``Best'' Model for Scores indicates which model, P0, P1, PB0, or PB1, provided the best score prediction measured by the minimum absolute residual for the majority of games in each year.
``Best'' Model for W/L indicates which model produces the best prediction, B, PB0, or PB1, measured by log-loss on the predicted win-probabilities from each model for the majority of games in each year. *Indicates a significant preference over comparison model(s).
}
\begin{tabular}{lll}\\
\toprule
&``Best'' & ``Best'' \\
& Model & Model \\
Year & for Scores & for W/L \\
\midrule
2005 & P1 & PB1* \\
2006 & PB1 & PB1* \\
2007 & P1 & PB1* \\
2008 & P1 & PB1* \\
2009 & PB1 & PB1* \\
2010 &PB1 & PB1* \\
2011 & P1 & PB1* \\
2012 &P1 & PB1* \\
2013 & P1 & PB1* \\
\bottomrule
\end{tabular}%
\label{tab:score}%
\end{table}%
P1 outperforms P0 in every year with respect to absolute residuals for the score predictions, significantly so in four of those years. Furthermore, there are significantly better results from PB1 over PB0 in log-loss for home-win predictions in two of the years. Together, these results suggest that there is an important intra-game correlation between opponent scores. Yet, no significant differences in the absolute score residuals appear between models N and P1, or between PB1 and P1. This indicates that the normal and Poisson models for scores perform similarly, and that the game-score predictions are not influenced by the joint modeling of the home-win indicators. This last point is unsurprising since the home-win indicators are a discretized version of a difference of the team scores.
\subsection{Estimated Random Effect Covariance Matrices}
For the 2005 season, the random effect covariance matrices are presented in Table~\ref{ta:covmat} (the 2006--2013 seasons are omitted for brevity).
The correlation matrices are printed alongside the covariance matrices.
Recall that, from left to right in each matrix, the columns correspond to the ``offensive'' effect, the ``defensive'' effect, the win-propensity effect, and the game-level effect (score-outcome model only).
The words offensive and defensive appear in quotes as a reminder that the interpretation of these effects depends on the model structure described in section~\ref{sec:model}. For example, in the sacks-outcomes model, we use the number of sacks recorded by the home team (against the visiting quarterback) as the home response, and likewise define the away response. Thus, a larger offensive effect in the sacks-outcomes model for a given team indicates a larger propensity for that team's \textit{defense} to sack the opposing quarterback.
Additionally, the estimates of the win-propensity variance component from the binary-only model are $(0.43, 0.63, 0.65)$ under three increasingly accurate approximations: first-order Laplace, ``partial'' fully exponential Laplace, and fully exponential Laplace \citep{karlcgs}.
Notice how the estimates for this component from the fumbles-outcomes models are typically similar to the estimate from the first-order approximation (which was used for all of the joint models). This is not surprising, since no significant differences in the outcome prediction accuracy were noted with the joint modeling of fumbles in section~\ref{sec:fumb.out}. By contrast, the sacks-outcomes, yards/play-outcomes, and scores-outcomes models, which were found to produce progressively more accurate outcome predictions, generate progressively larger estimates for the win-propensity variance component. In the same way that the more-accurate fully exponential Laplace approximation tends to correct for the downward bias observed in variance components for a binary response \citep{breslow95,lin96}, the joint modeling of a relevant response appears to inflate the variance component estimate.
\begin{table}
\caption{Random effect covariance and correlation matrices (upper triangles shown) for the 2005 season, for the binary model paired with each game-level response. From left to right in each matrix, the columns correspond to the ``offensive'' effect, the ``defensive'' effect, the win-propensity effect, and the game-level effect (score-outcome model only).} \label{ta:covmat}
\centering
\begin{tabular}{ccc} \\
\toprule
Game-level response & Covariance & Correlation \\ \midrule \\
Yards Per Play
&$\begin{bmatrix}{}
0.55 & 0.22 & 0.58 \\
& 0.35 & 0.44 \\
& & 0.84 \\
\end{bmatrix}$
&$\begin{bmatrix}{}
1.00 & 0.50 & 0.85 \\
& 1.00 & 0.82 \\
& & 1.00 \\
\end{bmatrix}$ \\\\
Sacks
&
$\begin{bmatrix}{}
0.07 & 0.03 & 0.17 \\
& 0.09 & 0.14 \\
& & 0.55 \\
\end{bmatrix}$ &
$\begin{bmatrix}{}
1.00 & 0.42 & 0.89 \\
& 1.00 & 0.61 \\
& & 1.00 \\
\end{bmatrix}$ \\\\
Fumbles
&
$\begin{bmatrix}{}
\phantom{-}0.02 & \phantom{-}0.00 & -0.03 \\
& \phantom{-}0.01 & -0.05 \\
& & \phantom{-}0.44 \\
\end{bmatrix}$ &
$\begin{bmatrix}{}
\phantom{-}1.00 & -0.10 & -0.31 \\
& \phantom{-}1.00 & -0.79 \\
& & \phantom{-}1.00 \\
\end{bmatrix}$ \\\\
Scores
& $\begin{bmatrix}{}
0.11 & 0.07 & 0.35 & 0.00 \\
& 0.09 & 0.29 & 0.00 \\
& & 1.20 & 0.00 \\
& & & 0.07 \\
\end{bmatrix}$
&$\begin{bmatrix}{}
1.00 & 0.71 & 0.94 & 0.00 \\
& 1.00 & 0.90 & 0.00 \\
& & 1.00 & 0.00 \\
& & & 1.00 \\
\end{bmatrix}$ \\
\bottomrule
\end{tabular}
\end{table}
The (1,2) component of the correlation matrix in the score-outcome model gives the correlation between offensive and defensive team score ratings.
It ranges from 0.77 for American college football data to $-0.3$ for the professional basketball (NBA) data (not shown). We would expect to see a moderate positive correlation in the American college football data: if schools are able to recruit good offensive players and coaches, they will likely also be able to recruit good defensive ones. Interestingly, the offensive and defensive team ratings are negatively correlated for the NBA data. This may reflect the fact that offense and defense are played by the same players in basketball.
Likewise, the (1,2) component of the correlation matrix in the yards/play-outcome model gives the correlation between offensive and defensive team yards-per-play ratings. Figure~\ref{plot:ncaaf} plots the team defensive ratings against the team offensive ratings. The colors and sizes of the team markers correspond to the team win-propensity ratings from the normal-binary model. This plot appears similar to the one based on the score-outcome model in Figure~2 of \citet{karlcgs}.
\begin{figure}
\caption{Football offensive and defensive yards-per-play ratings from the normal-binary model for the 2012 season. The colors and marker sizes indicate the win propensity rating of each team.}
\label{plot:ncaaf}
\centering
\includegraphics[width=5.5in]{modifiedfig.jpg}
\end{figure}
\subsection{Estimated Error Covariance Matrices}
The bivariate normal-binary model provides an estimate of the intra-game correlation between opposing team outcomes. This section uses yards-per-play as the game-level response. The model revealed an intra-game correlation of 0.17 for 2005, 0.04 for 2006, and 0.13 for 2007.
Hence, there is only weak positive correlation between opposing team yards-per-play within games. There might be a positive relationship due to variance induced by weather conditions, time of day, time in the football season, etc; however, that relationship does not appear to be substantial.
\section{Prediction of NCAA Basketball Tournament Results}\label{sec:ncaapred}
The annual NCAA Division I basketball tournament provides an excellent occasion for sports predictions. The most popular format of tournament forecasting requires a prediction for the winner of each bracket spot prior to the beginning of the first round. By contrast, some contests \citep{contest} require predicted probabilities -- as opposed to discrete win/loss prediction -- of outcomes for each potential pairing of teams. This allows the confidence of predictions to be evaluated while ensuring that a prediction is made for every match that occurs. We consider the use of the multivariate generalized linear mixed model (\ref{eq:joint}) to produce predicted outcome probabilities that depend on the observed team scores as well as the home-win indicators.
To illustrate the degree to which the model for a response may be influenced by its conditionally independent counterpart in the joint model (\ref{eq:joint}), we jointly model the team scores and (discretized) binary home-win indicators. By jointly modeling the team scores and binary game outcomes, the team win-propensities are influenced by their correlation with team offensive and defensive ratings, thus incorporating information about the scores into the predicted probabilities from the binary sub-model. To demonstrate the benefit of the joint model over the individual binary model, we compare the fit of the binary and normal-binary models across the most recent 19 tournaments.
Figure~\ref{plot:logloss2} shows that the predicted probabilities produced by joint model NB outperformed (with respect to mean log-loss for tournament games) those produced by model B in all years from 1996-2014 except for two. The p-value for the t-test of the yearly differences in log-loss from the two models is $0.0002$. Thus, the joint model provides a significant improvement in predictive performance for the NCAA tournament by utilizing observed scores while still producing predicted probabilities based on a probit model of outcomes.
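As a minimal sketch of this evaluation, assuming per-year arrays of predicted home-win probabilities from the two models and the observed outcomes (simulated placeholders below, not the fitted probabilities from models B and NB), the mean log-loss per year and the paired t-test on the yearly differences can be computed as follows.
\begin{verbatim}
# Sketch of the log-loss comparison; the predictions below are simulated
# placeholders, not the fitted probabilities from models B and NB.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def mean_log_loss(y, p, eps=1e-12):
    """Mean negative log-likelihood of binary outcomes y under probabilities p."""
    p = np.clip(p, eps, 1 - eps)
    return float(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))

loss_B, loss_NB = [], []
for year in range(1996, 2015):
    y = rng.binomial(1, 0.5, size=63)          # one outcome per tournament game
    p_nb = np.clip(0.5 + 0.3 * (y - 0.5) + rng.normal(0, 0.1, 63), 0.01, 0.99)
    p_b = np.clip(p_nb + rng.normal(0, 0.05, 63), 0.01, 0.99)
    loss_B.append(mean_log_loss(y, p_b))
    loss_NB.append(mean_log_loss(y, p_nb))

# Paired t-test on the yearly differences in mean log-loss between the models.
print(stats.ttest_rel(loss_B, loss_NB))
\end{verbatim}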
\begin{figure}
\caption{Difference in log-loss for the binary (B) and normal-binary (NB) models across years. Dotted lines indicate the 95\% confidence interval for the mean difference.}
\label{plot:logloss2}
\centering
\vspace{.1in}
\includegraphics[trim = 24mm 156mm 73mm 32mm, clip,width=8cm]{Distribution3.pdf}
\end{figure}
\section{Conclusion}
We have developed a combination of multivariate generalized linear mixed models for jointly fitting normal or Poisson responses with binary outcomes. Joint modeling can lead to improved accuracy over separate models for the individual responses. We have developed and introduced the mvglmmRank package for fitting these models using efficient algorithms.
The mvglmmRank package is not limited to the analysis of football or basketball data: the package is written generally to allow for the analysis of any sport. Differences in scoring patterns within each sport can lead to different patterns of fitted model parameters. For example, basketball produces stronger intra-game score correlations than football. If soccer, baseball, hockey, or other low-scoring sports are to be analyzed, the Poisson-binary model may provide a better fit than the normal-binary model. Furthermore, the estimation routine \citep{karlcgs} is extremely stable, meaning more than two responses could feasibly be modeled jointly.
The process of jointly modeling multiple responses via correlated random effects is useful across a number of applications. For example, \citet{karlcpm} use a similar joint modeling strategy in an analysis of potentially nonignorable dropout, while \citet{broatch10} fit multiple student-level measurements in a joint analysis of a multivariate value-added model.
\section{Introduction}
For two-player competitive games like chess, professional sports, online gaming, etc., it is common practice to assign players/teams a real-number \emph{rating} capturing their skill level.
Ratings are commonly applied in matchmaking systems \cite{AL09, AM17}; some more creative recent applications include pricing algorithms \cite{YDM14}, item response theory \cite{Forisek09}, etc.
A \emph{rating system} is an algorithm that adjusts a player's rating upwards after each win, or downwards after each loss.
Some notable rating systems used in practice include Harkness \cite{Harkness67}, Elo \cite{Elo67}, Glicko \cite{Glickman95}, Sonas \cite{Sonas02}, TrueSkill \cite{HMG06}, URS \cite{UniversalRating}, and more.
See \cite{Glickman95, MCZ18, HMG06} for high-quality technical overviews of the inner workings of these rating systems.
\subsection{Our Model of Rating Systems}
The exact model of rating systems studied in this paper is a slight variant of models used in prior work.
Our goal is to consider a maximally general model of rating systems that are:
\begin{itemize}
\item \emph{One-dimensional}, meaning that they only track a rating parameter for each player and no additional statistics of performance,
\item \emph{Memoryless}, meaning that when the system updates players' ratings it considers only the game that was just played and not the history of previous match outcomes, and
\item \emph{Zero-sum}, meaning that the number of rating points gained by the winner of a match is always equal to the number of rating points lost by the loser.
\end{itemize}
Many major rating systems satisfy these three properties in their most basic form,\footnote{The notable exception is Glicko, which is itself a two-dimensional lift of Elo. We discuss this more shortly.} but a common direction in prior work is to modify a base system to explicitly consider additional performance statistics or game history, e.g.~\cite{Glickman95, CJ16, MRM07, CSEN17, DHMG07, NS10, MCZ18}.
These directions roughly trade some simplicity and interpretability of the rating system for improved predictive accuracy on the probability of one player beating another.
The goal of this paper is instead to continue a recent line of work on \emph{incentive-compatibility} as a property of rating systems.
We introduce a new desirable property of rating systems, informally stating that the system does not incentivize a rating-maximizing player to selectively seek matches against opponents of one rating over another.
One of our main conceptual contributions is just to \emph{define} this property in a reasonable way: as we discuss shortly, the natural first attempt at a definition turns out to be too strong to be feasible.
We then describe a different version of the definition, and in support of this definition, we show that there is a natural class of rating systems (most notably including \emph{Sonas}, described shortly) that satisfy this alternate definition.
In order to explain this progression in ideas formally, we next give details on our model of rating systems.
\subsubsection{Formal Model of Rating Systems}
We follow the common basic model that each player has a hidden ``true rating,'' as well as a visible ``current rating'' assigned by the system.
Although current ratings naturally fluctuate over time, the goal of a rating system is to assign current ratings that are usually a reasonable approximation of true ratings.
When two players play a match, their respective true ratings determine the probability that one beats the other, and their respective current ratings are used by the system as inputs to a function that determines the changes in the rating of the winner and the loser.
Concretely, a rating system is composed of two ingredients.
The first is the \emph{skill curve} $\sigma$, which encodes the system's model of the probability that a player of true rating $x$ will beat a player of true rating $y$.
The second is the \emph{adjustment function} $\alpha$, which takes the players' current ratings as inputs, and encodes the number of points that the system awards to the first player and takes from the second player in the event that the first player beats the second player.
We are not allowed to pair together an \emph{arbitrary} skill curve and adjustment function; rather, a basic fairness axiom must be satisfied.
This axiom is phrased in terms of an \emph{expected gain function}, which computes the expected number of rating points gained by a player of current rating $x$ and true rating $x^*$, when they play a match against a player of current rating $y$ and true rating $y^*$.
The fairness axiom states that, when any two correctly-rated players play a match, their expected rating change should be $0$.
\begin{definition} [Rating Systems] \label{def:ratingsystems}
A \textbf{rating system} is a pair $(\sigma, \alpha)$, where:
\begin{itemize}
\item $\sigma : \mathbb{R}^2 \to [0, 1]$, the \textbf{skill curve}, is a continuous function that is weakly increasing in its first parameter and weakly decreasing in its second parameter.
We assume the game has no draws, and thus $\sigma$ must satisfy\footnote{We ignore the possibility of draws in this exposition purely for simplicity. In the case of draws, one could interpret a win as $1$ victory point, a draw as $1/2$ a victory point, and a loss as $0$ victory points, and then interpret $\sigma(x, y)$ as the expected number of victory points scored by a player of true rating $x$ against a player of true rating $y$. See \cite{Kovalchik20} for more discussion, and for a rating systems that can handle even finer-grained outcomes than the win/loss/draw trichotomy.}
\begin{align*}
\sigma(x, y) + \sigma(y, x) = 1 \tag*{for all $x, y$. (Draw-Free)}
\end{align*}
\item $\alpha : \mathbb{R}^2 \to \mathbb{R}_{\ge 0}$ is the \textbf{adjustment function}.
\item Let $\gamma : \mathbb{R}^4 \to \mathbb{R}$ be the \textbf{expected gain function}, defined by
$$\gamma(x,x^* \mid y,y^*) := \alpha(x,y)\sigma(x^*,y^*)-\alpha(y,x)\sigma(y^*,x^*).$$
Then we require the following fairness axiom:
$$\gamma(x, x \mid y, y) = 0 \text{ for all } x, y.$$
\end{itemize}
\end{definition}
\subsubsection{$K$-Functions}
Rearranging the fairness axiom, we get the identity
\begin{align*}
\sigma(x, y) &= \frac{\alpha(y, x)}{\alpha(x, y) + \alpha(y, x)}.
\end{align*}
Thus the adjustment function determines the skill curve (in the sense that, for a given adjustment function $\alpha$, there is a unique skill curve $\sigma$ that satisfies the fairness axiom).
However, when designing a rating system, it is more intuitive to model the game with a skill curve first and then pick an adjustment function second.
The skill curve does \emph{not} fully determine the adjustment function; rather, the value of $\sigma(x, y)$ only determines the \emph{ratio} between $\alpha(x, y)$ and $\alpha(y, x)$.
The \emph{$K$-function} is an auxiliary function that determines the scaling.
We define it as the denominator part of the previous identity.\footnote{One can also interpret $K$-functions through a gambling model where the players respectively put $\alpha(x, y), \alpha(y, x)$ of their rating points into a pot, and then the winner takes all the rating points in the pot. The $K$-function $K(x, y) = K(y, x)$ is the pot size.}
\begin{definition} [$K$-Functions]
For a rating system $(\sigma, \alpha)$, its associated \textbf{$K$-Function} is defined by $K(x, y) := \alpha(x, y) + \alpha(y, x)$.
\end{definition}
Many rating systems in the literature do not \emph{explicitly} define an adjustment function; rather, they define their adjustment function \emph{implicitly} by instead giving the skill curve and the $K$-function \cite{Glickman95, Elo67}.
Most notably, Elo uses \emph{$K$-factors}, which are a special case of our $K$-functions.\footnote{We have chosen the name \emph{$K$-function} to indicate that they generalize $K$-factors. $K$ does not stand for anything; it is an arbitrary variable name in the Elo adjustment function formula.}
See \cite{Sonas11} for a discussion of the practical considerations behind choosing a good $K$-function.
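For concreteness, the following sketch shows how a skill curve and a $K$-function together determine the adjustment function and the resulting zero-sum update, using the logistic curve and constant $K$-factor of the Elo system described in the next subsection; the scale of 400 and the value $K = 32$ are common but by no means universal choices.
\begin{verbatim}
# Sketch: Elo as a (skill curve, K-function) pair.  The scale of 400 and the
# constant K-factor of 32 are common choices, not the only ones used in practice.
K = 32.0

def sigma(x, y):
    """Logistic skill curve: modeled probability that rating x beats rating y."""
    return 1.0 / (1.0 + 10.0 ** (-(x - y) / 400.0))

def alpha(x, y):
    """Points a player at current rating x gains for beating current rating y.
    The fairness axiom forces alpha(x, y) = K(x, y) * sigma(y, x)."""
    return K * sigma(y, x)

def gamma(x, x_true, y, y_true):
    """Expected gain: alpha(x, y)*sigma(x_true, y_true) - alpha(y, x)*sigma(y_true, x_true)."""
    return alpha(x, y) * sigma(x_true, y_true) - alpha(y, x) * sigma(y_true, x_true)

# Fairness axiom: zero expected gain when both players are correctly rated.
assert abs(gamma(1500, 1500, 1700, 1700)) < 1e-9

# Zero-sum update after the player rated r1 beats the player rated r2.
r1, r2 = 1500.0, 1700.0
delta = alpha(r1, r2)
r1, r2 = r1 + delta, r2 - delta
print(r1, r2)   # the winner gains exactly what the loser loses
\end{verbatim}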
\subsection{The Elo and Sonas Rating Systems}
The most famous and popular rating system used in practice is probably Elo.
Its basis is a logistic skill curve: in a match between two players of true ratings $x, y$, Elo maps the quantity $x-y$ to a win probability for the first player using a logistic function.
Many implementations of Elo, like the one used by the chess federation FIDE, also include a \emph{thresholding rule}: e.g., if $x-y > 400$, then the FIDE implementation instead treats $x-y$ as exactly $400$.
In other words, it assumes the game is chaotic enough that no player ever has more than a 96\% win probability over another.
This thresholding also regularizes against extreme rating swings.
\begin{figure}[h]
\centering
\includegraphics[scale=0.4]{elooverlay.jpg}
\includegraphics[scale=0.4]{linefit.jpg}%
\caption{Left: Elo skill curve (in yellow) overlayed with empirical data on the outcomes of FIDE-rated chess matches. Right: Sonas skill curve (in red) overlayed with empirical data on the outcomes of FIDE-rated chess matches in which both players were rated 2200 or higher. Figures by Jeff Sonas, taken from \cite{Sonas11}. See \cite{Sonas02, Sonas20} for additional discussion.}
\label{fig:my_label}
\end{figure}
In 2002, Jeff Sonas published a famous critique of Elo \cite{Sonas02}, in which he analyzed a large sample of FIDE rated chess matches among highly-rated players.
He argued that the logistic curve is so flat within this player pool that a threshold-linear skill curve fits the data just as well, and hence is preferable due to its simplicity.
We will henceforth call skill curves that are linear within some threshold \emph{Sonas-like skill curves}:
\begin{definition} [Sonas-like skill curves] \label{def:sonaslike}
A skill curve is \textbf{Sonas-like} if there exist $a, s > 0$ such that
for ratings $x,y \in \mathbb{R}$, when $|x-y| \le s$, we have $\sigma(x,y) = ax-ay + 0.5$.
\end{definition}
In 2011 \cite{Sonas11}, Sonas augmented his previous analysis with an interesting nuance: when \emph{all} FIDE rated chess matches are considered, including those where the participants have lower ratings, then the logistic curve is not so flat and Elo skill curves seem superior.
Thus, the relative value of Elo and Sonas seems to depend a bit on the strength of the pool of players being modeled.
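The two skill-curve shapes can be sketched as follows; the constants are illustrative (a FIDE-style 400-point cap for the Elo curve and hypothetical values of $a$ and $s$ for the Sonas-like curve), and since Definition \ref{def:sonaslike} only constrains the curve inside the threshold, the clamping outside it is one natural choice rather than part of the definition.
\begin{verbatim}
# Sketch of the two skill-curve shapes.  All constants are illustrative.
import numpy as np

def elo_sigma(x, y, threshold=400.0):
    """Logistic Elo curve with a FIDE-style cap on the rating difference."""
    d = np.clip(x - y, -threshold, threshold)
    return 1.0 / (1.0 + 10.0 ** (-d / 400.0))

def sonas_sigma(x, y, a=1.0 / 1400.0, s=490.0):
    """Sonas-like curve: linear in x - y inside the threshold s; the clamp
    outside the threshold is one natural choice, not forced by the definition."""
    d = np.clip(x - y, -s, s)
    return a * d + 0.5

for d in (0, 100, 200, 400, 800):
    print(d, round(float(elo_sigma(d, 0.0)), 3), round(float(sonas_sigma(d, 0.0)), 3))
\end{verbatim}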
\subsection{Strategyproof Rating Systems and Our Results}
An interesting recent trend in rating system design has been to enforce \emph{strategyproofness}.
The idea is that players experience rating as an incentive, and they will sometimes take strategic action to maximize their rating in the system.
A system is strategyproof against a certain kind of undesirable strategic behavior if it does not incentivize a rating-maximizing player to perform that behavior.
We next overview a recent case study, in which players in a real-world system discovered and implemented an attack on a rating system that was missing an important strategyproofness property.
\subsubsection{Volatility Hacking}
An obviously-desirable strategyproofness property is that a rating-maximizing player should always be incentivized to try their best to win each game they play.
Interestingly, these incentives were recently discovered \emph{not} to hold in the popular Glicko rating system.
Glicko \cite{Glickman95} is a higher-dimensional version of Elo that tracks a \emph{volatility parameter} for each player.
A higher volatility parameter indicates that the system is more uncertain about the player's true rating, which in turn amplifies the adjustment function.
This volatility parameter was repeatedly shown to improve predictive accuracy of systems, and so it has been widely adopted.
But recently, an attack has been discovered called \emph{volatility hacking}.
A player first strategically loses many matches, lowering their rating to the point where they only face much lower-rated opponents and can essentially win or lose at will.
Then, the player alternates wins and losses, which boosts their volatility parameter as much as they like while keeping their rating roughly stable.
Finally, they win a few games, and due to their enormous volatility parameter their rating skyrockets well above its initial value.
Volatility hacking was used to attack the game Pok{\'e}mon Go \cite{PokemonReddit}, which uses a version of Glicko to rank players.
An interesting recent paper by Ebtekar and Liu \cite{EL21} shows that some other popular rating systems are also vulnerable to volatility hacking attacks, and it proposes concrete fixes to the handling of volatility parameters that would make a Glicko-like system strategyproof against volatility hacking.
\subsubsection{Opponent Indifference and First Main Result}
This paper is about securing rating systems against a less pernicious but likely much more common type of attack that we will call \emph{opponent selection}.
Many matchmaking systems provide players with some control over their next opponent.
For example, some online games use an interface in which players view a public list of active challenges from opponents of various ratings, and they can accept any challenge they like.
Alternately, some games use random matchmaking, but then let players abort a match without penalty after glimpsing their opponent's rating.
If players have any control over their next opponent's rating, then it is clearly undesirable for a rating system to allow a player to boost their rating by exercising this control strategically.
For example, in the Elo rating system, one can compute that the magnitude of expected gain is greatest when a (correctly-rated) opponent's true rating is exactly halfway between a player's current rating and their true rating.
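This claim is easy to verify numerically under the logistic Elo curve with a constant $K$-factor (the constants below are illustrative):
\begin{verbatim}
# Numerical check: under Elo, the expected gain against a correctly-rated
# opponent is largest when the opponent's rating is halfway between the
# player's current and true ratings.  Constants are illustrative.
import numpy as np

K = 32.0

def sigma(x, y):
    return 1.0 / (1.0 + 10.0 ** (-(x - y) / 400.0))

def alpha(x, y):
    return K * sigma(y, x)          # adjustment consistent with the fairness axiom

x_cur, x_true = 1500.0, 1700.0      # an underrated player
ys = np.linspace(1300.0, 1900.0, 6001)   # candidate correctly-rated opponents
gains = alpha(x_cur, ys) * sigma(x_true, ys) - alpha(ys, x_cur) * sigma(ys, x_true)
print(ys[np.argmax(gains)])         # approximately (1500 + 1700) / 2 = 1600
\end{verbatim}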
Which rating systems are \emph{immune} to opponent selection incentives?
The natural definition would capture systems where a player's expected gain doesn't depend on their opponent's rating, so long as that opponent is correctly rated.
Formally:
\begin{definition} [Opponent Indifference]
A rating system $(\sigma, \alpha)$ with expected gain function $\gamma$ is \textbf{opponent indifferent} if there is a function $\gamma^* : \mathbb{R}^2 \to \mathbb{R}$ such that
$$\gamma(x, x^* | y, y) = \gamma^*(x, x^*) \text{ for all } x, x^*, y.$$
\end{definition}
Before we proceed, let us be more specific about the kind of opponent selection attacks that become impossible in a rating system that satisfies this definition.
The premise is that the players in a system are not usually correctly rated, i.e., their current rating differs significantly from their true rating.
One reason is random walk effects: a player's current rating naturally drifts over time.
Another reason is that a player's true rating might change from day to day due to external factors like rest, stress, distraction, etc.\footnote{Experts have estimated that chess strength fluctuates by about $\pm$ 200 Elo rating points (in the FIDE system) on a given day due to external factors \cite{COchess}. However, we have not been able to find statistical validation of these estimates, so they should be taken as informal.}
An opponent selection attack would have a player seek opponents of one rating when they feel overrated by the system, and another when they feel underrated, causing their rating fluctuations to hold a significantly higher average than they would without strategic behavior.
Having motivated the axiom of opponent indifference, our next goal is to check whether it is satisfied by any actual rating systems, and whether these systems are ``reasonable.''
Although opponent indifferent rating systems exist (this follows from Lemma \ref{lemma:exist-op} in this paper), we actually argue that they are \emph{not} sufficiently expressive to be interesting.
Our reasoning is as follows.
Most interesting games exhibit \emph{skill chains}.
In chess, for example, a hobbyist can beat a beginner (say) 95\% of the time, and an expert can beat a hobbyist 95\% of the time, and a master can beat an expert 95\% of the time, and so on.
Most rating systems allow, in theory, for an infinite chain of players of ascending ratings, with each player beating the next with reasonably high probability.
We can formalize a version of this property as follows:
\begin{definition} [Full Scale] \label{def:fullscale}
A rating system $(\sigma, \alpha)$ has \textbf{full scale} if there exists $p > 0.5$ and an infinite ascending chain of ratings $r_1 < r_2 < \dots$ such that $\sigma\left(r_i, r_{i-1}\right) \ge p$ for all $i$.
\end{definition}
Our critique of opponent indifference is that it is incompatible with full scale, and thus cannot really capture games like chess.
\begin{theorem} [First Main Result] \label{thm:introOIFS}
There is no rating system that simultaneously satisfies opponent indifference and full scale.
\end{theorem}
Although we do not emphasize this point, for most reasonable parameters $p$, the maximum possible length of a skill chain (i.e., increasing sequence of ratings $r_1 < r_2 < \dots$ as in Definition \ref{def:fullscale}) is generally quite short; for large enough $p$ (say $p = 0.9$), it is $2$.
\subsubsection{$P$ Opponent Indifference and Second Main Result}
In light of Theorem \ref{thm:introOIFS}, our goal is now to weaken the definition of opponent indifference in a way that escapes the impossibility result of Theorem \ref{thm:introOIFS}, while still providing an effective type of immunity against opponent selection attacks.
Fortunately, we argue that a reasonable relaxation of opponent indifference is already implicit in the \emph{thresholding effects} used in implementations of Elo and in Sonas.
These systems hardcode a rating difference at which the rating system punts: the system declares that chaotic effects dominate, and ratings are no longer a good predictor of outcome probabilities.
A natural relaxation is to enforce opponent indifference \emph{only} between opponents in the ``non-chaotic'' rating regime.
We call this weaker version \emph{$P$ opponent indifference}, where the parameter $P$ controls the threshold beyond which we no longer require opponent indifference to hold.
In the following, let us say that ratings $x, y$ are $P$-close if we have $\sigma(x, y) \in (0.5-P, 0.5+P)$.
Formally:\footnote{See Definition \ref{def:poi} for the equivalent technical definition used in the paper.}
\begin{definition} [$P$ Opponent Indifference] \label{pop_def}
For a parameter $P \in (0,0.5]$, a rating system is \textbf{$P$ opponent indifferent} if there is a function $\gamma^* : \mathbb{R}^2 \to \mathbb{R}$ such that
$$\gamma(x, x^* | y, y) = \gamma^*(x, x^*)$$
for any $x, x^*, y$ with $x,y$ and $x^*, y$ both $P$-close.
\end{definition}
$P$ opponent indifference might feel like a light tweak on opponent indifference;
it places the exact same requirements as opponent indifference on all ``reasonable'' matches that might be played (according to a thresholding rule).
Thus it provides immunity against opponent selection attacks in the cases of interest.
But, perhaps surprisingly, this relaxation is enough to escape impossibility.
We prove the following characterization theorem.
\begin{definition} [$P$ Separable]
A skill curve $\sigma$ is \textbf{$P$ separable} if there is a weakly increasing function $\beta : \mathbb{R} \to \mathbb{R}$ such that $\sigma(x, y) = \beta(x) - \beta(y) + 0.5$ for all $P$-close $x, y$.
\end{definition}
\begin{definition} [$P$ Constant]\label{def:pconst}
For $P \in (0,0.5]$, a $K$-function is \textbf{$P$ constant} if there is a constant $C$ such that $K(x,y) = C$ for all $x,y$ with $\sigma(x,y)\in(0.5-P,0.5+P)$.
\end{definition}
\begin{theorem} [Second Main Result -- See Theorem \ref{theorem:characterize_pop} in the body]
A nontrivial rating system $(\sigma, \alpha)$ is $P$ opponent indifferent if and only if $\sigma$ is $P$ separable and its $K$-function is $P$ constant.
\end{theorem}
It is relatively easy from this characterization theorem to show that $P$ opponent indifferent rating systems exist, and that they can exhibit full scale (even with respect to any given parameter $P$).
In the next part, we show that these desirable properties continue to hold even under a natural \emph{strengthening} of $P$ opponent indifference.
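For instance, the following sketch (with illustrative constants) builds a Sonas-like skill curve that is clipped outside a probability band of half-width $T$ around $0.5$, pairs it with a constant $K$-function, and numerically checks both the indifference property for opponents within the band and the existence of an infinite ascending chain in the sense of Definition \ref{def:fullscale}. It uses the identity $\gamma(x, x^* \mid y, y) = K(x,y)\left(\sigma(x^*,y) - \sigma(x,y)\right)$, which follows from the fairness axiom (Lemma \ref{exp-gain}).
\begin{verbatim}
# Sketch: a P opponent indifferent rating system exhibiting full scale.
# sigma is linear (slope a) within a probability band of half-width T around
# 0.5 and clipped outside it; K is constant.  All constants are illustrative.
a, T, K = 1.0 / 1000.0, 0.4, 24.0          # requires T < 0.5

def sigma(x, y):
    return 0.5 + max(-T, min(T, a * (x - y)))

def gamma(x, x_true, y):
    """Expected gain against a correctly-rated opponent y."""
    return K * (sigma(x_true, y) - sigma(x, y))

# Indifference check for P <= T: for any opponent y that is P-close to both x
# and x_true, the expected gain is the same.
P = 0.3
x, x_true = 1000.0, 1100.0
for y in (900.0, 1000.0, 1100.0, 1200.0):
    assert abs(sigma(x, y) - 0.5) < P and abs(sigma(x_true, y) - 0.5) < P
    print(y, round(gamma(x, x_true, y), 9))   # identical for every such y

# Full scale: ratings spaced 2T/a apart form an infinite ascending chain, each
# member beating the previous one with probability 0.5 + T > 0.5.
chain = [i * (2 * T / a) for i in range(5)]
print([sigma(chain[i], chain[i - 1]) for i in range(1, 5)])   # all equal 0.9
\end{verbatim}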
\subsubsection{Strong $P$ Opponent Indifference and Third Main Result}
$P$ opponent indifference requires indifference between any two opponents within threshold, \emph{assuming those opponents are correctly rated}.
One might want to more strictly require indifference between two opponents that are both \emph{incorrectly rated by the same amount}.
Formally:
\begin{definition} [Strong $P$ Opponent Indifference]
For a parameter $P$, a rating system is \textbf{strongly $P$ opponent indifferent} if there is a function $\gamma^* : \mathbb{R}^3 \to \mathbb{R}$ such that
$$\gamma(x, x^* | y, y+\delta) = \gamma^*(x, x^*,\delta)$$
for all $x, x^*, y,y+\delta$ with $x, y$ and $x^*, y+\delta$ both $P$-close.
\end{definition}
This strengthening remains tenable, as shown by the following theorem:
\begin{theorem} [Third Main Result -- See Corollary \ref{cor:sonaschar}] For any $0 < P \le 0.5$ and any rating system $(\sigma, \alpha)$, the rating system is strongly $P$ opponent indifferent if and only if it is Sonas-like with a $P$ constant $K$-function.
\end{theorem}
Conceptually, we interpret this theorem as a significant technical point in favor of Sonas.
For games where Elo and Sonas are equally preferable in terms of accurately modeling the player pool, the Sonas model has the additional advantage of resisting opponent selection attacks, which provides a reason to favor it.
In Corollary \ref{cor:soiimpossible}, we also discuss the analogous strengthening of (general) opponent indifference, and we strengthen our impossibility result to show that no nontrivial strong (general) opponent indifferent rating systems exist, whether or not they satisfy full scale.
\section{Preliminaries}
Some of our results in the introduction reference \emph{nontrivial} rating systems.
We define these formally:
\begin{definition}[Trivial and nontrivial]
A rating system is \textbf{trivial} if for all $x,y$ we have $\sigma(x,y) = 0.5$, and \textbf{nontrivial} otherwise.
\end{definition}
In the introduction, we discuss versions of properties that only hold over a restricted domain of ratings (like $P$ opponent indifference, $P$ constant, etc).
In the technical part of this paper, we will need to similarly generalize other functions, and so the following (slightly informal) language will be helpful:
\begin{definition} [Property over $(A,B)$]
Given a property defined over a set of inputs in $\mathbb{R}$, we say that the property holds \textbf{over $(A,B)$} if it holds whenever the inputs are taken in the interval $(A,B)$.
We will equivalently say that $(A, B)$ is a property interval for the rating system.
For example, a rating system is nontrivial over $(A, B)$ if there exist $x,y \in (A,B)$ with $\sigma(x,y) \ne 0.5$, and in this case we say that $(A, B)$ is a nontrivial interval for the rating system.
\end{definition}
The definition of the expected gain function in the introduction is phrased in a way that makes its intuitive meaning clear.
However, the following alternate characterization in terms of the $K$-function will be useful in some of our proofs.
\begin{lemma}\label{exp-gain}
The expected gain function for a rating system $(\sigma, \alpha)$ satisfies
\begin{align*}
\gamma(x,x^* \mid y,y^*) = K(x,y)\left(\sigma(x^*,y^*)-\sigma(x,y)\right).
\end{align*}
\end{lemma}
\begin{proof}
We compute:
\begin{align*}
\gamma(x,x^* \mid y,y^*) &= \alpha(x,y)\sigma(x^*,y^*)-\alpha(y,x)\sigma(y^*,x^*)\\
&= K(x,y)\sigma(y,x)\sigma(x^*,y^*)-K(y,x)\sigma(x,y)\sigma(y^*,x^*)\tag*{Def of $\alpha$ using $K$-function}\\
&= K(x,y)(\sigma(y,x)\sigma(x^*,y^*)-\sigma(x,y)\sigma(y^*,x^*))\tag*{K is symmetric}\\
&= K(x,y)((1-\sigma(x,y))\sigma(x^*,y^*)-\sigma(x,y)(1-\sigma(x^*,y^*)))\tag*{$\sigma$ draw-free}\\
&= K(x,y)(\sigma(x^*,y^*)-\sigma(x,y)\sigma(x^*,y^*)-\sigma(x,y)+\sigma(x,y)\sigma(x^*,y^*))\\
&= K(x,y)(\sigma(x^*,y^*)-\sigma(x,y)). \tag*{\qedhere}
\end{align*}
\end{proof}
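As a quick numerical sanity check of this identity, one can compare the two expressions for the expected gain using an arbitrary illustrative skill curve and an arbitrary positive symmetric $K$-function:
\begin{verbatim}
# Numerical sanity check of the identity, using an arbitrary illustrative
# skill curve and an arbitrary positive symmetric K-function.
import math

def sigma(x, y):
    return 1.0 / (1.0 + math.exp(-(x - y)))        # any draw-free skill curve

def K(x, y):
    return 5.0 + math.cos(x + y)                    # any positive symmetric function

def alpha(x, y):
    return K(x, y) * sigma(y, x)                    # form forced by the fairness axiom

x, xs, y, ys = 0.3, 0.9, -0.4, 0.1                  # current and true ratings
lhs = alpha(x, y) * sigma(xs, ys) - alpha(y, x) * sigma(ys, xs)
rhs = K(x, y) * (sigma(xs, ys) - sigma(x, y))
print(lhs, rhs)            # the two expressions agree up to floating-point error
\end{verbatim}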
\section{Opponent Indifference}
In this section we will characterize all opponent indifferent rating systems.
Some of our lemmas are proved with extra generality ``over $(A,B)$"; this generality will become useful in the following section.
\subsection{Opponent Indifference vs.\ Full Scale\label{op-vs-fs}}
This section is about the incompatibility between the opponent indifference and full scale axioms discussed in the introduction.
\begin{lemma}\label{opntk-func}
If a rating system $(\sigma, \alpha)$ is opponent indifferent over $(A,B)$ and nontrivial over $(A,B)$, then $K(x,y)$ is constant over $(A,B)$.
\end{lemma}
\begin{proof}
Let $x,x^* \in (A,B)$ where $\sigma(x,x^*) \ne 0.5$ (which exist by nontriviality).
From the definition of opponent indifference, the expected gain against a correctly-rated opponent does not depend on that opponent's rating.
We therefore have
\begin{align*}
\gamma(x, x^* \mid x, x) &= \gamma(x, x^* \mid x^*, x^*) \tag*{Opponent Indifference}\\
K(x,x)(\sigma(x^*,x)-\sigma(x,x)) &= K(x,x^*)(\sigma(x^*,x^*)-\sigma(x,x^*)) \tag*{Lemma \ref{exp-gain}}\\
K(x,x)(\sigma(x^*,x)-0.5) &= K(x,x^*)(0.5-\sigma(x,x^*))\\
K(x,x)(1-\sigma(x,x^*)-0.5) &= K(x,x^*)(0.5-\sigma(x,x^*))\tag*{Draw-free}\\
K(x,x)(0.5-\sigma(x,x^*)) &= K(x,x^*)(0.5-\sigma(x,x^*))\\
K(x,x) &= K(x,x^*).
\end{align*}
Thus, over $(A,B)$, $K(x, y)$ depends only on its first input.
Since additionally $K$ is symmetric in its two parameters, it must in fact be constant over $(A,B)$.
\end{proof}
We comment that the nontriviality hypothesis in this lemma is necessary, since otherwise $K(x,y)$ can be essentially any symmetric function over $(A,B)$ and the expected gain will still be $0$.
\begin{lemma}\label{lemma:relation-xyz}
If a rating system $(\sigma, \alpha)$ is opponent indifferent over $(A,B)$, then
$$\sigma(x,y) = \sigma(x,z)-\sigma(y,z) + 0.5$$
for all $x,y,z \in (A,B)$.
\end{lemma}
\begin{proof}
If the system is trivial over $(A,B)$ then the lemma holds immediately, so assume nontriviality.
From the definition of opponent indifference, the expected gain against a correctly-rated opponent depends only on the player's current and true ratings, $x$ and $x^*$, so long as all ratings lie within the interval $(A,B)$.
So we have:
\begin{align*}
\gamma(x, x^* \mid y, y) &= \gamma(x, x^* \mid z, z) \tag*{Opponent Indifference}\\
K(x,y) (\sigma(x^*,y)-\sigma(x,y)) &= K(x,z) (\sigma(x^*,z)-\sigma(x,z)) \tag*{Lemma \ref{exp-gain}}\\
C \cdot (\sigma(x^*,y)-\sigma(x,y)) &= C \cdot (\sigma(x^*,z)-\sigma(x,z)) \tag*{Lemma \ref{opntk-func}.}
\end{align*}
Now consider the possible setting $x^* = y$.
Under this, we continue:
\begin{align*}
C \cdot (\sigma(y,y)-\sigma(x,y)) &= C \cdot (\sigma(y,z)-\sigma(x,z))\\
0.5-\sigma(x,y) &= \sigma(y,z)-\sigma(x,z)\\
\sigma(x,y) &= \sigma(x,z)-\sigma(y,z) + 0.5. \tag*{\qedhere}
\end{align*}
\end{proof}
\begin{definition}[Separable]
A skill curve is \textbf{separable} if there is a function $\beta: \mathbb{R} \to [C,C+0.5]$ for some constant C such that
$$\sigma(x, y) = \beta(x) - \beta(y) + 0.5.$$
for all $x,y$. $\beta$ is called a \textbf{bisector} of the skill curve.
\end{definition}
Note that bisectors are not unique: if $\beta$ is a bisector for $\sigma$, then any vertical translation of $\beta$ is also a bisector.
However, all bisectors differ by this translation:
\begin{lemma} \label{lem:bisectorshift}
For any skill curve $\sigma$ that is separable over $(A,B)$, if $\beta, \beta'$ are both bisectors of $\sigma$ over $(A,B)$, then there is a constant $C$ such that $\beta'(x) = \beta(x) + C$ for all $x \in (A,B)$.
\end{lemma}
\begin{proof}
Fix an arbitrary $y \in (A, B)$.
We have
\begin{align*}
\sigma(x, y) = \beta'(x) - \beta'(y) + 0.5 = \beta(x)-\beta(y) + 0.5
\end{align*}
and so, rearranging, we get
\begin{align*}
\beta'(x) = \beta(x) + (\beta'(y) - \beta(y)).
\end{align*}
Now the claim follows by taking $C := \beta'(y) - \beta(y)$.
\end{proof}
\begin{lemma}\label{op-sep}
If a rating system $(\sigma,\alpha)$ is opponent indifferent over $(A,B)$, then $\sigma$ is separable over $(A,B)$ and all bisectors are continuous.
\end{lemma}
\begin{proof}
Suppose $(\sigma,\alpha)$ is opponent indifferent over $(A,B)$. Let $m \in (A,B)$ be an arbitrary constant. From Lemma \ref{lemma:relation-xyz} we have
$$\sigma(x,y) = \sigma(x,m) - \sigma(y,m) + 0.5.$$
Therefore $\beta(x) := \sigma(x,m)$ is a bisector for $\sigma$ over $(A,B)$, and since $\sigma$ is continuous, $\beta$ is continuous as well.
Finally, Lemma \ref{lem:bisectorshift} implies that since one bisector is continuous, all bisectors are continuous.
\end{proof}
\begin{lemma}\label{lemma:exist-op}
If a rating system's $K$-function is constant over $(A,B)$ and the skill curve is separable over $(A,B)$ then the rating system is opponent indifferent over $(A,B)$.
\end{lemma}
\begin{proof}
Plugging the equations into the expected gain function for $x,x^*,y \in (A,B)$, we have
\begin{align*}
\gamma(x, x^* \mid y, y) &= K(x,y) (\sigma(x^*,y)-\sigma(x,y)) \tag*{Lemma \ref{exp-gain}}\\
&= C \cdot (\beta(x^*)-\beta(y)+0.5-(\beta(x)-\beta(y)+0.5)) \\
&= C \cdot (\beta(x^*)-\beta(x)).
\end{align*}
Thus $\gamma$ depends only on its first two parameters $x$ and $x^*$, implying opponent indifference.
\end{proof}
Note that Lemma \ref{lemma:exist-op} implies that nontrivial opponent indifferent rating systems do indeed exist, as we can choose a constant $K$ function and suitable bisector function $\beta$ over the entire interval $(-\infty, \infty)$.
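A concrete construction along these lines takes a bounded increasing bisector (the choice $\beta(x) = 0.25\tanh(x)$ below is purely illustrative) together with a constant $K$-function; the sketch confirms numerically that the expected gain against a correctly-rated opponent does not depend on that opponent's rating, using the identity from Lemma \ref{exp-gain}.
\begin{verbatim}
# Sketch: a nontrivial, globally opponent indifferent rating system built from
# a bounded bisector and a constant K-function.  beta is an illustrative choice.
import math

K = 10.0

def beta(x):
    return 0.25 * math.tanh(x)      # range (-0.25, 0.25), so sigma stays in (0, 1)

def sigma(x, y):
    return beta(x) - beta(y) + 0.5

def gamma(x, x_true, y):
    """Expected gain against a correctly-rated opponent y,
    via the identity gamma = K * (sigma(x_true, y) - sigma(x, y))."""
    return K * (sigma(x_true, y) - sigma(x, y))

x, x_true = 0.2, 1.0
print([round(gamma(x, x_true, y), 9) for y in (-3.0, -0.5, 0.0, 0.7, 2.5)])
# Every value equals K * (beta(x_true) - beta(x)), independent of the opponent.
# Full scale fails here, consistent with the theorem below: the bounded range
# of beta caps how far sigma can move away from 0.5 along any ascending chain.
\end{verbatim}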
However, as discussed in the introduction, our next theorem shows that no opponent indifferent rating system can exhibit full scale:
\begin{theorem} \label{thm:nooifs}
No opponent indifferent rating system (over $(-\infty, \infty)$) has full scale.
\end{theorem}
\begin{proof}
Let $(\sigma,\alpha)$ be an opponent indifferent rating system and fix some $p \in (0.5,1)$.
Our plan is to prove that for some integer $N$ depending only on $p$, there does not exist a chain of ratings $r_1 < \dots < r_N$ such that for all $1 <i \le N$, we have $\sigma(r_i, r_{i-1}) = p$.
(By the separability established below, replacing these equalities with the inequalities $\sigma(r_i, r_{i-1}) \ge p$ required by full scale can only increase $\sigma(r_N, r_1)$, so the same bound on $N$ applies.)
More specifically, our strategy is to prove that
\begin{equation}
\sigma(r_N,r_1) = (N-1)p - \frac{N-2}{2}. \label{eq:ind}
\end{equation}
Since $\sigma(r_N, r_1) \le 1$, this implies
\begin{align*}
(N-1)p - \frac{N-2}{2} \le 1\\
Np - p - \frac{N}{2} + 1 \le 1\\
N\left(p - \frac{1}{2}\right) \le p\\
N \le \frac{2p}{2p-1}
\end{align*}
which is an upper bound for $N$ depending only on $p$, as desired.
It now remains to prove equation (\ref{eq:ind}).
We do so by induction on $N$:
\begin{itemize}
\item (Base Case, $N = 1$) We have $\sigma(r_1,r_1) = 1/2$, as desired.
\item (Inductive Step)
By Lemma \ref{op-sep} the skill curve is separable, and so for some bisector $\beta$ we have
\begin{align*}
\sigma(r_{N+1},r_1) &= \beta(r_{N+1}) - \beta(r_1) + 0.5\\
&= \left(\beta(r_{N+1}) - \beta(r_N) + 0.5\right) - \left(\beta(r_1) - \beta(r_N) + 0.5 \right) + 0.5\\
&= \sigma(r_{N+1},r_N) - \sigma(r_1,r_N) + 0.5\\
&= p - (1-\sigma(r_N,r_1)) + 0.5 \tag*{Draw-free}\\
&= p - \left(1-\left((N-1)p - \frac{N-2}{2}\right)\right) + 0.5 \tag*{Inductive Hypothesis}\\
&= p - 1 + (N-1)p - \frac{N-2}{2} + 0.5\\
&= Np -\frac{N-1}{2}
\end{align*}
which verifies (\ref{eq:ind}) for $N+1$. \qedhere
\end{itemize}
\end{proof}
We note the bound $N \le \frac{2p}{2p-1}$ that arises in this proof, which for a given $p$ controls the maximum possible length of a skill chain that may exist in an opponent indifferent system.
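For reference, this bound can be tabulated for a few values of $p$ (exact rational arithmetic avoids floating-point edge cases when the bound is an integer):
\begin{verbatim}
# Tabulate the bound N <= 2p / (2p - 1) using exact rational arithmetic.
from fractions import Fraction

for p in (Fraction(11, 20), Fraction(3, 5), Fraction(3, 4), Fraction(9, 10)):
    bound = 2 * p / (2 * p - 1)
    print(float(p), float(bound), int(bound))   # p, the bound, longest allowed chain
\end{verbatim}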
\subsection{Impossibility of Strong Opponent Indifference}
Here, we consider an even stronger notion of opponent indifference.
While opponent indifference enforces that we are indifferent between two \emph{correctly-rated} opponents, we could more strongly require indifference between two opponents \emph{misrated by the same amount}.
In particular, the following definition enforces indifference between opponents who are overrated by $\delta$:
\begin{definition} [Strong Opponent Indifference]
A rating system is \textbf{strongly opponent indifferent} if there is a function $\gamma^* : \mathbb{R}^3 \to \mathbb{R}$ such that, for all $x, x^*, y, y+\delta$, we have
$$\gamma(x,x^* \mid y,y+\delta) = \gamma^*(x, x^*,\delta).$$
\end{definition}
A strongly opponent indifferent rating system is also opponent indifferent (by considering $\delta=0$), and thus by Theorem \ref{thm:nooifs}, it cannot exhibit full scale.
We will prove an even stronger impossibility theorem.
While this is a slight detour (as we have arguably already shown that strong opponent indifference is undesirable), the results in this section are used in the following section.
As before, we prove our intermediate lemmas over arbitrary intervals $(A, B)$, which provides flexibility that will be helpful later in the paper.
\begin{definition} [Translation Invariant]
A rating system $(\sigma,\alpha)$ is \textbf{translation invariant} if there is a function $\sigma^* : \mathbb{R} \to [0, 1]$ with $\sigma(x, y) = \sigma^*(x-y)$ for all $x, y$.
\end{definition}
\begin{theorem}\label{sop-t}
If a rating system is strongly opponent indifferent over $(A,B)$, then it is translation invariant over $(A,B)$.
\end{theorem}
\begin{proof}
If the system is trivial over $(A,B)$ then the lemma is immediate, so assume nontriviality over $(A,B)$.
Let $(\sigma,\alpha)$ be a strongly opponent indifferent rating system.
To show translation invariance over $(A,B)$, let $x, x^*, \delta$ be such that $x,x^*,x+\delta,x^*+\delta \in (A,B)$.
Since strong opponent indifference implies opponent indifference, Lemmas \ref{exp-gain} and \ref{opntk-func} apply.
We can therefore compute:
\begin{align*}
\gamma(x, x^* \mid y, y+\delta) &= K(x,y) (\sigma(x^*,y+\delta)-\sigma(x,y)) \tag*{Lemma \ref{exp-gain}}\\
&= C \cdot (\sigma(x^*,y+\delta)-\sigma(x,y)) \tag*{Lemma \ref{opntk-func}.}
\end{align*}
From the definition of strong opponent indifference, the expected gain function depends only on the difference between its latter two parameters, and so we have
\begin{align*}
\gamma(x, x^* \mid x^*, x^*+\delta) &= \gamma(x, x^* \mid x, x+\delta)\\
C \cdot (\sigma(x^*,x^*+\delta)-\sigma(x,x^*)) &= C \cdot (\sigma(x^*,x+\delta )-\sigma(x,x)) \tag*{previous equation}\\
\sigma(x^*,x^* + \delta)-\sigma(x,x^*)&= \sigma(x^*,x + \delta )-0.5\\
\sigma(x,x^*) &= \sigma(x,x^* + \delta)- \sigma(x^*,x + \delta ) + 0.5\\
\sigma(x,x + \delta)- \sigma(x^*,x + \delta ) + 0.5 &= \sigma(x^*,x^* + \delta)- \sigma(x^*,x + \delta ) + 0.5 \tag*{Lemma \ref{lemma:relation-xyz}}\\
\sigma(x,x + \delta)&= \sigma(x^*,x^* + \delta),
\end{align*}
which implies translation invariance over $(A, B)$.
\end{proof}
\begin{theorem}\label{sop-linear}
If a rating system $(\sigma, \alpha)$ is strongly opponent indifferent over $(A,B)$, then $\sigma$ has a bisector that is linear over $(A,B)$ (i.e., on the interval $(A, B)$ it coincides with a function of the form $\beta(x) = mx$ for some constant $m$).
\end{theorem}
\begin{proof}
By considering an appropriate horizontal translation of our rating system, we may assume without loss of generality that our opponent indifferent interval is symmetric about the origin, i.e., it has the form $(-A, A)$.
Let $\beta$ be an arbitrary function that bisects $\sigma$ over $(-A, A)$, and further assume without loss of generality that $\beta$ intersects the origin, i.e., $\beta(0)=0$.
Our goal is to prove that, for all $x, y \in (-A, A)$, we have $\beta(x+y) = \beta(x) + \beta(y)$.
Since we already have that $\beta$ is continuous, this implies linearity of $\beta$ over $(-A, A)$ (see Appendix \ref{app:linearize} for details).
Let $x, y, x+y \in (-A, A)$.
Writing $f := \sigma^*$ for the function guaranteed by translation invariance, we then have
\begin{align}
\sigma(x+y,x) = f(y) \tag*{Theorem \ref{sop-t}}\\
\beta(x+y)-\beta(x) = f(y)\tag*{Lemma \ref{op-sep}}\\
\beta(x+y) = \beta(x)+f(y) \label{eq:xdelta}.
\end{align}
Additionally, by reversing the roles of $x$ and $y$, we have
\begin{align}
\beta(x+y)= f(x)+\beta(y) \label{eq:deltax}.
\end{align}
Thus, combining (\ref{eq:xdelta}) and (\ref{eq:deltax}), we get
\begin{align*}
f(x) + \beta(y) &= \beta(x) + f(y).
\end{align*}
It follows from equation (\ref{eq:xdelta}) that $f(0) = 0$, and we recall that $\beta(0) = 0$.
Therefore, plugging $y=0$ into the previous equation, we get
\begin{align*}
f(x) + \beta(0) &= \beta(x) + f(0)\\
f(x) &= \beta(x)
\end{align*}
and so, recombining with (\ref{eq:xdelta}), we have $\beta(x+y) = \beta(x)+\beta(y)$, as desired.
\end{proof}
\begin{corollary} \label{cor:soiimpossible}
A strongly opponent indifferent rating system (over $(-\infty, \infty)$) must be trivial.
\end{corollary}
\begin{proof}
If a rating system is strongly opponent indifferent, then by Theorem \ref{sop-linear} there exists a bisector $\beta$ that is linear over $(-\infty, \infty)$.
Additionally, the range of $\beta$ is contained within $[C,C+0.5]$ for some constant $C$.
The only such functions are those with slope $0$.
Thus
$$\sigma(x,y) = 0 - 0 + 0.5 = 0.5$$
for all $x,y \in \mathbb{R}$, and so it is trivial.
\end{proof}
\section{$P$ Opponent Indifference}
Recall that $P$ opponent indifference is a relaxation of (full) opponent indifference, in which we only require indifference among opponents who are reasonably evenly matched.
\begin{definition} [$P$ Opponent Indifference]\label{def:poi}
For $P \in (0,0.5]$, a rating system is \textbf{$P$ opponent indifferent} if the rating system is opponent indifferent over $(A,B)$ for all $A<B$ where\footnote{Technically, this definition differs slightly from Definition \ref{pop_def} of partial opponent indifference in the introduction, because it forces all pairs among $x, x^*, y$ to be $P$-close, whereas Definition \ref{pop_def} only explicitly forces $x, y$, and $x^*, y$ to be $P$-close. However, it follows as an easy corollary of the results in this section that the two definitions are equivalent.}
$$\sigma(A, B) > 0.5-P.$$
\end{definition}
Our next goal is to prove the following characterization theorem:
\begin{theorem}[Characterization of $P$ Opponent Indifferent Rating Systems]\label{theorem:characterize_pop}
A nontrivial rating system is $P$ opponent indifferent if and only if the skill curve is $P$ separable and the $K$-function is $P$ constant.
\end{theorem}
We work towards a proof with some intermediate structural lemmas.
\begin{lemma}\label{lemma:pop_reals}
If a rating system $(\sigma,\alpha)$ is $P$ opponent indifferent then every real number is within an opponent indifferent interval.
\end{lemma}
\begin{proof}
Let $b\in \mathbb{R}$. Since $\sigma$ is continuous, there exists $\epsilon > 0$ such that $0.5\ge\sigma(b-\epsilon, b+\epsilon)> 0.5-P$. Thus $(b-\epsilon, b+\epsilon)$ is an opponent indifferent interval.
\end{proof}
\begin{lemma}\label{p_sep}
If a rating system $(\sigma,\alpha)$ is $P$ opponent indifferent, then $\sigma$ is $P$ separable with a continuous bisector.
\end{lemma}
Let us quickly discuss this lemma statement before we begin its proof.
By Lemma \ref{op-sep}, for each $P$-interval $(A, B)$, there exists a continuous function $\beta$ that bisects $\sigma$ on this interval.
But this lemma is claiming something stronger: that there is a \emph{single} continuous function $\beta$ that bisects $\sigma$ on \emph{all} $P$-intervals simultaneously.
This strengthening requires a bit of topology to prove, but is overall fairly straightforward.
\begin{proof} [Proof of Lemma \ref{p_sep}]
We construct a bisector $\beta$ as follows.
First, we arbitrarily fix a point; say, $\beta(0)=0$, and all other values of $\beta$ are currently undefined.
Then, iterate the following process.
Let $x$ be the current supremum of the points on which $\beta$ has been defined.
Let $\varepsilon>0$ be the largest value such that $(x-\varepsilon, x+\varepsilon)$ is a $P$-interval.
Note that this interval intersects at least one previously-defined point, and thus there is a unique extension of $\beta$ to this interval that bisects $\sigma$ on $(x-\varepsilon, x+\varepsilon)$.
Extend $\beta$ to this interval, and repeat.
We claim that, for every nonnegative number $r$, there is a constant $c_r$ such that the value of $\beta(r)$ is defined after at most $c_r$ iterations of this process.
To see this, let $D \subseteq \mathbb{R}$ be the set of points with this property, and suppose for contradiction that the supremum of $D$ is a finite real number $r^*$.
Note that $D$ is the union of open intervals, and thus $D$ is open, which means it does not contain its supremum.
Since $r^* \notin D$, there is no constant $c_{r^*}$ for which $\beta(r^*)$ is defined after $c_{r^*}$ iterations.
Additionally, since $r^*$ is the supremum of $D$, any interval of the form $(r^* - \varepsilon, r^* + \varepsilon)$ intersects $D$.
Let $r \in (r^* - \varepsilon, r^* + \varepsilon) \cap D$.
Then after $c_r+1$ iterations, we would define $\beta(r^*)$.
Since $c_r+1$ is finite, this implies $r^* \in D$, which completes the contradiction.
By a symmetric process, we may then extend $\beta$ to negative inputs.
\end{proof}
\begin{lemma}\label{lemma:constant-pk}
For a nontrivial $P$ opponent indifferent rating system, every opponent indifferent interval $(A,B)$ has a constant $K$-function over $(A,B)$.
\end{lemma}
\begin{proof}
The rating system is either nontrivial or trivial over $(A,B)$. If the rating system is nontrivial, the result is immediate by Lemma \ref{opntk-func}.
Thus the remaining case is when the system is trivial over $(A, B)$, but nontrivial overall.
We have $\sigma(A, B) = 0.5$.
Since $\sigma$ is not identically $0.5$, there exist $B' > B, A' < A$ such that $\sigma(A', B') < 0.5$.
Moreover, since $\sigma$ is continuous, we may specifically choose values $A', B'$ satisfying
$$0.5-P < \sigma(A', B') < 0.5.$$
So $(A', B')$ is a $P$-interval, and $\sigma$ is nontrivial over $(A', B')$.
By Lemma \ref{opntk-func}, we thus have a constant $K$-function over $(A', B')$.
Since $(A, B) \subseteq (A', B')$, the lemma follows.
\end{proof}
We are now ready to prove Theorem \ref{theorem:characterize_pop}.
\begin{proof} [Proof of Theorem \ref{theorem:characterize_pop}]
($\rightarrow$) Lemma \ref{p_sep} implies that $\sigma$ is $P$ separable, so it only remains to show that the $K$-function is $P$ constant.
We prove this in two steps.
First, let $f(x) = K(x, x)$, and we will prove that $f(x)$ is constant.
Since every $x$ lies in an opponent indifferent interval (Lemma \ref{lemma:pop_reals}), by Lemma \ref{lemma:constant-pk} $f$ is constant on that interval.
In particular this implies $f$ is differentiable at $x$, with $f'(x)=0$, and thus $f$ is constant.
Finally: for any $x, y$ in the same opponent indifferent interval $(A, B)$, by Lemma \ref{lemma:constant-pk} we have $K(x, y) = K(x, x)$, and thus $K$ is $P$ constant.
($\leftarrow$) This direction is immediate from Lemma \ref{lemma:exist-op}.
\end{proof}
\subsection{Strong $P$ Opponent Indifference}
In this section, we will show how the strong version of P opponent indifference characterizes Sonas-like curves.
We begin with an auxiliary lemma:
\begin{lemma}\label{lemma:sufficient-sop}
If a rating system's $K$-function is constant over $(A,B)$ and the skill curve is separable over $(A,B)$ with a linear bisector, then the rating system is strongly opponent indifferent over $(A,B)$.
\end{lemma}
\begin{proof}
Plugging the equations into the expected gain function for $x,x^*,y,y+\delta \in (A,B)$, we have
\begin{align*}
\gamma(x, x^* \mid y, y+\delta) &= K(x,y) (\sigma(x^*,y+\delta)-\sigma(x,y)) \tag*{Lemma \ref{exp-gain}}\\
&= C \cdot (\beta(x^*)-\beta(y+\delta)+0.5-(\beta(x)-\beta(y)+0.5)) \\
&= C \cdot (\beta(x^*)-\beta(x) - \beta(\delta))
\end{align*}
which depends only on $x, x^*$, and $\delta$.
\end{proof}
\begin{definition}[Strong $P$ Opponent Indifference]
For $P \in (0,0.5]$, a rating system is \textbf{strongly $P$ opponent indifferent} if the rating system is strongly opponent indifferent over $(A,B)$ for all A and B where\footnote{Similar to $P$ opponent indifference, this definition is equivalent to the one given in the introduction, despite looking slightly different.}
$$\sigma(A, B) > 0.5-P.$$
\end{definition}
\begin{theorem}[Characterization of Strongly $P$ Opponent Indifferent Rating Systems]\label{spop-char}
A nontrivial continuous rating system $(\sigma, \alpha)$ is strongly $P$ opponent indifferent if and only if the skill curve is $P$ separable with a linear bisector and the $K$-function is $P$ constant.
\end{theorem}
\begin{proof} ($\longrightarrow$)
Assume the rating system is strongly $P$ opponent indifferent. Because strongly $P$ opponent indifferent rating systems are also $P$ opponent indifferent, we already have that the skill curve is $P$ separable and the $K$-function is $P$ constant by Theorem \ref{theorem:characterize_pop}.
It only remains to prove that the skill curve has a linear bisector.
The skill curve has a continuous bisector $\beta$ by Lemma \ref{p_sep}.
Recall from Theorem \ref{sop-linear} that, for any strongly opponent indifferent interval $(A, B)$, $\sigma$ is separable on this interval with a bisector that is linear over the interval.
Moreover, the slope of $\beta$ on this interval must be $\frac{\sigma(B,A) - 0.5}{B-A}$.
From Lemma \ref{lemma:pop_reals} and the fact that the rating system is nontrivial, we know there exists some $\varepsilon>0$ and $a \in \mathbb{R}$ such that $(a-\varepsilon,a+\varepsilon)$ is a nontrivial strongly opponent indifferent interval.
Since the bisector is linear and nontrivial over $(a-\varepsilon,a+\varepsilon)$, we know $ 0.5-P< \sigma(a-\varepsilon,a+\varepsilon) < 0.5$, and thus the slope of $\beta$ is positive over $(a-\varepsilon,a+\varepsilon)$.
Because of this, there must be some value $c$ where $\sigma(a,c) = \sigma(a-\varepsilon,a+\varepsilon)$. Thus $(a-\varepsilon,a+\varepsilon)$ and $(a,c)$ are overlapping strongly opponent indifferent intervals, so $\beta$ must have the same slope over both intervals. In order for this to be the case, $c = a+2\varepsilon$.
We can continue this argument, repeatedly adding $\varepsilon$, to show that $\beta$ has the same slope as on the interval $(a-\varepsilon,a+\varepsilon)$ for all $r>a$. We can also make a similar argument, subtracting $\varepsilon$ each time, to show that the bisector has the same slope as on the interval $(a-\varepsilon,a+\varepsilon)$ for $r<a$. Thus, the bisector's slope is constant. Since any vertical translation of $\beta$ is also a bisector, there exists a linear bisector.
($\longleftarrow$) This direction follows directly from Lemma $\ref{lemma:sufficient-sop}$.
\end{proof}
\begin{definition}[$P$ Translation Invariant]
A rating system $(\sigma,\alpha)$ is \textbf{$P$ translation invariant} if for every $x,y \in \mathbb{R}$ where $\sigma(x,y) \in (0.5-P,0.5+P)$, we have $\sigma(x,y) = \sigma^*(x-y)$ for some $\sigma^*: \mathbb{R} \to \mathbb{R}$.
\end{definition}
\begin{lemma}\label{sonas-like}
A skill curve is Sonas-like with parameters $a, s$ if and only if it is $\sigma(s,0) - 0.5$ separable with $\beta(x) = ax$.
(Sonas-like skill curves are defined in Definition \ref{def:sonaslike}.)
\end{lemma}
\begin{proof}
($\longrightarrow$) This direction follows from Definition \ref{def:sonaslike}, as $\sigma(s+x,x)$ is constant for all $x$.\\
($\longleftarrow$) From the definition of $P$ separable and linear bisectors, for all $x,x+\delta$ where $\sigma(x,x+\delta) \in (0.5-P,0.5+P),$ we have
$$\sigma(x,x+\delta) = \beta(x)-\beta(x+\delta) + 0.5 = -\beta(\delta)+0.5,$$
and so the rating system is $P$ translation invariant. Thus, for all $|x-y| < s$ we have $\sigma(x,y) = ax-ay + 0.5$.
Since $\sigma$ is continuous, it additionally holds that $\sigma(x,y) = ax-ay + 0.5$ when $|x-y| = s$.
\end{proof}
\begin{corollary} \label{cor:sonaschar}
A rating system is strongly $P$ opponent indifferent if and only if it is Sonas-like with a $P$ constant $K$-function.
\end{corollary}
\begin{proof}
This follows directly from Theorem $\ref{spop-char}$ and Lemma $\ref{sonas-like}$.
\end{proof}
\bibliographystyle{plainurl}
\section{INTRODUCTION}
Trip recommendation (or trip planning) aims to recommend a trip consisting of several ordered Points of Interest (POIs) for a user to maximize the user experience.
This problem has been extensively investigated \zhou{over the past years} \cite{lim2015personalized, chen2016learning, jia2019joint}.
Most existing \zjb{studies} tackle the problem by a two-stage process.
First, they mainly exploit POI popularity, user preferences, or POI co-occurrence to score POIs and design various objective functions accordingly.
Then, they model trip recommendation as a combinatorial optimization problem, namely the Orienteering problem \cite{golden1987orienteering}, and generate trips by maximizing the pre-defined objective with the help of constraint programming (CP).
Though using this \zjb{CP-based} paradigm to solve such a combinatorial problem is very popular over the past years, its drawbacks are still obvious.
First, the recommended trips by such methods are optimized by the pre-defined objective function, which may not follow the latent patterns \zjb{hidden in the human mobility data generated by users.}
For instance, according to the statistics from a real-life trip dataset from Beijing (see Experiment section), after watching a film, 26\% users choose to go to a restaurant while only less than 1\% users choose to go to a Karaoke bar.
However, the pre-defined objective may not be capable of capturing such mobility sequential preferences and generate unusual trips like (cinema $\rightarrow$ Karaoke bar).
Second,
the time complexity of such CP-based methods is usually too high to handle hundreds of POIs in a city in real time. As shown in our experiment section, the response time of such methods with 100 POIs can be more than 1 minute.
Such a weakness is very disruptive to the user experience.
To this end, we propose an \underline{A}dversarial \underline{N}eural \underline{T}rip Recommendation (ANT) framework \zjb{to solve the challenges mentioned above}.
First, we propose an encoder-decoder based trip generator that can generate a trip under the given constraints in an end-to-end fashion.
Concretely, the encoder takes advantage of multi-head self-attention to capture correlations among POIs.
Afterwards, the decoder sequentially selects POIs into the trip with a mask mechanism to meet the given constraints, while maintaining a novel context embedding to represent the contextual environment when choosing POIs.
Second, we
integrate an adversarial learning strategy into a
specially designed reinforcement learning paradigm to train the generator.
Specifically, we introduce a discriminator to distinguish the real-life trips taken by users from the trips generated by the trip generator for better learning the latent human mobility patterns. During the training process,
once trips are produced by the generator, they will be evaluated by the discriminator while the feedback from the discriminator can be regarded as reward signals to optimize the generator.
Therefore, the generator will push itself to generate high-quality trips to obtain high rewards from the discriminator.
Finally, a significant distinction of our framework from existing trip planning methods is that we do not adopt the traditional constraint programming methodology.
Considering the excellent inference (prediction) performance of deep-learning (DL) based models, the efficiency of our method is much better than that of such CP-based methods.
To sum up, the contributions of this paper can be summarized as follows:
\begin{itemize}
\item To the best of our knowledge, we are the first to propose an end-to-end DL-based framework to study the trip recommendation problem.
\item We devise a novel encoder-decoder model to generate trips under given constraints.
Furthermore, we propose an adversarial learning strategy integrating with reinforcement learning to guide the trip generator to produce trips that follow the latent human mobility patterns.
\item We conduct extensive experiments on four large-scale real-world datasets.
The results demonstrate that ANT remarkably outperforms the state-of-the-art techniques from both effectiveness and efficiency perspectives.
\end{itemize}
\section{RELATED WORK}
\zhou{Our study is related with POI recommendation and trip recommendation problems which are briefly discussed in this section respectively.}
\subsection{POI Recommendation}
POI recommendation \zhou{usually} takes the user's historical check-ins as input and aims to \zhou{predict} the POIs that the user is interested in. \zhou{This problem has been extensively investigated in the past years. For example,}
\citet{yang2017bridging} \zhou{proposed to} jointly learn user embeddings and POI embeddings simultaneously to fully comprehend user-POI interactions and predict user preference on POIs under various contexts.
\citet{ma2018point} \zhou{investigated to} utilize attention mechanism to seek what factors of POIs users are concerned about, integrating with geographical influence.
\zhou{\citet{luo2020spatial} studied to build a multi-level POI recommendation model with considering the POIs in different spatial granularity levels.}
However, such methods target recommending an individual POI rather than a sequence of POIs, and \zhou{do not} consider the dependence and correlations among POIs.
In addition, these methods do not take time budget into consideration while it is vital to recommend trips under the time budget constraint.
\subsection{Trip Recommendation}
Trip recommendation aims to recommend a sequence of POIs (i.e. trip) to maximize user experience under given constraints.
\citet{lim2015personalized} focused on user interest based on visit duration and personalize the POI visit duration for different users.
\citet{chen2016learning} modeled the POI transit probabilities, integrating with some manually designed features to suggest trips.
\zhou{Another study} modeled POIs and users in a unified latent space by integrating the co-occurrences of POIs, user preferences and POI popularity\cite{jia2019joint}.
These methods above share similar constraints: a start POI, an end POI and a time budget or trip length constraint, and they all maximize respective pre-defined objectives by adopting constraint programming.
However, such pre-defined objectives may fail to generate trips
\zhou{that follow the latent human mobility patterns among POIs.}
\li{Different from these methods, \citet{gu2020enhancing} focused on the attractiveness of the routes between POIs to recommend trips, and generated trips by using a greedy algorithm.
However, only modeling users and POIs in the category space may not be capable of learning the complex human mobility patterns.}
\zjb{The prediction performance based on the greedy strategy is also not satisfactory.}
\section{PRELIMINARIES}
\zhou{In this section, we first introduce the basic concepts and notations, and then we give a formal definition of the trip recommendation problem.}
\subsection{Settings and Concepts}
A \textbf{POI} $l$ is a unique location with geographical coordinates $(\alpha,\beta)$ and a category $c$, i.e. $l=<(\alpha, \beta), c>$.
A \textbf{check-in} is a record that indicates a user $u$ arrives in a POI $l$ at timestamp $t_a$ and leaves at
timestamp $t_d$, which can be represented as $r = (u, l, t_a, t_d)$.
We denote all check-ins as $\mathcal{R}$ and the check-ins on \zhou{a} specific location $l$ as $\mathcal{R}_l$.
Since we have the check-ins generated by users, we can estimate the user duration time on POIs. Given a POI $l$ and corresponding check-in data $\mathcal{R}_l$, the expected duration time of a user spends on the POI is denoted by $T_d(l)$, which is the average duration time of all check-ins on location $l$:
\begin{equation}
T_d(l) = \frac {\sum \limits_{(u,l,t_a,t_d) \in \mathcal{R}_l } t_d - t_a} {|\mathcal{R}_l|}
\end{equation}
\zhou{We denote the transit time from a POI $l_i$ to another POI $l_j$ as $T_e(l_i, l_j)$.
The time cost along one trip can be calculated by summing all the duration time of each POI and all the time cost on the transit between POIs.
In our experiment, the transit time is estimated by the \li{distance} between POIs and the walking speed of the user (e.g. 2m/s). }
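To make these quantities concrete, the following is a minimal Python sketch (not part of the proposed model) of how $T_d(l)$ and $T_e(l_i,l_j)$ could be estimated from raw check-ins; the haversine helper, the data layout and the walking speed of 2 m/s are our own illustrative assumptions.
\begin{verbatim}
from collections import defaultdict
from math import radians, sin, cos, asin, sqrt

def haversine_m(p, q):
    # great-circle distance in meters between two (lat, lon) points
    lat1, lon1, lat2, lon2 = map(radians, (p[0], p[1], q[0], q[1]))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def expected_duration(checkins):
    # checkins: list of (user, poi, t_arrive, t_depart), times in seconds
    total, count = defaultdict(float), defaultdict(int)
    for _, poi, t_a, t_d in checkins:
        total[poi] += t_d - t_a
        count[poi] += 1
    return {poi: total[poi] / count[poi] for poi in total}   # T_d(l)

def transit_time(coord_i, coord_j, walk_speed=2.0):
    # T_e(l_i, l_j): walking time in seconds at about 2 m/s
    return haversine_m(coord_i, coord_j) / walk_speed
\end{verbatim}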
A \textbf{trip} is an ordered sequence of POIs ${S} = l_0 \rightarrow l_1 \rightarrow \cdots \rightarrow l_n$.
Given a query user u, a time budget $T_{max}$ and a start POI $l_0$, we aim to plan a trip ${S} = l_0 \rightarrow l_1 \rightarrow \cdots
\rightarrow l_n$ for the user.
We name the query user, the start POI and the time budget a \textbf{trip query}, denoted as a triple $q = (u, l_0, T_{max})$.
\subsection{Trip Recommendation}\label{prob_def}
Now we define the trip recommendation problem formally.
Given a trip query $q=(u, l_0, T_{max})$, we aim to recommend a well-designed trip that does not exceed the time budget and maximize the \zhou{likelihood} that the user will follow the planned trip.
\li{For convenience, we denote the sum of transit time from current POI to the next POI and duration time on the next POI as
$T_a(S_{i}, S_{i+1}) = T_d(S_{i + 1}) + T_e(S_{i}, S_{i+1})$}, where $S_i$ is the $i$-th POI in trip $S$.
So the time cost on the planned trip denoted as $T({S})$ can be calculated by
$T({S}) = T_d({S_0}) + \sum\limits_{i = 0}^{|{S}|-1} T_a(S_{i}, S_{i+1})$.
Overall, the problem can be formulated as follows:
\begin{equation}\label{obj}
\max_{T({S}) \le T_{max}} P({S} \mid q)
\end{equation}
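As a small illustration (our own sketch, not part of the model), the trip cost $T(S)$ and the budget-feasibility check can be computed as follows, where \texttt{T\_d} is a mapping from POIs to duration times and \texttt{T\_e} is a transit-time function such as the one sketched above.
\begin{verbatim}
def trip_cost(trip, T_d, T_e):
    # trip: ordered list of POI ids; implements
    # T(S) = T_d(S_0) + sum_i [ T_e(S_i, S_{i+1}) + T_d(S_{i+1}) ]
    cost = T_d[trip[0]]
    for i in range(len(trip) - 1):
        cost += T_e(trip[i], trip[i + 1]) + T_d[trip[i + 1]]  # T_a(S_i, S_{i+1})
    return cost

def within_budget(trip, T_d, T_e, T_max):
    return trip_cost(trip, T_d, T_e) <= T_max
\end{verbatim}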
\section{APPROACH}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{figures/aaai-plan-overview-v2.pdf}
\vspace{-2mm}
\caption{An overview of the proposed framework.}
\label{fig-overview}
\vspace{-2mm}
\end{figure}
\zhou{The overall framework of ANT is shown in Figure \ref{fig-overview}. We first selectively retrieve hundreds of POIs to construct a candidate set. Next, we use a well-devised novel time-aware trip generator $G$ to generate the well-planned trip for users with incorporating the time budget and the POI correlation.}
\label{reinforce}
\zhou{The trip generation process can be considered as a sequential decision process, that is to say, at each step there is a smart \textbf{agent} to select the best POI which can finally form an optimal trip.
Thus, we model the trip generation procedure as a Markov Decision Process (MDP) \cite{bellman1957markovian}, where we regard selecting a POI as an \textbf{action}, the probability distribution over POIs \zjb{to be selected} as a stochastic \textbf{policy}, and the contextual environment (e.g. available time, selected POIs) when selecting POIs as the \textbf{state}.
Therefore, our goal is to learn an optimal policy, which guarantees that the \textbf{agent} can always take the best action, i.e. the POI with the highest probability is the most promising option.
To train the policy, we construct a discriminator $D$ (following the Generative Adversarial Networks (GAN) structure \cite{goodfellow2014generative}) to provide feedback compared with the \zjb{real-life trips taken by users}. Therefore, the generator can be trained through policy gradient \zjb{by reinforcement learning} to draw the generated trips and the real-life trips closer.}
\subsection{Candidate Construction}
As for a trip query, it is \zhou{usually not necessary} to take all the POIs into consideration to plan a reasonable trip.
For instance, those POIs that are too far away from the start POI are impossible to be part of the trip.
\zhou{Here} we propose a rule-based retrieval procedure to pick up a small number of POI \zhou{candidates} from the large POI corpus, named the candidate set, which incorporates the impact of connections among trips and geographic influence.
\begin{figure}[t]
\centering
\includegraphics[width=0.6\columnwidth]{figures/aaai-hypergraph_v2.pdf}
\caption{An instance of hypergraph construction.}
\label{fig-hypergraph}
\vspace{-4mm}
\end{figure}
\subsubsection{Drawing Lessons from Other Trips}
If a user requests a trip at the start location $l_0$, former trips that are associated with $l_0$ are promising to provide a reference.
Inspired by this, we could assume that given the start POI $l_0$ of a trip query, those POIs that once co-occurred with $l_0$ in the same trip could be potential options for the trip query, which can be named \textit{drawing lessons from other trips}.
Hypergraph provides a natural way to gather POIs belonging to different trips and also to glimpse other trips via hyperedges.
\begin{definition}[Trip Hypergraph]
Let $G=(L,E)$ denote a hypergraph, where $L$ is the vertex set and $E$ is the hyperedge set.
Each vertex represents a POI $l_i$ and each hyperedge $e \in E$ connects two or more vertices, representing a trip.
\end{definition}
Specifically, we use trips in the training set to build the trip hypergraph.
On one hand, all the POIs in the same trip are linked by a hyperedge, which preserves the matching information between POIs and trips.
On the other hand, a POI may exist in arbitrary hyperedges, connecting different trips via hyperedges.
Given a trip query $(u, l_0, T_{max})$, POIs that are connected with $l_0$ via hyperedges are promising to be visited for the upcoming trip request, so we directly add them into the candidate set.
Figure \ref{fig-hypergraph} is a simple example of trip hypergraph retrieval.
If the start POI of the upcoming trip query is $l_2$, POIs $\{l_1, l_3, l_4, l_5, l_6, l_7, l_8\}$ will be added into the candidate set for the corresponding trip query.
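A minimal sketch of this retrieval step is given below (our own illustration); trips are represented as lists of POI ids and the hypergraph is stored as an inverted index from POIs to the trips (hyperedges) containing them.
\begin{verbatim}
from collections import defaultdict

def build_poi_to_trips(training_trips):
    # index every POI by the trips (hyperedges) it belongs to
    poi_to_trips = defaultdict(set)
    for trip_id, trip in enumerate(training_trips):
        for poi in trip:
            poi_to_trips[poi].add(trip_id)
    return poi_to_trips

def hypergraph_candidates(start_poi, training_trips, poi_to_trips):
    # POIs connected to the start POI via at least one hyperedge
    candidates = set()
    for trip_id in poi_to_trips.get(start_poi, ()):
        candidates.update(training_trips[trip_id])
    candidates.discard(start_poi)
    return candidates
\end{verbatim}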
\subsubsection{Spatial Retrieval}
Distance between \zhou{users} and POIs is a crucial factor affecting users' decisions in \zhou{location-related recommendation.}
It is typical that a user's check-ins are mostly centralized on several areas\zhou{\cite{hao2016user,hao2020unified}}, which is the famous geographical clustering phenomenon and is adopted by earlier work to enhance the performance of location recommendation \cite{ma2018point, lian2014geomf, li2015rank}.
\zhou{Therefore, in addition to the candidates generated by the hypergraph, we also add POIs into the candidate set from near to far}.
{In our framework, we generate a fixed-length candidate set, denoted as $\mathcal{A}_q$, for every trip query $q$. We first use the hypergraph to generate candidates and then apply the spatial retrieval. In other words, if the number of candidates generated by the hypergraph retrieval for a trip query is smaller than the pre-defined size $|\mathcal{A}_q|$, we pad the candidate set to the fixed length with the remaining POIs sorted in order of distance to the start POI.}
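The whole candidate construction can then be sketched as follows (again an illustration under our own assumptions): the hypergraph candidates are taken first and, if needed, the set is padded with the spatially closest remaining POIs up to the fixed size.
\begin{verbatim}
def build_candidate_set(start_poi, hyper_cands, dist_from_start, size=200):
    # hyper_cands: POIs retrieved via the trip hypergraph (see sketch above)
    # dist_from_start: dict mapping every POI id to its distance to start_poi
    cands = list(hyper_cands)[:size]
    if len(cands) < size:
        chosen = set(cands) | {start_poi}
        rest = sorted((p for p in dist_from_start if p not in chosen),
                      key=dist_from_start.__getitem__)
        cands.extend(rest[:size - len(cands)])
    return cands
\end{verbatim}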
\subsection{Time-aware Trip Generator}\label{generator}
As shown in Figure \ref{fig-generator}, the generator consists of two main components: 1) a POI correlation encoding module (i.e., the encoder), which outputs the representation of all POIs in the candidate set; 2) a trip generation module (i.e., the decoder), which
{selects POIs sequentially by maintaining a special context embedding, and keeps the time budget constraint satisfied by a masking mechanism.}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{figures/aaai-plan-generator-v1.pdf}
\caption{Illustration of time-aware trip generator.}
\label{fig-generator}
\vspace{-2mm}
\end{figure}
\subsubsection{Joint Embedding}
Given the trip query $(u, l_0, T_{max})$ and the corresponding selected candidate set $\mathcal{A}_q$, we use a simple linear transform to combine the user $u$ and the POI $l_i$ in $\mathcal{A}_q$ with its category $c$ for embedding:
\begin{equation}
h_i^{(0)} = [x_{l_i}; x_{c}; x_u] \mathbf{W_I} + b_I
\end{equation}
where $x_{l_i}$, $x_{c}$, $x_u$ are the POI embedding, category embedding and user embedding (all of which are trainable), $[a; b; c]$ denotes the concatenation of vectors $a$, $b$, $c$, and $\mathbf{W_I}$, $b_I$ are trainable parameters.
Thus, we get the matrix presentation of the candidates $\mathbf{H}^{(0)} \in \mathbb{R}^{N \times d}$, each row of $\mathbf{H}^{(0)}$ is the representation of a POI in the candidate set.
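A possible PyTorch sketch of this joint embedding layer is shown below; the embedding sizes follow the parameter settings reported in the appendix, while the module itself is our own illustrative reconstruction.
\begin{verbatim}
import torch
import torch.nn as nn

class JointEmbedding(nn.Module):
    # h_i^(0) = [x_{l_i}; x_c; x_u] W_I + b_I
    def __init__(self, n_pois, n_cats, n_users,
                 d_poi=256, d_cat=32, d_user=256, d=256):
        super().__init__()
        self.poi_emb = nn.Embedding(n_pois, d_poi)
        self.cat_emb = nn.Embedding(n_cats, d_cat)
        self.user_emb = nn.Embedding(n_users, d_user)
        self.proj = nn.Linear(d_poi + d_cat + d_user, d)

    def forward(self, poi_ids, cat_ids, user_id):
        # poi_ids, cat_ids: (N,) candidates and their categories
        # user_id: scalar tensor for the query user, broadcast to all candidates
        u = self.user_emb(user_id).expand(poi_ids.size(0), -1)
        x = torch.cat([self.poi_emb(poi_ids), self.cat_emb(cat_ids), u], dim=-1)
        return self.proj(x)   # H^(0): (N, d)
\end{verbatim}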
\subsubsection{POI Correlation Encoding}
\zhou{We apply a self-attention encoder to produce the representations of locations. The reasons for using the self-attention encoder can be justified from two perspectives. The first reason is the permutation invariance of sets. For a candidate set, \zjb{the order of POIs in this set} is irrelevant to the final result, i.e. any permutation of the inputs is supposed to produce the same output representation. Thus, we do not adopt the classical RNN-based encoder architecture because it focuses on the sequential information of the inputs, which is not suitable for our problem. Second, a reasonably generated trip is supposed to consider the relationships between POIs.
For instance, after staying at a restaurant for a while, a person is more interested in other kinds of POIs rather than another restaurant.
So it is helpful to produce a POI representation with attention to other POIs. }
The encoder we apply is similar to the encoder used in the Transformer architecture \cite{vaswani2017attention} while we remove the position encoding, which is not suitable for our problem.
We stack multiple attention layers and each layer has the same sublayers: a multi-head attention (MHA) \zjb{and} a point-wise feed-forward network (FFN).
The initial input of the first attention layer is $\mathbf{H}^{(0)}$
and we apply the scaled dot-product attention for each head in layer $l$ as:
\begin{equation}
head_i^{(l)} = \mathrm{Attn}(\mathbf{H}^{(l - 1)} \mathbf{W}_Q, \mathbf{H}^{(l - 1)} \mathbf{W}_K,
\mathbf{H}^{(l - 1)} \mathbf{W}_V)
\end{equation}
where $1 \le i \le M, \mathbf{W}_Q, \mathbf{W}_K, \mathbf{W}_V \in \mathbb{R}^ {d \times d_h}, d_h = d / M$, $M$ is the number of heads and $d_h$ is the dimension for each head. The scaled dot-product attention computes as:
\begin{equation}
\mathrm{Attn}(\mathbf{Q}, \mathbf{K}, \mathbf{V}) = softmax(\frac{\mathbf{Q} \mathbf{K}^T}{\sqrt{d_h}}) \mathbf{V}
\end{equation}
where the softmax is row-wise. $M$ attention heads are able to capture different aspects of attention information and the results from each head are concatenated followed by a linear projection to get the final output of the MHA. We compute the output of MHA sublayer as:
\begin{equation}
\hat{\mathbf{H}}^{(l)} = [head_1^{(l)}; \cdots; head_M^{(l)}] \mathbf{W}_O
\end{equation}
where $\mathbf{W}_O \in \mathbb{R}^{d \times d}$.
\zhou{We endow the encoder with nonlinearity by adding interactions between dimensions by using the FFN sublayer.} The FFN we apply is a two-layer feed-forward network, whose output is computed as:
\begin{equation}
\mathbf{H}^{(l)} = \mathrm{ReLU} ( \hat{\mathbf{H}}^{(l)} \mathbf{W}^{f1} + \mathbf{b}^{f1}) \mathbf{W}^{f2} + \mathbf{b}^{f2}
\end{equation}
where $\mathbf{W}^{f1} \in \mathbb{R}^{d \times d_{f}}, \mathbf{W}^{f2} \in \mathbb{R}^{d_{f} \times d}$. Note that the parameters of each attention layer are not shared across layers.
Besides, to stabilize and speed up converging, the multi-head attention and feed-forward network are both followed by skip connection and batch normalization \cite{vaswani2017attention}.
To sum up, by considering the interactions and inner relationship among POIs, the encoder transforms the original embeddings of POIs into informative representations.
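For illustration, a simplified PyTorch sketch of such an encoder (our own reconstruction, assuming a recent PyTorch version) is given below; the dimensions follow the appendix settings, and dropout and other implementation details are omitted.
\begin{verbatim}
import torch.nn as nn

class EncoderLayer(nn.Module):
    # one attention layer: MHA and point-wise FFN, each followed by a
    # skip connection and batch normalisation; no positional encoding
    def __init__(self, d=256, n_heads=8, d_ff=256):
        super().__init__()
        self.mha = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d, d_ff), nn.ReLU(),
                                 nn.Linear(d_ff, d))
        self.bn1, self.bn2 = nn.BatchNorm1d(d), nn.BatchNorm1d(d)

    def _norm(self, bn, x):
        return bn(x.transpose(1, 2)).transpose(1, 2)  # BatchNorm1d wants (B, d, N)

    def forward(self, h):
        a, _ = self.mha(h, h, h)               # self-attention over the POI set
        h = self._norm(self.bn1, h + a)
        return self._norm(self.bn2, h + self.ffn(h))

class POICorrelationEncoder(nn.Module):
    def __init__(self, d=256, n_heads=8, d_ff=256, n_layers=6):
        super().__init__()
        self.layers = nn.ModuleList(
            [EncoderLayer(d, n_heads, d_ff) for _ in range(n_layers)])

    def forward(self, h0):
        # h0: (batch, N, d) joint embeddings of the candidates; returns H^(L)
        for layer in self.layers:
            h0 = layer(h0)
        return h0
\end{verbatim}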
\subsubsection{Trip Generation}
\li{It is of great importance to consider the contextual environment when planning the trip so we design a novel context embedding integrating candidate information, time budget and selected POIs.}
{\bfseries Self-Attention Context Embedding.}
To aggregate the location embeddings, we apply mean \li{pooling} over the final location embeddings, $\bar{h}^{(L)} = \frac{1}{N} \sum \limits_{i=1} ^ {N} h_i^{(L)}$, as the candidate embedding.
During the process of decoding, the decoder selects one POI from the candidate set at a time, based on the selected POIs $S_{t^\prime}$, $t^\prime < t$, and the available time left.
We keep track of the remaining available time $T_t$ at time step $t$.
Initially $T_1 = T_{max} - T_d(S_0)$, and $T_t$ is updated as:
\begin{equation}
T_{t + 1} = T_t - T_a(S_{t-1}, S_{t}), t \ge 1
\end{equation}
where $S_0 = l_0$.
\zhou{Following existing methods to represent the contextual environment in the decoding procedure \cite{bello2016neural, kool2018attention}, we employ a novel context embedding $h_c$ conditioned on the candidate set and the remaining time, which changes as the decoding proceeds.}
The context embedding $h_c$ is defined as:
\begin{equation}
h_c = [\bar{h}^{(L)}; h_{S_{t-1}}^{(L)}; T_t], t \ge 1
\end{equation}
where $h_c \in \mathbb{R}^{1 \times (2d+1)}$.
\li{Before deciding which POI to add into the trip at time step $t$, it is important to look back at the information about the candidates and remind ourselves which POIs are optional and which POIs should not be considered because they break the given constraints.
Therefore, we first glimpse the candidates that are optional, i.e. those that have never been selected before and do not exceed the time budget, and then integrate the information with attention to the output from the encoder:}
\begin{equation}
q_c = h_c \mathbf{W}_Q^c
\quad k_i = h_i^{(L)} \mathbf{W}_K^c \quad v_i = h_i^{(L)} \mathbf{W}_V^c \\
\end{equation}
\begin{equation}
\alpha_{tj} = \frac{\Theta(T_t - T_a(S_{t - 1}, l_j))exp(\frac{q_c k_j^T}{\sqrt{d}})}
{\sum\limits_{l_m\in\mathcal{A}_q\backslash S_{0:t-1}} \Theta(T_t - T_a(S_{t-1}, l_m)) exp(\frac{q_c k_m^T}{\sqrt{d}})}
\end{equation}
where $\mathbf{W}_Q^c \in \mathbb{R}^{(2d+1) \times d}, \mathbf{W}_K^c, \mathbf{W}_V^c \in \mathbb{R}^{d \times d}$, $h_i^{(L)}$ is the $i$-th row of the location embedding matrix $\mathbf{H}^{(L)}$,
and $\Theta(\cdot)$ is a Heaviside step function, which plays a crucial role as the time-aware mask operator. Thus, the refined context embedding $\bar{h}_c$ is computed as:
\begin{equation}
\bar{h}_c = \sum_{l_j \in \mathcal{A}_q} \alpha_{tj} \cdot v_j
\end{equation}
We omit the multi-head due to the page limit.
{\bfseries Self-Attention Prediction.} After getting the refined context embedding, we apply a final attention layer with a single attention head with mask mechanism.
\begin{equation}
u_{cj} =
\begin{cases}
-\infty & \text{if}\ l_j \in S_{0:t - 1} \ \text{or} \ T_t < T_a(S_{t -1}, l_j), \\
\frac{\bar{h}_c k_j^T}{\sqrt{d}} & \text{otherwise.}
\end{cases}
\end{equation}
\li{Finally, the softmax is applied to get the probability distribution:}
\begin{equation}
p(S_t = l_j | \bar{h}_c) = \frac{e^{u_{cj}}} {\sum_{l_m \in \mathcal{A}_q} e^{u_{cm}}}
\label{equ:prob}
\end{equation}
The decoding proceeds until there is not enough time left, and then we obtain the entire trip $S_{0:t}$ generated by the decoder.
To sum up, by maintaining a context embedding and using the representation of location from the encoder, the decoder \li{constructs a trip}
with attention mechanism and meets the constraints by mask mechanism.
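To make the decoding step concrete, below is a simplified single-head sketch of one step (our own illustration; the projection matrices are assumed to be learnable parameters created elsewhere, and multi-head attention as well as batching are omitted). The decoding stops when no candidate is feasible.
\begin{verbatim}
import torch
import torch.nn.functional as F

def decode_step(H_L, last_idx, T_t, T_a_last, visited,
                W_qc, W_kc, W_vc, W_kf, d=256):
    # H_L:      (N, d) encoder outputs of the candidates
    # last_idx: index of the previously selected POI S_{t-1}
    # T_t:      remaining time budget (float)
    # T_a_last: (N,) T_a(S_{t-1}, l_j) for every candidate l_j
    # visited:  (N,) boolean mask of already selected POIs
    h_bar = H_L.mean(dim=0)                              # candidate embedding
    h_c = torch.cat([h_bar, H_L[last_idx], torch.tensor([T_t])])  # (2d+1,)
    feasible = (~visited) & (T_a_last <= T_t)            # time-aware mask

    q = h_c @ W_qc                                       # (d,)
    k, v = H_L @ W_kc, H_L @ W_vc                        # (N, d)
    alpha = F.softmax((k @ q / d ** 0.5)
                      .masked_fill(~feasible, float("-inf")), dim=0)
    h_c_refined = alpha @ v                              # glimpse over feasible POIs

    u = (H_L @ W_kf) @ h_c_refined / d ** 0.5            # compatibilities u_{cj}
    u = u.masked_fill(~feasible, float("-inf"))
    return F.softmax(u, dim=0)                           # p(S_t = l_j | context)
\end{verbatim}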
\subsection{Policy Optimization by Adversarial Learning}
\zhou{The next problem is how to train the encoder-decoder framework for trip generation.}
We devise a mobility discriminator to distinguish \zjb{real-life} trips \zjb{taken by users} from generated trips, which provides feedback to guide the optimization of the trip generator. \zhou{After the evaluation of generated trips against real-life trips, the output of the discriminator can be regarded as reward signals to improve the generator.}
\zhou{By} the adversarial procedure, the generator pushes itself to generate high-quality trips to obtain high rewards from the discriminator.
\subsubsection{Mobility Discriminator}
\zhou{The task for the discriminator essentially is binary classification. Here we apply a simple but effective one-layer Gated Recurrent Unit (GRU) \cite{cho2014learning}, followed by a two-layer feed-forward network to accomplish this task.}
We denote the mobility discriminator as $D_\phi$ and the trip generator as $G_\theta$, where $\theta$ and $\phi$ represent the parameters of the generator and discriminator respectively.
We denote all the real-life trips as $P_{data}$.
As a binary classification task, we train the discriminator $D_\phi$ as follows:
\begin{equation}
\mathop{\mathrm{max}}\limits_{\phi} \mathbb{E}_{\hat{S} \sim P_{data}}[\log D_\phi(\hat{S})] + \mathbb{E}_{S\sim G_\theta}[\log (1 - D_\phi(S))]
\end{equation}
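A possible PyTorch sketch of this discriminator is given below (our own reconstruction; for simplicity it uses a single sigmoid output instead of the two-way output reported in the appendix, and padding of variable-length trips is not handled).
\begin{verbatim}
import torch
import torch.nn as nn

class MobilityDiscriminator(nn.Module):
    # one-layer GRU over the POI embeddings of a trip, then a two-layer FFN;
    # outputs the probability that the trip is a real-life trip
    def __init__(self, n_pois, d=256, d_hidden=256):
        super().__init__()
        self.emb = nn.Embedding(n_pois, d)
        self.gru = nn.GRU(d, d_hidden, num_layers=1, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_hidden, 32), nn.ReLU(),
                                 nn.Linear(32, 1))

    def forward(self, trips):
        # trips: (batch, T) POI id sequences
        _, h_n = self.gru(self.emb(trips))
        return torch.sigmoid(self.ffn(h_n[-1])).squeeze(-1)  # D_phi(S)

# binary cross-entropy form of the objective above:
#   loss_D = -(torch.log(D(real)).mean() + torch.log(1 - D(fake)).mean())
\end{verbatim}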
\subsubsection{Adversarial Learning with Policy Gradient}
\zhou{We adopt reinforcement learning to train the generator. The standard training algorithm for GANs does not apply to our framework: the discrete output of the trip generator blocks gradient back-propagation, making it impossible to optimize the generator directly \cite{yu2017seqgan}. As described previously, the trip generation process is a sequential decision problem, leading us to tackle it with reinforcement learning techniques. Modeling the trip generation procedure as an MDP, an important setting is to regard the score from the discriminator as the reward.
Thus, we define the objective as:
}
$\mathcal{L}(\theta \mid q) = \mathbb{E}_{p_{\theta}(S \mid q)}
[D_\phi(S)]$, which represents the expected discriminator score of the trips generated for the query $q$.
Following the REINFORCE \cite{williams1992simple} algorithm, we maximize this objective by gradient ascent:
\begin{equation}
\nabla \mathcal{L}(\theta \mid q) =
\mathbb{E}_{p_{\theta}(S \mid q)} [D_\phi(S) \nabla \log p_\theta(S \mid q)]
\label{eqa:policy gradient}
\end{equation}
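In practice this gradient can be estimated with sampled trips; the following is a minimal sketch of one REINFORCE update (our own illustration), where the per-trip log-probabilities are accumulated while sampling and the discriminator scores are treated as constant rewards. The optional baseline for variance reduction is an addition of ours and is not part of the equation above.
\begin{verbatim}
def reinforce_step(log_probs, rewards, optimizer, baseline=None):
    # log_probs: (batch,) sum over steps of log p_theta(S_t | ...) per sampled trip
    # rewards:   (batch,) discriminator scores D_phi(S)
    if baseline is not None:
        rewards = rewards - baseline       # optional variance reduction
    loss = -(rewards.detach() * log_probs).mean()  # minimise the negative objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim}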
\subsubsection{Learning from Demonstration}
\zhou{In order to accelerate the training process and further improve the performance, we propose a novel pre-train schema based on learning from demonstration \cite{silver2010learning}, which not only fully utilizes the data of real-life trips but also obtains a decent trip generator before adversarial learning.}
Learning directly from rewards is sample-inefficient and makes it hard to achieve promising performance \cite{yu2017seqgan}, \zhou{which is also why we introduce the pre-train schema.}
During pre-training,
we use the real-life trips taken by users as ground truth, regard choosing a POI at each time step as a multi-class classification problem, and optimize with a \textit{softmax} loss function.
Nevertheless, during inference the trip generator has to condition on the POIs it has already generated, whereas the true preceding POIs are only available \li{in training}; this gap may lead to an accumulation of poor decisions \cite{samy2015scheduled}.
To bridge such a gap between training and inference, we select POI by sampling with the probability distribution (defined in Equation \ref{equ:prob}) during training. Finally, the loss can be computed as:
\begin{equation}
\mathcal{L}_c = - \sum \limits_{\hat{S} \in P_{data}} \sum \limits_{t=1}^{|\hat{S}|} \log p(\hat{S}_t | S_{0:t-1}; \theta)
\label{eqa: supervisedloss}
\end{equation}
where $S$ is the actual generated trip during training and $\hat{S}$ is the corresponding real-life trip.
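A sketch of one such pre-training update is given below (our own illustration); \texttt{generator(query, prefix)} is a hypothetical callable returning the distribution of Equation \ref{equ:prob} over the candidates given the POIs selected so far, and the sampled (rather than ground-truth) POI is fed back to bridge the gap between training and inference.
\begin{verbatim}
import torch

def pretrain_step(generator, query, real_trip, optimizer):
    # real_trip: ground-truth trip as a list of candidate indices, real_trip[0] = l_0
    prefix = [real_trip[0]]
    loss = 0.0
    for t in range(1, len(real_trip)):
        probs = generator(query, prefix)          # (N,) distribution over candidates
        loss = loss - torch.log(probs[real_trip[t]] + 1e-12)  # softmax / NLL loss
        prefix.append(torch.multinomial(probs, 1).item())     # feed the sampled POI
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim}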
\subsubsection{Teacher Forcing}
The training process is usually unstable when the generator is optimized with Equation \ref{eqa:policy gradient} alone \cite{li2017adversarial}.
The reason behind this is that once the generator deteriorates in some training batches, the discriminator \li{soon} learns to recognize the unreasonable trips, and then the generator gets lost.
The generator knows from the received rewards that the generated trips are not good, but it does not know how to improve their quality.
To alleviate this issue and give the generator more access to real-life trips, after we update the generator with the adversarial loss, we also feed the generator real-life trips and update it with the supervised loss (Equation \ref{eqa: supervisedloss}) again.
To sum up, we first pre-train the trip generator by leveraging demonstration data.
Afterwards, we alternately update the discriminator and the generator with the respective objective.
During updating the generator, we also feed real-life trips to the generator, regulating the generator from deviation from the demonstration data.
\section{EXPERIMENTS}
\subsection{Experimental Setups}
\subsubsection{Dataset}
We use four real-world POI check-in datasets and Table \ref{tbl-dataset} summarizes the statistics of the four datasets.
\begin{table}[t]
\centering
\begin{tabular}{ccccc}
\toprule
\multirow{2}{*}{Dataset} & \multicolumn{2}{c}{Foursquare} & \multicolumn{2}{c}{Map} \\
\cmidrule(r){2-3} \cmidrule(r){4-5}
& NYC & Tokyo & Beijing & Chengdu\\
\midrule
\# users & 796 & 2019 & 22399 & 8869 \\
\# POIs & 8619 & 14117 & 13008 & 8914 \\
\# trips & 16518 & 58893 & 212758 & 95166 \\
\bottomrule
\end{tabular}
\caption{Dataset statistics.}
\label{tbl-dataset}
\vspace{-2mm}
\end{table}
\textbf{Foursquare} \cite{yang2014modeling}: This real-world check-in dataset includes check-ins in New York City and Tokyo collected from Foursquare.
We sort the check-ins of a user by timestamp and split them into non-overlapping trips.
If the time interval between two successive check-ins is more than five hours, we split them into two trips.
\textbf{Map}: This dataset collects real-world check-ins in Beijing and Chengdu from 1 July 2019 to 30 September 2019 \zhou{from an online map service provider in China}. We consider the check-ins of a user within one day as a trip.
We remove trips whose length is less than 3, and we remove POIs visited by fewer than 5 users, as they are outliers in the dataset.
We split the datasets in chronological order, using the first 80\% for training, the next 10\% for validation, and the last 10\% for testing.
\subsubsection{Baselines}
We compare the performance of our proposed method with three state-of-the-art baselines that are designed for trip recommendation:
\textbf{TRAR} \cite{gu2020enhancing} proposes the concept of attractive routes and enhances trip recommendation with attractive routes.
\textbf{PERSTOUR} \cite{lim2015personalized} personalizes the duration time for each user based on their preferences and generates trips to maximize user preference.
\textbf{C-ILP} \cite{jia2019joint} learns a context-aware POI embedding by integrating POI co-occurrences, user preferences and POI popularity, and transforms the problem to an integer linear programming problem.
\zhou{For C-ILP and PERSTOUR, we first generate 100 candidates by using our proposed retrieval procedure \li{because larger candidate sets cannot be solved in a tolerable time}. \zjb{C-ILP and PERSTOUR both utilize} \textit{lpsolve} \cite{berkelaar2004lpsolve}, a linear programming package, to generate trips among the candidates, which follows their implementations.}
To fully validate the effectiveness of our proposed method, we introduce some baselines designed for POI recommendation.
These baselines share the same trip generation procedure: they repeatedly choose the POI with the highest score among all the unvisited POIs until the time budget is exhausted.
The scores of POIs are produced by the corresponding model for SAE-NAD and GRU4Rec, while for POP the scores are the popularity (visit frequency) of the POIs.
\textbf{POP} is a naive method that measures the popularity of POIs by counting the visit frequency of POIs. \textbf{SAE-NAD} \cite{ma2018point} applies a self-attentive encoder for presenting POIs and a neighbor-aware decoder for exploiting the geographical influence.
\textbf{GRU4Rec} \cite{bal2016session} models the sequential information by GRU.
The implementation details and hyper-parameters are reported in the appendix.
\begin{table*}[tb]
\centering
\begin{tabular}{cccccccccccccccccc}
\toprule
\multirow{2}{*}{Method} & \multicolumn{2}{c}{NYC} & \multicolumn{2}{c}{Tokyo} & \multicolumn{2}{c}{Beijing} & \multicolumn{2}{c}{Chengdu} \\
\cmidrule(r){2-3} \cmidrule(r){4-5} \cmidrule(r){6-7} \cmidrule(r){8-9}
& HR & OSP & HR & OSP & HR & OSP & HR & OSP \\
\midrule
POP & 0.0397 & 0.0036 & 0.0482 & 0.0128 & 0.0461 & 0.0102 & 0.0483 & 0.0131 \\
SAE-NAD & 0.0119 & 0.0001 & 0.0875 & 0.0003 & 0.1345 & 0.0005 & 0.1220 & 0.0005 \\
GRU4Rec & 0.0181 & 0.0012 & 0.0276 & 0.0018 & 0.0307 & 0.0020 & 0.0206 & 0.0015 \\
TRAR & 0.0047 & 0.0020 & 0.0010 & 0.0001 & 0.0013 & 0.0001 & 0.0027 & 0.0001 \\
PERSTOUR & 0.0075 & 0.0013 & 0.0197 & 0.0048 & 0.0134 & 0.0021 &0.0145 & 0.0024 \\
C-ILP & 0.0449 & 0.0031 & 0.0241 & 0.0012 & 0.0160 & 0.0001 & 0.0175 & 0.0003\\
ANT & \textbf{0.2103} & \textbf{0.1154} & \textbf{0.1922} & \textbf{0.1232} & \textbf{0.1610} & \textbf{0.0388} &\textbf{ 0.1348} & \textbf{0.0349} \\
\bottomrule
\end{tabular}
\caption{Comparison with baselines.}
\label{tab-result}
\vspace{-2mm}
\end{table*}
\subsubsection{Evaluation Metrics}
A trip is determined by the POIs that compose the trip and the order of POIs in the trip.
We evaluate these two aspects by Hit Ratio and Order-aware Precision \cite{huang2019dynamic} respectively.
\zhou{These two metrics are popularly used for trip recommendation (and planning) in previous studies.}
\textbf{Hit Ratio}. Hit Ratio (HR) is a recall-based metric, which measures how many POIs in the real trip are covered in the planned trip except the start POI:
$
HR = \frac{ |S \cap \hat{S}| - 1 }{|\hat{S}| - 1}
$.
\textbf{Order-aware Sequence Precision} \cite{huang2019dynamic}. Order-aware Sequence Precision \zjb{(OSP)} measures the order precision of overlapped part between the real trip and the generated trip except the start POI, which is defined as:
$OSP = M / B$
where B is the number of all POI pairs in the overlapped part and M is the number of the pairs that contain the correct order. \li{We give an example in the appendix.}
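For clarity, the two metrics can be computed as in the following sketch (our own transcription); it reproduces the worked example given in the appendix.
\begin{verbatim}
def hit_ratio(real, rec):
    # overlap between the real and recommended trips, start POI excluded
    return (len(set(real) & set(rec)) - 1) / (len(set(real)) - 1)

def osp(real, rec):
    # order precision over the overlapped POIs, start POI excluded
    overlap = [p for p in rec[1:] if p in set(real[1:])]
    pos = {p: i for i, p in enumerate(real)}
    pairs = [(overlap[i], overlap[j])
             for i in range(len(overlap)) for j in range(i + 1, len(overlap))]
    if not pairs:
        return 0.0
    return sum(pos[a] < pos[b] for a, b in pairs) / len(pairs)

# appendix example: real = [0, 1, 2, 3, 4], rec = [0, 2, 5, 1, 4]
# hit_ratio(real, rec) -> 0.75, osp(real, rec) -> 2/3
\end{verbatim}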
\subsection{Experimental Results}
\subsubsection{Effectiveness}
Table \ref{tab-result} shows the performance \zhou{under} HR and OSP \zhou{metrics} on the four datasets with respect to different methods.
It can be observed that our proposed method consistently outperforms all the baselines \zhou{with a significant margin} on all the four datasets, especially on OSP, \zhou{which demonstrates that our method can recommend high-quality trips.}
PERSTOUR and C-ILP are both based on integer linear programming, which prevents them from responding in real time when the number of locations is large and affects their performance.
\li{TRAR performs poorly because modeling users and POIs only in the category space is not enough to extract informative features for recommending reasonable trips.}
SAE-NAD is a strong baseline with \zhou{good} performance on HR while it performs poorly on OSP, which validates that conventional POI recommendation methods are not capable of \zhou{being extended to support trip recommendation directly.}
Due to the page limit, we omit the results on Beijing and Chengdu \zjb{unless otherwise specified} in the following analysis; the conclusions on these two datasets are similar, and the results \zhou{can be found in the appendix.}
\subsubsection{Efficiency}
\zhou{Besides the high prediction accuracy, another advantage of our framework is its good efficiency, which \li{is} investigated in this section.}
We compare the running time of ANT with trip recommendation baselines, \zhou{i.e. TRAR, C-ILP and PERSTOUR}.
Even though ANT can be parallelized,
\zhou{for fair comparison} we make ANT generate trips serially and we run all four methods on the same CPU device (Intel 6258R).
The average running times of C-ILP and PERSTOUR \li{on an instance} both exceed one minute, while the average running time of ANT is less than 45 ms, which
\zhou{demonstrates} the superiority of our model in efficiency compared to traditional CP-based models.
TRAR is faster than ANT thanks to its greedy algorithm, but its performance is much worse than ANT's, even worse than PERSTOUR and C-ILP.
To further validate ANT's availability to scale up to various numbers of candidates, we run ANT conditioned on \zjb{varying} numbers of candidates and show the result in Figure \ref{fig:time}, \zhou{which shows that the inference time of ANT is relatively stable with \zjb{different} numbers of candidates.}
\begin{figure}[t]
\centering
\includegraphics[width=0.49\columnwidth]{exp-fig/aaai-runtime-cp.pdf}
\includegraphics[width=0.49\columnwidth]{exp-fig/aaai-candidates-time.pdf}
\vspace{-2mm}
\caption{Running time compared with baselines and running time on different numbers of candidates.}
\label{fig:time}
\vspace{-2mm}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.49\columnwidth]{exp-fig/aaai-abl-hr.pdf}
\includegraphics[width=0.49\columnwidth]{exp-fig/aaai-abl-hr-bjcd.pdf}
\vspace{-2mm}
\caption{\zjb{Ablation study} of each component.}
\label{fig:ablation}
\vspace{-2mm}
\end{figure}
\subsubsection{Ablation Study}
\begin{figure}[t]
\centering
\includegraphics[width=0.49\columnwidth]{exp-fig/aaai-candidates-hr.pdf}
\includegraphics[width=0.49\columnwidth]{exp-fig/aaai-candidates-osp.pdf}
\caption{The impact of the number of candidates.}
\label{fig:candidates}
\vspace{-2mm}
\end{figure}
\zhou{To analyze the effect of each component of the ANT framework, we conduct an experimental evaluation on four variants of ANT:} ANT-E, ANT-D, ANT-A, ANT-P.
ANT-E removes the POI correlation encoding module, ANT-D replaces the trip generation module with Pointer Networks \cite{vinyals2015pointer}, ANT-A trains the whole model using only learning from demonstration, and ANT-P trains the model using only adversarial learning.
Due to the page limit, we omit the performance under OSP, which can be found in the appendix.
As can be observed in Figure \ref{fig:ablation}, each component makes contributions to the final performance.
Training our model without pre-training leads to huge performance degradation, indicating the effectiveness of pre-training on stabilizing and speeding up the training process.
Adversarial learning makes the model further improved based on pre-training.
Also, removing the POI correlation module leads to a performance drop, indicating the necessity of \li{multi-head self-attention} to capture the POI correlation.
And compared to Pointer Networks, the well-designed context embedding for trip recommendation also shows its superiority.
\subsubsection{Impact of Candidates}
\zhou{Here we evaluate the impact of candidates on the performance of ANT. Intuitively, when increasing the number of candidates, the target POIs have a high probability to be included in the candidate set for trip recommendation, but it also raises the difficulty to plan the correct trips. As we can see from Figure \ref{fig:candidates}, when the number of candidates is larger than 200, the prediction performance of ANT (under HR and OSP) on the NYC dataset becomes relatively stable with the number of candidates increasing. The same phenomenon can be found on the Tokyo dataset when the number of candidates is larger than 250. Therefore, the performance is not sensitive to the number of candidates if the number is relatively large enough. In this experimental evaluation, we set the number of candidates to 200, which can be also adjusted according to different characteristics of different cities.}
\section{CONCLUSION}
\zhou{In this paper, we investigated the trip recommendation problem by an end-to-end deep learning model. Along this line, we devised an encoder-decoder based trip generator to learn a well-formed policy to select the optimal POI at each time step by integrating POI correlation and contextual environment. Especially, we proposed a novel adversarial learning strategy integrating with reinforcement learning to train the trip generator. The extensive results on four large-scale real-world datasets demonstrate our framework could remarkably outperform the state-of-the-art baselines both on effectiveness and efficiency.}
\section{APPENDIX}
In this section, we first introduce the training procedure for ANT.
Then we give an example of evaluation metrics and introduce implementation, data pre-process and the parameter setting.
Finally, we represent the rest of experimental results, which are omitted in the experiment section.
\subsection{Algorithm}
\subsubsection{Training Algorithm for ANT}
We represent the training procedure for ANT in Algorithm \ref{alg:ANT}.
\begin{algorithm}
\caption{Training Procedure for ANT}
\label{alg:ANT}
\KwIn{Training set $\mathcal{D}$}
\KwOut{Trained model parameters $\theta$ of the generator}
Initialize $G_\theta$, $D_\phi$ with random parameters $\theta$, $\phi$\;
Generate trips using $G_\theta$ for pre-training $D_\phi$\;
Pre-train $D_\phi$ via softmax loss function\;
Pre-train $G_\theta$ via learning from demonstration\;
\For{n\_epoches}{
\For{m\_batches}{
Generate trips by using $G_\theta$\;
Update $D_\phi$ via softmax loss function\;
Update $G_\theta$ via adversarial loss\;
Update $G_\theta$ via supervised loss function\;
\tcp{Teacher forcing}
}
}
\end{algorithm}
\subsection{Experiment Details}
\subsubsection{An Example of Evaluation Metrics}
Here we give an example about HR and OSP.
If the real trip is $l_0 \rightarrow l_1 \rightarrow l_2 \rightarrow l_3 \rightarrow l_4$ and the recommended trip is $l_0 \rightarrow l_2 \rightarrow l_5 \rightarrow l_1 \rightarrow l_4$,
it can be calculated that $HR = \frac{4-1} {5-1} = 0.75$.
As for OSP, the overlapped part is $(l_2, l_1, l_4)$ and the set of all ordered POI pairs in the overlapped part is $\{l_2 \Rightarrow l_1, l_2\Rightarrow l_4, l_1\Rightarrow l_4\}$, i.e., $B=3$. Among these, $\{l_2 \Rightarrow l_4, l_1 \Rightarrow l_4 \}$ have the correct order as in the real trip, so $M =2$ and $OSP = 0.67$.
\subsubsection{Data Pre-process}
The raw Foursquare dataset does not include departure timestamps, so we estimate them for the check-ins.
For successive check-ins within a trip, we use the arrival timestamp at the next POI as the departure timestamp of the current POI.
Specifically, for the last check-in of a trip, we set the departure timestamp to 30 minutes after the arrival timestamp.
The Map dataset already includes the full information we need for trip recommendation, so we do not pre-process it further.
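A minimal sketch of this pre-processing (our own illustration; the five-hour gap is measured between consecutive check-in timestamps) is given below.
\begin{verbatim}
def split_into_trips(user_checkins, gap_hours=5, last_stay_min=30):
    # user_checkins: list of (poi_id, t_arrive) sorted by time, in seconds;
    # returns trips as lists of (poi_id, t_arrive, t_depart)
    trips, current = [], []
    for i, (poi, t_a) in enumerate(user_checkins):
        nxt = user_checkins[i + 1][1] if i + 1 < len(user_checkins) else None
        same_trip = nxt is not None and nxt - t_a <= gap_hours * 3600
        t_d = nxt if same_trip else t_a + last_stay_min * 60
        current.append((poi, t_a, t_d))
        if not same_trip:          # close the trip at a long gap or the last check-in
            trips.append(current)
            current = []
    return [t for t in trips if len(t) >= 3]   # drop trips shorter than 3 POIs
\end{verbatim}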
\subsubsection{Implementation}
We implement ANT in PyTorch and the model is trained on Tesla V100 GPU with running environment of Python 3.7, PyTorch 1.8.1 and CUDA 11.0.
\subsubsection{Parameter Setting}
We set embedding dimensions of user, POI, and category as 256, 256 and 32 respectively.
For the encoder, the dimension of multi-head self-attention is 256, the number of attention heads is 8, the inner-layer dimension of the feed-forward sublayer is 256, and we stack 6 attention layers in the encoder.
For the decoder, we set the number of attention heads as 8 and the dimension of attention is 256.
For the discriminator, we set the dimension of the hidden state of GRU as 256, the dimensions of inner layers in the feed-forward network are 32 and 2.
As for training, we set batch size as 512.
We use Adam optimizer to train our whole framework with a learning rate of 0.0001 in the pre-training stage and 0.00001 in the adversarial learning stage.
\subsection{Experimental Results}
\subsubsection{Efficiency}
We compare the running time of our proposed ANT with trip recommendation baselines, i.e. TRAR, C-ILP and PERSTOUR, and the running time of ANT on different numbers of candidates.
We make ANT generate trips serially the same as described in the experiment section.
As shown in Figure \ref{fig:run-time-bjcd}, the average running times of C-ILP and PERSTOUR on an instance both exceed one minute, while the running time of ANT is less than 100 ms, which demonstrates the excellent efficiency of ANT.
The running time of TRAR is shorter than ANT with the help of greedy algorithm, but its performance is much worse than ANT, even worse than PERSTOUR and C-ILP.
And we also represent the running time of ANT on different numbers of candidates.
The results show that the inference time of ANT is relatively stable on different numbers of candidates.
\begin{figure}[t]
\centering
\includegraphics[width=0.49\columnwidth]{exp-fig/aaai-runtime-cp-bjcd.pdf}
\includegraphics[width=0.49\columnwidth]{exp-fig/aaai-candidates-time-bjcd.pdf}
\caption{Running time compared with baselines and running time on different numbers of candidates.}
\label{fig:run-time-bjcd}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.49\columnwidth]{exp-fig/aaai-abl-osp.pdf}
\includegraphics[width=0.49\columnwidth]{exp-fig/aaai-abl-osp-bjcd.pdf}
\caption{Ablation study of each component.}
\label{fig:ablation-osp}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.49\columnwidth]{exp-fig/aaai-candidates-hr-bjcd.pdf}
\includegraphics[width=0.49\columnwidth]{exp-fig/aaai-candidates-osp-bjcd.pdf}
\caption{The impact of the number of candidates.}
\label{fig:candidates-bjcd}
\end{figure}
\subsubsection{Ablation Study}
The performance of four variants of ANT and ANT under OSP is showed in Figure \ref{fig:ablation-osp}.
As can be observed in Figure \ref{fig:ablation-osp}, each component contributes to the final performance.
Training the whole framework without pre-training results in a big performance drop, which demonstrates the necessity of pre-training on stabilizing and speeding up the training process.
Based on the pre-training, adversarial learning further improves the framework.
Removing the POI correlation encoding module also makes the performance worse, indicating the effectiveness of multi-head self-attention to capture the relationships among POIs.
And our specially designed context embedding for trip recommendation also outperforms Pointer Networks.
\subsubsection{Impact of Candidates}
The performance of ANT conditioned on different numbers of candidates is illustrated in Figure \ref{fig:candidates-bjcd}.
As we can see from Figure \ref{fig:candidates-bjcd}, for Chengdu, the performance is relatively stable on different numbers of candidates under both HR and OSP.
As for Beijing, when the number of candidates is less than 200, the performance improves as the number of candidates increases under both HR and OSP, and when the number of candidates is more than 200, the performance deteriorates as the number of candidates increases.
So the proper number of candidates for Beijing is 200.
Thus, we can adjust the number of candidates according to the different characteristics of different cities.
\section{Introduction}
\label{sec:intro}
Football, or soccer, is undeniably the most popular sport worldwide. Predicting which team will win the next World Cup or the Champions League final is an issue that leads to heated discussions and debates among football fans, and even attracts the attention of casual watchers. Or, put more simply, the question of which team will win the next match, independent of its circumstances, excites the fans.
One major appeal of football, and a reason for its success, is its simplicity as a game. This stands somewhat in contrast to the difficulty of predicting the winner of a football match. A help in this respect would be a ranking of the teams involved in a given competition based on their current strength, as this would enable football fans and casual watchers to have a better feeling for who is the favourite and who is the underdog. However, the existing rankings, both at domestic league level and at national team level, fail to provide this, either because they are by nature not designed for that purpose or because they suffer from serious flaws.
Domestic league rankings obey the 3-1-0 principle, meaning that the winner gets 3 points, the loser 0 points and a draw earns each team 1 point. The ranking is very clear and fair, and tells at every moment of the season how strong a team has been since the beginning of the season. However, given that every match has the same impact on the ranking, it is not designed to reflect a team's current strength. {A recent illustration of this fact can be found in last year's English Premier League, where the newly promoted team of Huddersfield Town had a very good start to the season 2017-2018 with 7 out of 9 points after the first 3 rounds. They ended the first half of the season on rank 11 out of 20, with 22 points after 19 games. Their second half of the season was, however, very poor, with only 15 points scored in 19 games, earning them the second last spot over the second half of the season (overall they ended the year on rank 16, allowing them to stay in the Premier League). There was a clear downward trend in their performance, which was hidden in the overall ranking by their very good performance at the start of the season.}
Contrary to domestic league rankings, the FIFA/Coca-Cola World Ranking of national soccer teams is intended to rank teams according to their recent performances in international games. Bearing in mind that the FIFA ranking forms the basis of the seeding and the draw in international competitions and its qualifiers, such a requirement on the ranking is indeed necessary. However, the current FIFA ranking\footnote{While the present paper was in the final stages of the revision procedure, the FIFA decided to change its ranking in order to avoid precisely the flaws we mention here. Given the short time constraint, we were not able to study their new ranking and leave this for future research.} fails to reach these goals in a satisfying way and is subject to many discussions (\citet{Cummings,Tweedale, TAP}). It is based on the 3-1-0 system, but each match outcome is multiplied by several factors like the opponent team's ranking and confederation, the importance of the game, and a time factor. We spare the reader those details here, which can be found on the webpage of the FIFA/Coca-Cola World Ranking (\url{https://www.fifa.com/mm/document/fifafacts/rawrank/ip-590_10e_wrpointcalculation_8771.pdf}). In brief, the ranking is based on the weighted average of ranking points a national team has won over each of the preceding four rolling years. The average ranking points over the last 12 month period make up half of the ranking points, while the average ranking points in the 13-24 months before the update count for 25\%, leaving 15\% for the 25-36 month period and 10\% for the 37-48 month period before the update. This {arbitrary decay function} is a major criticism of the FIFA ranking: a similar match played eleven months ago can have approximately twice the contribution of a match played twelve months ago. A striking example hereof was Scotland: ranked $50^{\rm th}$ in August 2013, it dropped to rank 63 in September 2013 before making a major jump to rank 35 in October 2013. This high volatility demonstrates a clear weakness in the FIFA ranking's ability to mirror a team's current strength.
In this paper, we intend to fill the gap and develop a ranking that does reflect a soccer team's current strength. To this end, we consider and compare various existing and new statistical models that assign one or more strength parameters to each soccer team and where these parameters are estimated over an entire range of matches by means of maximum likelihood estimation. We shall propose a smooth time depreciation function to give more weight to more recent matches. {The comparison between the distinct models will be based on their predictive performance, as the model with the best predictive performance will also yield the best current-strength ranking.} The resulting ranking represents an interesting addition to the well-established rankings of domestic leagues and can be considered a promising alternative to the FIFA ranking of national teams.
The present paper is organized as follows. We shall present in Section~\ref{sec:models} 10 different strength-based models whose parameters are estimated via maximum likelihood. More precisely, via weighted maximum likelihood, as we introduce two types of weight parameters: the above-mentioned time depreciation effect and a match importance effect for national team matches. In Section~\ref{statistics} we describe the exact computations behind our estimation procedures as well as {a criterion} according to which we define a statistical model's predictive performance. Two case studies allow us to compare our 10 models at domestic league and national team levels in Section~\ref{sec:comp}: we investigate the English Premier League seasons from 2008-2017 (Section~\ref{sec:PL}) as well as national team matches between 2008 and 2017 (Section~\ref{sec:NT}). On the basis of the best-performing models, we then illustrate in Section~\ref{sec:rankings} the advantages of our current-strength based ranking via various examples. We conclude the paper with final comments and an outlook on future research in Section~\ref{sec:conclu}.
\section{The statistical strength-based models}\label{sec:models}
\subsection{Time depreciation and match importance factors}\label{sec:weights}
Our strength-based statistical models are of two main types: Thurstone-Mosteller and Bradley-Terry type models on the one hand, which directly model the outcome (home win, draw, away win) of a match, and the Independent and Bivariate Poisson models on the other hand, which model the scores of a match. Each model assigns strength parameters to all teams involved and models match outcomes via these parameters. Maximum likelihood estimation is employed to estimate the strength parameters, and the teams are ranked according to their resulting overall strengths. More precisely, we shall consider weighted maximum likelihood estimation, where the weights introduced are of two types: time depreciation (domestic leagues and national teams) and match importance (only national teams).
\subsubsection{A smooth decay function based on the concept of Half period}
A feature that is common to all considered models is our proposed decay function to reflect time depreciation. Instead of the step-wise decay function employed in the FIFA ranking, we rather suggest a continuous depreciation function that gives less weight to older matches, with a maximum weight of 1 for a match played today. Specifically, the time weight for a match that was played $x_m$ days back is calculated as
\begin{equation}\label{smoother}
w_{time,m}(x_m) = \left(\frac{1}{2}\right)^{\frac{x_m}{\mbox{Half period}}},
\end{equation}
meaning that a match played \emph{Half period} days ago only contributes half as much as a match played today and a match played $3\times$\emph{Half period} days ago contributes 12.5\% of a match played today. Figure~\ref{fig:decay} shows a graphical comparison of our continuous time decay function versus the arbitrary FIFA decay function. In the sequel, $w_{time,m}$ will serve as weighting function in the likelihoods associated with our various models. This idea of weighted likelihood or pseudo-likelihood to better estimate a team's current strength is in line with the literature on modelling (mainly league) football scores, see~\citet{dixon1997modelling}.
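For concreteness, the smoother~\eqref{smoother} can be computed as in the following minimal Python sketch (the function and variable names are our own illustrative choices and are not part of any existing ranking software):
\begin{verbatim}
import numpy as np

def time_weight(days_back, half_period):
    # Continuous decay: 0.5 ** (days_back / half_period), cf. the smoother above.
    return 0.5 ** (np.asarray(days_back, dtype=float) / half_period)

# With a Half period of 500 days, matches played 0, 500 and 1500 days ago
# receive weights 1.0, 0.5 and 0.125 respectively:
# time_weight([0, 500, 1500], 500)  ->  array([1.   , 0.5  , 0.125])
\end{verbatim}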
\begin{figure}
\begin{center}
\includegraphics[width=0.8\linewidth]{decayZW.png}
\end{center}
\caption{Comparison of the FIFA ranking decay function versus our exponential smoother~\eqref{smoother}. The continuous depreciation line uses a Half Period of 500 days.}
\label{fig:decay}
\end{figure}
\subsubsection{Match importance}
While in domestic leagues all matches are equally important, the same cannot be said about national team matches where for instance friendly games are way less important than matches played during the World Cup. Therefore we need to introduce importance factors. The FIFA weights seem reasonable for this purpose and will be employed whenever national team matches are analyzed. The relative importance of a national match is indicated by $w_{type,m}$ and can take the values 1 for a friendly game, 2.5 for a confederation or world cup qualifier, 3 for a confederation tournament (e.g., UEFA EURO2016 or the Africa Cup of Nations 2017) or the confederations cup, and 4 for World Cup matches.
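In code, the overall per-match weight entering the weighted likelihoods below is simply the product of the two factors, for instance (an illustrative sketch reusing the \texttt{time\_weight} function above; the dictionary keys are our own labels):
\begin{verbatim}
MATCH_TYPE_WEIGHT = {
    "friendly": 1.0,
    "qualifier": 2.5,       # confederation or World Cup qualifier
    "confederation": 3.0,   # confederation tournament or confederations cup
    "world_cup": 4.0,
}

def match_weight(days_back, match_type, half_period):
    # Overall weight w_type * w_time used as exponent in the weighted likelihood.
    return MATCH_TYPE_WEIGHT[match_type] * time_weight(days_back, half_period)
\end{verbatim}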
\subsection{The Thurstone-Mosteller and Bradley-Terry type models}
Thurstone-Mosteller (TM) \citep{thurstone1927psychophysical,mosteller2006remarks} and Bradley-Terry (BT) models \citep{BT52} have been designed to predict the outcome of pairwise comparisons. Assume from now on that we look at $M$ matches involving in total $T$ teams. Both models consider latent continuous variables $Y_{i,m}$ which stand for the performance of team $i$ in match $m$, $i \in \{1,\ldots,T\}$ and $m \in \{1,\ldots,M\}$. When the performance of team $i$ is much better than the performance of team $j$ in match $m$, say $Y_{i,m}-Y_{j,m}>d$ for some positive real-valued $d$, then team $i$ beats team $j$ in that match. If the difference in their performances is lower than $d$, i.e. $|Y_{i,m}-Y_{j,m}|<d$, then the game will end in a draw. The parameter $d$ thus determines the overall chance for a draw. The performances $Y_{i,m}$ depend on the strengths of the teams, denoted by $r_i$ for $i \in \{1,\ldots,T\}$, implying that a total of $T$ team strengths need to be estimated.
\subsubsection{Thurstone-Mosteller model}
The Thurstone-Mosteller model assumes that the performances $Y_{i,m}$ are normally distributed with means $r_{i}$, the strengths of the teams. The variance is considered to be the same for all teams, which leads to $Y_{i,m}\sim N(r_i, \sigma^2)$. Since the variance $\sigma^2$ only determines the scale of the ratings $r_i$, it can be chosen arbitrarily. Another assumption is that the performances of teams are independent, implying that $Y_{i,m}-Y_{j,m}\sim N(r_i-r_j, 2\sigma^2)$. For games not played on neutral ground, a parameter $h$ is added to the strength of the home team. In the remainder of this article, we will assume that team $i$ is the home team and has the home advantage, unless stated otherwise.
If we call $P_{H_{ijm}}$ the probability of a home win in match $m$, $P_{D_{ijm}}$ the probability of a draw in match $m$ and $P_{A_{ijm}}$ the probability of an away win in match $m$, then the outcome probabilities are
\begin{align*}
P_{H_{ijm}} &=P(Y_{i,m}-Y_{j,m}>d)= \Phi\left(\frac{(r_{i}+h)-r_{j}-d}{\sigma \sqrt{2}}\right); \\
P_{A_{ijm}} &=P(Y_{j,m}-Y_{i,m}>d)= \Phi\left(\frac{r_{j}-(r_{i}+h)-d}{\sigma \sqrt{2}}\right);\\
P_{D_{ijm}} &= 1-P_{H_{ijm}}-P_{A_{ijm}},
\end{align*}
where $\Phi$ denotes the cumulative distribution function of the standard normal distribution. For the sake of clarity we wish to stress that $r_{i}$ and $r_{j}$ belong to the set $\{r_1,\ldots,r_T\}$ of all $T$ team strengths. In principle we should adopt the notation $r_{i(m)}$ and $r_{j(m)}$ with $i(m)$ and $j(m)$ indicating the home and away team in match $m$; however, we believe this notation is too heavy and the reader readily understands what we mean without these indices. If the home effect $h$ is greater than zero, it inflates the strength of the home team and increases its modeled probability to win the match. This is typically the case since playing at home gives the benefit of familiar surroundings, the support of the home crowd and the lack of traveling. Matches on neutral ground are modeled by dropping the home effect $h$.
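For illustration, these outcome probabilities can be evaluated as in the following minimal Python sketch (with $\sigma$ fixed to 1; \texttt{norm.cdf} is the standard normal $\Phi$, and all names are our own):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def tm_probs(r_home, r_away, h, d, sigma=1.0, neutral=False):
    # Thurstone-Mosteller probabilities (home win, draw, away win).
    adv = 0.0 if neutral else h
    scale = sigma * np.sqrt(2.0)
    p_home = norm.cdf(((r_home + adv) - r_away - d) / scale)
    p_away = norm.cdf((r_away - (r_home + adv) - d) / scale)
    return p_home, 1.0 - p_home - p_away, p_away
\end{verbatim}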
The strength parameters are estimated using maximum likelihood estimation on match outcomes. Let $y_{R_{ijm}}$ be 1 if the result of match $m$ is $R$ and $y_{R_{ijm}} = 0$ otherwise, for $R=H,D,A$ as explained above. Under the common assumption that matches are independent, the likelihood for $M$ matches corresponds to
\begin{align}
L &= \prod_{m=1}^{M}\prod_{i,j\in\{1,\ldots,T\}}\prod_{R\in \{H,D,A\}}P_{R_{ijm}}^{y_{ijm}\cdot y_{R_{ijm}}\cdot w_{type,m} \cdot w_{time,m}} \label{likelihood}
\end{align}
with $w_{type,m}$ and $w_{time,m}$ the weights described in Section~\ref{sec:weights} and where $y_{ijm}$ equals 1 if $i$ and $j$ are the home resp. the away team in match $m$ and $y_{ijm}=0$ otherwise.
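In practice we work with the corresponding weighted log-likelihood, which can be sketched as follows for the Thurstone-Mosteller model (an illustrative continuation of the \texttt{tm\_probs} sketch above; matches are assumed to be stored as tuples of home index, away index, observed outcome and overall weight):
\begin{verbatim}
def tm_weighted_loglik(ratings, h, d, matches):
    # matches: iterable of (i, j, outcome, weight), outcome in {"H", "D", "A"}.
    ll = 0.0
    for i, j, outcome, w in matches:
        p_h, p_d, p_a = tm_probs(ratings[i], ratings[j], h, d)
        p = {"H": p_h, "D": p_d, "A": p_a}[outcome]
        ll += w * np.log(max(p, 1e-12))  # guard against log(0)
    return ll
\end{verbatim}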
\subsubsection{Bradley-Terry model}
In the Bradley-Terry model, the normal distribution is replaced with the logistic distribution. This leads to the assumption that $Y_{i,m}-Y_{j,m}\sim logistic(r_i-r_j, s)$ where again the scale parameter $s$ is considered equal for all teams and can be chosen arbitrarily. The corresponding outcome probabilities are
\begin{align*}
P_{H_{ijm}} &=P(Y_{i,m}-Y_{j,m}>d)=\frac{1}{1+\exp\left(-\frac{(r_{i}+h)-r_{j}-d}{s}\right)} ; \\
P_{A_{ijm}} &=P(Y_{j,m}-Y_{i,m}>d)=\frac{1}{1+\exp\left(-\frac{r_{j}-(r_{i}+h)-d}{s}\right)} ;\\
P_{D_{ijm}} &= 1-P_{H_{ijm}}-P_{A_{ijm}},
\end{align*}
where again $h$ and $d$ stand for the home effect parameter and draw parameter and $r_{i}$ and $r_{j}$ respectively stand for the strength parameters of home and away team in match $m$. The parameters are estimated via maximum likelihood in the same way as for the Thurstone-Mosteller model.
\subsubsection{Bradley-Terry-Davidson model}
In the original Bradley-Terry model, there exists no possibility for a draw ($d=0$). The two possible outcomes can then be written in a very simple and easy-to-understand formula, if we transform the parameters by taking $r_i^*=\exp(r_i/s)$ and $h^*=\exp(h/s)$:
\begin{align*}
P_{H_{ijm}} &=\frac{h^* r_i^*}{h^* r_i^*+r_j^*} ;\\
P_{A_{ijm}} &=\frac{r_j^*}{h^* r_i^*+r_j^*}.
\end{align*}
These simple formulae are one of the reasons for the popularity of the Bradley-Terry model. Starting from there, \citet{Davidson} modeled the draw probability in the following way:
\begin{align*}
P_{H_{ijm}} &=\frac{h^* r_i^*} {h^*r_i^*+d^*\sqrt{h^*r_i^*r_j^*} +r_j^*} ;\\
P_{A_{ijm}} &=\frac{r_j^*}{h^*r_i^*+d^*\sqrt{h^*r_i^*r_j^*} +r_j^*} ;\\
P_{D_{ijm}} &=\frac{d^*\sqrt{h^*r_i^*r_j^*}}{h^*r_i^*+d^*\sqrt{h^*r_i^*r_j^*} +r_j^*}.
\end{align*}
The draw effect $d^*$ is best understood by assuming similar strengths in the absence of a home effect. In that case $P_{H_{ijm}}$ is similar to $P_{A_{ijm}}$ and the relative probability of $P_{D_{ijm}}$ compared to a home win or loss is approximately equal to $d^*$. Parameter estimation works in the same way as in the previous two sections.
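On the transformed scale, these probabilities translate directly into code, for instance (an illustrative sketch with our own naming):
\begin{verbatim}
import numpy as np

def btd_probs(r_home_star, r_away_star, h_star, d_star):
    # Bradley-Terry-Davidson probabilities (home win, draw, away win).
    home = h_star * r_home_star
    draw = d_star * np.sqrt(home * r_away_star)
    denom = home + draw + r_away_star
    return home / denom, draw / denom, r_away_star / denom
\end{verbatim}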
\subsubsection{Thurstone-Mosteller, Bradley-Terry and Bradley-Terry-Davidson models with Goal Difference weights}
The basic Thurstone-Mosteller, Bradley-Terry and Bradley-Terry-Davidson models of the previous sections do not use all of the available information. They only take the match outcome into account, omitting likely valuable information present in the goal difference. A team that wins by 8-0 and loses the return match by 0-1 is probably stronger than the opponent team. Therefore we propose an extension that modifies the basic models by giving matches an increasing weight as the goal difference grows. The likelihood function is calculated as
\begin{align*}
L &= \prod_{m=1}^{M}\prod_{i,j\in\{1,\ldots,T\}}\prod_{R\in\{H,D,A\}}P_{R_{ijm}}^{y_{ijm}\cdot y_{R_{ijm}}\cdot w_{goalDiffscaled,m} \cdot w_{type,m} \cdot w_{time,m}},
\end{align*}
where $P_{R_{ijm}}$ can stand for the Thurstone-Mosteller, Bradley-Terry and Bradley-Terry-Davidson expressions respectively, leading to three new models. This formula slightly differs from~\eqref{likelihood} through the goal difference weight
$$
w_{goalDiffscaled,m}=\left\{\begin{array}{ll}
1&\mbox{if draw}\\
\log_2(goalDiff_m+1)&\mbox{else,}
\end{array}\right.
$$
with $goalDiff_m$ the absolute value of the goal difference in match $m$ (both outcomes 2-0 and 0-2 thus give the same goal difference of 2). This way, a goal difference of 1 receives a goal difference weight of 1, and every additional increment in goal difference results in a smaller increase of the goal difference weight. A goal difference of 7 goals receives a goal difference weight of 3. Parameter estimation is achieved in the same way as in the basic models.
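For illustration, the goal difference weight can be computed as follows (a minimal Python sketch; the function name is ours):
\begin{verbatim}
import numpy as np

def goal_diff_weight(home_goals, away_goals):
    # 1 for a draw, log2(|goal difference| + 1) otherwise, so a difference
    # of 1 gives weight 1 and a difference of 7 gives weight 3.
    diff = abs(home_goals - away_goals)
    return 1.0 if diff == 0 else float(np.log2(diff + 1))
\end{verbatim}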
\subsection{The Poisson models}
Poisson models were first suggested by \cite{PoissonMaher} to model football match results. He assumed the numbers of goals scored by both teams to be {independent} Poisson distributed variables. Let $G_{i,m}$ and $G_{j,m}$ be the random variables representing the goals scored by team $i$ and team $j$ in match $m$, respectively. Under those assumptions the probability function can be written as
\begin{align}
{\rm P}(G_{i,m}=x,G_{j,m}=y) &= \frac{\lambda_{i,m}^x}{x!}\exp(-\lambda_{i,m}) \cdot \frac{\lambda_{j,m}^y}{y!}\exp(-\lambda_{j,m}), \label{poissonDens}
\end{align}
where $\lambda_{i,m}$ and $\lambda_{j,m}$ stand for the means of $G_{i,m}$ and $G_{j,m}$, respectively. In what follows we shall consider this model and variants of it, including the Bivariate Poisson model that removes the independence assumption.
Being a count-type distribution, the Poisson is a natural choice to model soccer matches. It bears yet another advantage when it comes to predicting matches. If $GD_m = G_{i,m} - G_{j,m}$, then the probability
of a win of team $i$ over team $j$, the probability of a draw as well as the probability of a win of team $j$ in match $m$ are respectively computed as ${\rm P}(GD_m > 0)$, ${\rm P}(GD_m = 0)$ and ${\rm P}(GD_m<0)$. The Skellam distribution, the discrete probability distribution of the difference of two independent Poisson random variables, is used to derive these probabilities given $\lambda_{i,m}$ and~$\lambda_{j,m}$. This renders the prediction of future matches via the Poisson model particularly simple.
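Concretely, given the two Poisson means, the three outcome probabilities follow directly from the Skellam distribution, for instance (an illustrative sketch based on \texttt{scipy.stats.skellam}):
\begin{verbatim}
from scipy.stats import skellam

def poisson_outcome_probs(lam_home, lam_away):
    # P(home win), P(draw), P(away win) when the goal difference follows
    # a Skellam(lam_home, lam_away) distribution.
    p_draw = skellam.pmf(0, lam_home, lam_away)
    p_away = skellam.cdf(-1, lam_home, lam_away)  # P(GD <= -1)
    p_home = 1.0 - p_draw - p_away                # P(GD >= 1)
    return p_home, p_draw, p_away
\end{verbatim}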
\subsubsection{Independent Poisson model}
Attributing again a single strength parameter to each team, {denoted as before by $r_1,\ldots,r_T$, and keeping the notation $r_{i},r_{j}\in\{r_1,\ldots,r_T\}$ for the home and away team strengths in match $m$}, we define the Poisson means as $\lambda_{i,m} = \exp( c + (r_{i}+h) - r_{j})$ and $\lambda_{j,m} = \exp( c + r_{j} - (r_{i}+h))$ with $h$ the home effect and $c$ a common intercept. Matches on neutral ground are modeled by dropping the home effect $h$. With this in hand, the overall likelihood can be written as
$$
L = \prod_{m=1}^{M}\prod_{i,j\in \{1,...,T\}} \left(\frac{\lambda_{i,m}^{g_{i,m}}}{g_{i,m}!}\exp(-\lambda_{i,m}) \cdot \frac{ \lambda_{j,m}^{g_{j,m}}}{g_{j,m}!} \exp(-\lambda_{j,m})\right)^{y_{ijm} \cdot w_{type,m} \cdot w_{time,m}},
$$
where $y_{ijm}=1$ if $i$ and $j$ are the home team, resp. away team in match $m$ and $y_{ijm}=0$ otherwise, and $g_{i,m}$ and $g_{j,m}$ stand for the actual goals made by both teams in match $m$. Maximum likelihood estimation yields the values of the strength parameters. It is important to notice that the Poisson model uses two observations for each match (the goals scored by each team) while using the same number of parameters (number of teams + 2). The TM and BT models, except for the models with Goal Difference Weight, only use a single observation for each match.
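As an illustration, the scoring rates and the weighted log-likelihood contribution of a single match can be sketched as follows (all names are ours; \texttt{poisson.logpmf} is the logarithm of the Poisson probability mass function in \texttt{scipy.stats}):
\begin{verbatim}
import numpy as np
from scipy.stats import poisson

def poisson_means(c, r_home, r_away, h, neutral=False):
    adv = 0.0 if neutral else h
    lam_home = np.exp(c + (r_home + adv) - r_away)
    lam_away = np.exp(c + r_away - (r_home + adv))
    return lam_home, lam_away

def match_loglik(goals_home, goals_away, lam_home, lam_away, weight):
    # Weighted log-likelihood contribution of one match.
    return weight * (poisson.logpmf(goals_home, lam_home)
                     + poisson.logpmf(goals_away, lam_away))
\end{verbatim}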
\subsubsection{The Bivariate Poisson model}
A potential drawback of the Independent Poisson models lies precisely in the independence assumption. Of course, some sort of dependence between the two playing teams is introduced by the fact that the strength parameters of each team are present in the Poisson means of both teams, however this may not be a sufficiently rich model to cover the interdependence between two teams.
\cite{PoissonBivariate} suggested a bivariate Poisson model by adding a correlation between the scores. The scores in a match between teams $i$ and $j$ are modelled as $G_{i,m}=X_{i,m}+X_{C}$ and $G_{j,m}=X_{j,m}+X_{C}$, where $X_{i,m}$, $X_{j,m}$ and $X_{C}$ are independent Poisson distributed variables with parameters $\lambda_{i,m}$, $\lambda_{j,m}$ and $\lambda_{C}$, respectively. The joint probability function of the home and away score is then given by
\begin{align}
{\rm P}(G_{i,m}=x, G_{j,m}=y)=\frac{\lambda_{i,m}^x \lambda_{j,m}^y}{x!y!} \exp(-(\lambda_{i,m}+\lambda_{j,m}+\lambda_{C})) \sum_{k=0}^{\min(x,y)} \binom{x}{k} \binom{y}{k}k!\left(\frac{\lambda_{C}}{\lambda_{i,m}\lambda_{j,m}}\right)^k, \label{bivpoissonDens}
\end{align}
which is the formula for the bivariate Poisson distribution with parameters $\lambda_{i,m}$, $\lambda_{j,m}$ and $\lambda_{C}$. It reduces to~\eqref{poissonDens} when $\lambda_{C}=0$. This parameter can thus be interpreted as the covariance between the home and away scores in match $m$ and might reflect the game conditions. The means $\lambda_{i,m}$ and $\lambda_{j,m}$ are defined as in the Independent model, but we draw the reader's attention to the fact that the means of the scores are now given by $\lambda_{i,m}+\lambda_{C}$ and $\lambda_{j,m}+\lambda_{C}$, respectively. We assume that the covariance $\lambda_{C}$ is constant over all matches. All $T+3$ parameters are again estimated by means of maximum likelihood estimation.
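The joint probability function~\eqref{bivpoissonDens} is straightforward to implement, for instance as follows (an illustrative Python sketch):
\begin{verbatim}
from math import comb, exp, factorial

def bivariate_poisson_pmf(x, y, lam1, lam2, lam_c):
    # Joint pmf of the bivariate Poisson distribution; reduces to the
    # product of two independent Poisson pmfs when lam_c = 0.
    base = (lam1 ** x) * (lam2 ** y) / (factorial(x) * factorial(y)) \
           * exp(-(lam1 + lam2 + lam_c))
    s = sum(comb(x, k) * comb(y, k) * factorial(k)
            * (lam_c / (lam1 * lam2)) ** k
            for k in range(min(x, y) + 1))
    return base * s
\end{verbatim}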
Letting $GD_m$ again stand for the goal difference, we can easily see that the probability function of the goal difference for the bivariate case is the same as the probability function for the Independent model with parameters $\lambda_{i,m}$ and $\lambda_{j,m}$, since
\begin{align*}
P(GD_m=x)&=P(G_{i,m}-G_{j,m}=x)\\ &= P(X_{i,m}+X_{C}-(X_{j,m}+X_{C})=x) = P(X_{i,m}-X_{j,m}=x),
\end{align*}
implying that we can again use the Skellam distribution for predicting the winner of future games.
One can think of many other ways to model dependent football scores. \cite{PoissonBivariate} also consider bivariate Poisson models where the dependence parameter $\lambda_C$ depends on either the home team, the away team, or both teams. We do not include these models here as they are more complicated and, in preliminary comparison studies that we have done, always performed worse than the above-mentioned model with constant $\lambda_C$. Other ways to model the dependence between the home and away scores have been proposed in the literature. For instance, the dependence can be modelled by all kinds of copulas or adaptations of the Independent model. Incorporating them all in our analysis seems an impossible task, which is why we opted for the very prominent Karlis-Ntzoufras proposal. Notwithstanding, we mention some important contributions in this field: \cite{dixon1997modelling} added an additional parameter to adjust the probabilities of low-scoring games (0-0, 1-0, 0-1 and 1-1), \cite{HaleScarf} investigated copula dependence structures, and \cite{boshnakov2017bivariate} recently proposed a copula-based bivariate Weibull count model.
\subsubsection{Poisson models with defensive and attacking strengths}
In the previous sections we have defined a slightly simplified version of Maher's original idea. In fact, Maher assumed the scoring rates to be of the form $\lambda_{i,m} = \exp(c + (o_{i}+h) - d_{j})$ and $\lambda_{j,m} = \exp(c + o_{j} - (d_{i}+h))$, with $o_{i}$, $o_{j}$, $d_{i}$ and $d_{j}$ standing for offensive and defensive capabilities of teams $i$ and $j$ in match $m$. This allows us to extend both the Independent and Bivariate Poisson model to incorporate offensive and defensive abilities, opening the door to the possibility of an offensive and defensive ranking of the teams. These models thus consider $2T$ team strength parameters to be estimated via maximum likelihood.
Since every team is given two strength parameters in this case, one may wonder how to build rankings. We suggest two options. On the one hand, this model can lead to two rankings, one for attacking strengths and the other for defensive strengths. On the other hand, we can simulate a round-robin tournament with the estimated strength parameters and consider the resulting ranking. We refer the reader to \cite{scarf2011numerical} for details about this approach.
\section{Parameter estimation and model selection}\label{statistics}
In this section we shall briefly describe two crucial statistical aspects of our investigation, namely how we compute the maximum likelihood estimates and which criterion we apply to select the model with the highest predictive performance.
\subsection{Computing the maximum likelihood estimates}
Parameters in the Thurstone-Mosteller and Bradley-Terry type models as well as in the Poisson models are estimated using maximum likelihood estimation. To this end, we have used the \texttt{optim} function in $\mathtt{R}$ \citep{Rteam}, specifying as preferred method the \textit{BFGS} (Broyden-Fletcher-Goldfarb-Shanno) optimization algorithm. We have opted for this quasi-Newton method because of its robust properties. Note that the ratings $r_i$ are unique only up to addition of a constant. To identify these parameters, we add the constraint that the sum of the ratings has to equal zero. For the Bradley-Terry-Davidson model the same constraint can be applied after log-transformation of the ratings $r_i^*$. Thanks to this constraint, only $T-1$ strengths have to be estimated when we consider $T$ teams. For the models with 2 parameters per team, we have to estimate $2(T-1)$ strength parameters. The strictly positive parameters are initialized at 1, the other parameters get an initial value of 0. After the first optimization, the estimates are used as initial values in the next optimization to speed up the calculations.
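For readers who prefer Python, an analogous estimation step can be sketched with \texttt{scipy.optimize.minimize} (the actual computations in this paper were carried out in $\mathtt{R}$; the sum-to-zero constraint is imposed by treating the last rating as minus the sum of the others, and all names below are our own):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def fit_parameters(neg_weighted_loglik, n_teams, n_extra):
    # neg_weighted_loglik takes the vector of free parameters: T-1 ratings
    # (the last rating is set to minus their sum) followed by n_extra
    # further parameters such as h, d or c.
    x0 = np.zeros(n_teams - 1 + n_extra)
    res = minimize(neg_weighted_loglik, x0, method="BFGS")
    free = res.x[:n_teams - 1]
    ratings = np.append(free, -free.sum())
    return ratings, res.x[n_teams - 1:]
\end{verbatim}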
\subsection{Measure of predictive performance}\label{predperf}
The studied models are built to perform three-way outcome prediction (home win, draw or home loss). Each of the three possible match outcomes is predicted with a certain probability but only the actual outcome is observed. The predicted probability of the outcome that was actually observed is thus a natural measure of predictive performance. The ideal predictive performance metric is able to select the model which approximates the true outcome probabilities the best.
The metric we use is the Rank Probability Score (RPS) of \cite{epstein1969scoring}. It represents the difference between cumulative predicted and observed distributions via the formula
$$
\frac{1}{2M}\sum_{m=1}^M\left((P_{H_m}-y_{H_m})^2+(P_{A_m}-y_{A_m})^2\right),
$$
where we simplify the previous notations so that $P_{H_m}$ and $P_{A_m}$ stand for the predicted home win and away win probabilities in match $m$ and $y_{H_m}$ and $y_{A_m}$ for the actual outcomes (hence, 1 or 0). It has been shown in \cite{constantinou2012solving} that the RPS is more appropriate as a soccer performance metric than other popular metrics such as the RMS and the Brier score. The reason is that, by construction, the RPS works on an ordinal instead of a nominal scale, meaning that, for instance, it penalizes a wrongly predicted home win more severely in case of a home loss than in case of a draw.
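The average RPS over $M$ matches is then easily computed, for instance as follows (a minimal Python sketch of the formula above; lower values indicate better predictions):
\begin{verbatim}
import numpy as np

def rank_probability_score(p_home, p_away, y_home, y_away):
    # Arrays of predicted home/away win probabilities and of 0/1 indicators
    # of the observed home/away wins, one entry per match.
    p_home, p_away = np.asarray(p_home, float), np.asarray(p_away, float)
    y_home, y_away = np.asarray(y_home, float), np.asarray(y_away, float)
    return np.mean((p_home - y_home) ** 2 + (p_away - y_away) ** 2) / 2.0
\end{verbatim}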
\section{Comparison of the 10 models in terms of their predictive performance}\label{sec:comp}
In this section we compare the predictive performance of all 10 models described in Section~\ref{sec:models}. To this end, we first consider the English Premier League as an example of domestic league matches, and then move to national team matches played over a period of 10 years all over the world, i.e., without restriction to a particular zone.
\subsection{Case study 1: Premier League}\label{sec:PL}
The engsoccerdata package \citep{engsoccerdata} contains results of all top 4 tier football leagues in England since 1888. The dataset contains the date of the match, the teams that played, the tier as well as the result. The number of teams equals 20 for each of the seasons considered (2008-2017). Matches are predicted for every season separately and on every match day of the season, using two years for training the models. We left out the first 5 rounds of every season, so a total of 3300 matches are predicted. The reason for the burn-in period is that, for teams new to the Premier League, we cannot yet have a good estimate of their strength at the beginning of the season, since we lack information about the previous season(s). Matches are predicted in blocks corresponding to each round, and after every round the parameters are updated. In all our models, the Half Period is varied between 30 days and 2 years in steps of 30 days.
Table \ref{modelSummaryPL} summarizes the analysis by comparing the best performing models of each of the considered classes, \mbox{i.e.}, for each class the model with the optimal Half Period. As we can see, the Bivariate Poisson model with 1 strength parameter per team is the best according to the RPS, followed by the Independent Poisson model with just one parameter per team. So parsimony in terms of parameters to estimate is important. We also clearly see that all Poisson-based models outperform the TM and BT type models. This was to be expected since Poisson models use the goals as additional information. Considering the goal difference in the TM and BT type models does not improve their performance. It is also noteworthy that the two best models are among those with the lowest Half Periods.
\begin{table}
\caption{\label{modelSummaryPL}Comparison table for the best performing models of each of the considered classes with respect to the RPS criterion. The English Premier League matches from rounds 6 to 38 between the seasons 2008-2009 and 2017-2018 are considered.} \vspace{.5cm}
\centering
\begin{tabular}{lll}
\textbf{Model Class} &
\textbf{Optimal Half Period}&
\textbf{RPS} \\
Bivariate Poisson & 390 days & 0.1953 \\
Independent Poisson & 360 days & 0.1954 \\
Independent Poisson Def. \& Att. & 390 days & 0.1961 \\
Bivariate Poisson Def. \& Att. & 480 days & 0.1961 \\
Thurstone-Mosteller & 450 days & 0.1985 \\
Bradley-Terry-Davidson & 420 days & 0.1985 \\
Bradley-Terry & 420 days & 0.1986 \\
Thurstone-Mosteller + Goal Difference & 300 days & 0.2000 \\
Bradley-Terry-Davidson + Goal Difference & 420 days & 0.2000 \\
Bradley-Terry + Goal Difference & 450 days & 0.2003 \\
\end{tabular}
\end{table}
\subsection{Case study 2: National teams}\label{sec:NT}
For the national team match results we used the dataset ``International football results from 1872 to 2018'' uploaded by Mart J\"urisoo on the website \url{https://www.kaggle.com/}. We predicted the outcome of 4268 games played all over the world in the period from 2008 to 2017. The last game in our analysis was played on 2017-11-15. To keep the computational time reasonable, we left out the friendly games from the comparison. The parameters are estimated by maximum likelihood on a period of eight years. The Half Period is varied from half a year to six years in steps of half a year.
The results of our model comparison are provided in Table~\ref{modelSummaryNT}. Exactly as for the Premier League, the Bivariate Poisson model with 1 strength parameter per team comes out first, followed by the Independent Poisson model with 1 strength parameter. We also recover all the other conclusions from the domestic-level comparison. It is interesting to note that a Half Period of 3 years leads to the lowest RPS for both of the best models. Given the sparsity of national team matches played over a year, we think that no additional level of detail such as 3 years and 2 months is required, as this may also lead to over-fitting.
\begin{table}
\caption{\label{modelSummaryNT}Comparison table for the best performing models of each of the considered classes with respect to the RPS criterion. All of the important matches between the national teams in the period 2008-2017 are considered.}\vspace{.5cm}
\centering
\begin{tabular}{lll}
\textbf{Model Class} &
\textbf{Optimal Half Period}&
\textbf{RPS} \\
Bivariate Poisson & 3 years & 0.1651 \\
Independent Poisson & 3 years & 0.1653 \\
Independent Poisson Def. \& Att. & 3.5 years & 0.1656 \\
Bivariate Poisson Def. \& Att. & 3 years & 0.1656 \\
Thurstone-Mosteller & 3.5 years & 0.1658 \\
Bradley-Terry & 4 years & 0.1659 \\
Bradley-Terry-Davidson & 4 years & 0.1660 \\
Thurstone-Mosteller + Goal Difference & 3.5 years & 0.1672 \\
Bradley-Terry + Goal Difference & 3 years & 0.1674 \\
Bradley-Terry-Davidson + Goal Difference & 3.5 years & 0.1681 \\
\end{tabular}
\end{table}
\section{Applications of our new rankings}\label{sec:rankings}
We now illustrate the usefulness of our new current-strength based rankings by means of various examples. Given the dominance of the Bivariate Poisson model with 1 strength parameter in both settings, we will use only this model to build our new rankings.
\subsection{Example 1: Rankings of Scotland in 2013}
As mentioned in the Introduction, the abrupt decay function of the FIFA ranking has entailed that the ranking of Scotland varied a lot in 2013 over a very short period of time: ranked $50^{\rm th}$ in August 2013, it dropped to rank 63 in September 2013 before jumping to rank 35 in October 2013. In Figure~\ref{fig:scotland}, we show the variation of Scotland in the FIFA ranking together with its variation in our ranking based on the Bivariate Poisson model with 1 strength parameter and Half Period of 3 years. While both rankings follow the same trend, we clearly see that our ranking method shows fewer jumps than the FIFA ranking and is much smoother. It thus leads to a more reasonable and stable ranking than the FIFA ranking.
\begin{figure}[h]
\begin{center}
\includegraphics[width=\linewidth]{Scotland210.png}
\end{center}
\caption{Comparison of the evolution of the FIFA ranking of Scotland in 2013 with the evolution based on our proposed ranking method, using the Bivariate Poisson model with 1 strength parameter and Half Period of 3 years.}
\label{fig:scotland}
\end{figure}
\subsection{Example 2: Drawing for the World Cup 2018}
Another infamous example of the disadvantages of the official FIFA ranking is the position of Poland at the moment of the draw for the 2018 FIFA World Cup (December 1 2017, but the relevant date for the seeding was October 16 2017). According to the FIFA ranking of October 16 2017, Poland was ranked $6^{\rm th}$, and so it was one of the teams in Pot 1, in contrast to e.g. Spain or England which were in Pot 2 due to Russia as host occupying one of the 8 spots in Pot 1. Poland reached this good position thanks to a very good performance in the World Cup qualifiers and, specifically, by avoiding friendly games during the year before the drawing for the World Cup, since friendly games with their low importance coefficient are very likely to reduce the points underpinning the FIFA ranking. This trick of Poland, who intelligently exploited the flaws of the FIFA ranking, led to unbalanced groups at the World Cup, as for instance strong teams such as Spain and Portugal were together in Group B and Belgium and England were together in Group G. This raised quite some discussions in the soccer world. In the end Poland was not able to advance to the next stage of the World Cup 2018 competition in its group with Colombia, Japan and Senegal, where Colombia and Japan ended first and second and Poland finished last. This underlines that the position of Poland was not correct in view of their actual strength.
In Table~\ref{Poland} we compare the official FIFA ranking on October 16 2017 to our ranking based on the Bivariate Poisson model with 1 strength parameter and Half Period of 3 years. In our ranking, Poland occupies only position 15 and would not be in Pot 1. Spain and Colombia would enter Pot 1 instead of Poland and Portugal. We remark that, in the World Cup 2018, Spain ranked first in their group in front of Portugal while, as mentioned above, Colombia finished first in Group H and Poland finished last. This demonstrates the superiority of our ranking over the FIFA ranking. A further asset is its readability: one can understand the values of the strength parameters as ratios leading to the average number of goals that one team will score against the other. The same cannot be said about the FIFA points, which do not allow making predictions.
\begin{table}[h]
\centering
\caption{\label{Poland} Top of the ranking of the national teams on 16 October 2017 according to the Bivariate Poisson model with 1 strength parameter and a Half Period of 3 years compared to the Official FIFA/Coca-Cola World Ranking on 16 October 2017.}\vspace{.5cm}
\begin{tabular}{rlr|lc}
\textbf{Position} & \textbf{Team} & \textbf{Strength} & \textbf{Team} & \textbf{Points} \\
\hline
1 & Brazil & 1.753 & Germany & 1631(1631.05) \\
2 & Spain & 1.637 & Brazil & 1619(1618.63) \\
3 & Argentina & 1.628 & Portugal & 1446(1446.38) \\
4 & Germany & 1.624 & Argentina & 1445(1444.69) \\
5 & Colombia & 1.496 & Belgium & 1333(1332.55) \\
6 & Belgium & 1.488 & Poland & 1323(1322.83) \\
7 & France & 1.467 & France & 1226(1226.29) \\
8 & Chile & 1.452 & Spain & 1218(1217.94)\\
9 & Netherlands & 1.424 & Chile & 1173(1173.14) \\
10 & Portugal & 1.417 & Peru & 1160(1159.94) \\
11 & Uruguay & 1.354 & Switzerland & 1134(1134.5) \\
12 & England & 1.341 & England & 1116(1115.69) \\
13 & Peru & 1.303 & Colombia & 1095(1094.89) \\
14 & Poland & 1.277 & Wales & 1072(1072.45) \\
15 & Italy & 1.268 & Italy & 1066(1065.65) \\
16 & Croatia &1.259 & Mexico & 1060(1059.6) \\
17 & Sweden & 1.253 & Uruguay & 1034(1033.91) \\
18 & Denmark & 1.216 & Croatia & 1013(1012.81) \\
19 & Ecuador & 1.211& Denmark & 1001(1001.39) \\
20 & Switzerland & 1.150 & Netherlands & 931(931.21)\\
\end{tabular}
\end{table}
\subsection{Example 3: Alternative ranking for the Premier League}
\begin{figure}[h]
\begin{center}
\includegraphics[width=\linewidth]{RankingPLBivPois210.png}
\includegraphics[width=\linewidth]{RankingPLOfficial210.png}
\end{center}
\caption{Above: Premier League ranking according to the Bivariate Poisson model with 1 strength parameter and Half Period of 390 days, updated every week, starting from the sixth week since the start of the season. Below: Official Premier League ranking, weekly updated, starting from the sixth week.}
\label{fig:premierleague}
\end{figure}
In Figure \ref{fig:premierleague}, we compare our ranking based on the Bivariate Poisson model with 1 strength parameter and Half Period of 390 days to the official Premier League ranking for the season 2017-2018, leaving out the first five weeks of the season. At first sight, one can see that our proposed ranking is again smoother than the official ranking, especially in the first part of the season. Besides that, our ranking is constructed in such a way that it depends less on the game schedule, while the intermediate official rankings depend heavily on it. Indeed, winning against weak teams can rapidly inflate a team's official ranking, while the weakness of the opponents increases that team's strength much less in our ranking, which takes the opponent strength into account. Furthermore, the postponing of matches may even entail that at a certain moment some teams have played more games than others, which of course results in an official ranking that favours the teams which have played more games at that time, a feature that is avoided in our ranking.
Coming back to the example of Huddersfield Town, mentioned in the Introduction, we can see that our ranking was able to detect Huddersfield as one of the weakest teams in the Premier League after 15 weeks, while their official ranking was still high thanks to their good start of the season. Thus our ranking fulfills its purpose: it reflects well a team's current strength.
\section{Conclusion and outlook}\label{sec:conclu}
We have compared 10 different statistical strength-based models according to their potential to serve as rankings reflecting a team's current strength. Our analysis clearly demonstrates that Poisson models outperform Thurstone-Mosteller and Bradley-Terry type models, and that the best models are those that assign the fewest parameters to teams. Both at domestic team level and national team level, the Bivariate Poisson model with one strength parameter per team was found to be the best in terms of the RPS criterion. However, the difference between that model and the Independent Poisson with one strength parameter is very small, which is explained by the fact that the covariance in the Bivariate Poisson model is close to zero. This is well in line with recent findings of \cite{Groll2017} who used the same Bivariate Poisson model in a regression context. Applying it to the European Championships 2004-2012, they got a covariance parameter close to zero.
The time depreciation effect in all models considered in the present paper allows taking into account the moment in time when a match was played and gives more weight to more recent matches. An alternative approach to address the problem of giving more weight to recent matches consists in using dynamic time series models. Such dynamic models, based also on Poisson distributions, were proposed in \cite{PoissonTimeEffect}, \cite{koopman2015dynamic} and \cite{angelini2017parx}. In future work we shall investigate in detail the dynamic approach and also compare the resulting models to the Bivariate Poisson model with 1 strength parameter based on the time depreciation approach.
\
\noindent \textbf{ACKNOWLEDGMENTS:}
We wish to thank the Associate Editor as well as two anonymous referees for useful comments that led to a clear improvement of our paper.
\
| {
"attr-fineweb-edu": 1.641602,
"attr-cc_en_topic": 0,
"domain": "arxiv"
} |
BkiUcxnxK6EuNCwyu3jf | \section{Football Benchmarks}
The \emph{Football Engine}\xspace is an efficient, flexible and highly customizable learning environment with many features that let researchers try a broad range of new ideas.
To facilitate fair comparisons of different algorithms and approaches in this environment, we also provide a set of pre-defined benchmark tasks that we call the \emph{Football Benchmarks}\xspace.
Similar to the Atari games in the \emph{Arcade Learning Environment}, in these tasks, the agent has to interact with a fixed environment and maximize its episodic reward by sequentially choosing suitable actions based on observations of the environment.
The goal in the \emph{Football Benchmarks}\xspace is to win a full game\footnote{We define an 11 versus 11 full game to correspond to 3000 steps in the environment, which amounts to 300 seconds if rendered at a speed of 10 frames per second.} against the opponent bot provided by the engine.
We provide three versions of the \emph{Football Benchmarks}\xspace that only differ in the strength of the opponent AI as described in the last section: the easy, medium, and hard benchmarks.
This allows researchers to test a wide range of research ideas under different computational constraints such as single machine setups or powerful distributed settings.
We expect that these benchmark tasks will be useful for investigating current scientific challenges in reinforcement learning such as sample-efficiency, sparse rewards, or model-based approaches.
\subsection{Experimental Setup}
As a reference, we provide benchmark results for three state-of-the-art reinforcement learning algorithms: PPO \cite{schulman2017proximal} and IMPALA \cite{espeholt2018impala} which are popular policy gradient methods, and Ape-X DQN~\cite{Horgan2018DistributedPE}, which is a modern DQN implementation.
We run PPO in multiple processes on a single machine, while IMPALA and DQN are run on a distributed cluster with $500$ and $150$ actors respectively.
In all benchmark experiments, we use the stacked Super Mini Map representation (Section~\ref{sec:smm}) and the same network architecture.
We consider both the \textsc{Scoring} and \textsc{Checkpoint} rewards.
The tuning of hyper-parameters is done using the easy scenario, and we follow the same protocol for all algorithms to ensure a fair comparison.
After tuning, for each of the six considered settings (three \emph{Football Benchmarks}\xspace and two reward functions), we run five random seeds and average the results.
For the technical details of the training setup and the used architecture and hyperparameters, we refer to the Appendix.
\begin{figure*}[t!]
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\linewidth,trim={2px 2px 2px 35px},clip]{figures/picture_academy_empty_goal.png}
\caption{Empty Goal Close}
\end{subfigure}\hfill
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\linewidth,trim={2px 2px 2px 35px},clip]{figures/picture_academy_run_to_score.png}
\caption{Run to Score}
\end{subfigure}\hfill
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\linewidth,trim={2px 2px 2px 35px},clip]{figures/picture_academy_lazy.png}
\caption{11 vs 11 with Lazy Opponents}
\end{subfigure}\hfill
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\linewidth,trim={2px 2px 2px 35px},clip]{figures/picture_academy_3_vs_1.png}
\caption{3 vs 1 with Keeper}
\end{subfigure}\hfill
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\linewidth,trim={2px 2px 2px 35px},clip]{figures/picture_academy_pass_and_shoot_with_keeper.png}
\caption{Pass and Shoot}
\end{subfigure}\hfill
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\linewidth,trim={2px 2px 2px 35px},clip]{figures/picture_academy_counter_attack.png}
\caption{Easy Counter-attack}
\end{subfigure}
\caption{Example of \emph{Football Academy}\xspace scenarios.}
\label{fig:academy_scenarios}
\end{figure*}
\subsection{Results}
\looseness=-1The experimental results\footnote{All results in this paper are for the versions v$2$.x of the GRF.} for the \emph{Football Benchmarks}\xspace are shown in Figure~\ref{fig:challenges_both_rewards}.
It can be seen that the environment difficulty significantly affects the training complexity and the average goal difference.
The medium benchmark can be beaten by DQN and IMPALA with 500M training steps (albeit only barely with the \textsc{Scoring} reward).
The hard benchmark is even harder, and requires the \textsc{Checkpoint} reward and 500M training steps for achieving a positive score.
We observe that the \textsc{Checkpoint} reward function appears to be very helpful for speeding up the training of policy gradient methods but does not seem to benefit Ape-X DQN as much, as its performance is similar with both the \textsc{Checkpoint} and \textsc{Scoring} rewards.
We conclude that the \emph{Football Benchmarks}\xspace provide interesting reference problems for research and that there remains a large headroom for progress, in particular in terms of performance and sample efficiency on the harder benchmarks.
\section{Football Academy}
\looseness=-1Training agents for the \emph{Football Benchmarks}\xspace can be challenging.
To allow researchers to quickly iterate on new research ideas, we also provide the \emph{Football Academy}\xspace: a diverse set of scenarios of varying difficulty.
These 11 scenarios (see Figure~\ref{fig:academy_scenarios} for a selection) include several variations where a single player has to score against an empty goal (\emph{Empty Goal Close}, \emph{Empty Goal}, \emph{Run to Score}), a number of setups where the controlled team has to break a specific defensive line formation (\emph{Run to Score with Keeper}, \emph{Pass and Shoot with Keeper}, \emph{3 vs 1 with Keeper}, \emph{Run, Pass and Shoot with Keeper}) as well as some standard situations commonly found in football games (\emph{Corner}, \emph{Easy Counter-Attack}, \emph{Hard Counter-Attack}).
For a detailed description, we refer to the Appendix.
Using a simple API, researchers can also easily define their own scenarios and train agents to solve them.
\subsection{Experimental Results}
Based on the same experimental setup as for the \emph{Football Benchmarks}\xspace, we provide experimental results for both PPO and IMPALA for the \emph{Football Academy}\xspace scenarios in Figures~\ref{fig:academy_impala_scoring}, \ref{fig:academy_ppo_scoring}, \ref{fig:academy_impala_checkpoint}, and \ref{fig:academy_ppo_checkpoint} (the last two are provided in the Appendix).
We note that the maximum average scoring performance is 1 (as episodes end in the \emph{Football Academy}\xspace scenarios after scoring) and that scores may be negative as agents may score own goals and as the opposing team can score in the \emph{Corner} scenario.
The experimental results indicate that the \emph{Football Academy}\xspace provides a set of diverse scenarios of different difficulties suitable for different computational constraints.
The scenarios where agents have to score against the empty goal (\emph{Empty Goal Close}, \emph{Empty Goal}, \emph{Run to Score}) appear to be very easy and can be solved by both PPO and IMPALA with both reward functions using only 1M steps.
As such, these scenarios can be considered ``unit tests'' for reinforcement learning algorithms where one can obtain reasonable results within minutes or hours instead of days or even weeks.
The remainder of the tasks includes scenarios for which both PPO and IMPALA appear to require between 5M and 50M steps for progress to occur (with minor differences between the \textsc{Scoring} and \textsc{Checkpoint} rewards).
These harder tasks may be used to quickly iterate on new research ideas on single machines before applying them to the \emph{Football Benchmarks}\xspace (as experiments should finish within hours or days).
Finally, the \emph{Corner} scenario appears to be the hardest (presumably as one has to face a full squad and the opponent is also allowed to score).
\section{Conclusions}
\looseness=-1In this paper, we presented the \emph{Google Research Football Environment}, a novel open-source reinforcement learning environment for the game of football.
It is challenging and accessible, easy to customize, and it has specific functionality geared towards research in reinforcement learning.
We provided the \emph{Football Engine}\xspace, a highly optimized C++ football simulator, the \emph{Football Benchmarks}\xspace, a set of reference tasks to compare different reinforcement learning algorithms, and the \emph{Football Academy}\xspace, a set of progressively harder scenarios.
We expect that these components will be useful for investigating current scientific challenges like self-play, sample-efficient RL, sparse rewards, and model-based RL.
\clearpage
\section*{Acknowledgement}
We wish to thank Lucas Beyer, Nal Kalchbrenner, Tim Salimans and the rest of the Google Brain team for helpful discussions, comments, technical help and code contributions. We would also like to thank Bastiaan Konings Schuiling, who authored and open-sourced the original version of this game.
\section{Promising Research Directions}
In this section we briefly discuss a few initial experiments related to three research topics which have recently become quite active in the reinforcement learning community: self-play training, multi-agent learning, and representation learning for downstream tasks.
This highlights the research potential and flexibility of the Football Environment.
\subsection{Multiplayer Experiments}
The Football Environment provides a way to train against different opponents, such as the built-in AI or other trained agents.
Note that this allows, for instance, for self-play schemes.
When a policy is trained against a fixed opponent, it may exploit its particular weaknesses and, thus, it may not generalize well to other adversaries.
We conducted an experiment to showcase this in which a first model $A$ was trained against a built-in AI agent on the standard 11 vs 11 medium scenario.
Then, another agent $B$ was trained against a frozen version of agent $A$ on the same scenario. While $B$ managed to beat $A$ consistently, its performance against the built-in AI was poor.
The numerical results showing this lack of transitivity across the agents are presented in Table~\ref{tab:transitivity}.
\begin{table}[h]
\caption{Average goal difference $\pm$ one standard deviation across 5 repetitions of the experiment.}
\centering
\label{tab:transitivity}
\begin{tabular}{lr}
\toprule
$A$ vs built-in AI & $4.25 \pm 1.72$ \\
$B$ vs $A$ & $11.93 \pm 2.19$ \\
$B$ vs built-in AI & $-0.27 \pm 0.33$ \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Multi-Agent Experiments}
The environment also allows for controlling several players from one team simultaneously, as in multi-agent reinforcement learning.
We conducted experiments in this setup with the \emph{3 versus 1 with Keeper} scenario from Football Academy.
We varied the number of players that the policy controls from 1 to 3, and trained with IMPALA.
As expected, training is initially slower when we control more players, but the policies seem to eventually learn more complex behaviors and achieve higher scores.
Numerical results are presented in Table~\ref{tab:multiagent}.
\begin{table}
\caption{Scores achieved by the policy controlling 1, 2 or 3 players respectively, after 5M and 50M steps of training.}
\centering
\begin{tabular}{lrr}
\toprule
\textbf{Players controlled} & \textbf{5M steps} & \textbf{50M steps} \\ \midrule
1 & $0.38 \pm 0.23$ & $0.68 \pm 0.03$ \\
2 & $0.17 \pm 0.18$ & $0.81 \pm 0.17$ \\
3 & $0.26 \pm 0.11$ & $0.86 \pm 0.08$ \\
\bottomrule
\end{tabular}
\label{tab:multiagent}
\end{table}
\subsection{Representation Experiments}
Training the agent directly from raw observations, such as pixels, is an exciting research direction.
While it was successfully done for Atari, it is still an open challenge for most of the more complex and realistic environments.
In this experiment, we compare several representations available in the \emph{Football Engine}\xspace.
\emph{Pixels gray} denotes the raw pixels from the game, which are resized to $72 \times 96$ resolution and converted to grayscale.
While the pixel representation takes significantly longer to train, as shown in Table~\ref{tab:representation}, learning eventually takes place (and it actually outperforms hand-picked extensive representations like `Floats').
The results were obtained using IMPALA with the \textsc{Checkpoint} reward on the easy 11 vs.\ 11 benchmark.
\begin{table}
\centering
\caption{Average goal advantages per representation.}
\label{tab:representation}
\begin{tabular}{lrr}
\toprule
\textbf{Representation} & \textbf{100M steps} & \textbf{500M steps} \\ \midrule
Floats & $2.42 \pm 0.46$ & $5.73 \pm 0.28$ \\
Pixels gray & $-0.82 \pm 0.26$ & $7.18 \pm 0.85$ \\
SMM & $5.94 \pm 0.86$ & $9.75 \pm 4.15$ \\
SMM stacked & $7.62 \pm 0.67$ & $12.89 \pm 0.51$ \\
\bottomrule
\end{tabular}
\end{table}
\section{Football Engine}
The Football Environment is based on the Football Engine, an advanced football simulator built around a heavily customized version of the publicly available \emph{GameplayFootball} simulator \cite{gameplayfootball}.
The engine simulates a complete football game, and includes the most common football aspects, such as goals, fouls, corners, penalty kicks, or off-sides (see Figure~\ref{fig:game_features} for a few examples).
\paragraph{Supported Football Rules.}
The engine implements a full football game under standard rules, with 11 players on each team.
These include goal kicks, side kicks, corner kicks, both yellow and red cards, offsides, handballs and penalty kicks.
The length of the game is measured in terms of the number of frames, and the default duration of a full game is $3000$ ($10$ frames per second for $5$ minutes).
The length of the game as well as the initial number and positions of players can also be edited in customized scenarios (see \emph{Football Academy}\xspace below).
Players on a team have different statistics\footnote{Although players differ within a team, both teams have exactly the same set of players, to ensure a fair game.}, such as speed or accuracy and get tired over time.
\paragraph{Opponent AI Built-in Bots.}
The environment controls the opponent team by means of a rule-based bot, which was provided by the original \emph{GameplayFootball} simulator \cite{gameplayfootball}.
The difficulty level $\theta$ can be smoothly parameterized between 0 and 1, by speeding up or slowing down the bot reaction time and decision making.
Some suggested difficulty levels correspond to: easy ($\theta = 0.05$), medium ($\theta = 0.6$), and hard ($\theta = 0.95$).
For self-play, one can replace the opponent bot with any trained model.
Moreover, by default, our non-active players are also controlled by another rule-based bot.
In this case, the behavior is simple and corresponds to reasonable football actions and strategies, such as running towards the ball when we are not in possession, or moving forward together with our active player.
In particular, this type of behavior can be turned off for future research on cooperative multi-agent control if desired.
\paragraph{State \& Observations.}
We define as \emph{state} the complete set of data that is returned by the environment after actions are performed.
On the other hand, we define as \emph{observation} or \emph{representation} any transformation of the state that is provided as input to the control algorithms.
The definition of the state contains information such as the ball position and possession, coordinates of all players, the active player, the game state (tiredness levels of players, yellow cards, score, etc) and the current pixel frame.
\looseness=-1We propose three different representations.
Two of them (pixels and SMM) can be \emph{stacked} across multiple consecutive time-steps (for instance, to determine the ball direction), or unstacked, that is, corresponding to the current time-step only.
Researchers can easily define their own representations based on the environment state by creating wrappers similar to the ones used for the observations below.
\textit{Pixels.} The representation consists of a $1280 \times 720$ RGB image corresponding to the rendered screen.
This includes both the scoreboard and a small map in the bottom middle part of the frame from which the position of all players can be inferred in principle.
\textit{Super Mini Map.}\label{sec:smm} The SMM representation consists of four $72 \times 96$ matrices encoding information about the home team, the away team, the ball, and the active player respectively. The encoding is binary, representing whether there is a player or ball in the corresponding coordinate.
\looseness=-1\textit{Floats.} The floats representation provides a compact encoding and consists of a 115-dimensional vector summarizing many aspects of the game, such as players coordinates, ball possession and direction, active player, or game mode.
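To make the SMM encoding concrete, the sketch below rasterizes a set of $(x, y)$ pitch coordinates into one binary $72 \times 96$ plane; it assumes coordinates normalized to $[-1, 1]$ along the length of the pitch and $[-0.42, 0.42]$ along its width, which is an assumption about the engine's coordinate convention:
\begin{lstlisting}
import numpy as np

def smm_plane(coords, height=72, width=96):
  # coords: iterable of (x, y) pitch positions.
  plane = np.zeros((height, width), dtype=np.uint8)
  for x, y in coords:
    col = int((np.clip(x, -1, 1) + 1) / 2 * (width - 1))
    row = int((np.clip(y, -0.42, 0.42) + 0.42)
              / 0.84 * (height - 1))
    plane[row, col] = 1
  return plane

# Four planes: home team, away team, ball, active player,
# e.g. np.stack([smm_plane(home), smm_plane(away),
#                smm_plane([ball]), smm_plane([active])], -1)
\end{lstlisting}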
\paragraph{Actions.}
The actions available to an individual agent (player) are displayed in Table~\ref{tab:actions}.
They include standard move actions (in $8$ directions), and different ways to kick the ball (short and long passes, shooting, and high passes that can't be easily intercepted along the way).
Also, players can sprint (which affects their level of tiredness), try to intercept the ball with a slide tackle, or dribble if they possess the ball.
We experimented with an action to switch the active player in defense (otherwise, the player with the ball must be active).
However, we observed that policies tended to exploit this action to return control to built-in AI behaviors for non-active players, and we decided to remove it from the action set.
We do \emph{not} implement randomized sticky actions.
Instead, once executed, moving and sprinting actions are sticky and continue until an explicit stop action is performed (Stop-Moving and Stop-Sprint respectively).
\paragraph{Rewards.}
The \emph{Football Engine}\xspace includes two reward functions that can be used out-of-the-box: \textsc{Scoring} and \textsc{Checkpoint}.
It also allows researchers to add custom reward functions using wrappers which can be used to investigate reward shaping approaches.
\textsc{Scoring} corresponds to the natural reward where each team obtains a $+1$ reward when scoring a goal, and a $-1$ reward when conceding one to the opposing team.
The \textsc{Scoring} reward can be hard to observe during the initial stages of training, as it may require a long sequence of consecutive events: overcoming the defense of a potentially strong opponent, and scoring against a keeper.
\textsc{Checkpoint} is a (shaped) reward that specifically addresses the sparsity of \textsc{Scoring} by encoding the domain knowledge that scoring is aided by advancing across the pitch:
It augments the \textsc{Scoring} reward with an additional auxiliary reward contribution for moving the ball close to the opponent's goal in a controlled fashion.
More specifically, we divide the opponent's field into 10 checkpoint regions according to the Euclidean distance to the opponent goal.
Then, the first time the agent's team possesses the ball in each of the checkpoint regions, the agent obtains an additional reward of $+0.1$.
This extra reward can be up to $+1$, \emph{i.e.}, the same as scoring a single goal.
Any non-collected checkpoint reward is also added when scoring in order to avoid penalizing agents that do not go through all the checkpoints before scoring (\emph{i.e.}, by shooting from outside a checkpoint region).
Finally, checkpoint rewards are only given once per episode.
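As a minimal sketch of this logic (not the engine's exact implementation; the distance thresholds and the possession and ball-position inputs are assumptions), a checkpoint reward shaping step could look as follows:
\begin{lstlisting}
import numpy as np

class CheckpointReward:
  # Sketch of the Checkpoint shaping described above;
  # assumes the opponent goal sits at (1, 0) and that the
  # caller provides ball position and possession.
  def __init__(self, n=10, bonus=0.1):
    self.n, self.bonus, self.collected = n, bonus, 0

  def reset(self):
    self.collected = 0

  def shape(self, scoring_reward, ball_xy, we_possess):
    r = scoring_reward
    if scoring_reward == 1:
      # Add all checkpoints not yet collected when scoring.
      r += (self.n - self.collected) * self.bonus
      self.collected = self.n
      return r
    if we_possess:
      d = np.linalg.norm(np.asarray(ball_xy) - [1.0, 0.0])
      # Illustrative mapping from distance to region index.
      reachable = max(0, self.n - int(d / 0.2))
      new = max(reachable, self.collected)
      r += (new - self.collected) * self.bonus
      self.collected = new
    return r
\end{lstlisting}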
\paragraph{Accessibility.}
Researchers can directly inspect the game by playing against each other or by dueling their agents.
The game can be controlled by means of both keyboards and gamepads.
Moreover, replays of several rendering qualities can be automatically stored while training, so that it is easy to inspect the policies agents are learning.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{performance_plots_v2/speed.pdf}
\caption{Number of steps per day versus number of concurrent environments for the \emph{Football Engine}\xspace on a hexa-core Intel Xeon W-2135 CPU at 3.70GHz.}
\label{fig:speed}
\end{figure}
\paragraph{Stochasticity.}
In order to investigate the impact of randomness, and to simplify the tasks when desired, the environment can run in either stochastic or deterministic mode.
The former, which is enabled by default, introduces several types of randomness: for instance, the same shot from the top of the box may lead to a number of different outcomes.
In the latter, playing a fixed policy against a fixed opponent always results in the same sequence of actions and states.
\paragraph{API \& Sample Usage.}
The Football Engine is out of the box compatible with the widely used OpenAI Gym API \cite{brockman2016openai}.
Below we show example code that runs a random agent on our environment.
\begin{lstlisting}
import gfootball.env as football_env

env = football_env.create_environment(
    env_name='11_vs_11_stochastic',
    render=True)
env.reset()
done = False
while not done:
  # Sample a random action and advance the game one step.
  action = env.action_space.sample()
  observation, reward, done, info = \
      env.step(action)
\end{lstlisting}
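The same entry point also exposes the representation and reward choices discussed above; the keyword arguments in the sketch below follow the public repository at the time of writing and may differ across versions:
\begin{lstlisting}
# Sketch: SMM observations, frame stacking and the shaped
# reward (argument names may vary across versions).
env = football_env.create_environment(
    env_name='11_vs_11_stochastic',
    representation='extracted',      # Super Mini Map
    stacked=True,
    rewards='scoring,checkpoints',
    render=False)
\end{lstlisting}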
\paragraph{Technical Implementation \& Performance.}
The Football Engine is written in highly optimized C++ code, allowing it to be run on commodity machines both with and without GPU-based rendering enabled. This allows it to obtain a performance of approximately $140$ million steps per day on a single hexa-core machine (see Figure~\ref{fig:speed}).
\begin{table}[b]
\centering
\caption{Action Set}
\label{tab:actions}
{
\scriptsize
\begin{tabular}{cccc}
\toprule
Top & Bottom & Left & Right \\
Top-Left & Top-Right & Bottom-Left & Bottom-Right \\
Short Pass & High Pass & Long Pass & Shot \\
Do-Nothing & Sliding & Dribble & Stop-Dribble \\
Sprint &
Stop-Moving &
Stop-Sprint &
--- \\
\bottomrule
\end{tabular}
}
\end{table}
\section{Motivation and Other Related Work}
There are a variety of reinforcement learning environments that have accelerated research in recent years.
However, existing environments exhibit a variety of drawbacks that we address with the \emph{Google Research Football Environment}:
\looseness-1\paragraph{Easy to solve.}
With the recent progress in RL, many commonly used scenarios can now be solved to a reasonable degree in just a few hours with well-established algorithms.
For instance, ${\sim} 50$ commonly used Atari games in the \emph{Arcade Learning Environment} \cite{bellemare2013arcade} are routinely solved to super-human level \cite{hessel2018rainbow}.
The same applies to the \emph{DeepMind Lab} \cite{beattie2016deepmind}, a navigation-focused maze environment that provides a number of relatively simple tasks with a first person viewpoint.
\paragraph{Computationally expensive.}
On the other hand, training agents in recent video-game simulators often requires substantial computational resources that may not be available to a large fraction of researchers, as these simulators combine hard games, long episodes, and high-dimensional inputs (either in the form of pixels or hand-crafted representations).
For example, the \emph{StarCraft II Learning Environment} \cite{vinyals2017starcraft} provides an API to \emph{Starcraft II}, a well-known real-time strategy video game, as well as to a few mini-games which are centered around specific tasks in the game.
\paragraph{Lack of stochasticity.}
\looseness=-2 The real world is not deterministic, which motivates the need to develop algorithms that can cope with and learn from stochastic environments.
Robots, self-driving cars, or data centers require robust policies that account for uncertain dynamics.
Yet, some of the most popular simulated environments -- like the Arcade Learning Environment -- are deterministic.
While techniques have been developed to add artificial randomness to the environment (like skipping a random number of initial frames or using sticky actions), this randomness may still be too structured and easy to predict and incorporate during training \cite{machado2018revisiting,hausknecht2015impact}.
It remains an open question whether modern RL approaches such as self-imitation generalize from the deterministic setting to stochastic environments \cite{guo2018generative}.
\paragraph{Lack of open-source license.}
Some advanced physics simulators offer licenses that may be subject to restrictive use terms \cite{todorov2012mujoco}.
Also, some environments such as StarCraft require access to a closed-source binary.
In contrast, open-source licenses enable researchers to inspect the underlying game code and to modify environments if required to test new research ideas.
\paragraph{Known model of the environment.}
Reinforcement learning algorithms have been successfully applied to board games such as Backgammon \cite{tesauro1995temporal}, Chess \cite{hsu2004behind}, or Go \cite{silver2016mastering}.
Yet, current state-of-the-art algorithms often exploit that the rules of these games (\emph{i.e.}, the model of the environment) are specific, known and can be encoded into the approach.
As such, this may make it hard to investigate learning algorithms that should work in environments that can only be explored through interactions.
\paragraph{Single-player.}
In many available environments such as Atari, one only controls a single agent.
However, some modern real-world applications involve a number of agents under either centralized or distributed control. The different agents can either collaborate or compete, creating additional challenges.
A well-studied special case is an agent competing against another agent in a zero sum game.
In this setting, the opponent can adapt its own strategy, and the agent has to be robust against a variety of opponents.
Cooperative multi-agent learning also offers many opportunities and challenges, such as communication between agents, agent behavior specialization, or robustness to the failure of some of the agents.
Multiplayer environments with collaborative or competing agents can help foster research around those challenges.
\paragraph{Other football environments.}
There are other available football simulators, such as the \emph{RoboCup Soccer Simulator} \cite{kitano1995robocup,kitano1997robocup}, and the \emph{DeepMind MuJoCo Multi-Agent Soccer Environment} \cite{liu2019emergent}.
In contrast to these environments, the \emph{Google Research Football Environment} focuses on high-level actions instead of low-level control of a physics simulation of robots (such as in the RoboCup Simulation 3D League).
Furthermore, it provides many useful settings for reinforcement learning, e.g. the single-agent and multi-agent settings as well as single-player and multiplayer modes.
\emph{Google Research Football} also provides ways to adjust difficulty, both via a strength-adjustable opponent and via diverse and customizable scenarios in Football Academy, and provides several specific features for reinforcement learning research, e.g., OpenAI gym compatibility, different rewards, different representations, and the option to turn on and off stochasticity.
\paragraph{Other related work.}
Designing rich learning scenarios is challenging, and resulting environments often provide a useful playground for research questions centered around a specific set of reinforcement learning topics.
For instance, the \emph{DeepMind Control Suite} \cite{tassa2018deepmind} focuses on continuous control,
the \emph{AI Safety Gridworlds} \cite{leike2017ai} on learning safely, whereas the \emph{Hanabi Learning Environment} \cite{bard2019hanabi} proposes a multi-agent setup.
As a consequence, each of these environments is better suited for testing algorithmic ideas involving a limited but well-defined set of research areas.
\section{Introduction}
The goal of reinforcement learning (RL) is to train smart agents that can interact with their environment and solve complex tasks \cite{sutton2018reinforcement}. Real-world applications include robotics \cite{haarnoja2018soft}, self-driving cars \cite{bansal2018chauffeurnet}, and control problems such as increasing the power efficiency of data centers \cite{lazic2018data}.
Yet, the rapid progress in this field has been fueled by making agents play games such as the iconic Atari console games \cite{bellemare2013arcade,mnih2013playing}, the ancient game of Go \cite{silver2016mastering}, or professionally played video games like Dota 2 \cite{openai_dota} or Starcraft II \cite{vinyals2017starcraft}.
The reason for this is simple: games provide challenging environments where new algorithms and ideas can be quickly tested in a safe and reproducible manner.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth,trim={2px 2px 2px 35px},clip]{figures/picture_full_game_score.png}
\caption{The \emph{Google Research Football Environment} (\texttt{github.com/google-research/football}) provides a novel reinforcement learning environment where agents are trained to play football in an advanced, physics-based 3D simulation.}
\label{fig:main}
\end{figure}
While a variety of reinforcement learning environments exist, they often come with a few drawbacks for research, which we discuss in detail in the next section.
For example, they may either be too easy to solve for state-of-the-art algorithms or require access to large amounts of computational resources.
At the same time, they may either be (near-)deterministic or there may even be a known model of the environment (such as in Go or Chess).
Similarly, many learning environments are inherently single player by only modeling the interaction of an agent with a fixed environment or they focus on a single aspect of reinforcement learning such as continuous control or safety.
Finally, learning environments may have restrictive licenses or depend on closed source binaries.
This highlights the need for an RL environment that is not only challenging from a learning standpoint and customizable in terms of difficulty but also accessible for research both in terms of licensing and in terms of required computational resources.
Moreover, such an environment should ideally provide the tools to study a variety of current reinforcement learning research topics such as the impact of stochasticity, self-play, multi-agent setups and model-based reinforcement learning, while also requiring smart decisions, tactics, and strategies at multiple levels of abstraction.
\begin{figure*}[t]
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\columnwidth,trim={2px 2px 2px 35px},clip]{figures/picture_full_game_kickoff.png}
\caption{Kickoff}
\end{subfigure}\hfill
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\columnwidth,trim={2px 2px 2px 35px},clip]{figures/picture_full_game_yellow_card.png}
\caption{Yellow card}
\end{subfigure}\hfill
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\columnwidth,trim={2px 2px 2px 35px},clip]{figures/picture_full_game_corner.png}
\caption{Corner kick}
\end{subfigure}
\caption{The \emph{Football Engine}\xspace is an advanced football simulator that supports all the major football rules such as (a) kickoffs (b) goals, fouls, cards, (c) corner kicks, penalty kicks, and offside.}
\label{fig:game_features}
\end{figure*}
\paragraph{Contributions}
In this paper, we propose the \emph{Google Research Football Environment}, a novel open-source reinforcement learning environment where agents learn to play one of the world's most popular sports: football (a.k.a.\ soccer).
Modeled after popular football video games, the Football Environment provides a physics-based 3D football simulation where agents have to control their players, learn how to pass in between them and how to overcome their opponent's defense in order to score goals.
This provides a challenging RL problem as football requires a natural balance between short-term control, learned concepts such as passing, and high level strategy.
As our key contributions, we
\begin{itemize}
\item provide the \emph{Football Engine}\xspace, a highly-optimized game engine that simulates the game of football,
\item propose the \emph{Football Benchmarks}\xspace, a versatile set of benchmark tasks of varying difficulties that can be used to compare different algorithms,
\item propose the \emph{Football Academy}\xspace, a set of progressively harder and diverse reinforcement learning scenarios,
\item evaluate state-of-the-art algorithms on both the \emph{Football Benchmarks}\xspace and the \emph{Football Academy}\xspace, providing an extensive set of reference results for future comparison,
\item provide a simple API to completely customize and define new football reinforcement learning scenarios, and
\item showcase several promising research directions in this environment, \emph{e.g.} the multi-player and multi-agent settings.
\end{itemize}
\section{Hyperparameters \& Architectures}
\label{app:hparams}
For our experiments, we used three algorithms (IMPALA, PPO, Ape-X DQN) that are described below. The model architecture we use is inspired by the ``Large'' architecture from~\cite{espeholt2018impala} and is depicted in Figure~\ref{fig:impala_architecture}.
Based on the ``Representation Experiments'', we selected the stacked Super Mini Map (Section~\ref{sec:smm}) as the default representation used in all \emph{Football Benchmarks}\xspace and \emph{Football Academy}\xspace experiments.
In addition, three other representations are available (Floats, Pixels gray, and the unstacked SMM).
For each of the six considered settings (three \emph{Football Benchmarks}\xspace and two reward functions), we run five random seeds for $500$ million steps each. For \emph{Football Academy}\xspace, we run five random seeds in all $11$ scenarios for $50$ million steps.
\paragraph{Hyperparameter search}
For each of IMPALA, PPO and Ape-X DQN, we performed two hyperparameter searches: one for the \textsc{Scoring} reward and one for the \textsc{Checkpoint} reward. For the search, we trained on the easy difficulty. Each of the 100 parameter sets was run with 3 random seeds. For each algorithm and reward type, the best parameter set was selected based on average performance -- after 500M steps for IMPALA and Ape-X DQN, and after 50M steps for PPO. After the search, each of the best parameter sets was used to run experiments with 5 different random seeds on all scenarios. The ranges used for this procedure can be found in Table~\ref{tab:impala_hparams_values} for IMPALA, Table~\ref{tab:ppo_hparams_values} for PPO and Table~\ref{tab:dqn_hparams_values} for DQN.
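For reference, a log-uniform draw over a range such as the learning-rate interval in Table~\ref{tab:impala_hparams_values} can be implemented as in the short sketch below (illustrative only, not our exact tuning code):
\begin{lstlisting}
import numpy as np
rng = np.random.default_rng(0)

def sample_log_uniform(low, high, size=None):
  # Uniform in log-space between low and high.
  return np.exp(rng.uniform(np.log(low), np.log(high), size))

# e.g. 100 candidate learning rates in [1e-5, 1e-3]:
learning_rates = sample_log_uniform(1e-5, 1e-3, size=100)
\end{lstlisting}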
\paragraph{IMPALA}
Importance Weighted Actor-Learner Architecture \cite{espeholt2018impala} is a highly scalable algorithm that decouples acting from learning. Individual workers communicate trajectories of experience to the central learner, instead of sending gradients with respect to the current policy. In order to deal with off-policy data, IMPALA introduces an actor-critic update for the learner called V-trace. Hyper-parameters for IMPALA are presented in Table~\ref{tab:impala_hparams_values}.
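For completeness, the V-trace value targets used by the learner can be computed along the lines of the following sketch (a plain NumPy version for a single unrolled trajectory; our training runs use the distributed implementation of \cite{espeholt2018impala}):
\begin{lstlisting}
import numpy as np

def vtrace_targets(rewards, values, bootstrap, rhos,
                   gamma=0.993, rho_bar=1.0, c_bar=1.0):
  # rhos: importance weights pi/mu of the behaviour actions.
  T = len(rewards)
  rho_c = np.minimum(rho_bar, rhos)
  cs = np.minimum(c_bar, rhos)
  next_values = np.append(values[1:], bootstrap)
  deltas = rho_c * (rewards + gamma * next_values - values)
  acc, out = 0.0, np.zeros(T)
  for t in reversed(range(T)):
    acc = deltas[t] + gamma * cs[t] * acc
    out[t] = acc
  return values + out  # v_s targets for the value function
\end{lstlisting}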
\paragraph{PPO}
Proximal Policy Optimization \cite{schulman2017proximal} is an online policy gradient algorithm which optimizes the clipped surrogate objective.
In our experiments we use the implementation from the OpenAI Baselines~\cite{baselines}, and run it over 16 parallel workers.
Hyper-parameters for PPO are presented in Table~\ref{tab:ppo_hparams_values}.
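The clipped surrogate objective that PPO maximizes can be summarized by the following sketch (batch-averaged; the default clipping range here corresponds to the tuned \textsc{Scoring} value in Table~\ref{tab:ppo_hparams_values}):
\begin{lstlisting}
import numpy as np

def ppo_clipped_objective(new_logp, old_logp, adv,
                          clip_range=0.115):
  ratio = np.exp(new_logp - old_logp)
  clipped = np.clip(ratio, 1 - clip_range, 1 + clip_range)
  # Pessimistic (elementwise minimum) surrogate, maximized.
  return np.mean(np.minimum(ratio * adv, clipped * adv))
\end{lstlisting}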
\paragraph{Ape-X DQN}
Q-learning algorithms are popular among reinforcement learning researchers.
Accordingly, we include a member of the DQN family in our comparison. In particular, we chose Ape-X DQN~\cite{Horgan2018DistributedPE}, a highly scalable version of DQN.
Like IMPALA, Ape-X DQN decouples acting from learning but, contrary to IMPALA, it uses a distributed replay buffer and a variant of Q-learning consisting of dueling network architectures~\cite{pmlr-v48-wangf16} and double Q-learning~\cite{van2016deep}.
Several hyper-parameters were aligned with IMPALA. These include the unroll length and $n$-step return, the number of actors, and the discount factor $\gamma$. For details, please refer to Table~\ref{tab:dqn_hparams_values}.
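The resulting $n$-step double-Q bootstrap target, which combines these components, is sketched below for a single transition (illustrative only; the dueling decomposition and the prioritized replay machinery are omitted):
\begin{lstlisting}
import numpy as np

def n_step_double_q_target(rewards, q_online_next,
                           q_target_next, gamma=0.999):
  # rewards: the n observed rewards r_t, ..., r_{t+n-1};
  # q_*_next: Q-values of all actions at state x_{t+n}.
  n = len(rewards)
  ret = sum(gamma**k * r for k, r in enumerate(rewards))
  a_star = int(np.argmax(q_online_next))  # select: online net
  return ret + gamma**n * q_target_next[a_star]  # evaluate: target net
\end{lstlisting}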
\begin{figure}
\includegraphics[width=\columnwidth]{figures/gfootball_architecture2}
\caption{Architecture used for IMPALA and PPO experiments. For Ape-X DQN, a similar network is used but the outputs are Q-values.}
\label{fig:impala_architecture}
\end{figure}
\section{Numerical Results for the \emph{Football Benchmarks}\xspace}
In this section we provide, for comparison, the means and standard deviations over $5$ runs for all algorithms on the \emph{Football Benchmarks}\xspace. Table~\ref{tab:benchmark_scoring} contains the results for the runs with the \textsc{Scoring} reward while Table~\ref{tab:benchmark_checkpoint} contains the results for the runs with the \textsc{Checkpoint} reward.
Those numbers were presented in the main paper in Figure~\ref{fig:challenges_both_rewards}.
\begin{table}[h]
\caption{Benchmark results for \textsc{Scoring} reward.}
\resizebox{\columnwidth}{!}{
\begin{tabular}{lrrr}
\toprule
\textsc{Model} & \textsc{Easy} & \textsc{Medium} & \textsc{Hard} \\
\midrule
PPO @20M & $0.05 \pm 0.13$ & $-0.74 \pm 0.08$ & $-1.32 \pm 0.12$ \\
PPO @50M & $0.09 \pm 0.13$ & $-0.84 \pm 0.10$ & $-1.39 \pm 0.22$ \\
IMPALA @20M & $-0.01 \pm 0.10$ & $-0.89 \pm 0.34$ & $-1.38 \pm 0.22$ \\
IMPALA @500M & $5.14 \pm 2.88$ & $-0.36 \pm 0.11$ & $-0.47 \pm 0.48$ \\
DQN @20M & $-1.17 \pm 0.31$ & $-1.63 \pm 0.11$ & $-2.12 \pm 0.33$ \\
DQN @500M & $8.16 \pm 1.05$ & $2.01 \pm 0.27$ & $0.27 \pm 0.56$ \\
\bottomrule
\end{tabular}
}
\label{tab:benchmark_scoring}
\end{table}
\begin{table}[h]
\caption{Benchmark results for \textsc{Checkpoint} reward.}
\resizebox{\columnwidth}{!}{
\begin{tabular}{lrrr}
\toprule
\textsc{Model} & \textsc{Easy} & \textsc{Medium} & \textsc{Hard} \\
\midrule
PPO @20M & $6.23 \pm 1.25$ & $0.38 \pm 0.49$ & $-1.25 \pm 0.09$ \\
PPO @50M & $8.71 \pm 0.72$ & $1.11 \pm 0.45$ & $-0.75 \pm 0.13$ \\
IMPALA @20M & $-1.00 \pm 0.34$ & $-1.86 \pm 0.13$ & $-2.24 \pm 0.08$ \\
IMPALA @500M & $12.83 \pm 1.30$ & $5.54 \pm 0.90$ & $3.15 \pm 0.37$ \\
DQN @20M & $-1.15 \pm 0.37$ & $-2.04 \pm 0.45$ & $-2.22 \pm 0.19$ \\
DQN @500M & $7.06 \pm 0.85$ & $2.18 \pm 0.25$ & $1.20 \pm 0.40$ \\
\bottomrule
\end{tabular}
}
\label{tab:benchmark_checkpoint}
\end{table}
\clearpage
\begin{table*}
\caption{IMPALA: ranges used during the hyper-parameter search and the final values used for experiments with scoring and checkpoint rewards.}
\begin{center}
\begin{tabular}{lrrr}
\toprule
\textbf{Parameter} &\textbf{Range} & \textbf{Best - Scoring} & \textbf{Best - Checkpoint} \\ \midrule
Action Repetitions & 1 & 1 & 1 \\
Batch size & 128 & 128 & 128 \\
Discount Factor ($\gamma$) & $\{.99, .993, .997, .999\}$ & .993 & .993 \\
Entropy Coefficient & Log-uniform $(1\mathrm{e}{-6}$, $1\mathrm{e}{-3})$ & 0.00000521 & 0.00087453 \\
Learning Rate & Log-uniform $(1\mathrm{e}{-5}$, $1\mathrm{e}{-3})$ & 0.00013730 & 0.00019896 \\
Number of Actors & 500 & 500 & 500 \\
Optimizer & Adam & Adam & Adam \\
Unroll Length/$n$-step & $\{16, 32, 64\}$ & 32 & 32 \\
Value Function Coefficient &.5 & .5 & .5 \\
\bottomrule
\end{tabular}
\label{tab:impala_hparams_values}
\end{center}
\end{table*}
\begin{table*}[h]
\caption{PPO: ranges used during the hyper-parameter search and the final values used for experiments with scoring and checkpoint rewards.}
\begin{center}
\begin{tabular}{lrrr}
\toprule
\textbf{Parameter} & \textbf{Range} & \textbf{Best - Scoring} & \textbf{Best - Checkpoint} \\ \midrule
Action Repetitions & 1 & 1 & 1 \\
Clipping Range & Log-uniform $(.01, 1)$ & .115 & .08 \\
Discount Factor ($\gamma$) & $\{.99, .993, .997, .999\}$ & .997 & .993 \\
Entropy Coefficient & Log-uniform $(.001, .1)$ & .00155 & .003 \\
GAE ($\lambda$) & .95 & .95 & .95 \\
Gradient Norm Clipping & Log-uniform $(.2, 2)$ & .76 & .64 \\
Learning Rate & Log-uniform $(.000025, .0025)$ & .00011879 & .000343 \\
Number of Actors & 16 & 16 & 16 \\
Optimizer & Adam & Adam & Adam \\
Training Epochs per Update & $\{2, 4, 8\}$ & 2 & 2 \\
Training Mini-batches per Update & $\{2, 4, 8\}$ & 4 & 8 \\
Unroll Length/$n$-step & $\{16, 32, 64, 128, 256, 512\}$ & 512 & 512 \\
Value Function Coefficient & .5 & .5 & .5 \\
\bottomrule
\end{tabular}
\label{tab:ppo_hparams_values}
\end{center}
\end{table*}
\begin{table*}
\caption{DQN: ranges used during the hyper-parameter search and the final values used for experiments with scoring and checkpoint rewards.}
\begin{center}
\begin{tabular}{lrrr}
\toprule
\textbf{Parameter} & \textbf{Range} & \textbf{Best - Scoring} & \textbf{Best - Checkpoint} \\ \midrule
Action Repetitions & 1 & 1 & 1 \\
Batch Size & 512 & 512 & 512 \\
Discount Factor ($\gamma$) & $\{.99, .993, .997, .999\}$ & .999 & .999 \\
Evaluation $\epsilon$ & .01 & .01 & .01 \\
Importance Sampling Exponent & $\{0., .4, .5, .6, .8, 1.\}$ & 1. & 1. \\
Learning Rate & Log-uniform $(1\mathrm{e}{-7}$, $1\mathrm{e}{-3})$ & .00001475 & .0000115 \\
Number of Actors & 150 & 150 & 150 \\
Optimizer & Adam & Adam & Adam \\
Replay Priority Exponent & $\{0., .4, .5, .6, .7, .8\}$ & .0 & .8 \\
Target Network Update Period & 2500 & 2500 & 2500 \\
Unroll Length/$n$-step & $\{16, 32, 64, 128, 256, 512\}$ & 16 & 16 \\
\bottomrule
\end{tabular}
\label{tab:dqn_hparams_values}
\end{center}
\end{table*}
\begin{figure*}[h]
\centering
\includegraphics[width=\textwidth]{performance_plots_v2/main_plot_academy_checkpoints}
\caption{Average Goal Difference on \emph{Football Academy}\xspace for IMPALA with \textsc{Checkpoint} reward.}
\label{fig:academy_impala_checkpoint}
\end{figure*}
\begin{figure*}[h]
\centering
\includegraphics[width=\textwidth]{performance_plots/ppo_plot_academy_checkpoints}
\caption{Average Goal Difference on \emph{Football Academy}\xspace for PPO with \textsc{Checkpoint} reward. Scores are for v$1.x$ (all other results in this paper are for v$2.x$, but for this plot the v$2.x$ experiment did not finish; please check arXiv for the full v$2.x$ results).}
\label{fig:academy_ppo_checkpoint}
\end{figure*}
\begin{table*}[h!]
\caption{Description of the default \emph{Football Academy}\xspace scenarios. If not specified otherwise, all scenarios end after 400 frames or if the ball is lost, if a team scores, or if the game is stopped (\emph{e.g.} if the ball leaves the pitch or if there is a free kick awarded). The difficulty level is 0.6 (\emph{i.e.}, medium).}
\renewcommand*{\arraystretch}{1.5}
\begin{center}
\begin{tabular}{p{5cm}p{9cm}}
\toprule
\textbf{Name} & \textbf{Description} \\ \midrule
\textit{Empty Goal Close} & Our player starts inside the box with the ball, and needs to score against an empty goal. \\
\textit{Empty Goal} & Our player starts in the middle of the field with the ball, and needs to score against an empty goal. \\
\textit{Run to Score} & Our player starts in the middle of the field with the ball, and needs to score against an empty goal. Five opponent players chase ours from behind. \\
\textit{Run to Score with Keeper} & Our player starts in the middle of the field with the ball, and needs to score against a keeper. Five opponent players chase ours from behind. \\
\textit{Pass and Shoot with Keeper} & Two of our players try to score from the edge of the box, one is on the side with the ball, and next to a defender. The other is at the center, unmarked, and facing the opponent keeper. \\
\textit{Run, Pass and Shoot with Keeper} & Two of our players try to score from the edge of the box, one is on the side with the ball, and unmarked. The other is at the center, next to a defender, and facing the opponent keeper. \\
\textit{3 versus 1 with Keeper} & Three of our players try to score from the edge of the box, one on each side, and the other at the center. Initially, the player at the center has the ball, and is facing the defender. There is an opponent keeper. \\
\textit{Corner} & Standard corner-kick situation, except that the corner taker can run with the ball from the corner. The episode does not end if possession is lost.\\
\textit{Easy Counter-Attack} & 4 versus 1 counter-attack with keeper; all the remaining players of both teams run back towards the ball. \\
\textit{Hard Counter-Attack} & 4 versus 2 counter-attack with keeper; all the remaining players of both teams run back towards the ball. \\
\textit{11 versus 11 with Lazy Opponents} & Full 11 versus 11 game, where the opponents cannot move but they can only intercept the ball if it is close enough to them. Our center-back defender has the ball at first. The maximum duration of the episode is 3000 frames instead of 400 frames.\\
\bottomrule
\end{tabular}
\label{tab:scenario_description}
\end{center}
\end{table*}
\section{Football Benchmarks}
The \emph{Football Engine}\xspace is an efficient, flexible and highly customizable learning environment with many features that lets researchers try a broad range of new ideas.
To facilitate fair comparisons of different algorithms and approaches in this environment, we also provide a set of pre-defined benchmark tasks that we call the \emph{Football Benchmarks}\xspace.
Similar to the Atari games in the \emph{Arcade Learning Environment}, in these tasks, the agent has to interact with a fixed environment and maximize its episodic reward by sequentially choosing suitable actions based on observations of the environment.
The goal in the \emph{Football Benchmarks}\xspace is to win a full game\footnote{We define an 11 versus 11 full game to correspond to 3000 steps in the environment, which amounts to 300 seconds if rendered at a speed of 10 frames per second.} against the opponent bot provided by the engine.
We provide three versions of the \emph{Football Benchmarks}\xspace that only differ in the strength of the opponent AI as described in the last section: the easy, medium, and hard benchmarks.
This allows researchers to test a wide range of research ideas under different computational constraints such as single machine setups or powerful distributed settings.
We expect that these benchmark tasks will be useful for investigating current scientific challenges in reinforcement learning such as sample-efficiency, sparse rewards, or model-based approaches.
\subsection{Experimental Setup}
As a reference, we provide benchmark results for three state-of-the-art reinforcement learning algorithms: PPO \cite{schulman2017proximal} and IMPALA \cite{espeholt2018impala} which are popular policy gradient methods, and Ape-X DQN~\cite{Horgan2018DistributedPE}, which is a modern DQN implementation.
We run PPO in multiple processes on a single machine, while IMPALA and DQN are run on a distributed cluster with $500$ and $150$ actors respectively.
In all benchmark experiments, we use the stacked Super Mini Map representation (Section~\ref{sec:smm}) and the same network architecture.
We consider both the \textsc{Scoring} and \textsc{Checkpoint} rewards.
The tuning of hyper-parameters is done using the easy scenario, and we follow the same protocol for all algorithms to ensure a fair comparison.
After tuning, for each of the six considered settings (three \emph{Football Benchmarks}\xspace and two reward functions), we run five random seeds and average the results.
For the technical details of the training setup and the used architecture and hyperparameters, we refer to the Appendix.
\begin{figure*}[t!]
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\linewidth,trim={2px 2px 2px 35px},clip]{figures/picture_academy_empty_goal.png}
\caption{Empty Goal Close}
\end{subfigure}\hfill
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\linewidth,trim={2px 2px 2px 35px},clip]{figures/picture_academy_run_to_score.png}
\caption{Run to Score}
\end{subfigure}\hfill
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\linewidth,trim={2px 2px 2px 35px},clip]{figures/picture_academy_lazy.png}
\caption{11 vs 11 with Lazy Opponents}
\end{subfigure}\hfill
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\linewidth,trim={2px 2px 2px 35px},clip]{figures/picture_academy_3_vs_1.png}
\caption{3 vs 1 with Keeper}
\end{subfigure}\hfill
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\linewidth,trim={2px 2px 2px 35px},clip]{figures/picture_academy_pass_and_shoot_with_keeper.png}
\caption{Pass and Shoot}
\end{subfigure}\hfill
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\linewidth,trim={2px 2px 2px 35px},clip]{figures/picture_academy_counter_attack.png}
\caption{Easy Counter-attack}
\end{subfigure}
\caption{Example of \emph{Football Academy}\xspace scenarios.}
\label{fig:academy_scenarios}
\end{figure*}
\subsection{Results}
\looseness=-1The experimental results\footnote{All results in this paper are for the versions v$2$.x of the GRF.} for the \emph{Football Benchmarks}\xspace are shown in Figure~\ref{fig:challenges_both_rewards}.
It can be seen that the environment difficulty significantly affects the training complexity and the average goal difference.
The medium benchmark can be beaten by DQN and IMPALA with 500M training steps (albeit only barely with the \textsc{Scoring} reward).
The hard benchmark is even harder, and requires the \textsc{Checkpoint} reward and 500M training steps for achieving a positive score.
We observe that the \textsc{Checkpoint} reward function appears to be very helpful for speeding up the training of policy gradient methods, but does not seem to benefit Ape-X DQN as much, as its performance is similar with both the \textsc{Checkpoint} and \textsc{Scoring} rewards.
We conclude that the \emph{Football Benchmarks}\xspace provide interesting reference problems for research and that there remains a large headroom for progress, in particular in terms of performance and sample efficiency on the harder benchmarks.
\section{Football Academy}
\looseness=-1Training agents for the \emph{Football Benchmarks}\xspace can be challenging.
To allow researchers to quickly iterate on new research ideas, we also provide the \emph{Football Academy}\xspace: a diverse set of scenarios of varying difficulty.
These 11 scenarios (see Figure~\ref{fig:academy_scenarios} for a selection) include several variations where a single player has to score against an empty goal (\emph{Empty Goal Close}, \emph{Empty Goal}, \emph{Run to Score}), a number of setups where the controlled team has to break a specific defensive line formation (\emph{Run to Score with Keeper}, \emph{Pass and Shoot with Keeper}, \emph{3 vs 1 with Keeper}, \emph{Run, Pass and Shoot with Keeper}), some standard situations commonly found in football games (\emph{Corner}, \emph{Easy Counter-Attack}, \emph{Hard Counter-Attack}), as well as a full game against opponents that do not move (\emph{11 versus 11 with Lazy Opponents}).
For a detailed description, we refer to the Appendix.
Using a simple API, researchers can also easily define their own scenarios and train agents to solve them.
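As an illustration, a custom scenario is a small Python file exposing a \texttt{build\_scenario} function; the sketch below is modeled on the Academy scenarios shipped with the engine, and the builder method names and role constants are assumptions that should be checked against the repository:
\begin{lstlisting}
# my_scenario.py -- hypothetical 2 vs keeper drill
# (names modeled on the shipped Academy scenarios).
from . import *  # builder constants (Team, e_PlayerRole_*)

def build_scenario(builder):
  builder.config().game_duration = 400
  builder.config().end_episode_on_score = True
  builder.SetBallPosition(0.6, 0.0)
  builder.SetTeam(Team.e_Left)
  builder.AddPlayer(-1.0, 0.0, e_PlayerRole_GK)
  builder.AddPlayer(0.6, 0.0, e_PlayerRole_CF)
  builder.AddPlayer(0.6, 0.2, e_PlayerRole_CF)
  builder.SetTeam(Team.e_Right)
  builder.AddPlayer(-1.0, 0.0, e_PlayerRole_GK)
\end{lstlisting}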
\subsection{Experimental Results}
Based on the same experimental setup as for the \emph{Football Benchmarks}\xspace, we provide experimental results for both PPO and IMPALA for the \emph{Football Academy}\xspace scenarios in Figures~\ref{fig:academy_impala_scoring}, \ref{fig:academy_ppo_scoring}, \ref{fig:academy_impala_checkpoint}, and \ref{fig:academy_ppo_checkpoint} (the last two are provided in the Appendix).
We note that the maximum average scoring performance is 1 (as episodes end in the \emph{Football Academy}\xspace scenarios after scoring) and that scores may be negative as agents may score own goals and as the opposing team can score in the \emph{Corner} scenario.
The experimental results indicate that the \emph{Football Academy}\xspace provides a set of diverse scenarios of different difficulties suitable for different computational constraints.
The scenarios where agents have to score against the empty goal (\emph{Empty Goal Close}, \emph{Empty Goal}, \emph{Run to Score}) appear to be very easy and can be solved by both PPO and IMPALA with both reward functions using only 1M steps.
As such, these scenarios can be considered ``unit tests'' for reinforcement learning algorithms where one can obtain reasonable results within minutes or hours instead of days or even weeks.
The remainder of the tasks includes scenarios for which both PPO and IMPALA appear to require between 5M and 50M steps for progress to occur (with minor differences between the \textsc{Scoring} and \textsc{Checkpoint} rewards).
These harder tasks may be used to quickly iterate on new research ideas on single machines before applying them to the \emph{Football Benchmarks}\xspace (as experiments should finish within hours or days).
Finally, \emph{Corner} appears to be the hardest scenario (presumably as one has to face a full squad and the opponent is also allowed to score).
\section{Conclusions}
\looseness=-1In this paper, we presented the \emph{Google Research Football Environment}, a novel open-source reinforcement learning environment for the game of football.
It is challenging and accessible, easy to customize, and it has specific functionality geared towards research in reinforcement learning.
We provided the \emph{Football Engine}\xspace, a highly optimized C++ football simulator, the \emph{Football Benchmarks}\xspace, a set of reference tasks to compare different reinforcement learning algorithms, and the \emph{Football Academy}\xspace, a set of progressively harder scenarios.
We expect that these components will be useful for investigating current scientific challenges like self-play, sample-efficient RL, sparse rewards, and model-based RL.
\clearpage
\section*{Acknowledgement}
We wish to thank Lucas Beyer, Nal Kalchbrenner, Tim Salimans and the rest of the Google Brain team for helpful discussions, comments, technical help and code contributions. We would also like to thank Bastiaan Konings Schuiling, who authored and open-sourced the original version of this game.
\section{Promising Research Directions}
In this section we briefly discuss a few initial experiments related to three research topics which have recently become quite active in the reinforcement learning community: self-play training, multi-agent learning, and representation learning for downstream tasks.
This highlights the research potential and flexibility of the Football Environment.
\subsection{Multiplayer Experiments}
The Football Environment provides a way to train against different opponents, such as built-in AI or other trained agents.
Note that this allows, for instance, self-play training schemes.
When a policy is trained against a fixed opponent, it may exploit its particular weaknesses and, thus, it may not generalize well to other adversaries.
We conducted an experiment to showcase this in which a first model $A$ was trained against a built-in AI agent on the standard 11 vs 11 medium scenario.
Then, another agent $B$ was trained against a frozen version of agent $A$ on the same scenario. While $B$ managed to beat $A$ consistently, its performance against built-in AI was poor.
The numerical results showing this lack of transitivity across the agents are presented in Table~\ref{tab:transitivity}.
\begin{table}[h]
\caption{Average goal difference $\pm$ one standard deviation across 5 repetitions of the experiment.}
\centering
\label{tab:transitivity}
\begin{tabular}{lr}
\toprule
$A$ vs built-in AI & $4.25 \pm 1.72$ \\
$B$ vs $A$ & $11.93 \pm 2.19$ \\
$B$ vs built-in AI & $-0.27 \pm 0.33$ \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Multi-Agent Experiments}
The environment also allows for controlling several players from one team simultaneously, as in multi-agent reinforcement learning.
We conducted experiments in this setup with the \emph{3 versus 1 with Keeper} scenario from Football Academy.
We varied the number of players that the policy controls from 1 to 3, and trained with IMPALA.
As expected, training is initially slower when we control more players, but the policies seem to eventually learn more complex behaviors and achieve higher scores.
Numerical results are presented in Table~\ref{tab:multiagent}.
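For reference, this multi-agent setup can be obtained directly from the environment entry point, as sketched below; the keyword argument and the scenario name follow the public repository and may differ across versions:
\begin{lstlisting}
import gfootball.env as football_env

# Sketch: control all 3 attackers of the left team at once.
env = football_env.create_environment(
    env_name='academy_3_vs_1_with_keeper',
    number_of_left_players_agent_controls=3)
obs = env.reset()
# One action per controlled player is expected at each step.
obs, reward, done, info = env.step(env.action_space.sample())
\end{lstlisting}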
\begin{table}
\caption{Scores achieved by the policy controlling 1, 2 or 3 players respectively, after 5M and 50M steps of training.}
\centering
\begin{tabular}{lrr}
\toprule
\textbf{Players controlled} & \textbf{5M steps} & \textbf{50M steps} \\ \midrule
1 & $0.38 \pm 0.23$ & $0.68 \pm 0.03$ \\
2 & $0.17 \pm 0.18$ & $0.81 \pm 0.17$ \\
3 & $0.26 \pm 0.11$ & $0.86 \pm 0.08$ \\
\bottomrule
\end{tabular}
\label{tab:multiagent}
\end{table}
\subsection{Representation Experiments}
Training the agent directly from raw observations, such as pixels, is an exciting research direction.
While it was successfully done for Atari, it is still an open challenge for most of the more complex and realistic environments.
In this experiment, we compare several representations available in the \emph{Football Engine}\xspace.
\emph{Pixels gray} denotes the raw pixels from the game, which are resized to $72 \times 96$ resolution and converted to grayscale.
While pixel representation takes significantly longer time to train, as shown in Table~\ref{tab:representation}, learning eventually takes place (and it actually outperforms hand-picked extensive representations like `Floats').
The results were obtained using Impala with Checkpoint reward on the easy 11 vs.\ 11 benchmark.
\begin{table}
\centering
\caption{Average goal advantages per representation.}
\label{tab:representation}
\begin{tabular}{lrr}
\toprule
\textbf{Representation} & \textbf{100M steps} & \textbf{500M steps} \\ \midrule
Floats & $2.42 \pm 0.46$ & $5.73 \pm 0.28$ \\
Pixels gray & $-0.82 \pm 0.26$ & $7.18 \pm 0.85$ \\
SMM & $5.94 \pm 0.86$ & $9.75 \pm 4.15$ \\
SMM stacked & $7.62 \pm 0.67$ & $12.89 \pm 0.51$ \\
\bottomrule
\end{tabular}
\end{table}
\section{Football Engine}
The Football Environment is based on the Football Engine, an advanced football simulator built around a heavily customized version of the publicly available \emph{GameplayFootball} simulator \cite{gameplayfootball}.
The engine simulates a complete football game, and includes the most common football aspects, such as goals, fouls, corners, penalty kicks, or off-sides (see Figure~\ref{fig:game_features} for a few examples).
\paragraph{Supported Football Rules.}
The engine implements a full football game under standard rules, with 11 players on each team.
These include goal kicks, side kicks, corner kicks, both yellow and red cards, offsides, handballs and penalty kicks.
The length of the game is measured in terms of the number of frames, and the default duration of a full game is $3000$ ($10$ frames per second for $5$ minutes).
The length of the game, initial number and position of players can also be edited in customized scenarios (see \emph{Football Academy}\xspace below).
Players on a team have different statistics\footnote{Although players differ within a team, both teams have exactly the same set of players, to ensure a fair game.}, such as speed or accuracy and get tired over time.
\paragraph{Opponent AI Built-in Bots.}
The environment controls the opponent team by means of a rule-based bot, which was provided by the original \emph{GameplayFootball} simulator \cite{gameplayfootball}.
The difficulty level $\theta$ can be smoothly parameterized between 0 and 1, by speeding up or slowing down the bot reaction time and decision making.
Some suggested difficulty levels correspond to: easy ($\theta = 0.05$), medium ($\theta = 0.6$), and hard ($\theta = 0.95$).
For self-play, one can replace the opponent bot with any trained model.
Moreover, by default, our non-active players are also controlled by another rule-based bot.
In this case, the behavior is simple and corresponds to reasonable football actions and strategies, such as running towards the ball when we are not in possession, or move forward together with our active player.
In particular, this type of behavior can be turned off for future research on cooperative multi-agents if desired.
\paragraph{State \& Observations.}
We define as \emph{state} the complete set of data that is returned by the environment after actions are performed.
On the other hand, we define as \emph{observation} or \emph{representation} any transformation of the state that is provided as input to the control algorithms.
The definition of the state contains information such as the ball position and possession, coordinates of all players, the active player, the game state (tiredness levels of players, yellow cards, score, etc) and the current pixel frame.
\looseness=-1We propose three different representations.
Two of them (pixels and SMM) can be \emph{stacked} across multiple consecutive time-steps (for instance, to determine the ball direction), or unstacked, that is, corresponding to the current time-step only.
Researchers can easily define their own representations based on the environment state by creating wrappers similar to the ones used for the observations below.
\textit{Pixels.} The representation consists of a $1280 \times 720$ RGB image corresponding to the rendered screen.
This includes both the scoreboard and a small map in the bottom middle part of the frame from which the position of all players can be inferred in principle.
\textit{Super Mini Map.}\label{sec:smm} The SMM representation consists of four $72 \times 96$ matrices encoding information about the home team, the away team, the ball, and the active player respectively. The encoding is binary, representing whether there is a player or ball in the corresponding coordinate.
\looseness=-1\textit{Floats.} The floats representation provides a compact encoding and consists of a 115-dimensional vector summarizing many aspects of the game, such as players coordinates, ball possession and direction, active player, or game mode.
\paragraph{Actions.}
The actions available to an individual agent (player) are displayed in Table~\ref{tab:actions}.
They include standard move actions (in $8$ directions), and different ways to kick the ball (short and long passes, shooting, and high passes that can't be easily intercepted along the way).
Also, players can sprint (which affects their level of tiredness), try to intercept the ball with a slide tackle or dribble if they posses the ball.
We experimented with an action to switch the active player in defense (otherwise, the player with the ball must be active).
However, we observed that policies tended to exploit this action to return control to built-in AI behaviors for non-active players, and we decided to remove it from the action set.
We do \emph{not} implement randomized sticky actions.
Instead, once executed, moving and sprinting actions are sticky and continue until an explicit stop action is performed (Stop-Moving and Stop-Sprint respectively).
\paragraph{Rewards.}
The \emph{Football Engine}\xspace includes two reward functions that can be used out-of-the-box: \textsc{Scoring} and \textsc{Checkpoint}.
It also allows researchers to add custom reward functions using wrappers which can be used to investigate reward shaping approaches.
\textsc{Scoring} corresponds to the natural reward where each team obtains a $+1$ reward when scoring a goal, and a $-1$ reward when conceding one to the opposing team.
The \textsc{Scoring} reward can be hard to observe during the initial stages of training, as it may require a long sequence of consecutive events: overcoming the defense of a potentially strong opponent, and scoring against a keeper.
\textsc{Checkpoint} is a (shaped) reward that specifically addresses the sparsity of \textsc{Scoring} by encoding the domain knowledge that scoring is aided by advancing across the pitch:
It augments the \textsc{Scoring} reward with an additional auxiliary reward contribution for moving the ball close to the opponent's goal in a controlled fashion.
More specifically, we divide the opponent's field in 10 checkpoint regions according to the Euclidean distance to the opponent goal.
Then, the first time the agent's team possesses the ball in each of the checkpoint regions, the agent obtains an additional reward of $+0.1$.
This extra reward can be up to $+1$, \emph{i.e.}, the same as scoring a single goal.
Any non-collected checkpoint reward is also added when scoring in order to avoid penalizing agents that do not go through all the checkpoints before scoring (\emph{i.e.}, by shooting from outside a checkpoint region).
Finally, checkpoint rewards are only given once per episode.
\paragraph{Accessibility.}
Researchers can directly inspect the game by playing against each other or by dueling their agents.
The game can be controlled by means of both keyboards and gamepads.
Moreover, replays of several rendering qualities can be automatically stored while training, so that it is easy to inspect the policies agents are learning.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{performance_plots_v2/speed.pdf}
\caption{Number of steps per day versus number of concurrent environments for the \emph{Football Engine}\xspace on a hexa-core Intel Xeon W-2135 CPU with 3.70GHz.}
\label{fig:speed}
\end{figure}
\paragraph{Stochasticity.}
In order to investigate the impact of randomness, and to simplify the tasks when desired, the environment can run in either stochastic or deterministic mode.
The former, which is enabled by default, introduces several types of randomness: for instance, the same shot from the top of the box may lead to a different number of outcomes.
In the latter, playing a fixed policy against a fixed opponent always results in the same sequence of actions and states.
\paragraph{API \& Sample Usage.}
The Football Engine is out of the box compatible with the widely used OpenAI Gym API \cite{brockman2016openai}.
Below we show example code that runs a random agent on our environment.
\begin{lstlisting}
import gfootball.env as football_env
env = football_env.create_environment(
env_name='11_vs_11_stochastic',
render=True)
env.reset()
done = False
while not done:
action = env.action_space.sample()
observation, reward, done, info = \
env.step(action)
\end{lstlisting}
\paragraph{Technical Implementation \& Performance.}
The Football Engine is written in highly optimized C++ code, allowing it to be run on commodity machines both with GPU and without GPU-based rendering enabled. This allows it to obtain a performance of approximately $140$ million steps per day on a single hexacore machine (see Figure~\ref{fig:speed}).
\begin{table}[b]
\centering
\caption{Action Set}
\label{tab:actions}
{
\scriptsize
\begin{tabular}{cccc}
\toprule
Top & Bottom & Left & Right \\
Top-Left & Top-Right & Bottom-Left & Bottom-Right \\
Short Pass & High Pass & Long Pass & Shot \\
Do-Nothing & Sliding & Dribble & Stop-Dribble \\
Sprint &
Stop-Moving &
Stop-Sprint &
--- \\
\bottomrule
\end{tabular}
}
\end{table}
\section{Motivation and Other Related Work}
There are a variety of reinforcement learning environments that have accelerated research in recent years.
However, existing environments exhibit a variety of drawbacks that we address with the \emph{Google Research Football Environment}:
\looseness-1\paragraph{Easy to solve.}
With the recent progress in RL, many commonly used scenarios can now be solved to a reasonable degree in just a few hours with well-established algorithms.
For instance, ${\sim} 50$ commonly used Atari games in the \emph{Arcade Learning Environment} \cite{bellemare2013arcade} are routinely solved to super-human level \cite{hessel2018rainbow}.
The same applies to the \emph{DeepMind Lab} \cite{beattie2016deepmind}, a navigation-focused maze environment that provides a number of relatively simple tasks with a first person viewpoint.
\paragraph{Computationally expensive.}
On the other hand, training agents in recent video-game simulators often requires substantial computational resources that may not be available to a large fraction of researchers due to combining hard games, long episodes, and high-dimensional inputs (either in the form of pixels, or hand-crafted representations).
For example, the \emph{StarCraft II Learning Environment} \cite{vinyals2017starcraft} provides an API to \emph{Starcraft II}, a well-known real-time strategy video game, as well as to a few mini-games which are centered around specific tasks in the game.
\paragraph{Lack of stochasticity.}
\looseness=-2 The real-world is not deterministic which motivates the need to develop algorithms that can cope with and learn from stochastic environments.
Robots, self-driving cars, or data-centers require robust policies that account for uncertain dynamics.
Yet, some of the most popular simulated environments -- like the Arcade Learning Environment -- are deterministic.
While techniques have been developed to add artificial randomness to the environment (like skipping a random number of initial frames or using sticky actions), this randomness may still be too structured and easy to predict and incorporate during training \cite{machado2018revisiting,hausknecht2015impact}.
It remains an open question whether modern RL approaches such as self-imitation generalize from the deterministic setting to stochastic environments \cite{guo2018generative}.
\paragraph{Lack of open-source license.}
Some advanced physics simulators offer licenses that may be subjected to restrictive use terms \cite{todorov2012mujoco}.
Also, some environments such as StarCraft require access to a closed-source binary.
In contrast, open-source licenses enable researchers to inspect the underlying game code and to modify environments if required to test new research ideas.
\paragraph{Known model of the environment.}
Reinforcement learning algorithms have been successfully applied to board games such as Backgammon \cite{tesauro1995temporal}, Chess \cite{hsu2004behind}, or Go \cite{silver2016mastering}.
Yet, current state-of-the-art algorithms often exploit that the rules of these games (\emph{i.e.}, the model of the environment) are specific, known and can be encoded into the approach.
As such, this may make it hard to investigate learning algorithms that should work in environments that can only be explored through interactions.
\paragraph{Single-player.}
In many available environments such as Atari, one only controls a single agent.
However, some modern real-world applications involve a number of agents under either centralized or distributed control. The different agents can either collaborate or compete, creating additional challenges.
A well-studied special case is an agent competing against another agent in a zero sum game.
In this setting, the opponent can adapt its own strategy, and the agent has to be robust against a variety of opponents.
Cooperative multi-agent learning also offers many opportunities and challenges, such as communication between agents, agent behavior specialization, or robustness to the failure of some of the agents.
Multiplayer environments with collaborative or competing agents can help foster research around those challenges.
\paragraph{Other football environments.}
There are other available football simulators, such as the \emph{RoboCup Soccer Simulator} \cite{kitano1995robocup,kitano1997robocup}, and the \emph{DeepMind MuJoCo Multi-Agent Soccer Environment} \cite{liu2019emergent}.
In contrast to these environments, the \emph{Google Research Football Environment} focuses on high-level actions instead of low-level control of a physics simulation of robots (such as in the RoboCup Simulation 3D League).
Furthermore, it provides many useful settings for reinforcement learning, e.g. the single-agent and multi-agent settings as well as single-player and multiplayer player modes.
\emph{Google Research Football} also provides ways to adjust difficulty, both via a strength-adjustable opponent and via diverse and customizable scenarios in Football Academy, and provides several specific features for reinforcement learning research, e.g., OpenAI gym compatibility, different rewards, different representations, and the option to turn on and off stochasticity.
\paragraph{Other related work.}
Designing rich learning scenarios is challenging, and the resulting environments often provide a useful playground for research questions centered around a specific set of reinforcement learning topics.
For instance, the \emph{DeepMind Control Suite} \cite{tassa2018deepmind} focuses on continuous control,
the \emph{AI Safety Gridworlds} \cite{leike2017ai} on learning safely, whereas the \emph{Hanabi Learning Environment} \cite{bard2019hanabi} proposes a multi-agent setup.
As a consequence, each of these environments is better suited for testing algorithmic ideas involving a limited but well-defined set of research areas.
\section{Introduction}
The goal of reinforcement learning (RL) is to train smart agents that can interact with their environment and solve complex tasks \cite{sutton2018reinforcement}. Real-world applications include robotics \cite{haarnoja2018soft}, self-driving cars \cite{bansal2018chauffeurnet}, and control problems such as increasing the power efficiency of data centers \cite{lazic2018data}.
Yet, the rapid progress in this field has been fueled by making agents play games such as the iconic Atari console games \cite{bellemare2013arcade,mnih2013playing}, the ancient game of Go \cite{silver2016mastering}, or professionally played video games like Dota 2 \cite{openai_dota} or Starcraft II \cite{vinyals2017starcraft}.
The reason for this is simple: games provide challenging environments where new algorithms and ideas can be quickly tested in a safe and reproducible manner.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth,trim={2px 2px 2px 35px},clip]{figures/picture_full_game_score.png}
\caption{The \emph{Google Research Football Environment} (\texttt{github.com/google-research/football}) provides a novel reinforcement learning environment where agents are trained to play football in an advanced, physics-based 3D simulation.}
\label{fig:main}
\end{figure}
While a variety of reinforcement learning environments exist, they often come with a few drawbacks for research, which we discuss in detail in the next section.
For example, they may either be too easy to solve for state-of-the-art algorithms or require access to large amounts of computational resources.
At the same time, they may either be (near-)deterministic or there may even be a known model of the environment (such as in Go or Chess).
Similarly, many learning environments are inherently single player by only modeling the interaction of an agent with a fixed environment or they focus on a single aspect of reinforcement learning such as continuous control or safety.
Finally, learning environments may have restrictive licenses or depend on closed source binaries.
This highlights the need for a RL environment that is not only challenging from a learning standpoint and customizable in terms of difficulty but also accessible for research both in terms of licensing and in terms of required computational resources.
Moreover, such an environment should ideally provide the tools to a variety of current reinforcement learning research topics such as the impact of stochasticity, self-play, multi-agent setups and model-based reinforcement learning, while also requiring smart decisions, tactics, and strategies at multiple levels of abstraction.
\begin{figure*}[t]
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\columnwidth,trim={2px 2px 2px 35px},clip]{figures/picture_full_game_kickoff.png}
\caption{Kickoff}
\end{subfigure}\hfill
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\columnwidth,trim={2px 2px 2px 35px},clip]{figures/picture_full_game_yellow_card.png}
\caption{Yellow card}
\end{subfigure}\hfill
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=\columnwidth,trim={2px 2px 2px 35px},clip]{figures/picture_full_game_corner.png}
\caption{Corner kick}
\end{subfigure}
\caption{The \emph{Football Engine}\xspace is an advanced football simulator that supports all the major football rules such as (a) kickoffs, (b) goals, fouls, cards, (c) corner kicks, penalty kicks, and offside.}
\label{fig:game_features}
\end{figure*}
\paragraph{Contributions}
In this paper, we propose the \emph{Google Research Football Environment}, a novel open-source reinforcement learning environment where agents learn to play one of the world's most popular sports: football (a.k.a.\ soccer).
Modeled after popular football video games, the Football Environment provides a physics-based 3D football simulation where agents have to control their players, learn how to pass in between them and how to overcome their opponent's defense in order to score goals.
This provides a challenging RL problem as football requires a natural balance between short-term control, learned concepts such as passing, and high level strategy.
As our key contributions, we
\begin{itemize}
\item provide the \emph{Football Engine}\xspace, a highly-optimized game engine that simulates the game of football,
\item propose the \emph{Football Benchmarks}\xspace, a versatile set of benchmark tasks of varying difficulties that can be used to compare different algorithms,
\item propose the \emph{Football Academy}\xspace, a set of progressively harder and diverse reinforcement learning scenarios,
\item evaluate state-of-the-art algorithms on both the \emph{Football Benchmarks}\xspace and the \emph{Football Academy}\xspace, providing an extensive set of reference results for future comparison,
\item provide a simple API to completely customize and define new football reinforcement learning scenarios, and
\item showcase several promising research directions in this environment, \emph{e.g.} the multi-player and multi-agent settings.
\end{itemize}
\section{Hyperparameters \& Architectures}
\label{app:hparams}
For our experiments, we used three algorithms (IMPALA, PPO, Ape-X DQN) that are described below. The model architecture we use is inspired by the Large architecture from~\cite{espeholt2018impala} and is depicted in Figure~\ref{fig:impala_architecture}.
Based on the ``Representation Experiments'', we selected the stacked Super Mini Map (Section~\ref{sec:smm}) as the default representation used in all \emph{Football Benchmarks}\xspace and \emph{Football Academy}\xspace experiments.
In addition, we have three other representations.
For each of the six considered settings (three \emph{Football Benchmarks}\xspace and two reward functions), we run five random seeds for $500$ million steps each. For \emph{Football Academy}\xspace, we run five random seeds in all $11$ scenarios for $50$ million steps.
\paragraph{Hyperparameter search}
For each of IMPALA, PPO and Ape-X DQN, we performed two hyperparameter searches: one for \textsc{Scoring} reward and one for \textsc{Checkpoint} reward. For the search, we trained on easy difficulty. Each of 100 parameter sets was repeated with 3 random seeds. For each algorithm and reward type, the best parameter set was decided based on average performance -- for IMPALA and Ape-X DQN after 500M, for PPO after 50M. After the search, each of the best parameter sets was used to run experiments with 5 different random seeds on all scenarios. Ranges that we used for the procedure can be found in Table~\ref{tab:impala_hparams_values} for IMPALA, Table~\ref{tab:ppo_hparams_values} for PPO and Table~\ref{tab:dqn_hparams_values} for DQN.
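The search procedure can be summarized by the following minimal sketch; \texttt{train} and \texttt{evaluate} are hypothetical placeholders standing in for the actual distributed training and evaluation pipeline, and the ranges shown are the IMPALA-style ones from Table~\ref{tab:impala_hparams_values}.
\begin{verbatim}
# Random search over log-uniform ranges, 100 parameter sets, 3 seeds each.
# train() and evaluate() are hypothetical placeholders.
import math, random

def sample_log_uniform(low, high):
    return math.exp(random.uniform(math.log(low), math.log(high)))

def train(params, seed):
    return {'params': params, 'seed': seed}   # placeholder for a training run

def evaluate(agent):
    return random.random()                    # placeholder for the average score

best_params, best_score = None, float('-inf')
for _ in range(100):
    params = {
        'learning_rate': sample_log_uniform(1e-5, 1e-3),
        'entropy_coeff': sample_log_uniform(1e-6, 1e-3),
        'discount':      random.choice([.99, .993, .997, .999]),
        'unroll_length': random.choice([16, 32, 64]),
    }
    mean_score = sum(evaluate(train(params, seed=s)) for s in range(3)) / 3
    if mean_score > best_score:
        best_params, best_score = params, mean_score
\end{verbatim}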
\paragraph{IMPALA}
Importance Weighted Actor-Learner Architecture \cite{espeholt2018impala} is a highly scalable algorithm that decouples acting from learning. Individual workers communicate trajectories of experience to the central learner, instead of sending gradients with respect to the current policy. In order to deal with off-policy data, IMPALA introduces an actor-critic update for the learner called V-trace. Hyper-parameters for IMPALA are presented in Table~\ref{tab:impala_hparams_values}.
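For the reader's convenience, we restate the $n$-step V-trace target from~\cite{espeholt2018impala} towards which the learner's value function is regressed:
\[
v_s = V(x_s) + \sum_{t=s}^{s+n-1} \gamma^{t-s} \Big( \prod_{i=s}^{t-1} c_i \Big) \delta_t V,
\qquad
\delta_t V = \rho_t \big( r_t + \gamma V(x_{t+1}) - V(x_t) \big),
\]
where $\rho_t = \min\big(\bar{\rho}, \frac{\pi(a_t \mid x_t)}{\mu(a_t \mid x_t)}\big)$ and $c_i = \min\big(\bar{c}, \frac{\pi(a_i \mid x_i)}{\mu(a_i \mid x_i)}\big)$ are truncated importance weights between the learner policy $\pi$ and the behaviour policy $\mu$ that generated the trajectory.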
\paragraph{PPO}
Proximal Policy Optimization \cite{schulman2017proximal} is an online policy gradient algorithm which optimizes the clipped surrogate objective.
In our experiments we use the implementation from the OpenAI Baselines~\cite{baselines}, and run it over 16 parallel workers.
Hyper-parameters for PPO are presented in Table~\ref{tab:ppo_hparams_values}.
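For completeness, the clipped surrogate objective maximized by PPO is
\[
L^{\mathrm{CLIP}}(\theta) = \hat{\mathbb{E}}_t \Big[ \min\big( r_t(\theta)\,\hat{A}_t,\; \mathrm{clip}\big(r_t(\theta), 1-\epsilon, 1+\epsilon\big)\,\hat{A}_t \big) \Big],
\qquad
r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)},
\]
where $\hat{A}_t$ is the (GAE-based) advantage estimate and $\epsilon$ is the clipping range reported in Table~\ref{tab:ppo_hparams_values}.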
\paragraph{Ape-X DQN}
Q-learning algorithms are popular among reinforcement learning researchers.
Accordingly, we include a member of the DQN family in our comparison. In particular, we chose Ape-X DQN~\cite{Horgan2018DistributedPE}, a highly scalable version of DQN.
Like IMPALA, Ape-X DQN decouples acting from learning but, contrary to IMPALA, it uses a distributed replay buffer and a variant of Q-learning consisting of dueling network architectures~\cite{pmlr-v48-wangf16} and double Q-learning~\cite{van2016deep}.
Several hyper-parameters were aligned with IMPALA. These include the unroll length and $n$-step return, the number of actors, and the discount factor $\gamma$. For details, please refer to Table~\ref{tab:dqn_hparams_values}.
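For reference, the $n$-step double Q-learning target used in this setup can be written as
\[
y_t = \sum_{k=0}^{n-1} \gamma^{k} r_{t+k+1} + \gamma^{n}\, q\big(s_{t+n}, \arg\max_{a} q(s_{t+n}, a; \theta); \theta^{-}\big),
\]
where $\theta$ denotes the online network parameters and $\theta^{-}$ those of the target network, which is updated every 2500 steps (Table~\ref{tab:dqn_hparams_values}).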
\begin{figure}
\includegraphics[width=\columnwidth]{figures/gfootball_architecture2}
\caption{Architecture used for IMPALA and PPO experiments. For Ape-X DQN, a similar network is used but the outputs are Q-values.}
\label{fig:impala_architecture}
\end{figure}
\section{Numerical Results for the \emph{Football Benchmarks}\xspace}
In this section, we provide for comparison the means and standard deviations of $5$ runs for all algorithms on the \emph{Football Benchmarks}\xspace. Table~\ref{tab:benchmark_scoring} contains the results for the runs with \textsc{Scoring} reward, while Table~\ref{tab:benchmark_checkpoint} contains the results for the runs with \textsc{Checkpoint} reward.
Those numbers were presented in the main paper in Figure~\ref{fig:challenges_both_rewards}.
\begin{table}[h]
\caption{Benchmark results for \textsc{Scoring} reward.}
\resizebox{\columnwidth}{!}{
\begin{tabular}{lrrr}
\toprule
\textsc{Model} & \textsc{Easy} & \textsc{Medium} & \textsc{Hard} \\
\midrule
PPO @20M & $0.05 \pm 0.13$ & $-0.74 \pm 0.08$ & $-1.32 \pm 0.12$ \\
PPO @50M & $0.09 \pm 0.13$ & $-0.84 \pm 0.10$ & $-1.39 \pm 0.22$ \\
IMPALA @20M & $-0.01 \pm 0.10$ & $-0.89 \pm 0.34$ & $-1.38 \pm 0.22$ \\
IMPALA @500M & $5.14 \pm 2.88$ & $-0.36 \pm 0.11$ & $-0.47 \pm 0.48$ \\
DQN @20M & $-1.17 \pm 0.31$ & $-1.63 \pm 0.11$ & $-2.12 \pm 0.33$ \\
DQN @500M & $8.16 \pm 1.05$ & $2.01 \pm 0.27$ & $0.27 \pm 0.56$ \\
\bottomrule
\end{tabular}
}
\label{tab:benchmark_scoring}
\end{table}
\begin{table}[h]
\caption{Benchmark results for \textsc{Checkpoint} reward.}
\resizebox{\columnwidth}{!}{
\begin{tabular}{lrrr}
\toprule
\textsc{Model} & \textsc{Easy} & \textsc{Medium} & \textsc{Hard} \\
\midrule
PPO @20M & $6.23 \pm 1.25$ & $0.38 \pm 0.49$ & $-1.25 \pm 0.09$ \\
PPO @50M & $8.71 \pm 0.72$ & $1.11 \pm 0.45$ & $-0.75 \pm 0.13$ \\
IMPALA @20M & $-1.00 \pm 0.34$ & $-1.86 \pm 0.13$ & $-2.24 \pm 0.08$ \\
IMPALA @500M & $12.83 \pm 1.30$ & $5.54 \pm 0.90$ & $3.15 \pm 0.37$ \\
DQN @20M & $-1.15 \pm 0.37$ & $-2.04 \pm 0.45$ & $-2.22 \pm 0.19$ \\
DQN @500M & $7.06 \pm 0.85$ & $2.18 \pm 0.25$ & $1.20 \pm 0.40$ \\
\bottomrule
\end{tabular}
}
\label{tab:benchmark_checkpoint}
\end{table}
\clearpage
\begin{table*}
\caption{IMPALA: ranges used during the hyper-parameter search and the final values used for experiments with scoring and checkpoint rewards.}
\begin{center}
\begin{tabular}{lrrr}
\toprule
\textbf{Parameter} &\textbf{Range} & \textbf{Best - Scoring} & \textbf{Best - Checkpoint} \\ \midrule
Action Repetitions & 1 & 1 & 1 \\
Batch size & 128 & 128 & 128 \\
Discount Factor ($\gamma$) & $\{.99, .993, .997, .999\}$ & .993 & .993 \\
Entropy Coefficient & Log-uniform $(1\mathrm{e}{-6}$, $1\mathrm{e}{-3})$ & 0.00000521 & 0.00087453 \\
Learning Rate & Log-uniform $(1\mathrm{e}{-5}$, $1\mathrm{e}{-3})$ & 0.00013730 & 0.00019896 \\
Number of Actors & 500 & 500 & 500 \\
Optimizer & Adam & Adam & Adam \\
Unroll Length/$n$-step & $\{16, 32, 64\}$ & 32 & 32 \\
Value Function Coefficient &.5 & .5 & .5 \\
\bottomrule
\end{tabular}
\label{tab:impala_hparams_values}
\end{center}
\end{table*}
\begin{table*}[h]
\caption{PPO: ranges used during the hyper-parameter search and the final values used for experiments with scoring and checkpoint rewards.}
\begin{center}
\begin{tabular}{lrrr}
\toprule
\textbf{Parameter} & \textbf{Range} & \textbf{Best - Scoring} & \textbf{Best - Checkpoint} \\ \midrule
Action Repetitions & 1 & 1 & 1 \\
Clipping Range & Log-uniform $(.01, 1)$ & .115 & .08 \\
Discount Factor ($\gamma$) & $\{.99, .993, .997, .999\}$ & .997 & .993 \\
Entropy Coefficient & Log-uniform $(.001, .1)$ & .00155 & .003 \\
GAE ($\lambda$) & .95 & .95 & .95 \\
Gradient Norm Clipping & Log-uniform $(.2, 2)$ & .76 & .64 \\
Learning Rate & Log-uniform $(.000025, .0025)$ & .00011879 & .000343 \\
Number of Actors & 16 & 16 & 16 \\
Optimizer & Adam & Adam & Adam \\
Training Epochs per Update & $\{2, 4, 8\}$ & 2 & 2 \\
Training Mini-batches per Update & $\{2, 4, 8\}$ & 4 & 8 \\
Unroll Length/$n$-step & $\{16, 32, 64, 128, 256, 512\}$ & 512 & 512 \\
Value Function Coefficient & .5 & .5 & .5 \\
\bottomrule
\end{tabular}
\label{tab:ppo_hparams_values}
\end{center}
\end{table*}
\begin{table*}
\caption{DQN: ranges used during the hyper-parameter search and the final values used for experiments with scoring and checkpoint rewards.}
\begin{center}
\begin{tabular}{lrrr}
\toprule
\textbf{Parameter} & \textbf{Range} & \textbf{Best - Scoring} & \textbf{Best - Checkpoint} \\ \midrule
Action Repetitions & 1 & 1 & 1 \\
Batch Size & 512 & 512 & 512 \\
Discount Factor ($\gamma$) & $\{.99, .993, .997, .999\}$ & .999 & .999 \\
Evaluation $\epsilon$ & .01 & .01 & .01 \\
Importance Sampling Exponent & $\{0., .4, .5, .6, .8, 1.\}$ & 1. & 1. \\
Learning Rate & Log-uniform $(1\mathrm{e}{-7}$, $1\mathrm{e}{-3})$ & .00001475 & .0000115 \\
Number of Actors & 150 & 150 & 150 \\
Optimizer & Adam & Adam & Adam \\
Replay Priority Exponent & $\{0., .4, .5, .6, .7, .8\}$ & .0 & .8 \\
Target Network Update Period & 2500 & 2500 & 2500 \\
Unroll Length/$n$-step & $\{16, 32, 64, 128, 256, 512\}$ & 16 & 16 \\
\bottomrule
\end{tabular}
\label{tab:dqn_hparams_values}
\end{center}
\end{table*}
\begin{figure*}[h]
\centering
\includegraphics[width=\textwidth]{performance_plots_v2/main_plot_academy_checkpoints}
\caption{Average Goal Difference on \emph{Football Academy}\xspace for IMPALA with \textsc{Checkpoint} reward.}
\label{fig:academy_impala_checkpoint}
\end{figure*}
\begin{figure*}[h]
\centering
\includegraphics[width=\textwidth]{performance_plots/ppo_plot_academy_checkpoints}
\caption{Average Goal Difference on \emph{Football Academy}\xspace for PPO with \textsc{Checkpoint} reward. Scores are for v$1.x$ (all other results in this paper are for v$2.x$, but for this plot the v$2.x$ experiment did not finish; please check arXiv for the full v$2.x$ results).}
\label{fig:academy_ppo_checkpoint}
\end{figure*}
\begin{table*}[h!]
\caption{Description of the default \emph{Football Academy}\xspace scenarios. If not specified otherwise, all scenarios end after 400 frames or if the ball is lost, if a team scores, or if the game is stopped (\emph{e.g.} if the ball leaves the pitch or if there is a free kick awarded). The difficulty level is 0.6 (\emph{i.e.}, medium).}
\renewcommand*{\arraystretch}{1.5}
\begin{center}
\begin{tabular}{p{5cm}p{9cm}}
\toprule
\textbf{Name} & \textbf{Description} \\ \midrule
\textit{Empty Goal Close} & Our player starts inside the box with the ball, and needs to score against an empty goal. \\
\textit{Empty Goal} & Our player starts in the middle of the field with the ball, and needs to score against an empty goal. \\
\textit{Run to Score} & Our player starts in the middle of the field with the ball, and needs to score against an empty goal. Five opponent players chase ours from behind. \\
\textit{Run to Score with Keeper} & Our player starts in the middle of the field with the ball, and needs to score against a keeper. Five opponent players chase ours from behind. \\
\textit{Pass and Shoot with Keeper} & Two of our players try to score from the edge of the box, one is on the side with the ball, and next to a defender. The other is at the center, unmarked, and facing the opponent keeper. \\
\textit{Run, Pass and Shoot with Keeper} & Two of our players try to score from the edge of the box, one is on the side with the ball, and unmarked. The other is at the center, next to a defender, and facing the opponent keeper. \\
\textit{3 versus 1 with Keeper} & Three of our players try to score from the edge of the box, one on each side, and the other at the center. Initially, the player at the center has the ball, and is facing the defender. There is an opponent keeper. \\
\textit{Corner} & Standard corner-kick situation, except that the corner taker can run with the ball from the corner. The episode does not end if possession is lost.\\
\textit{Easy Counter-Attack} & 4 versus 1 counter-attack with keeper; all the remaining players of both teams run back towards the ball. \\
\textit{Hard Counter-Attack} & 4 versus 2 counter-attack with keeper; all the remaining players of both teams run back towards the ball. \\
\textit{11 versus 11 with Lazy Opponents} & Full 11 versus 11 game, where the opponents cannot move but they can only intercept the ball if it is close enough to them. Our center-back defender has the ball at first. The maximum duration of the episode is 3000 frames instead of 400 frames.\\
\bottomrule
\end{tabular}
\label{tab:scenario_description}
\end{center}
\end{table*}
\section*{Abstract}
{\small
After a long qualifying process packed with surprises (Italy missing out as the reigning European champions) and last-minute drama (both Egypt and Peru missed out on penalties), the FIFA World Cup 2022 kicked off on the 20th of November in Qatar. With 32 countries and over 800 players representing nearly 300 clubs globally, it added up to more than 12 billion EUR in total current estimated player market value. In this short piece, we explore what the small and interconnected world of football stars looks like and even make a few attempts at relating success in soccer to the underlying social networks.
}
\vspace{0.5cm}
{\small {\bf Keywords}: network science, social network analysis, soccer, data science}
\vspace{1.0cm}
{\it \hspace{-1cm} Published in Nightingale, Journal of the Data Visualization Society, December 23, 2022~\cite{nightingale}. }
\vspace{1.0cm}
\section{Data}
We are data scientists with a seasoned football expert on board, so we went for one of the most obvious choices in the field – www.transfermarkt.com. We first wrote a few lines of Python code to scrape the list of participating teams~\cite{teams}, the list of each team's players~\cite{players}, and the detailed club-level transfer histories of these players, arriving at the impressive stats of our intro: the complete transfer history of more than 800 players, comprising some 6,600 transfers and dating back to 1995 with the first events.
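The scraping itself is conceptually simple; a heavily simplified sketch is shown below. The URL passed in and the way the transfer table is located are assumptions for illustration only (Transfermarkt's markup changes over time), and in practice polite request headers, caching, and rate limiting are also needed.
\begin{verbatim}
# Simplified sketch of collecting one player's transfer table.
# The table lookup below is an illustrative assumption only.
import requests
from bs4 import BeautifulSoup

def fetch_transfer_rows(player_url):
    headers = {'User-Agent': 'Mozilla/5.0'}   # blank agents tend to be rejected
    html = requests.get(player_url, headers=headers).text
    soup = BeautifulSoup(html, 'html.parser')
    rows = []
    for tr in soup.find_all('tr'):            # in practice: the transfer-history table
        cells = [td.get_text(strip=True) for td in tr.find_all('td')]
        if cells:
            rows.append(cells)                # season, date, old club, new club, value, fee
    return rows
\end{verbatim}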
\section{Club network}
The majority of players came from the top five leagues (England, Spain, Italy, Germany, and France) and represented household teams such as Barcelona (with 17 players), Bayern Munich (16), or Manchester City (16). While that was no surprise, one of the many wonders of a World Cup is that players from all around the globe can show their talents. Though not as famous as the 'big clubs', Qatari Al Sadd gave 15 players, more than the likes of Real Madrid or Paris Saint-Germain! There are, however, great imbalances when throwing these players' market values and transfer fees into the mix. To outline these, we decided to visualize the typical 'migration' path football players follow – what are the most likely career steps they make one after the other?
A good way to capture this, following the prestige analysis of art institutions~\cite{fraiberger2018quantifying}, is to introduce network science~\cite{netsci} and build a network of football clubs. In this network, every node corresponds to a club, while the network connections encode various relationships between them. These relationships may encode the interplay of different properties of clubs, where looking at the exchange of players (and cash) seems a natural choice. In other words, the directed transfers of players between clubs tie the clubs into a hidden network. Because the network is directed, it also encodes information about the typical pathways of players via the 'from' and 'to' directions, which eventually capture the different roles of clubs as attractors and sinks.
\begin{table}[!hbt]
\centering
\includegraphics[scale=0.5]{table1.png}
\caption{The datafied transfer history of Neymar.}
\label{tab:tab1}
\end{table}
To do this in practice, our unit of measure is the individual transfer history of each player, shown in Table \ref{tab:tab1} for the famous Brazilian player known simply as Neymar. This table visualizes his career trajectory in a datafied format, attaching dates and market values to each occasion he changed teams. His career path looks clean from a data perspective, although football fans will remember that it was anything but – his fee of EUR 222M from Barcelona to PSG still holds the transfer record to this day. These career steps, quantified by the transfers, encode upgrades in the case of Neymar. In less fortunate situations, these prices can go down signaling a downgrade in a player's career.
\begin{figure}[!hbt]
\centering
\includegraphics[width=0.75\textwidth]{Figure1.png}
\caption{The network of the top football clubs based on the total amount of money spent and received on player transfers. Node sizes correspond to these values, while node coloring shows the dominant color of each club's home country flag.}
\label{fig:fig1}
\end{figure}
\clearpage
Following this logic in our analysis, we assumed that two clubs, A and B, were linked (the old and new teams of a player), if a player was transferred between them, and the strength of this link corresponded to the total amount of cash associated with that transaction. The more transactions the two clubs had, the stronger their direct connection was (which can go both ways), with a weight equal to the total sum of transfers (in each direction). In the case of Neymar, this definition resulted in a direct network link pointing from Barcelona to Paris SG with a total value of EUR 222M paid for the left winger.
Next, we processed the more than six thousand transfers of the 800+ players and arrived at the network of teams shown in Figure \ref{fig:fig1}. To design the final network, we went for the core of big money transactions and only kept network links that represented transfer deals worth more than EUR 2.5M in total. This network shows about 80 clubs and 160 migration channels of transfers. To accurately represent the two aspects of transfers (spending and earning) we created two versions of the same network. The first version measures node sizes as the total money invested in new players (dubbed as spenders), while the second version scales nodes as the total money acquired by selling players (dubbed as mentors).
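A sketch of how this network can be assembled with \texttt{networkx} is shown below; the transfer records are an illustrative stand-in for the scraped data, and edges point from the selling club to the buying club as described above.
\begin{verbatim}
# Build the directed club-to-club transfer network and the two node
# weightings of Figure 1. The sample records are illustrative only.
import networkx as nx

transfers = [('Barcelona', 'Paris SG', 222_000_000),
             ('Monaco',    'Paris SG', 180_000_000)]

G = nx.DiGraph()
for old_club, new_club, fee in transfers:
    if G.has_edge(old_club, new_club):
        G[old_club][new_club]['weight'] += fee
    else:
        G.add_edge(old_club, new_club, weight=fee)

# Keep only the big-money core (total deals worth more than EUR 2.5M).
G.remove_edges_from([(u, v) for u, v, w in G.edges(data='weight')
                     if w < 2_500_000])

spent    = dict(G.in_degree(weight='weight'))   # money paid for arriving players
received = dict(G.out_degree(weight='weight'))  # money earned from departures
\end{verbatim}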
\paragraph{Spenders.}
The first network shows us which clubs spent the most on players competing in the World Cup, with the node sizes corresponding to the total money spent. You can see the usual suspects: PSG, the two clubs from Manchester, United, and City, and the Spanish giants, Barcelona, and Real Madrid. Following closely behind are Chelsea, Juventus, and Liverpool. It's interesting to see Arsenal, who – under Arteta's management – can finally spend on players, and Bayern Munich, who spend a lot of money but also make sure to snatch up free agents as much as possible.
Explore these relationships and the network in more detail by looking at Real Madrid! Los Blancos, as they're called, have multiple strong connections. Their relationship with Tottenham is entirely down to two players who played an integral part in Real Madrid's incredible 3-year winning spell in the Champions League between 2016 and 2018: Croatian Luka Modric cost 35M, and Welsh Gareth Bale cost an at-the-time record-breaking 101M. While Real Madrid paid 94M for Cristiano Ronaldo in 2009 to Man Utd, in recent years there was a turn in money flow, and United paid a combined 186M for three players: Ángel Di María, Raphael Varane, and Casemiro. They also managed to sell Cristiano Ronaldo with a profit to Juventus for 117M.
One can see other strong connections as well, such as Paris SG paying a fortune to Barcelona for Neymar and Monaco for Kylian Mbappé. There are also a few typical paths players take – Borussia Dortmund to Bayern Munich, Atlético Madrid to Barcelona, or vice versa. It's also interesting to see how many different edges connect to these giants. Man City has been doing business worth over EUR 1M with 27 different clubs.
\paragraph{Mentors.}
The second network shows which clubs grow talent instead of buying them and have received a substantial amount of money in return. Node sizes represent the amount of transfer fees received. This paints a very different picture from our first network except for one huge similarity: Real Madrid. In the past, they were considered the biggest spenders. They have since adopted a more business-focused strategy and managed to sell players for high fees as mentioned above.
A striking difference, however, is that while the top spenders were all part of the top five leagues, the largest talent pools came from outside this cohort, except for Monaco. Benfica, Sporting, and FC Porto from Portugal, and Ajax from the Netherlands are all famous for their young home-grown talents, and used as a stepping stone for players from other continents. Ajax has sold players who competed in this World Cup for over EUR 560M. Their highest received transfer fees include 85.5M for Matthijs de Ligt from Juventus and 86M for Frenkie de Jong from Barcelona. Ajax signed de Jong for a total of EUR 1 from Willem II in 2015 when he was 18, and de Ligt grew up in Ajax's famous academy. Not to mention that they recently sold Brazilian Antony to Manchester United for a record fee of 95M. They paid 15.75M for him just 2 years ago – that's almost 80M in profit. Insane!
Benfica earned close to 500M, most recently selling Uruguayan Darwin Nunez for 80M to Liverpool. The record fee they received is a staggering 127M for Portuguese Joao Félix from Atlético Madrid, who grew up at Benfica. Monaco earned 440M from selling players such as Kylian Mbappé (180M) and Aurélien Tchouameni (80M), Portuguese Bernardo Silva (50M), Brazilian Fabinho and Belgian Youri Tielemans (both for 45M). These clubs have become incredible talent pools for the bigger clubs, therefore really appealing to young players. It's interesting to see how many edges the nodes for these clubs have, further proving that these teams function as a means for reaching that next level.
\section{Player network}
After looking at the club-to-club relationships, zoom in on the network of players binding these top clubs together. Here, we built on the players' transfer histories again and reconstructed their career timelines. Then we compared these timelines between each pair of World Cup players, noted if they ever played for the same team, and if so, how many years of overlap they had (if any).
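The construction can be sketched as follows; \texttt{careers} is an illustrative stand-in for the club stints reconstructed from the transfer histories.
\begin{verbatim}
# Connect two players if their stints at the same club overlap in time,
# then measure how tightly knit the resulting network is.
import itertools
import networkx as nx

careers = {
    'Neymar': [('Barcelona', 2013, 2017), ('Paris SG', 2017, 2023)],
    'Messi':  [('Barcelona', 2004, 2021), ('Paris SG', 2021, 2023)],
    'Modric': [('Real Madrid', 2012, 2023)],
}   # illustrative sample

def overlap_years(stints_a, stints_b):
    total = 0
    for club_a, s_a, e_a in stints_a:
        for club_b, s_b, e_b in stints_b:
            if club_a == club_b:
                total += max(0, min(e_a, e_b) - max(s_a, s_b))
    return total

P = nx.Graph()
P.add_nodes_from(careers)
for p1, p2 in itertools.combinations(careers, 2):
    years = overlap_years(careers[p1], careers[p2])
    if years > 0:
        P.add_edge(p1, p2, weight=years)

giant = P.subgraph(max(nx.connected_components(P), key=len))
print(nx.average_shortest_path_length(giant))
\end{verbatim}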
To our biggest surprise, we got a rather intertwined network of 830 players connected by about 6,400 former and current teammate relationships, as shown in Figure \ref{fig:fig2}. Additionally, the so-called average path length turned out to be 3 – which means if we pick two players at random, they most likely both have teammates who played together at some point. Node sizes were determined by a player's current market value, and clusters were colored by the league's nation where these players play. It didn't come as a surprise that current teammates would be closer to each other in our network. You can see some interesting clusters here, with Real Madrid, Barcelona, PSG, and Bayern Munich dominating the lower part of the network and making up its center of gravity.
Why is that? The most valuable player of the World Cup was Kylian Mbappé, with a market value of 160M, surrounded by his PSG teammates like Brazilians Marquinhos and Neymar and Argentinian Lionel Messi. Messi played in Barcelona until 2021, with both Neymar and Ousmane Dembélé connecting the two clusters strongly. Kingsley Coman joined Bayern Munich in 2017, but he played for PSG up until 2014, where they were teammates with Marquinhos, thus connecting the two clusters.
You can discover more interesting patterns in this graph, such as how the majority of the most valuable players have played together directly or indirectly. You can also see Englishmen Trent Alexander-Arnold (Liverpool) or Declan Rice (West Ham United) further away from the others. Both of those players only ever played for their childhood clubs. But the tight interconnectedness of this network is also evident with how close Alexander-Arnold actually is to Kylian Mbappé. During the 2017–2018 season, Mbappé played at Monaco with Fabinho behind him in midfield, who signed for Liverpool at the end of the season, making him and Alexander-Arnold teammates.
\clearpage
\begin{figure}[!hbt]
\centering
\includegraphics[width=0.90\textwidth]{Figure2.png}
\caption{The player-level network showing previous and current teammate relationships. Node size corresponds to the players' current market values, while color encodes their nationality based on their country's flag's primary color. See the interactive version of this network here~\cite{interactive}.}
\label{fig:fig2}
\end{figure}
With the World Cup hosting hundreds of teams' players from various nations, there are obviously some clusters that won't connect to these bigger groups. Many nations have players who have only played in their home league, such as this World Cup's host nation Qatar (maroon cluster in the top left corner). Saudi Arabia (green cluster next to Qatar) beat Argentina, causing one of this year's biggest surprises. Morocco (red cluster in the top right corner) delivered the best-ever performance by an African nation in the history of the World Cups. Both of those nations join Qatar in this category of home-grown talent. These players will only show connections if they play in the same team – in the case of the Moroccan cluster, that team is Wydad Casablanca. The Hungarian first league's only representative at the World Cup, Tunisian Aissa Laidouni from Ferencváros hasn't played with anyone else on a club level who has made it to the World Cup. He became a lone node on our network. That shouldn't be the case for long, considering how well he played in the group stages.
\section{Success and networks}
The potential success of different teams and the outcomes of championships have been majorly interesting for data and statistics people since the era of Moneyball. Ever since, a wide range of efforts came to light about the possibilities to predict the outcomes of sporting events, from asking actual animals like Paul the Octopus to serious academic research, and even companies specializing in this domain.
Here we are not attempting to overcome such elaborate methods and solutions but intend to show the quantitative drivers of success in soccer from a different angle - as network science sees it. For that, we follow some of the earlier work of Barabasi et al.\ and others on the quantification of success and the role of networks in team success~\cite{guimera2005team, barabasi2018formula, janosov2020quantifying}.
\paragraph{Player-level success.} To capture the success of players, one could pick a large number of different KPIs depending on the goal they wish to study. Here, to illustrate the role of networks, we picked their current market value as a proxy of their success. We also note that we do not take sides on how much a player's financial success may proxy their talent or their sense of branding; we simply accept what the market shows.
To assess the players' characteristics, we computed several measures describing them. First, we added measures describing their career: the year of their first transfer, their first public market value, and the number of transfers they had. Then we added several network measures capturing their network positions: Weighted Degree, measuring the total weight of connections a player has; Closeness Centrality, measuring how few hops a player is from the others; Betweenness Centrality, capturing network bridge positions; and Clustering, which tells us what fraction of a player's connections know each other.
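These measures are readily available in \texttt{networkx}; the sketch below computes them on the teammate graph and correlates them with an illustrative market-value mapping (both \texttt{P} and \texttt{market\_value} here are tiny stand-ins for the real data).
\begin{verbatim}
# Node-level network measures correlated with current market value.
# P and market_value are tiny illustrative stand-ins.
import networkx as nx
import pandas as pd

P = nx.Graph([('Neymar', 'Messi', {'weight': 6})])
P.add_node('Modric')
market_value = {'Neymar': 60e6, 'Messi': 35e6, 'Modric': 10e6}

features = pd.DataFrame({
    'weighted_degree': dict(P.degree(weight='weight')),
    'closeness':       nx.closeness_centrality(P),
    'betweenness':     nx.betweenness_centrality(P),
    'clustering':      nx.clustering(P),
})
features['market_value'] = pd.Series(market_value)

print(features.corr()['market_value'].sort_values(ascending=False))
\end{verbatim}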
\begin{figure}[!hbt]
\centering
\includegraphics[width=0.80\textwidth]{Figure3.png}
\caption{The number of players, and the correlations between their market value and the different descriptive features as a function of the number of transfers they had.}
\label{fig:fig3}
\end{figure}
Then, we conducted a series of correlation analyses where we correlated these measures against each player's current market value. Additionally, we differentiated between players based on their seniority, measured by the number of transfers they had. We realize having played for more teams doesn't necessarily mean that a player has had a longer career, younger players can be loaned out to gain experience, but this approach works well with our data set. Figure \ref{fig:fig3} shows the effect of this on the number of players analyzed: less than 1\% of the players have less than 2 transfers, while the line between 7 and 8 transfers splits
the players into two roughly equal parts. Also, slightly less than 5\% of the players have more than 15 transfers which translates into about 30 veterans.
Furthermore, Figure \ref{fig:fig3} also shows the correlations between the players' descriptors against their seniority. We sorted this chart based on the full population of players, meaning that the features in the correlation matrix are put in a descending order based on the correlation matrix when the number of transfers equals zero. The changing patterns in the matrix indicate the different correlation trends as we restrict our analysis to more and more senior players.
The most striking feature of the correlation values, ranging from 0.53 to -0.62, is that no matter the player's seniority, Weighted Degree plays a superior role. One can also notice that the negative correlation values typically occur for the most senior players, and are probably more noise than signal. For younger players, the graph also shows that after Weighted Degree comes their first market value, and then Closeness Centrality. However, at a tenure of about five transfers, the picture changes, and the top three features most correlated with the current market value end up being derived from the network of players. While we are not claiming pinpoint accuracy here, these findings certainly indicate that the role of networks shows significant potential for the expected success of a soccer player.
\paragraph{Country-level success.} Last but not least, we also shoot our shot at understanding some aspects of the final - and certainly not unsurprising - ranking of the World Cup. For this, we used the final ranking of all 32 teams at the 2022 FIFA World Cup~\cite{sportingnews}. Then, we went back to the previously introduced player-level statistics covering the total number of transfers, the first year of transfer, the total market value, and several network measures first attached to each player, and then aggregated to the level of the country's teams. The aggregation, depending on the distribution of the underlying values, happened either by taking the mean or the median values of a country's team members.
Next, we ventured into the world of predicting algorithms. To be more precise, we built a binary XGBoost~\cite{chen2016xgboost} classifier aimed at distinguishing between those teams which made it to the Round of 16 and those that didn't make it past the Group stage. We note that even in this scenario, the number of data points was very low. In future research, this part of the analysis could be improved by adding other World Cups to the data, which may also allow one to do more elaborate predictions, covering the Quarterfinals or even the grand finale.
\begin{figure}[!hbt]
\centering
\includegraphics[width=0.7\textwidth]{Figure4.png}
\caption{Feature importance expressed by relative values when differentiating the teams that dropped out after the first round from the rest by using binary classification.}
\label{fig:fig4}
\end{figure}
We optimized our XGBoost model using 250 estimators, a maximum tree depth of 6, a learning rate varying between 0.1 and 0.005, a grid search algorithm, and 5-fold cross-validation. On the one hand, the overall prediction accuracy resulted in a fairly modest value of 60\%. Still, it is worth taking a look at the feature importance analysis shown in Figure \ref{fig:fig4}. This figure shows that - probably not so surprisingly - the most important driver of a team's position is the total current market value of its players. High paychecks seem to pay off, at least to some extent: the correlation between team rank and the total value is still just somewhere between 0.5 and 0.6. This is then followed, somewhat in a tie, by closeness centrality and first market value. While first market value - for more junior players - is certainly correlated with their current value, a network centrality reaching that level certainly emphasizes the role of the network. While the other network measures each contribute around 10\% in terms of relative importance, we leave it to future research to nail down the exact meaning of these figures.
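A sketch of the corresponding setup is shown below; the feature matrix and labels are illustrative stand-ins for the aggregated team-level data.
\begin{verbatim}
# Binary XGBoost classifier with a grid search and 5-fold CV, mirroring
# the setup described above. X and y are illustrative stand-ins.
import numpy as np
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 6))              # 32 teams, 6 aggregated features
y = np.array([1] * 16 + [0] * 16)         # 1 = reached the Round of 16

search = GridSearchCV(
    XGBClassifier(n_estimators=250, max_depth=6),
    param_grid={'learning_rate': [0.1, 0.05, 0.01, 0.005]},
    cv=5, scoring='accuracy')
search.fit(X, y)

print('cv accuracy:', search.best_score_)
print('relative importances:', search.best_estimator_.feature_importances_)
\end{verbatim}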
Finally, to outline the signal behind the noise caused by the low number of data points, we created Figure \ref{fig:fig5} showing how closeness centrality and market value guide the teams in terms of their final rankings.
\begin{figure}[!hbt]
\centering
\includegraphics[width=0.5\textwidth]{Figure5.png}
\caption{On this scatter plot, each team is represented by a dot colored according to its final rank, while its position is determined by the median closeness centrality and the total current market value of its players.}
\label{fig:fig5}
\end{figure}
\section{Conclusion}
In conclusion, we saw in our analysis how network science and visualization can uncover and quantify things that experts may have a gut feeling about but lack the hard data. This depth of understanding of internal and team dynamics that is possible through network science can also be critical in designing successful and stable teams and partnerships. Moreover, this understanding can lead to exact applicable insights on transfer and drafting strategies or even spotting and predicting top talent at an early stage. While this example is about soccer, you could very much adapt these methods and principles to other collaborative domains that require complex teamwork and problem solving with well-defined goals, from creative production to IT product management.
\section{Conclusion} We have introduced an approach to ball tracking and
state estimation in team sports. It uses a Mixed Integer Program that allows us
to account for the second-order motion of the ball, the interaction of the ball
and the players, and the different states that the ball can be in, while
ensuring a globally optimal solution. We showed our approach on several
real-world sequences from multiple team sports. In future work, we would like to
extend this approach to the more complex tasks of activity recognition and event
detection. For this purpose, we can treat events as another kind of object that
can be tracked through time, and use interactions between events and other
objects to define their state.
\section{Experiments}
\label{sec:experiments}
In this section, we compare our results to those of several state-of-the-art
multi-view ball-tracking algorithms~\cite{Wang14a,Wang14b,Parisot15}, a
monocular one~\cite{Gomez14}, as well as two tracking methods that could easily
be adapted for this purpose~\cite{Zhang12a,Berclaz11}.
We first describe the datasets we use for evaluation purposes. We then briefly
introduce the methods we compare against and finally present our results.
\subsection{Datasets}
We use two volleyball, three basketball, and one soccer sequences, which we
detail below.
\vspace{-3mm}
\paragraph{\basket{1} and \basket{2}} comprise a 4000- and a 3000-frame
basketball sequences captured by 6 and 7 cameras, respectively. These
synchronized 25-frame-per-second cameras are placed around the court. We
manually annotated each 10th frame of \basket{1} and 500 consecutive frames of
\basket{2} that feature flying ball, passed ball, possessed ball and ball out of
play. We used the \basket{1} annotations to train our classifiers and the
\basket{2} ones to evaluate the quality of our results, and vice versa.
\vspace{-3mm}
\paragraph{\basket{APIDIS}} is also a basketball dataset~\cite{De08} captured
by seven unsynchronized 22-frame-per-second cameras. A pseudo-synchronized
25-frame-per-second version of the dataset is also available and this is what we
use. The dataset is challenging because the camera locations are not good for
ball tracking and lighting conditions are difficult. We use 1500 frames with
manually labeled ball locations provided by~\cite{Parisot15} to train the ball
detector, and \basket{1} sequence to train the state classifier. We report
our results on another 1500 frames that were annotated manually
in~\cite{De08}.
\vspace{-3mm}
\paragraph{\volley{1} and \volley{2}} comprise a 10000-frame and a 19500-frame
volleyball sequence captured by three synchronized 60-frame-per-second cameras
placed at both ends of the court and in the middle. Detecting the ball is often
difficult both because on either side of the court the ball can be seen by at
most two cameras and because, after a strike, the ball moves so fast that it is
blurred in the middle camera images. We manually labeled every third frame in
1500-frame segments of both sequences. As before, we used one for training and
the other for evaluation.
\vspace{-3mm}
\paragraph{\soccer{ISSIA}} is a soccer dataset~\cite{DOrazio09} captured by six
synchronized 25-frame-per-second cameras located on both sides of the field. As
it is designed for player tracking, the ball is often out of the field of view
when flying. We train on 1000 frames and report results on another 1000.
\\\\
In all these datasets, the apparent size of the ball is often so small that the
state-of-the-art monocular object tracker of~\cite{Zhang12a} was never able to
track the ball reliably for more than several seconds.
\subsection{Baselines}
We use several recent multi-camera ball tracking algorithms as baselines.
To ensure a fair comparison, we ran all publicly available approaches with the
same set of detections, which were produced by the ball detector described in
Sec.~\ref{sec:ballGraph}. We briefly describe these algorithms below.
\begin{itemize}
\vspace{-2mm}
\item{{\bf InterTrack}{}~\cite{Wang14b}} introduces an Integer Programming approach to
tracking two types of interacting objects, one of which can contain the other.
Modeling the ball as being ``contained'' by the player in possession of it
was demonstrated as a potential application. In~\cite{Wang15}, this
approach is shown to outperform several multi-target tracking
approaches~\cite{Pirsiavash11,Leal-Taixe14} for ball tracking task.
\vspace{-2mm}
\item{{\bf RANSAC}{}~\cite{Parisot15}} focuses on segmenting ballistic trajectories
of the ball and was originally proposed to track it in the \basket{APIDIS}
dataset. The approach is shown to outperform the earlier graph-based filtering
    technique of~\cite{Parisot11}. We found that it also performs well on
    our volleyball datasets that feature many ballistic trajectories. For the
\soccer{ISSIA} dataset, we modified the code to produce linear rather than
ballistic trajectories.
\vspace{-2mm}
\item{{\bf FoS}{}~\cite{Wang14a}} focuses on modeling the interaction between the
ball and the players, assuming that long passes are already segmented. In
the absence of a publicly available code, we use the
numbers reported in the article for \basket{1-2-APIDIS} and on
\soccer{ISSIA}.
\vspace{-2mm}
\item{{\bf Growth}{}~\cite{Gomez14}} greedily grows the trajectories
instantiated from points in consecutive frames. Heuristics are used to
terminate trajectories, extend them and link neighbouring ones. It is
based on the approach of~\cite{Chen07} and shown to outperform
approaches based on the Hough transform. \comment{ that are straight lines
when projected onto the ground plane. We reprojected our detections from 3D
to a single view of each camera to compare with other methods. We selected
the best results among on the views. } Unlike the other approaches, it is
monocular and we used as input our 3D detections reprojected into the camera
frame.
\end{itemize}
To refine our analysis and test the influence of specific element of our
approach, we also used the following approaches.
\begin{itemize}
\vspace{-2mm}
\item{{\bf MaxDetection}{}.} To demonstrate the importance of tracking the ball,
we give the results obtained by simply choosing the detection with maximum
confidence.
\vspace{-2mm}
\item{{\bf KSP}{}~\cite{Berclaz11}.} To demonstrate the importance of modeling
interactions between the ball and the players, we use the publicly available
KSP tracker to track only the ball, while ignoring the players.
\vspace{-2mm}
\item{{\bf OUR-No-Physics}{}.} To demonstrate the importance of physics-based
second-order constraints of Eq.~\ref{eq:secondOrder}, we turn them off.
\vspace{-2mm}
\item{{\bf OUR-Two-States}{}.} Similarly, to demonstrate the impact of keeping track of many
ball states, we assume that the ball can only be in one of two states,
possession and free motion.
\end{itemize}
\subsection{Metrics}
Our method tracks the ball and estimates its state. We use a different metric
for each of these two tasks.
\vspace{-3mm}
\paragraph{Tracking accuracy} at distance $d$ is defined as the percent
of frames in which the location of the tracked ball is closer than
$d$ to the ground truth location.
The curve obtained by varying $d$ is known as the ``precision
plot''~\cite{Babenko11}. When the ball is
\textit{in\_possession}, its location is assumed to be that of the player
possessing it.
If the ball is reported to be \textit{not\_present} while it really is present, or vice
versa, the distance is taken to be infinite.
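For clarity, this metric can be computed as follows; the arrays of predicted and ground truth locations are illustrative names, and frames with a wrongly predicted presence are assumed to have already been assigned an infinite distance.
\begin{verbatim}
# Tracking accuracy at distance d: fraction of frames whose predicted
# ball location lies within d of the ground truth.
import numpy as np

def tracking_accuracy(pred, gt, d):
    dist = np.linalg.norm(pred - gt, axis=1)
    return float(np.mean(dist <= d))

def precision_plot(pred, gt, distances):
    return [tracking_accuracy(pred, gt, d) for d in distances]

# Example with dummy 3D locations (in meters).
pred = np.array([[0.0, 0.0, 1.0], [2.0, 0.5, 1.1]])
gt   = np.array([[0.1, 0.0, 1.0], [2.0, 0.9, 1.1]])
print(precision_plot(pred, gt, distances=[0.10, 0.25, 0.50]))
\end{verbatim}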
\vspace{-3mm}
\paragraph{Event accuracy} measures how well we estimate the state of the ball.
We take an \textbf{event} to be a maximal sequence of consecutive frames with
identical ball states. Two events are said to match if there are not more than
$5$ frames during which one occurs and not the other. Event accuracy is then a
symmetric measure obtained by counting the recovered events that match ground
truth ones, as well as the ground truth events that match the recovered ones,
normalized by the total number of events in both sequences.
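One possible reading of this definition is sketched below; the helper names are illustrative, and events are represented as half-open frame intervals.
\begin{verbatim}
# Event accuracy: segment each state sequence into maximal runs of a
# constant state, then count events matched in the other sequence with
# a symmetric difference of at most 5 frames.
def events(states):
    runs, start = [], 0
    for t in range(1, len(states) + 1):
        if t == len(states) or states[t] != states[start]:
            runs.append((states[start], start, t))   # (state, first, last+1)
            start = t
    return runs

def match(ev_a, ev_b, tol=5):
    (st_a, s_a, e_a), (st_b, s_b, e_b) = ev_a, ev_b
    if st_a != st_b:
        return False
    inter = max(0, min(e_a, e_b) - max(s_a, s_b))
    return (e_a - s_a) + (e_b - s_b) - 2 * inter <= tol

def event_accuracy(pred, gt, tol=5):
    ev_p, ev_g = events(pred), events(gt)
    matched = sum(any(match(p, g, tol) for g in ev_g) for p in ev_p) \
            + sum(any(match(g, p, tol) for p in ev_p) for g in ev_g)
    return matched / (len(ev_p) + len(ev_g))
\end{verbatim}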
\subsection{Comparative Results}
We now compare our approach to the baselines in terms of the above
metrics. As mentioned in Sec.~\ref{eq:MIP}, we obtain the players' trajectories
by first running the code of~\cite{Fleuret08a} to compute the players'
probabilities of presence in each separate frame and then that
of~\cite{Berclaz11} to compute their trajectories. We first report accuracy
results when these are treated as being correct, which amounts to fixing the
$p_i^j$ in Eq.~\ref{eq:finalEq}, and show that our approach performs well. We
then perform joint optimization, which yields a further improvement.
\vspace{-3mm}\paragraph{Tracking and Event Accuracy.}
\input{Figures/curves.tex}
\input{Figures/results.tex}
As shown in Fig.~\ref{fig:TAFigure}(a-f), {\bf OUR}{}, our complete approach, outperforms
the others on all 6 datasets. Two other methods that explicitly model the
ball/player interactions, {\bf OUR-No-Physics}{} and {\bf InterTrack}{}, come next. {\bf FoS}{} also
accounts for interactions but does markedly worse for small distances, probably
due to the lack of an integrated second order model.
\vspace{-3mm}\subparagraph{Volleyball.}
The differences are particularly visible in the
Volleyball datasets that feature both interactions with the players and
ballistic trajectories. Note that {\bf OUR-Two-States}{} does considerably worse, which
highlights the importance of modeling the different states accurately.
\vspace{-3mm}\subparagraph{Basketball.}
The differences are less obvious in the basketball datasets where {\bf OUR-No-Physics}{} and
{\bf InterTrack}{}, which model the ball/player interactions without imposing global
physics-based constraints, also do well. This reflects the fact that the ball
is handled much more than in volleyball. As a result, our method's ability to
also impose strong physics-based constraints has less overall impact.
\vspace{-3mm}\subparagraph{Soccer.}
On the soccer dataset, the ball is only present in about 75\% of the frames and
we report our results on those. Since the ball is almost never seen flying, the
two states (\textit{in\_possession} and \textit{rolling}) suffice, which
explains the very similar performance of {\bf OUR}{} and {\bf OUR-Two-States}{}. {\bf KSP}{} also
performs well because in soccer occlusions during interactions are less common
than in other sports. Therefore, handling them delivers less of a benefit.
Our method also does best in terms of event accuracy, among the methods that
report the state of the ball, as shown in Fig.~\ref{fig:TAFigure}(g). As can be
seen in Fig.~\ref{fig:results}, both the trajectory and the predicted state are
typically correct. Most state assignment errors happen when the ball is briefly
assigned to be \textit{in\_possession} of a player when it actually flies
nearby, or when the ball is wrongly assumed to be in free motion, while it is
really \textit{in\_possession} but clearly visible.
\vspace{-3mm}\paragraph{Simultaneous tracking of the ball and players.}
All the results shown above were obtained by processing sequences of at
least 500 frames. In such sequences, the people tracker is very reliable and
makes few mistakes. This contributes to the quality of our results at the cost
of an inevitable delay in producing the results. Since this delay could be damaging
in a live-broadcast setting, we have experimented with using shorter
sequences. We show here that simultaneously tracking the ball and the players
can mitigate the loss of reliability of the people tracker, albeit to a small
extent.
\begin{table}[!h]
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
Sequence length & MODA~\cite{Kasturi09},\% & Tracking acc. @ 25 cm,\% \\
\hline
50 & 94.1 / 93.9 / 0.26 & 69.2 / 67.2 / 2.03 \\
\hline
75 & 94.5 / 94.2 / 0.31 & 71.4 / 69.4 / 2.03 \\
\hline
100 & 96.5 / 96.3 / 0.21 & 72.5 / 71.0 / 1.41 \\
\hline
150 & 97.2 / 97.1 / 0.09 & 73.8 / 73.0 / 0.82 \\
\hline
200 & 97.3 / 97.4 / 0.00 & 74.1 / 74.1 / 0.00 \\
\hline
\end{tabular}
\end{center}
\vspace{-0.3cm}
\hspace{2.5cm} (a) \hspace{3cm} (b)
\vspace{-0.3cm}
\caption{Tracking the ball given the players' locations vs. simultaneous
tracking of the ball and players. The three numbers in both columns
correspond to simultaneous tracking of the players and ball / sequential
tracking of the players and then the ball / improvement, as a function of the length
of the sequences. {\bf (a)} People tracking accuracy in terms of the MODA
score. {\bf (b)} Ball tracking accuracy.}
\vspace{-0.6cm}
\label{tab:simtrack}
\end{table}
As shown in Tab.~\ref{tab:simtrack} for the \volley{1} dataset, we need sequences of
200 frames to obtain the best people tracking accuracy when tracking the people
by themselves first, as we did before. As the number of frames decreases, the
people tracker becomes less reliable but performing the tracking simultaneously
yields a small but noticeable improvement both for the ball and the players. The
case of Fig.~\ref{fig:motivation} is an example of this. We identified 3 similar
cases in 1500 frames of the volleyball sequence used for the experiment.
\vspace{-0.3cm}
\section{Building the Graphs}
\label{sec:Graphs}
Recall from Sections~\ref{sec:ip} and~\ref{eq:MIP} that our
algorithm operates on a ball graph and a player graph. We build them as follows.
\subsection{Player Graph}
\label{sec:playerGraph}
To detect the players, we first compute a Probabilistic Occupancy
Map on a discretized version of the court or field using the
algorithm of~\cite{Fleuret08a}. We then follow the approach
of~\cite{Wang14b}: we use the K-Shortest-Path (KSP)~\cite{Berclaz11} algorithm
to produce tracklets, which are short trajectories made of high-confidence
detections. To hypothesize the missed detections, we use the Viterbi algorithm
on the discretized grid to connect the tracklets. Each individual location in a
tracklet or connecting path becomes a node of the player graph $G_p$ and is
connected by an edge to the next location in that tracklet or path.
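To make this construction concrete, the following is a minimal sketch of how
tracklets and the paths connecting them could be assembled into the player
graph $G_p$. It is an illustration under our own naming conventions, not the
actual implementation of~\cite{Wang14b}.
\begin{verbatim}
from dataclasses import dataclass, field

@dataclass
class PlayerGraph:
    nodes: list = field(default_factory=list)  # (t, x) tuples
    edges: list = field(default_factory=list)  # (node_idx, node_idx)

def build_player_graph(tracklets, connecting_paths):
    # Each tracklet / connecting path is an ordered list of
    # (t, location) pairs. Every location becomes a node and is
    # linked by an edge to the next location of the same sequence.
    g = PlayerGraph()
    for seq in tracklets + connecting_paths:
        prev = None
        for t, x in seq:
            g.nodes.append((t, x))
            idx = len(g.nodes) - 1
            if prev is not None:
                g.edges.append((prev, idx))
            prev = idx
    return g
\end{verbatim}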
\subsection{Ball Graph}
\label{sec:ballGraph}
To detect the ball, we use an SVM~\cite{Hearst98} to classify image patches
in each camera view based on Histograms of Oriented Gradients, HSV color
histograms, and motion histograms. We then triangulate these detections to
generate candidate 3D locations and perform non-maximum suppression to remove
duplicates. Finally, we aggregate features from all camera views for each remaining
candidate and train a second SVM to retain only the best candidates.
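As a rough sketch of this two-stage classification, assuming the per-view patch
descriptors and the per-candidate descriptors aggregated over all views have
already been computed, the two classifiers could be trained as follows
(scikit-learn is used purely for illustration):
\begin{verbatim}
import numpy as np
from sklearn.svm import SVC

def train_two_stage_detector(patch_feats, patch_labels,
                             cand_feats, cand_labels):
    # Stage 1: ball vs. non-ball patches in a single view,
    # described by HOG + HSV + motion histograms.
    stage1 = SVC(probability=True)
    stage1.fit(np.asarray(patch_feats), patch_labels)
    # Stage 2: triangulated 3D candidates, described by features
    # aggregated over all camera views.
    stage2 = SVC(probability=True)
    stage2.fit(np.asarray(cand_feats), cand_labels)
    return stage1, stage2
\end{verbatim}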
Given these high-confidence detections, we use the KSP tracker to produce ball
tracklets, as we did for the people. However, we can no longer use the Viterbi
algorithm to connect them because the resulting connections may not obey the required
physical constraints. We instead use the approach briefly described below; more
details are given in the supplementary materials.
To model the ball states associated with a physical model, we grow
trajectories from each tracklet based on the physical model, and then join the
end points of tracklets and grown trajectories by fitting the physical
model. An example of this procedure is shown in Fig.~\ref{fig:prunning}. To model
the state \textit{in\_possession}, we create a copy of each node and edge in the
player graph. To model the state \textit{not\_present}, we create one node at
each time instant and connect it to the corresponding node at the next time
instant, as well as to the nodes of all other states near the border of the
tracking area. Finally, we add edges between pairs of nodes with different
states, as long as they are in the vicinity of each other (bold in
Fig.~\ref{fig:factorGraph}(b)).
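As a sketch of the growing step, assuming a ballistic model with constant
gravitational acceleration and the tolerance $D_l$ of
Eq.~\ref{eq:discreteCont}, a candidate detection could be tested against a
tracklet as follows; the function and argument names are ours:
\begin{verbatim}
import numpy as np

G = 9.81  # gravity [m/s^2]

def fits_ballistic_model(times, points, cand_t, cand_p, tol):
    # times: (N,) frame times, points: (N, 3) tracklet detections,
    # (cand_t, cand_p): candidate detection to be added.
    # Returns True if all points lie on a ballistic trajectory
    # (linear x, y; quadratic z with curvature -g/2) within `tol`.
    t = np.append(times, cand_t)
    p = np.vstack([points, cand_p])
    for c in (0, 1):                       # x and y: straight lines
        fit = np.polyfit(t, p[:, c], 1)
        if np.abs(np.polyval(fit, t) - p[:, c]).max() > tol:
            return False
    z = p[:, 2] + 0.5 * G * t ** 2         # remove the gravity term
    fit = np.polyfit(t, z, 1)              # remainder must be linear
    return np.abs(np.polyval(fit, t) - z).max() <= tol
\end{verbatim}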
\begin{figure}
\includegraphics[width=\columnwidth]{Figures/Prunning.pdf}
\caption{An example of ball detections, ball locations hypothesized when the ball is missed, and the resulting graph construction.}
\label{fig:prunning}
\vspace{-0.5cm}
\end{figure}
\section{Introduction}
Tracking the ball accurately is critically important to analyze and understand
the action in sports ranging from tennis to soccer, basketball, volleyball, to
name but a few. While commercial video-based systems exist for the first,
automation remains elusive for the others. This is largely attributable to the
interaction between the ball and the players, which often results in the ball
being either hard to detect because someone is handling it or even completely
hidden from view. Furthermore, since the players often kick it or throw it
in ways designed to surprise their opponents, its trajectory is largely
unpredictable.
There is a substantial body of literature about dealing with these
issues, but almost always using heuristics that are specific to a
particular sport such as soccer~\cite{Zhang08b}, volleyball~\cite{Gomez14}, or
basketball~\cite{Chen09a}. A few more generic approaches explicitly account for the
interaction between the players and the ball~\cite{Wang14b} while others impose
physics-based constraints on ball motion~\cite{Parisot15}. However, neither of
these things alone suffices in difficult cases, such as the one depicted by
Fig.~\ref{fig:motivation}.
In this paper, we therefore introduce an approach that simultaneously accounts
for ball/player interactions and imposes appropriate physics-based constraints.
Our approach is generic and applicable to many team sports. It involves
formulating the ball tracking problem in terms of a Mixed Integer Program (MIP)
in which we account for the motion of both the players and the ball, as well as
the fact that the ball moves differently and has different visibility properties in
flight, in possession of a player, or while rolling on the ground. We
model the ball locations in $\mathbb{R}^3$ and impose first and second-order
constraints where appropriate. The resulting MIP describes the ball behaviour
better than previous approaches~\cite{Wang14b,Parisot15} and yields superior
performance, both in terms of tracking accuracy and robustness to occlusions.
Fig.~\ref{fig:motivation}(c) depicts the improvement resulting from doing this
rather than only modeling the interactions or only imposing the physics-based
constraints.
\input{Figures/motivation.tex}
In short, our contribution is a principled and generic formulation of
the ball tracking problem and related physical constraints in terms
of a MIP. We will demonstrate that it outperforms state-of-the-art
approaches~\cite{Wang14a,Wang14b,Parisot15,Gomez14} in soccer, volleyball, and
basketball.
\section{Learning the Potentials}
\label{sec:learning}
In this section, we define the potentials introduced in Eq.~\ref{eq:mainEq} and
discuss how their parameters are learned from training data. They are computed
on the nodes of the ball graph $G_b$ and are used to compute the cost of
the edges, according to Eq.~\ref{eq:ipEq}. We discuss its construction in
Sec.~\ref{sec:ballGraph}.
\vspace{-3mm}
\paragraph{Image evidence potential $\Psi_I$.}
It models the agreement between location, state, and the
image evidence. We write
\begin{small}
\begin{eqnarray}
\Psi_I(x_i,s_i,I) & = & \psi(x_i,s_i,I)\prod_{\mathclap{\substack{j \in V_b:t_j=t,\\ (x_j,s_j)\not=(x_i,s_i)}}}\;\Big(1-\psi(x_j,s_j,I)\Big)\;,\nonumber \\
\psi(x,s,I) & = & \sigma_s(P_b(x|I)P_c(s|x,I))\;, \label{eq:factorZ}\\
\sigma_s(y) & = & {1\over 1+e^{-\theta_{s0}-\theta_{s1}y}}\;, \nonumber
\end{eqnarray}
\end{small}%
where $P_b(x|I)$ represents the output of a ball detector for location $x$ given
the image evidence $I$, and $P_c(s|x,I)$ the output of a multiclass classifier that
predicts the state $s$ given the position and the local image evidence.
$\psi(x,s,I)$ is close to one when, based on the image evidence alone, the ball
is likely to be located at $x$ in state $s$ with great certainty, and its value
decreases as the uncertainty of either estimate increases.
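As a minimal sketch, the evaluation of $\psi$ in Eq.~\ref{eq:factorZ} for a
single candidate location and state, given the detector and classifier outputs,
amounts to:
\begin{verbatim}
import math

def psi(p_ball, p_state, theta0, theta1):
    # p_ball  = P_b(x | I): ball detector output at location x
    # p_state = P_c(s | x, I): state classifier output for state s
    # (theta0, theta1): per-state parameters of sigma_s
    y = p_ball * p_state
    return 1.0 / (1.0 + math.exp(-theta0 - theta1 * y))
\end{verbatim}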
In practice, we train a Random Forest classifier~\cite{Breiman01} to
estimate $P_c(s|x,I)$. As features, it uses the 3D location of the ball.
Additionally, when the player trajectories are given, it uses the number of
people in its vicinity as a feature. When simultaneously tracking the players and
the ball, we instead use the integrated outputs of the people detector in the vicinity of the
ball. We give additional details in the supplementary materials.
The parameters $\theta_{s0},\theta_{s1}$ of the logistic function $\sigma_s$
are learned from training data for each state $s$. Given the specific ball
detector we rely on, we use true and false detections in the training
data as positive and negative examples to perform a logistic regression.
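Assuming, for each state $s$, the products $P_b(x|I)P_c(s|x,I)$ collected over
true and false detections in the training data, the parameters
$\theta_{s0},\theta_{s1}$ could be fitted with an off-the-shelf logistic
regression, for example:
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_state_sigmoid(pos_scores, neg_scores):
    # pos_scores / neg_scores: P_b(x|I) * P_c(s|x,I) evaluated on
    # true and false detections of a given state s.
    y = np.r_[np.ones(len(pos_scores)), np.zeros(len(neg_scores))]
    X = np.r_[pos_scores, neg_scores].reshape(-1, 1)
    reg = LogisticRegression().fit(X, y)
    return reg.intercept_[0], reg.coef_[0, 0]  # theta0, theta1
\end{verbatim}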
\vspace{-3mm}
\paragraph{State transition potential $\Psi_S$.}
We define it as the transition probability between states, which we learn
from the training data, that is:
\vspace{-4mm}
\begin{small}
\begin{equation}
\label{eq:factorS}
\Psi_S(s_i,s_j)=P(S^{t}=s_i|S^{t-1}=s_j)\;.
\end{equation}
\end{small}%
As noted in Sec.~\ref{sec:graphModel}, the potential for the first time frame
has the special form $P(S^2=s_i|S^1=s_j)P(S^1=s_j)$, where $P(S^1=s_j)$ is the
probability of the ball being in state $s_j$ at an arbitrary time instant; it is
learned from the training data.
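A simple sketch of estimating both the transition probabilities and the
first-frame prior by counting over annotated state sequences is given below;
the optional smoothing constant is an illustrative choice rather than part of
the method:
\begin{verbatim}
import numpy as np

def learn_state_model(sequences, n_states, eps=0.0):
    # sequences: integer state sequences from the training videos.
    # Returns (prior, trans) with trans[i, j] = P(S^t=j | S^{t-1}=i).
    # eps > 0 smooths the counts; eps = 0 keeps impossible
    # transitions at exactly zero probability.
    prior = np.full(n_states, eps)
    trans = np.full((n_states, n_states), eps)
    for seq in sequences:
        for s in seq:
            prior[s] += 1.0
        for a, b in zip(seq[:-1], seq[1:]):
            trans[a, b] += 1.0
    prior /= prior.sum()
    trans /= np.maximum(trans.sum(axis=1, keepdims=True), 1e-12)
    return prior, trans
\end{verbatim}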
\vspace{-3mm}
\paragraph{Location change potential $\Psi_X$.} It models the transition of
the ball between two time instants.
Let $D^s$ denote the maximum speed of the ball when in state $s$.
We write it as
\vspace{-3mm}
\begin{small}
\begin{equation}
\label{eq:factorX}
\Psi_X(x_i,s_i,x_j)=\mathbb{1}(||x_i-x_j||_2 \le D^{s_i})\;.
\end{equation}
\end{small}%
For the \textit{not\_present} state, we only allow transitions between the
node representing the absent ball and the nodes near the border of the tracking area.
For the first frame, the potential has an additional factor $P(X^1=x_i)$, the
ball location prior, which we assume to be uniform inside the tracking area.
\section{Problem Formulation}
\label{sec:problem}
We consider scenarios where there are several calibrated cameras with
overlapping fields of view capturing a substantial portion of the play area,
which means that the apparent size of the ball is generally small. In this
setting, trajectory-growing methods do not yield very good results, both because
the ball is occluded too often by the players to be detected reliably and
because its being kicked or thrown by them results in abrupt and unpredictable
trajectory changes.
To remedy this, we explicitly model the interaction between the ball and the
players as well as the physical constraints the ball obeys when far away from
the players. To this end, we first formulate the ball tracking problem in terms
of a maximization of a posteriori probability. We then reformulate it in terms
of an integer program. Finally, by adding various constraints, we obtain the
final formulation, which is a Mixed Integer Program.
\input{Figures/factorGraph.tex}
\input{Figures/notations.tex}
\subsection{Graphical Model for Ball Tracking}
\label{sec:graphModel}
We model the ball tracking process from one frame to the next in terms of the
factor graph depicted by Fig.~\ref{fig:factorGraph}(a). We associate to each
instant $t \in \left\{1\ldots T\right\}$ three variables $X^t$, $S^t$, and
$I^t$, which respectively represent the 3D ball position, the state of the ball,
and the available image evidence. When the ball is within the capture volume,
$X^t$ is a 3D vector and $S^t$ can take values such as \textit{flying} or
\textit{in\_possession}, which are common to all sports, as well as
sport-dependent ones, such as \textit{strike} for volleyball or \textit{pass}
for basketball. When the ball is not present, we take $X^t$ and $S^t$ to be
$\infty$ and \textit{not\_present} respectively. These notations as well as all
the others we use in this paper are summarized in Table~\ref{tab:notations}.
Given the conditional independence assumptions implied by the structure of the
factor graph of Fig.~\ref{fig:factorGraph}(a), we can formulate our tracking
problem as one of maximizing the probability
\vspace{-3mm}
\begin{small}
\begin{eqnarray}
\Psi(X,S,I) & = & {1\over Z} \Psi_I(X^1,S^1,I^1) \prod\limits_{t=2}^T \Big[
\Psi_X(X^{t-1},S^{t-1},X^t) \nonumber \\
& & \Psi_S(S^{t-1},S^t)\Psi_I(X^{t},S^{t},I^{t}) \Big] \label{eq:factors}
\end{eqnarray}
\end{small}%
expressed in terms of products of the following potential functions:
\begin{itemize}
\item $\Psi_I(X^{t},S^{t},I^{t})$ encodes the correlation between the ball
position, ball state, and the \comment{observed} image evidence.
\item $\Psi_S(S^{t-1},S^t)$ models the temporal smoothness of states across
adjacent frames.
\item $\Psi_X(X^{t-1}, S^{t-1}, X^t)$ encodes the correlation between the
state of the ball and the change of ball position from one frame to the next
one.
\item $\Psi_X(X^1,S^1,X^2)$ and $\Psi_S(S^1,S^2)$ include priors on the state
and position of the ball in the first frame.
\end{itemize}%
In practice, as will be discussed in Sec.~\ref{sec:learning}, the $\Psi$
functions are learned from training data.
Let $\mathbb{F}$ be the set of all possible sequences of ball positions and
states. We consider the log of Eq.~\ref{eq:factors} and drop the constant
normalization factor $\log Z$. We, therefore, look for the most likely sequence of
ball positions and states as%
\vspace{-5mm}
\begin{small}
\begin{align}
\label{eq:mainEq}
& (X^*, S^*) = \arg\max \limits_{(X,S) \in \mathbb{F}} \sum \limits_{t=2}^{T} \Big[ \log \Psi_X(X^{t-1},S^{t-1},X^t) + \\ \nonumber
& \log \Psi_S(S^{t-1},S^t) + \log \Psi_I(X^{t},S^{t},I^{t}) \Big] + \log
\Psi_I(X^1,S^1,I^1) \; . \\ \nonumber
\end{align}
\end{small}%
\vspace{-1cm}
In the following subsections, we first reformulate this maximization problem
as an integer program and then introduce additional physics-based and
\textit{in\_possession} constraints.
\subsection{Integer Program Formulation}
\label{sec:ip}
To convert the maximization problem of Eq.~\ref{eq:mainEq} into an Integer
Program (IP), we introduce the {\it ball graph} $G_b=(V_b,E_b)$ depicted by
Fig.~\ref{fig:factorGraph}(b). $V_b$ represents its nodes, whose elements each
correspond to a location $x_i \in \mathbb{R}^3$, state $s_i \in \{1,\cdots,K\}$,
and time index $t_i \in \{1,\cdots,T\}$. In practice, we instantiate one node per
possible state at every time step for every actual and hypothesized (potentially
missed) ball detection. Our approach to hypothesizing such missed detections is
described in Sec.~\ref{sec:Graphs}. $V_b$ also contains an additional
node $S_b$ denoting the ball location before the first frame. $E_b$ represents
the edges of $G_b$ and comprises all pairs of nodes corresponding to consecutive
time instants and whose locations are sufficiently close for a transition to be
possible.
Let $f_i^j$ denote the number of balls moving from $i$ to $j$
and $c_{bi}^j$ denote the corresponding cost.
The maximization problem of Eq.~\ref{eq:mainEq} can be
rewritten as%
\vspace{-1.5mm}
\begin{small}
\begin{equation}
\text{maximize} \displaystyle\sum\limits_{(i,j) \in E_b}f_i^jc_{bi}^j \; , \label{eq:ipEq}
\end{equation}
\vspace{-2mm}
\text{where} \\[3mm]
$c_{bi}^j = \log \Psi_X(x_i,s_i,x_j) + \log \Psi_S(s_i,s_j) + \log\Psi_I(x_j,s_j,I^{t_j}) ,$\\[2mm]
\text{subject to} \\
\vspace{-3mm}
\begin{equation*}
\begin{array}{ll@{}ll}
&\textit{(a)} &\mbox{\hspace{3mm}}f_i^j \in \{0,1\} &\mbox{\hspace{-0mm}}\forall (i,j) \in E_b\\
&\textit{(b)} &\mbox{\hspace{3mm}}\sum\limits_{(i,j) \in E_b, t_j=1}f_i^j = 1 & \\
&\textit{(c)} &\mbox{\hspace{3mm}}\sum\limits_{(i,j) \in E_b}f_i^j=\sum\limits_{(j,k) \in E_b}f_j^k &\mbox{\hspace{-0mm}}\forall j \in V_b: 0 < t_j < T \\
&\textit{(d)} &\mbox{\hspace{3mm}}X^t = \sum\limits_{(i,j) \in E_b,t_j=t}f_i^jx_j &\mbox{\hspace{-0mm}}\forall t \in 1,\cdots,T \\
&\textit{(e)} &\mbox{\hspace{3mm}}S^t = \sum\limits_{(i,j) \in E_b,t_j=t}f_i^js_j &\mbox{\hspace{-0mm}}\forall t \in 1,\cdots,T \\
&\textit{(f)} &\mbox{\hspace{3mm}}(X,S) \in \mathbb{F} & \\
\end{array}
\end{equation*}
\end{small}%
We optimize with respect to the $f_i^j$, which can be considered as flow
variables. The constraints of Eq.~\ref{eq:ipEq}(a-c) ensure that, at every
time frame, the single ball transitions to exactly one position and one state
from the previous frame. The constraint of Eq.~\ref{eq:ipEq}(f) is
intended to only allow feasible combinations of locations and states, as
described by the set $\mathbb{F}$, which we define below.
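This formulation maps directly onto an off-the-shelf MIP solver. Below is a
hedged sketch, written with gurobipy, of the objective and of constraints
(a)--(c); the graph data structures are our own assumptions, and the remaining
constraints are introduced in the next subsection.
\begin{verbatim}
import gurobipy as gp
from gurobipy import GRB

def build_ball_ip(edges, cost, node_time, T):
    # edges: list of (i, j) node pairs of the ball graph G_b
    # cost:  dict {(i, j): c_b}, node_time: dict {node: t}
    m = gp.Model("ball_ip")
    f = m.addVars(edges, vtype=GRB.BINARY, name="f")        # (a)
    m.setObjective(gp.quicksum(cost[e] * f[e] for e in edges),
                   GRB.MAXIMIZE)
    # (b): exactly one ball enters the first frame
    m.addConstr(gp.quicksum(f[i, j] for (i, j) in edges
                            if node_time[j] == 1) == 1)
    # (c): flow conservation at every intermediate node
    for n, t in node_time.items():
        if 0 < t < T:
            inc = gp.quicksum(f[i, j] for (i, j) in edges if j == n)
            out = gp.quicksum(f[j, k] for (j, k) in edges if j == n)
            m.addConstr(inc == out)
    return m, f
\end{verbatim}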
\subsection{Mixed Integer Program Formulation}
\label{eq:MIP}
Some ball states impose first- and second-order constraints on ball motion, such
as constant acceleration for the freely flying ball, or zero vertical velocity and
limited negative acceleration for the rolling ball. Possession implies that the
ball must be near a player.
In this section, we assume that the players' trajectories are available in the
form of a {\it player graph} $G_p=(V_p,E_p)$ similar to the ball graph of
Sec.~\ref{sec:ip} and whose nodes comprise locations $x_i$ and time indices
$t_i$. In practice, we compute it using publicly available code as described in
Sec.~\ref{sec:playerGraph}.
Given $G_p$, the physics-based and possession constraints can be imposed by
introducing auxiliary continuous variables and expanding constraint of Eq.~\ref{eq:ipEq}(f), as follows.
\vspace{-3mm}\paragraph{Continuous Variables.}
The $x_i$ represent specific 3D locations where the ball could potentially be,
that is, either actual ball detections or hypothesized ones as will be discussed
in Sec.~\ref{sec:ballGraph}. Since they cannot be expected to be totally
accurate, let the continuous variables $P^t=(P_x^t,P_y^t,P_z^t)$ denote the true
ball position at time $t$. We impose%
\vspace{-4mm}
\begin{small}
\begin{equation}
\label{eq:discreteCont}
||P^t - X^t|| \le D_l
\end{equation}
\end{small}%
where $D_l$ is a constant that depends on the expected accuracy of the $x_i$.
These continuous variables can then be used to impose ballistic constraints when
the ball is in flight or rolling on the ground as follows.
\vspace{-3mm}\paragraph{Second-Order Constraints.}
For each state $s$ and coordinate $c$ of $P$, we can formulate a
second-order constraint of the form%
\vspace{-4mm}
\begin{small}
\begin{align}
\label{eq:secondOrder}
& A^{s,c} (P^t_c - 2 P^{t-1}_c + P^{t-2}_c) + B^{s,c} (P^t_c - P^{t-1}_c) + \\
& C^{s,c} P^t_c - F^{s,c} \le K (3 - M^t_{s,c} - M^{t-1}_{s,c} - M^{t-2}_{s,c})
\; ,
\nonumber \\
& \mbox{where} \, \, M^t_{s,c} = \sum\limits_{(i,j)\in E_b, t_j=t,s_j=s, x_j \not \in
O^{s,c}}f_i^j \; , \nonumber
\end{align}
\end{small}%
$K$ is a large positive constant and $O^{s,c}$ denotes the locations
where there are scene elements with which the ball can collide,
such as those near the basketball hoops or close to the ground.
Given the constraints of
Eq.~\ref{eq:ipEq}, $M^t_{s,c}$, $M^{t-1}_{s,c}$, and $M^{t-2}_{s,c}$ must be
zero or one. This implies that the right side of the above inequality is either zero,
if $M^t_{s,c} = M^{t-1}_{s,c} = M^{t-2}_{s,c} = 1$, or a large number
otherwise. In other words, the constraint is only effectively active in the
first case, that is, when the ball is consistently in a given state. When this
is the case, $(A^{s,c},B^{s,c},C^{s,c},F^{s,c})$ model the corresponding
physics. For example, when the ball is in the \textit{flying} state, we use
$(1,0,0,{-g \over fps^2})$ for the $z$ coordinate to model the parabolic motion
of an object subject to the sole force of gravity whose intensity is $g$. In the
\textit{rolling} state, we use $(1,0,0,0)$ for both the $x$ and $y$ coordinates
to denote a constant speed motion in the $xy$ plane. In both cases, we neglect
the effect of friction. We give more details for all states we represent in the
supplementary materials. Note that we turn off these constraints altogether
at locations in $O^{s,c}$.
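To illustrate how one such gated constraint can be added on top of the flow
variables, the sketch below enforces the parabolic model on the $z$ coordinate
in the \textit{flying} state using gurobipy; the variables are assumed to have
been created as in Eqs.~\ref{eq:discreteCont} and~\ref{eq:secondOrder}, and
bounding the second difference from both sides is an illustrative completion of
the single inequality shown above.
\begin{verbatim}
def add_flying_z_constraints(m, P_z, M_fly, g=9.81, fps=25.0, K=1e4):
    # P_z[t]:  continuous variable for the true ball height at frame t
    # M_fly[t]: linear expression summing flows into 'flying' nodes
    #           at frame t that lie outside the collision set O
    # Enforces the parabolic model whenever the ball has been flying
    # for three consecutive frames (big-M gating).
    for t in range(2, len(P_z)):
        gate = K * (3 - M_fly[t] - M_fly[t - 1] - M_fly[t - 2])
        accel = P_z[t] - 2 * P_z[t - 1] + P_z[t - 2]
        m.addConstr(accel + g / fps ** 2 <= gate)
        m.addConstr(-accel - g / fps ** 2 <= gate)
\end{verbatim}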
\vspace{-3mm}\paragraph{Possession constraints.}
While the ball is in possession of a player, we do not impose any physics-based
constraints. Instead, we require the presence of someone nearby. The algorithm
we use for tracking the players~\cite{Berclaz11} is implemented in terms of
people flows that we denote as $p_i^j$ on a player graph $G_p=(V_p,E_p)$
that plays the same role as the ball graph. The $p_i^j$ are taken to be those
that
\vspace{-3mm}
\begin{small}
\begin{equation}
\text{maximize} \displaystyle\sum\limits_{(i,j) \in E_p}p_i^jc_{pi}^j \;
, \label{eq:peopleEq}
\end{equation}
\text{where} \hspace{1.2mm} $c_{pi}^j = \log {P_p(x_i|I^{t_i})\over 1-P_p(x_i|I^{t_i})} \; ,$\\
\text{subject to} \\
\begin{equation*}
\vspace{-1mm}
\begin{array}{ll@{}ll}
&\textit{(a)} &\mbox{\hspace{3mm}}p_i^j \in \{0,1\} &\forall (i,j) \in E_p\\
&\textit{(b)} &\mbox{\hspace{3mm}}\sum\limits_{i:(i,j) \in E_p}p_i^j \le 1 &\forall j \in V_p\setminus\{S_p\} \\
&\textit{(c)} &\mbox{\hspace{3mm}}\sum\limits_{(i,j) \in E_p}p_i^j=\sum\limits_{(j,k) \in E_p}p_j^k &\forall j \in V_p\setminus\{S_p,T_p\} \ . \\
\end{array}
\end{equation*}
\end{small}%
Here, $P_p(x_i|I^{t_i})$ represents the output of the probabilistic people detector
at location $x_i$ given the image evidence $I^{t_i}$. $S_p,T_p \in V_p$ are the
source and sink nodes that serve as starting and finishing points for the people
trajectories, as in~\cite{Berclaz11}. In practice, we use the publicly
available code of~\cite{Fleuret08a} to compute the probabilities $P_p$ in each
grid cell of a discretized version of the court.
Given the ball flow variables $f_i^j$ and people flow ones $p_i^j$, we express the
\textit{in\_possession} constraints as
\begin{small}
\begin{equation}
\sum\limits_{\substack {(k,l) \in E_p,t_l=t_j, \\ ||x_j-x_l||_2 \le D_p}}\hspace{-1mm}p_k^l \ge \sum\limits_{i:(i,j) \in E_b}\hspace{-1mm}f_i^j \hspace{4.2mm} \forall j:s_j\equiv{\scalebox{0.7}{in\_possession}}\; ,
\label{eq:possConst}
\end{equation}
\end{small}%
where $D_p$ is the maximum possible distance between the player and the ball location
when the player is in control of it, which is sport-specific.
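A sketch of adding the constraint of Eq.~\ref{eq:possConst}, given the ball and
people flow variables and per-node attributes, could read as follows; all names
are ours.
\begin{verbatim}
import numpy as np
import gurobipy as gp

def add_possession_constraints(m, f, p, ball_edges, people_edges,
                               ball_nodes, people_nodes, D_p):
    # ball_nodes: {node: (t, x, state)}, people_nodes: {node: (t, x)}
    # For every ball node j in state 'in_possession', require at least
    # one unit of people flow into a player location within D_p of x_j
    # at the same frame.
    for j, (t_j, x_j, s_j) in ball_nodes.items():
        if s_j != "in_possession":
            continue
        ball_in = gp.quicksum(f[i, k] for (i, k) in ball_edges
                              if k == j)
        near = [(a, b) for (a, b) in people_edges
                if people_nodes[b][0] == t_j and
                np.linalg.norm(np.asarray(people_nodes[b][1]) -
                               np.asarray(x_j)) <= D_p]
        m.addConstr(gp.quicksum(p[a, b] for (a, b) in near) >= ball_in)
\end{verbatim}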
\vspace{-3mm}\paragraph{Resulting MIP.}
Using the physics-based constraints of Eqs.~\ref{eq:discreteCont}
and~\ref{eq:secondOrder} and the possession constraints of
Eq.~\ref{eq:possConst}, along with the people tracking formulation of
Eq.~\ref{eq:peopleEq}, to represent the feasible set $\mathbb{F}$ of
Eq.~\ref{eq:ipEq}(f) yields the MIP
\vspace{-1mm}
\begin{small}
\begin{equation}
\begin{array}{ll@{}ll}
\mbox{\hspace{-5mm}}&\text{maximize}\displaystyle\sum\limits_{(i,j) \in E_b} f_i^jc_{bi}^j +
\sum\limits_{(i,j) \in E_p} p_i^jc_{pi}^j \\
\mbox{\hspace{-5mm}}&\text{subject to the constraints of
Eqs.\ref{eq:ipEq}(a-e),~\ref{eq:discreteCont},~\ref{eq:secondOrder},~\ref{eq:peopleEq}(a-c), and~\ref{eq:possConst}.}
\end{array}
\label{eq:finalEq}
\end{equation}
\end{small}
In practice, we use the Gurobi~\cite{Gurobi} solver to perform the
optimization. Note that we can either consider the people flows as given and
optimize only on the ball flows or optimize on both simultaneously. {We will
show in the results section that the latter is only slightly more expensive
but yields improvements in cases such as the one of
Fig.~\ref{fig:motivation}.}
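As a minimal usage sketch of the final optimization, with an optimality-gap
target chosen purely for illustration:
\begin{verbatim}
def solve_mip(m, f, gap=1e-4):
    # m, f: the Gurobi model and ball-flow variables built above.
    # Returns the ball-graph edges selected in the solution.
    m.Params.MIPGap = gap      # illustrative optimality-gap target
    m.optimize()
    if m.SolCount == 0:
        return []
    return [e for e, var in f.items() if var.X > 0.5]
\end{verbatim}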
\section{Related work}
\label{sec:related}
\comment{ Some approaches to game understanding exploit for adversarial behavior
discovery~\cite{Lucey13}, role assignment~\cite{Lan12}, activity recognition
handball~\cite{Direkoglu12,Gupta09}, tracking players~\cite{Liu13}, or
discovering regions of interest~\cite{Kim12}. Many such approaches could
benefit from tracking the ball~\cite{Direkoglu12}. Others~\cite{Li09c,Li09d}
use manually annotated ball/puck data for game element recognition in hockey
and American football. The rest rely on tracking the ball. }
While there are approaches to game understanding, such
as~\cite{Lan12,Liu13,Lucey13,Gupta09,Direkoglu12,Kim12}, which rely on the
structured nature of the data without any explicit reference to the location of
the ball, most others either take advantage of knowing the ball position or
would benefit from being able to do so~\cite{Direkoglu12}. However, while the problem
of automated ball tracking can be considered solved for some sports, such as
tennis or golf, it remains difficult for team sports. This is particularly true
when the image resolution is too low to reliably detect the ball in individual
frames and when occlusions are frequent. \comment{\pfrmk{Last sentence is
right.}}
Current approaches to detecting and tracking can be roughly classified as those
that build physically plausible trajectory segments on the basis of sets of
consecutive detections and those that find a more global trajectory by
minimizing an objective function. We briefly review both kinds below.
\subsection{Fitting Trajectory Segments}
Many ball-tracking approaches for soccer~\cite{Ohno00,Leo08},
basketball~\cite{Chen09a}, and volleyball~\cite{Chen07,Gomez14,Chakraborty13}
start with a set of successive detections that obey a physical model. They
then greedily extend them and terminate growth based on various
heuristics. \comment{~\cite{Leo08,Chen07,Chen09a,Gomez14} grow the trajectory by
finding the next candidate that fits the model, while~\cite{Chakraborty13} use
Kalman filter for the same purpose. ~\cite{Leo08} use intersections of ball
and players trajectories to identify interaction events,
while~\cite{Chen07,Gomez14} grow the neighbouring pairs of trajectories until
intersection to identify events of ball-player contact.} In~\cite{Ren08},
Canny-like hysteresis is used to select candidates above a certain confidence
level and link them to already hypothesized trajectories. Very recently, RANSAC
has been used to segment ballistic trajectories of basketball shots towards the
basket~\cite{Parisot15}. These approaches often rely heavily on domain
knowledge, such as audio cues to detect ball hits~\cite{Chen07} or model
parameters adapted to specific sports~\cite{Chakraborty13,Chen09a}.
While effective when the initial ball detections are sufficiently reliable,
these methods tend to suffer from their greedy nature when the quality of these
detections decreases. We will show this by comparing our results to those
of~\cite{Gomez14,Parisot15}, for which the code is publicly available and which
have been shown to be good representatives of this set of methods.
\comment{\pfrmk{Can you say why you chose those?}\amrmk{They
are publicly available & do not require external knowledge (audio cues,
etc.),~\cite{Parisot15} shows more promise as it doesn't require detections to be
adjacent,~\cite{Gomez14} reimplements the approach which claimed to have over
90\% accuracy}.}
\subsection{Global Energy Minimization}
One way to increase robustness is to seek the ball trajectory as the minimum of
a global objective function. Such a function often includes high-level semantic knowledge,
such as the players' locations~\cite{Zhu07a,Zhang08b,Wang14a}, the state of the game
based on ball location, velocity, and acceleration~\cite{Zhang08b,Zhu07a}, or
goal events~\cite{Zhu07a}.
In~\cite{Wang14b,Wang15}, the players {\it and} the ball are tracked
simultaneously and ball possession is explicitly modeled. However, the tracking
is performed on a discretized grid and without physics-based constraints, which
results in reduced accuracy. It has nevertheless been shown to work well on
soccer and basketball data. We selected it as our baseline to represent this
class of methods, because of its state-of-the-art results and publicly available
implementation. \comment{as a representative of this class Since we know of no
other global optimization technique that takes advantage of both context and
physics-based constraints, we use it as one of our baselines.}
\comment{
\subparagraph{Approaches without the ball} concentrate on detecting
interactions without relying on localizing the ball.
\cite{Poiesi10,Kim12} propose a detector-less approach based on the motion flow
and stochastic vector field of motion tendencies, accordingly,~\cite{Poiesi10}
links points of convergence of the motion flow into the trajectories by using
the Kalman filter, while~\cite{Kim12} generate regions of interest but do not
report how often the ball is located within it.
For~\cite{Poiesi10}, authors
report a high accuracy of 82\%, but are unable to predict an actual location of
the ball within the area of flow convergence.
Other approaches concentrate on game element recognition~\cite{Li09d},
role assignment~\cite{Lan12} in hockey, activity recognition
in handball~\cite{Direkoglu12} and baseball~\cite{Gupta09}, plays
recognition~\cite{Li09c} in American football. In all of the above,
authors either manually detect the ball/puck or use ground truth
data~\cite{Li09d,Li09c}, or don't use the location in the problem formulation
and mention that their approach could benefit from it~\cite{Direkoglu12}.
}
\comment{
provide more principled way of
finding a trajectory optimal with respect to a certain cost function. In this
case cost function often includes~ the use it to classify the state of the
football game.~\cite{Zhang08b} train an Adaboost classifier to filter a set of
candidate trajectories obtained by using a particle filter with the first-order
linear model. The final ball trajectory is generated using both people detection
and candidate ball trajectories, but, unfortunately, no specific details of the
ball tracking formulation are given, and evaluation is done only on the short
sequences. \cite{Zhu07a} initialize the tracking with the Viterbi algorithm to link the
detections, and continue by using the SVR particle filter, but use external
information sources to detect the goal events, which are of main interest to the
authors. \cite{Wang14a} concentrate on tracking the ball while it is in
possession, by predicting and exploiting high level state of the game. Approach
can be viewed as simultaneous ball tracking and game state estimation, yet it
relies on correct tracking of the ball while it is not possessed.
}
\comment{
These approaches tend to rely heavily on domain knowledge, either explicitly
(\cite{Chen07} use audio cues to detect the moment of hitting the
ball;~\cite{Chakraborty13,Chen09a} bound the possible parameters of the physical
model specific to a type of sport;~\cite{Ren08} create classifiers based on
domain knowledge;~\cite{Gomez14,Parisot15} assume the ball always undergoes a
ballistic motion) or implicitly, by proposing an approach for a specific sport.
}
\comment{, one
of \textit{flying, rolling, in\_possession, out\_of\_play}. Classifier uses
information about the location of the ball, likelihood of the detection,
distance to the nearest player and the distance from the edge of the field.
Depending on the selected state, trajectory in the missed frames is estimated
using a physical model specific for a particular type of motion. This is similar
to the classifier that we are using, but in our work classifier is used as a
part of the globally optimal tracking process. Furthermore, authors associate
\textit{in\_possession} with a low detector output, use it as an initializer to
other stages, and focus on tracking the visible ball, while our work uses the
state as an ``equal partner''.}
\comment{Due to the local nature of the optimization in the methods, they often
perform well only when a ball is visible and follows an easy ballistic or linear
trajectory:~\cite{Parisot15} reports the drop of tracking accuracy from 70\% to
33\% when trying to track in the whole field area;~\cite{Gomez14} report below
60\% accuracy when using the method of~\cite{Chen07}, while the original work
claims to have over 90\% of accuracy, a difference that could be attributed to
the fact that~\cite{Chen07} used external cues to detect beginning of the play.
}
\comment{
\subparagraph{Trajectory growth and fitting} selects a trajectory based on the
set of the detections and the appropriate physical model.
One of the earliest works\cite{Ohno00} finds the trajectory of the football by
minimizing the fitting error of ballistic trajectory. No actual evaluation of
tracking accuracy was done.
\cite{Leo08} track the football ball by identifying candidate trajectories that:
join detections that are collinear when projected on the ground plane.
Trajectories are extended greedily while the next detection is found in the
certain small time window. Later, intersections of the ball trajectories and the
players are used to identify the interaction events.
\cite{Ren08} first generate trajectories in each of the camera views, by using
Canny-like hysteresis to select candidates above a certain threshold or those
connected to already selected candidates. However, only the most likely
candidate from each single view is passed further on to estimate the 3D location
of the football. Heuristic-based classifiers are used to estimate the state of
the ball, one of \textit{flying, rolling, in\_possession, out\_of\_play}.
Classifiers use information about the location of the ball, likelihood of the
detection, distance to the nearest player and the distance from the edge of the
field. Depending on the selected state, trajectory in the missed frames is
estimated using a physical model specific for a particular type of motion. This
work associates ``in possession'' with a low detector output. This state is used
as an initializer to other stages and the focus is on the stages when the ball
is more visible.
\cite{Chen09a} link basket ball candidates on the image plane and grow
trajectories based on the physical model with a maximum of 5 frames miss. The
set of candidates is filtered through a thorough analysis of possible physical
characteristics of the trajectory based on the height of the basket and other
domain knowledge, and sorted based on heuristics taking into account the length,
fitting error and ratio of isolated candidates. Afterwards, 2D trajectories are
mapped into 3D trajectories with the aim of estimating the shooting location.
Authors also proposed variations of such approach for volleyball, baseball and
football. Proposed solutions heavily rely on domain knowledge (\eg audio cues
are used to identify the moment when the player hits the ball or the beginning
of the game, ballistic trajectories of the ball are filtered based on the
knowledge of the physical properties of the given ball, etc.). While authors
claim to have typically over 90\% of tracking accuracy in a small vicinity
of the ball, no publicly available implementation of their approach is
given. However, the comparison of their method to some other methods, made
by~\cite{Gomez14}, while identifying its superiority, reveals a tracking
accuracy below 60\%. The latter authors also provide their implementation.
\cite{Chakraborty13} start from close pairs of volleyball candidates in nearby
frames. They use Kalman filter to generate trajectory candidate, which is
terminated based on the number of missed candidates in the trajectory. Afterwards,
candidates are selected starting from the longest one, subject to having
appropriate physical parameters. Gaps are filled by interpolating and
extrapolating the trajectory, but only up to 5 frames. This information is
afterwards used to classify shots based on the geometric properties of
trajectories, specified by the authors.
More recent work~\cite{Parisot15} uses RANSAC on the 3D candidates to form
ballistic trajectories in basketball. As the ball is often invisible while
possessed by the player, the authors concentrate on court shot retrieval.
\subparagraph{Energy function minimization} provide more principled way of
finding a trajectory optimal with respect to a certain cost function.
\cite{Theobalt04} use very controlled environment of darkened room to track the
baseball location and rotation by formulating an appropriate energy function.
\cite{Zhang08b} use the information about the nearest player, football
position, velocity, acceleration and trajectory length to train the
Adaboost classifier to filter a set of candidate trajectories, obtained by
applying the particle filter with the first-order linear motion to the detections.
The obtained trajectories were then taken into account when tracking people.
This approach uses people detection and tracking to generate ball trajectories
and obtain the final ball trajectory, but, unfortunately, no specific details of
ball tracking are given, and the evaluation is done only on short sequences.
\cite{Zhu07a} use the external information to detect the goal events
and their time stamp in football. Tracking is initialized using Viterbi algorithm
to link the detections and continues by using SVR particle filter until it is
lost. Based on ball and player locations and distances between each other,
attack is classified in one of six types.
\cite{Poiesi10} use a detector-less approach and generate a set of basketball
candidates by finding the points of convergence of the motion flow. Such
candidates in each frame are joined into trajectories using a Kalman filter.
\cite{Wang14a} concentrate on a difficult part of tracking the ball while
it is in possession. Authors assume that ballistic parts of the ball
trajectories have already been extracted and concentrate on predicting which
person holds the ball based on the analysis of the game phase, player locations,
distances between them. Approach can be viewed as simultaneous ball tracking and
game phase estimation, but it is based on the correct extraction of ball
locations while it is not possessed, and the errors introduced at this stage can
not be recovered from. Authors show results on basketball and football datasets.
\cite{Wang14b} propose an integer programming formulation for tracking two types
of objects, one of which contains the other, in a globally optimal way. Authors
show their results on several datasets that include basketball and football
games. Since tracking is done on a discrete grid, precision of 3D tracking of
small objects such as a ball is limited. Furthermore, assumptions that a
containee object (ball) is not visible while it is possessed by a container
object (person) is clearly violated in the sports domain.
\subparagraph{Approaches without the ball} concentrate on detecting
interactions without relying on localizing the ball.
\cite{Kim12} predict regions of interest by generating a stochastic vector field
of motion tendencies, somewhat similar to~\cite{Poiesi10}. While these regions
can often contain the ball, no information about the tracking accuracy is given.
In hockey, for the purposes of game elements recognition \cite{Li09d} and
role assignment \cite{Lan12}, hockey puck is either manually detected
or ignored.
In handball \cite{Direkoglu12}, ball location is not used as a feature for
activity recognition and authors mention that having it might allow to
recognize ``more complex activity classes''.
In American football for the problem of recognition of plays
\cite{Li09c}, ball location is not present in the formulation and for
people locations ground truth data was used.
In baseball \cite{Gupta09}, action elements recognition is based
on the roles of the players. Role assignment was done without tracking
the ball and could benefit from it.
\subparagraph{Our approach}
To sum up, many of the approaches
\cite{Chen09a,Chakraborty13,Ohno00,Leo08,Zhu07a} use domain knowledge-based
heuristics but do not exploit external information such as the location
of the players or the state of the ball. In many sports authors
\cite{Li09d,Direkoglu12,Lan12,Li09c,Gupta09} concentrate on features based on
the motion flow, player locations and roles, etc. to understand the higher level
semantic of the game. Such applications would benefit greatly from the automatic
ball tracking that agrees with higher level semantics and is domain-independent.
Our work is most similar to the following works:
\begin{itemize}
\item We exploit the ability to express container-containee relations as
an integer program, similarly to \cite{Wang14b}. However,
our formulation of the tracking problem involves simultaneous tracking and
state estimation, physical constraints and tracking in the continuous domain,
that are not present in this work.
\item We have the ball states similar to the ones in \cite{Ren08}, but
we estimate them simultaneously with tracking and learn rather than define
their features. Similarly to \cite{Zhang08b}, we weight the
detections based on the features of location and distance to the nearest
player and learn the likelihood of each individual detection, but our
formulation includes the state of the ball.
\item In a more general scheme of things, our approach learns the motion and
appearance contexts of a ball and uses them to predict the location when it is
unobserved, which is similar to what \cite{Ali07a} does in aerial
videos. However, we don't operate under the assumption that the context can be
unambiguously recovered from each frame. Ball states can also be viewed as
learned \textit{supporters}, as defined by \cite{Grabner10}, but in
our case of ball tracking we have full occlusions much more often and
partial occlusions much more rarely.
\end{itemize}
}
\comment{
some sports (\eg tennis~\cite{Yan08} and golf~\cite{Lepetit03a}, due to rare
occlusions of the ball and its discriminative appearance features), this is not
the case for many team sports. Structured nature of data in such sports
gave rise to many approaches of tracking objects with the help of high
level semantic cues. Several works enhance tracking by estimating the state
of the game~\cite{Liu13}, or the role of the player and the formations
of players~\cite{Lucey13}. For the ball tracking,~\cite{Wang14a} proposed an
approach to track the ball while it is in possession by using information about
the state of the game and players location. Our approach is similar to the above
in the sense that we are estimating the state of the ball to enhance
tracking it. It also has similarities to more general tracking approaches
of~\cite{Ali07a}, as we learn the motion and appearance context of the ball to
predict its location while it is unobserved. However, we don't assume that
the context can be unambiguously recovered from each frame. Our states of
the ball can also be viewed as learned \textit{supporters}, as defined
by~\cite{Grabner10}, but for the sports scenario we can not claim that the
relative target / features location is fixed over short time intervals. Below we
describe approaches more specific to ball tracking.
}
\section{Case Studies}
\label{sec:case_study}
We conduct a case study on a basketball dataset to demonstrate the effectiveness of our proposed visual design. It contains the statistics of 321 NBA players who were active in both the 2018 and 2019 seasons. We adopt this application because evaluating the progress of players is of great importance in the league, as reflected by the annual Most Improved Player award~\cite{martinez2019method}. Following the methodology proposed by Dumitrescu et al.~\cite{dumitrescu2000evolutionary}, we use rank-based values of the statistical category Points per Game (abbreviated as PPG) instead of raw data to address the discrepancy between players' performance and the highly aggregated PPG records.
The conventional designs discussed earlier visualize the data as follows: the slope graph (Figure \ref{fig:teaser}b) tells a PPG trend story by connecting the two PPG states of Season 2018 and 2019, while the grouped bar chart (Figure \ref{fig:teaser}c) plots the items as 321 clustered bar groups.
Both designs have limitations in visualizing PPG changes. First, they encode changes with ineffective visual channels. For slope graphs (Figure \ref{fig:teaser}b), line slopes across different players are difficult to compare, especially when there are distractors between two target lines. Also, as shown in the detailed view of Figure \ref{fig:teaser}c, the bar height differences indicating PPG changes cannot be perceived effectively.
Furthermore, plotting over 300 items exceeds the visual scalability of both designs. Specifically, for grouped bar charts, perception suffers from visual clutter due to the narrow bars and the many distractor bars. For slope graphs, severe line overlap makes it hard to distinguish different lines and compare their slopes.
In contrast, the proposed visual design \textit{Delta Map}{} improves state comparison through better graphical perception. Figure \ref{fig:teaser}a-left visualizes the top 30 players with rising and dropping PPG by interactively setting the \textit{residue-item} number to 30. The 30 \textit{residue-items} for both the rising and dropping trends are arranged sparsely within the inner circular axes, which significantly mitigates visual clutter. If, instead, the user is interested in the top 10 candidates for the MIP selection, simply setting the \textit{residue-item} number to 10 fulfills this need (Figure \ref{fig:teaser}a-right).
Also, based on the length mapping of state changes, it is apparent that Kawhi Leonard (Line \textit{Kawhi L.}) made larger PPG progress than Stephen Curry (Line \textit{Stephen C.}); \modify{both are plotted as red line segments because their rank values decrease (e.g., from the third to the first), which actually indicates an improvement of their PPGs.}
Similarly, the longer line segment shows that the PPG of Courtney Lee (Line \textit{Courtney L.}) drops much more than that of DeMarcus Cousins (Line \textit{DeMarcus C.}).
More interesting findings can be revealed by \textit{Delta Map}. Here, we use the statistical measure \textit{percentage difference}, following a prior study~\cite{cole2017statistics}, to quantify the difference between the lengths of two intercepted line segments.
As shown in Figures \ref{fig:teaser}b and \ref{fig:teaser}c, Line \textit{Walter L. JR.} and Line \textit{R.J. H.} (highlighted with dark blue annotations) have a percentage difference in slope and in bar height of only 8.9\% (213 and 234 places risen, respectively) due to the linear visual mapping.
In contrast, our approach (Figure \ref{fig:teaser}a-right) magnifies the percentage difference of the intercepted line segment lengths to 18.3\% (100.9 pixels versus 123.4 pixels), measured with image software.
Another pair of target players, \textit{Andrew H.} and \textit{Tyreke E.}, with a dropping PPG ranking trend (highlighted with crimson annotations), show percentage differences of 8.1\% (114 and 124 places dropped) and 19.2\% (54.8 pixels versus 67.8 pixels) for the two preceding designs and our \textit{Delta Map}{}, respectively. Both results magnify the differences of the original linear mapping by more than a factor of two, which makes them apparent enough for viewers to judge.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we present a novel visual design, \textit{Delta Map}{}, for context-aware comparison of state changes. \modify{Instead of focusing on visualizing the exact change quantities, \textit{Delta Map}{} is mainly designed to facilitate the \textit{comparison} of state changes across multiple data items via more effective interaction.
We compared \textit{Delta Map}{} with widely used, established tools (i.e., slope graphs and grouped bar charts)}. A case study on a two-season basketball dataset shows that our design can quickly filter large state changes and amplify the difference between similar state changes for accurate comparison through smooth interactions.
In future work, we plan to conduct more case studies and user studies on real datasets to further evaluate the effectiveness of \textit{Delta Map}{}. Also, it would be interesting to explore how to \textit{automatically} determine the optimal default inner circular radius for more efficient comparison of state changes.
\section{Preliminary Survey}
There are few prior studies specifically investigating the visualization of state changes.
Thus, to identify which visualizations have been applied to visualizing state changes,
we conducted a preliminary survey to determine the most commonly used visualization types for statistical change comparison.
Following the methodology used by Segel and Heer~\cite{segel2010narrative}, we first gathered figures from existing research papers that need to compare multiple state changes.
We used permutations of ``state'', ``change'', and ``comparison'' as search keywords and manually harvested the top 100 query results from Google Scholar.
Since each study may include multiple figures for state change comparison, we further split them into 156 individual figure units.
We then categorized all figure units into the five main groups (Table \ref{table:2.1.1}) introduced by Borkin et al.~\cite{borkin2013makes}.
Note that the \textit{Heatmap} category is designed to visualize changes with regard to spatial information, such as physical position, which is beyond the scope of our study and is thus excluded from further consideration.
\begin{table}[hbtp]
\begin{center}
\begin{tabular}{@{}c|c|r@{}}
\toprule
\multicolumn{2}{c|}{Category} & \multicolumn{1}{c}{Percentage} \\ \midrule
\multirow{2}{*}{Bar} & Grouped Bar Chart & 44.9\% \\
& Stacked Bar Chart & 0.6\% \\ \midrule
Line & Slope Graph & 28.2\% \\ \midrule
\multirow{2}{*}{Circle} & Pie Chart & 3.8\% \\
& Donut Chart & 0.6\% \\ \midrule
Grid \& Matrix & Heatmap & 19.9\% \\ \midrule
Points & Scatter Plot & 1.9\% \\ \bottomrule
\end{tabular}
\end{center}
\caption{Categories of figure units and respective percentages collected from related research papers.
}
\label{table:2.1.1}
\end{table}
\section{Related Work}
\label{sec:related_work}
The related work of this paper can be categorized into two groups: state change visualization
and
radial visual design.
\subsection{State Change Visualization}
According to the preliminary survey shown in Table \ref{table:2.1.1}, we decided to target grouped bar charts and slope graphs due to their dominance in our harvested data set.
The slope graph~\cite{tufte1985visual} (Figure \ref{fig:2}a) is an appropriate visual design when the nature of the task is to compare state changes across items by comparing their line slopes over time. A positive slope implies that the dependent variable increases, while a negative slope implies that it decreases.
The grouped bar chart~\cite{beniger1978quantitative} (Figure \ref{fig:2}c) is another approach to display state changes within the context of initial and final values, encoding the initial and final values by separate categorical bars within each group; it is the most common method to show state changes.
However, distractors between two target bar groups inevitably affect graphical perception when the number of items exceeds its scalability~\cite{talbot2014four, doi:10.1198/106186002317375604}.
The stacked bar chart~\cite{donnelly2009humongous} (Figure \ref{fig:2}d) was the most straightforward solution suggested when we previously interviewed domain experts; it indicates change quantities by stacking sub-bars on top of lower sub-bars that denote a context state.
However, if the data set includes data items with both rising and dropping trends, the representation may suffer from visual complexity with an increasing number of items, which would significantly affect human perception. Also, viewers cannot determine relative bar heights accurately in such unaligned bar chart variants~\cite{cleveland1987graphical}. As shown in Table \ref{table:2.1.1}, researchers rarely utilize this visualization type to compare state changes.
In this paper, the state changes are encoded by the \textit{lengths} of different line segments,
which can be perceived more accurately than \textit{height differences} (for grouped bar charts) and \textit{slopes} (for slope graphs)~\cite{cleveland1987graphical,cleveland1986experiment,cleveland1985graphical}.
Also, intuitive interactions are enabled in \textit{Delta Map}{} to support a comparison of state changes with better graphical perception.
\subsection{Radial Visual Design}
Visual representations of data that are based on circular shapes are referred to as radial visualizations~\cite{burch2014benefits}.
Draper et al.~\cite{draper2009survey} provided a comprehensive survey on radial visualization and categorized it into three visual themes: \textit{Polar Plot}, \textit{Space Filling}, and \textit{Ring Based}.
The earliest use of a radial display in statistical graphics was the pie chart, which was proposed in William Playfair's 1801 treatise, the Statistical Breviary~\cite{playfair1801statistical}.
Since then, radial visualization has become an increasingly pervasive metaphor in information visualization.
Radviz~\cite{grinstein2002information} is a typical radial visualization-based approach to cluster multidimensional data. Hacıaliefendioğlu et al.~\cite{hacialiefendiouglu2020co} developed a radial technique that allows elaborate visualization of the interplay between different violence types and subgroups. Additionally, prior studies further discussed the strengths and weaknesses of radial visualization through various methodologies~\cite{diehl2010uncovering,goldberg2011eye}.
According to the taxonomy presented by Draper et al.~\cite{draper2009survey}, \textit{Delta Map}{} belongs to the subtype \textit{Connected Ring Pattern} under \textit{Ring Based}. Accordingly, \textit{Delta Map}{} preserves the advantages of radial visualization and further extends static radial methods via flexible interactions, making it possible to compare items more accurately and effectively.
\section{Visual Design}
We describe the composition of \textit{Delta Map}{}, the approach of adjusting the radius of the inner circular axis, and the user interaction.
\subsection{Visualising an Intercept Graph}
\label{subsec:basic_method}
\textbf{Intercept Graph} uses line segments to facilitate the comparison of state changes across multiple data items.
The inner and outer circular axes are used to locate the ``initial'' and ``final'' states, respectively. Note that \textit{Delta Map}{} is not an intact dual-circle design: the left and right semi-circular axes are intrinsically separated and are used to visualize data items with dropping and rising state values, respectively.
\modifY{\textbf{Line segments} (e.g., \textit{Line AB, Line CD, Line EF} in Figure \ref{fig:4}a) are a set of lines generally drawn from the inner circular axis to the outer circular axis, which implicitly encode the change quantity of each item. The central angle between the radii representing the initial and final values is proportional to the state change, since the scales of both the inner and outer circular axes are linear. For example, suppose that there are two data items: one changes from 33 to 35 and the other from 37 to 40. Then the corresponding central angles of the Intercept Graph are in the ratio 3:2, as shown by the angles $\alpha$ and $\beta$ subtended by the line segments \textit{AB} and \textit{CD} in Figure \ref{fig:4}a. Also, following the Law of Cosines, the line segment length $c$ is determined as follows in terms of $\theta$:}
\begin{equation}
c = \sqrt{r^2+R^2-2 \cdot r \cdot R \cdot \cos{\theta}}
\label{equ:1}
\end{equation}
\modifY{where the constants $r, R$ denote the radii of the inner and outer circular axes, respectively (as shown for the line segment \textit{EF} in Figure \ref{fig:4}a), and $\theta \in [0,\pi]$ denotes the central angle subtended by the line segment. Equation \ref{equ:1} is monotonically increasing in $\theta$, which indicates that the central angle of \textit{Delta Map}{} is positively correlated with the line segment length. So, according to the two conclusions illustrated above, the line segment length is positively correlated with the change quantity.}
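As a minimal illustration (not part of the proposed tool), the following Python sketch applies Equation \ref{equ:1} to hypothetical data items; the axis radii, the value range, and the linear mapping from values to angles are placeholder assumptions chosen only for this example.
\begin{verbatim}
import math

R = 300.0                 # radius of the outer circular axis (assumed)
r = 180.0                 # radius of the inner circular axis (assumed)
V_MIN, V_MAX = 1, 321     # assumed axis range (e.g., PPG ranks)
SEMI = math.pi            # a semi-circular axis spans pi radians

def angle(value):
    """Linearly map a state value onto a semi-circular axis."""
    return SEMI * (value - V_MIN) / (V_MAX - V_MIN)

def segment_length(initial, final):
    """Length of the line segment between the initial state on the
    inner axis and the final state on the outer axis (Equation 1)."""
    theta = abs(angle(final) - angle(initial))   # central angle
    return math.sqrt(r * r + R * R - 2.0 * r * R * math.cos(theta))

# Two hypothetical items: the larger change yields the longer segment.
print(segment_length(33, 35))   # change of 2
print(segment_length(37, 40))   # change of 3
\end{verbatim}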
\textbf{Axis range} is determined by the minimum and maximum of the ``initial'' and ``final'' states of all the data items.
\modify{
With such a setting, \textit{Delta Map}{} can have more space to highlight the state changes, facilitating an easy comparison of different state changes.
As shown in Figure \ref{fig:3}c, both the left and right parts of \textit{Delta Map}{} have a fixed radius of outer circular axis and adjustable radius for the inner circular axis.
}
\begin{figure}[t]
\centering
\begin{mdframed}
\includegraphics[width=\columnwidth]{figures/method2.pdf}
\end{mdframed}
\caption{(a) An example showing that state changes are linearly encoded by central angles. (b) Analytic geometry diagram of \textit{Delta Map}{} for the calculation of the radius of inner circular axis.
}
\label{fig:4}
\end{figure}
\begin{figure}[t]
\centering
\begin{mdframed}
\includegraphics[width=\columnwidth]{figures/variants.pdf}
\end{mdframed}
\caption{Alternative designs of \textit{Delta Map}. (a) A draft with lines in the same semi-circular axis. (b) Extending (a) by introducing the inner circular axis for item filtering. (c)
The final visual design.
}
\label{fig:3}
\end{figure}
\textbf{Residue-items} are the remaining data items indicated by the line segments that intersect the inner circular axis, as shown by the line segments with a bold portion in Figure \ref{fig:3}c.
The set of \textit{residue-items} varies with the adjustment of the radius of the inner circular axis, which serves as a filter that keeps only the items with a relatively large change. \modify{More specifically, the smaller the inner circular axis, the fewer \textit{residue-items}; otherwise, more data items with relatively small state changes are also kept.}
\textbf{Alternative designs:}
Before arriving at the current design, we also considered several alternative designs (Figures \ref{fig:3}a and b).
The design in Figure \ref{fig:3}a can visualize the initial and final states of multiple data items, but it cannot support interactively filtering data items with large changes.
The design in Figure \ref{fig:3}b enables interactive filtering of \textit{residue-items}, but still suffers from serious visual clutter.
\textit{Delta Map}{} is preferred,
as it mitigates the visual clutter by plotting increasing and decreasing data items in the two separate semi-circular axes.
\subsection{Radius of the Inner Circular Axis}
\label{subsection:4.2}
\modify{
As the radius of the inner circular axis decreases, the data items with smaller state changes are excluded from the residue items first, i.e., \textit{the state changes of all the \textit{residue-items} are always larger than those of the items excluded from the \textit{residue-items}}.
Figure \ref{fig:4}b provides an intuitive illustration of this.
As introduced in Section \ref{subsec:basic_method}, the line segment length is positively correlated with the change quantity.
Suppose we decrease the radius of the inner circular axis until the axis is tangent to \textit{Line} $c_{k}$, which corresponds to the data item with the $k$-th largest state change.
Then, \textit{Line} $c_{k-1}$ (representing the $(k-1)$-th largest state change) is always included in the \textit{residue-items}, while \textit{Line} $c_{k+1}$ (representing the $(k+1)$-th largest state change) is already excluded from the \textit{residue-items}.
}
\modifY{
Given the above properties of the radius of the inner circular axis, users can interactively adjust this radius to focus on the data items with larger state changes.
Also, we provide an automated way to help users quickly filter the data items with the top-$k$ state changes by automatically determining the corresponding radius of the inner axis.
As shown in Figure \ref{fig:4}b,
the corresponding radius of inner circular axis $r$ can be calculated as follows:
\begin{equation}
r = R \cdot \cos (\vert \Phi - \phi \vert)
\label{equ:2}
\end{equation}
where $\Phi,\phi \in [0, 2\pi]$ denote the angles between the vertical separating line \textit{MN} and the corresponding radii indicating the initial and final states of the data item with the $k$-th largest state change.
}
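Under the same placeholder assumptions as in the earlier sketch, the following Python code illustrates how Equation \ref{equ:2} could be used to set the inner radius for top-$k$ filtering automatically; the items and $k$ are hypothetical, and the snippet presumes that the angular separation of the $k$-th item stays below $\pi/2$ so that the computed radius remains positive.
\begin{verbatim}
import math

R = 300.0                 # outer axis radius (assumed)
V_MIN, V_MAX = 1, 321     # assumed axis range
SEMI = math.pi            # angular extent of a semi-circular axis

def angle_from_MN(value):
    """Angle between the separating line MN and the radius of a value."""
    return SEMI * (value - V_MIN) / (V_MAX - V_MIN)

def inner_radius_for_top_k(items, k):
    """Radius r of the inner circular axis that keeps (approximately)
    the items with the top-k largest state changes as residue-items,
    by applying Equation 2 to the k-th largest change."""
    ranked = sorted(items, key=lambda it: abs(it[1] - it[0]), reverse=True)
    initial, final = ranked[k - 1]            # k-th largest change
    return R * math.cos(abs(angle_from_MN(initial) - angle_from_MN(final)))

items = [(33, 35), (37, 40), (150, 40), (5, 6), (200, 260)]
print(inner_radius_for_top_k(items, k=2))
\end{verbatim}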
\subsection{User Interaction}
The user interaction extends \textit{Delta Map}{} beyond a static radial visualization. Specifically, two features, \textit{large change accentuation} and \textit{close change magnification}, are proposed to support more advanced analysis beyond the basic plotting of change quantities.
\textbf{Large change accentuation} allows quick filtering of the data items with large state changes of user interest. For example, by shrinking the radius of the inner circular axis, items with larger change quantities are more likely to be retained as \textit{residue-items} (the flow is shown from Figure \ref{fig:3}a to Figure \ref{fig:3}b). Conversely, all data items become \textit{residue-items} when the radius of the inner circular axis is equal to that of the outer circular axis. This feature performs well with an increasing number of data items.
\textbf{Close change magnification} enhances graphical perception in state change comparison by interactively amplifying the difference between similar change quantities (as shown by the pairwise items highlighted in dark blue and crimson in Figure \ref{fig:teaser}). By shrinking the inner circular axis inward, the ratio of the lengths of pairwise line segments is magnified, which makes the comparison of relative state changes more effective.
\section{Introduction}
\label{sec:introduction}
The \textit{RoboCup 3D Soccer Simulation} environment provides a dynamic, real-time, complex, adversarial, and stochastic
multi-agent environment for simulated agents. The simulated agents formalize their
goals in two layers: \begin{inparaenum}\item the physical layer, where controls related to walking,
kicking, etc. are conducted; and \item the decision layer, where high-level actions are taken to produce emergent behaviors. \end{inparaenum}
In this paper, we investigate a mechanism, suitable for the decision layer, that uses recently introduced Off-Policy Gradient
Descent algorithms in Reinforcement Learning (RL), which provide learnable knowledge representations,
to learn about \textit{a dynamic role assignment function}.
In order to learn about an effective dynamic role assignment function, the agents need to
consider the dynamics of agent-environment interactions. We consider these interactions as the
agent's knowledge. If this knowledge is represented in a formalized form
(e.g., first-order predicate logic), an agent could infer many aspects of its interactions
consistent with that knowledge. Knowledge representational forms show different degrees of
computational complexity and expressiveness \cite{DBLP:conf/atal/SuttonMDDPWP11}; the
computational requirements increase with the expressiveness of the representational
form. Therefore, we need to identify and commit to a representational form that is scalable for
on-line learning while preserving expressivity. A \textit{human} soccer player knows a lot
about the game before (s)he enters the field, and this prior knowledge influences the
outcome of the game to a great extent. In addition, human soccer players dynamically change their
knowledge during games in order to achieve maximum rewards. Therefore, the knowledge of the human
soccer player is to a certain extent either \textit{predictive} or \textit{goal-oriented}. Can a
\textit{robotic} soccer player collect and maintain predictive and goal-oriented knowledge? This is a challenging problem for agents with time constraints
and limited computational resources.
We learn the role assignment function using a framework that is developed based on the concepts of
Horde, the real-time learning methodology, to express knowledge using General Value
Functions (GVFs) \cite{DBLP:conf/atal/SuttonMDDPWP11}. Similar to Horde's sub-agents, the agents in
a team are treated as independent RL sub-agents, but the agents take actions based on their beliefs about the
world model. The agents may have different world models due to noisy perceptions and communication
delays. GVFs are formulated within the RL framework. They are predictions or off-policy controls
that are answers to questions. For example, in order to make a prediction, a question must be asked
of the form ``If I move in this formation, would I be in a position to score a goal?'', or ``What set
of actions do I need to take to block the progress of the opponent agent number 3?''. The question
defines what to learn. Thus, the problem of prediction or control can be addressed by learning value functions. An
agent obtains its knowledge from information communicated back and forth between the agents and
the agent-environment interaction experiences.
There are primarily two algorithms to learn about the GVFs, and these algorithms are based on Off-Policy Gradient Temporal Difference (OP-GTD) learning: \begin{inparaenum}
\item with action-value methods,
a prediction question uses the GQ($\lambda$) algorithm \cite{Maei_Sutton_2010}, and a control or goal-oriented question
uses the Greedy-GQ($\lambda$) algorithm \cite{DBLP:conf/icml/MaeiSBS10}. These algorithms learn deterministic target policies, and the control algorithm finds the greedy action with respect to the action-value function; and
\item with policy-gradient methods, a goal-oriented question can be answered using the Off-Policy Actor-Critic algorithm \cite{DBLP:journals/corr/abs-1205-4839}, with an extended state-value function, GTD($\lambda$) \cite{MaeiHRPhdThesis2011}, for GVFs. Policy-gradient methods are favorable for problems with stochastic optimal policies, adversarial environments, and large action spaces. \end{inparaenum}
The OP-GTD algorithms possess a number of
properties that are desirable for on-line learning within the RoboCup 3D Soccer Simulation environment: \begin{inparaenum}
\item off-policy updates;
\item linear function approximation;
\item no restrictions on the features used;
\item temporal-difference learning;
\item on-line and incremental;
\item linear in memory and per-time-step computation costs; and
\item convergent to a local optimum or equilibrium point \cite{DBLP:conf/nips/SuttonSM08,DBLP:conf/icml/MaeiSBS10}.
\end{inparaenum}
In this paper, we present a methodology and an implementation to learn about a dynamic role assignment
function considering the dynamics of agent-environment interactions based on
GVFs. The agents ask questions, and approximate value functions answer those questions. The
agents independently learn about the role assignment functions in the presence of an adversarial team.
Based on the interactions, the agents may have to change their roles in order to continue in the
formation and to maximize rewards. There is a finite number of roles that an agent can commit to, and
the GVFs learn about the role assignment function. We have conducted all our experiments in the RoboCup 3D
Soccer Simulation League Environment. It is based on the general purpose multi-agent simulator
SimSpark\footnote{\url{http://svn.code.sf.net/p/simspark/svn/trunk/}}. The robot agents in the simulation are modeled
based on the Aldebaran NAO\footnote{\url{http://www.aldebaran-robotics.com/}} robots. Each robot has
22 degrees of freedom. The agents communicate with the server through message passing and each agent
is equipped with noise-free joint perceptors and effectors. In addition, each agent has a
noisy restricted vision cone of $120^\circ$. Every simulation cycle is limited to $20~ms$, where
agents perceive noise-free angular measurements of each joint and actuate the necessary
joints by sending torque values to the simulation server. The vision information from the server is
available every third cycle ($60~ms$), which provides spherical coordinates of the perceived
objects. The agents also have the option of communicating with each other every other simulation
cycle ($40~ms$) by broadcasting a $20$-byte message. The simulation league competitions
are currently conducted with 11 robots on each side (22 total).
The remainder of the paper is organized as follows: In Section \ref{sec:RelatedWork}, we briefly
discuss knowledge representation forms and existing role assignment formalisms. In
Section \ref{sec:LearnableknowledgeRepresentationForRoboticSoccer}, we introduce GVFs within the
context of robotic soccer. In
Section \ref{sec:DynamicRoleAssignment}, we formalize our mechanisms of dynamic role assignment
functions within GVFs. In Section \ref{sec:GVFQandA}, we identify the question and answer functions to represent GVFs, and Section \ref{sec:Experiments} presents the experiment results and the discussion. Finally, Section
\ref{sec:ConclusionAndFutureWork} contains
concluding remarks and future work.
\section{Related Work}
\label{sec:RelatedWork}
One goal of multi-agent systems research is the investigation of the prospects of efficient cooperation among a set of agents in
real-time environments. In our research, we focus on the cooperation of a set of agents in a
real-time robotic soccer simulation environment, where the agents learn about an optimal or a
near-optimal role assignment function within a given formation using GVFs. This subtask is
particularly challenging
compared to other simulation leagues considering the limitations of the environment, i.e. the limited locomotion capabilities, limited communication bandwidth, or crowd management rules. The role assignment is a part of the hierarchical machine
learning paradigm \cite{AIJ99,Stone99layeredlearning}, where a formation defines the role space.
Homogeneous agents can change roles flexibly within a formation to maximize a given reward function.
RL framework offerers a set of tools to design sophisticated and hard-to-engineer
behaviors in many different robotic domains (e.g., \cite{Bagnell_2013_7451}). Within the domain of \textit{robotic soccer}, RL has been successfully applied in learning the keep-away subtask in the RoboCup 2D \cite{AB05} and 3D \cite{Andreas2011}
Soccer Simulation Leagues. Also, in other RoboCup leagues, such as the Middle Size League, RL
has been applied successfully to acquire competitive behaviors \cite{Gabel06bridgingthe}. One
of the noticeable impact on RL is reported by the Brainstormers team, the RoboCup 2D
Simulation League team, on learning different subtasks \cite{DBLP:conf/cig/RiedmillerG07}. A comprehensive analysis of a general batch RL
framework for learning challenging and complex behaviors in robot soccer is reported in
\cite{DBLP:journals/arobots/RiedmillerGHL09}. Despite convergence guarantees, Q($\lambda$)
\cite{sutton98a} with linear function approximation has been used in role assignment in robot soccer
\cite{Kose_2004} and faster learning is observed with the introduction of heuristically accelerated
methods \cite{DBLP:conf/epia/GurzoniTB11}. The dynamic role allocation framework based on dynamic
programming is described in \cite{AAMAS12-MacAlpine} for real-time soccer environments. The role
assignment with this method is tightly coupled with the agent's low-level abilities and does not take
the opponents into consideration. On the other hand, the proposed framework uses the knowledge of the opponent
positions as well as other dynamics for the role assignment function.
Sutton et al. \cite{DBLP:conf/atal/SuttonMDDPWP11} have introduced a real-time learning
architecture, Horde, for expressing knowledge using General Value Functions (GVFs). Our
research is built on Horde to ask a set of questions such that the agents assign optimal or near-optimal
roles within formations. In addition, the following studies describe methods and
components for building strategic agents: \cite{ScalingUp2012} describes a methodology to build
a cognizant robot that possesses a vast amount of situated, reversible, and expressive knowledge;
\cite{DBLP:journals/corr/abs-1112-1133} presents a methodology to ``next'' in real time, predicting
thousands of features of the world state; and \cite{conf/smc/ModayilWPS12} presents methods to predict temporally extended consequences of a robot's behaviors as general forms of knowledge. GVFs have been successfully used (e.g.,
\cite{6290309,DBLP:conf/icdl-epirob/WhiteMS12}) for switching and prediction tasks in assistive biomedical robots.
\section{Learnable knowledge representation for Robotic Soccer}
\label{sec:LearnableknowledgeRepresentationForRoboticSoccer}
Recently, within the context of the RL framework \cite{sutton98a}, a
knowledge representation language has been introduced that is expressive and learnable from sensorimotor
data. This representation is directly usable for robotic soccer as agent-environment interactions
are conducted through perceptors and actuators. In this approach, knowledge is represented as a large
number of \textit{approximate value functions} each with its \begin{inparaenum} \item \textit{own policy};
\item \textit{pseudo-reward function}; \item \textit{pseudo-termination function}; and
\item \textit{pseudo-terminal-reward function}
\cite{DBLP:conf/atal/SuttonMDDPWP11}. \end{inparaenum}
In continuous state spaces, approximate value functions are learned using function approximation and
using more efficient off-policy learning algorithms. First, we briefly introduce some of the
important concepts related to GVFs. Complete information about GVFs is available in
\cite{DBLP:conf/atal/SuttonMDDPWP11,Maei_Sutton_2010,DBLP:conf/icml/MaeiSBS10,MaeiHRPhdThesis2011}.
Second, we show their direct application to simulated robotic soccer.
\subsection{Interpretation}
The interpretation of the approximate value function as a knowledge representation language grounded
on information from perceptors and actuators is defined as:
\begin{definition} \label{def:interpretation}
The knowledge expressed as an \textit{approximate value function} is \textit{true or accurate}, if its numerical
values match those of the mathematically defined \textit{value function} it is approximating.
\end{definition}
Therefore, according to the Definition (\ref{def:interpretation}), a value function asks a \textit{question}, and an approximate value function is the
\textit{answer} to that question. Based
on prior interpretation, the standard RL framework extends to represent learnable knowledge as
follows. In the standard RL framework \cite{sutton98a}, let the agent and the world interact in discrete
time steps $t=1,2,3,\ldots$. The agent senses the state at each time step $S_t \in \mathcal{S}$, and
selects an action $A_t \in \mathcal{A}$. One time step later the agent receives
a scalar reward $R_{t+1} \in \mathbb{R}$, and senses the state $S_{t+1} \in \mathcal{S}$. The rewards are
generated according to the \textit{reward function} $r:\mathcal{S}\rightarrow \mathbb{R}$. The
objective of the standard RL framework is to learn the stochastic action-selection \textit{policy}
$\pi: \mathcal{S} \times \mathcal{A} \rightarrow [0,1]$, that gives the probability of selecting each
action in each state, $\pi(s, a) = \pi(a|s) = \mathcal{P}(A_t = a|S_t = s)$, such that the agent maximizes
rewards summed over the time steps. The standard RL framework extends to include a
\textit{terminal-reward-function}, $z:\mathcal{S} \rightarrow \mathbb{R}$, where $z(s)$ is the terminal
reward received when the termination occurs in state $s$. In the RL framework, $\gamma \in [0,1)$ is used to
discount delayed rewards. Another interpretation of the discounting factor is a constant probability of
$1-\gamma$ that the process terminates and arrives in a state with zero terminal reward. This factor is generalized to
a \textit{termination function} $\gamma:\mathcal{S} \rightarrow [0,1]$, where $1- \gamma(s)$ is the
probability of termination at state $s$, and a terminal reward $z(s)$ is generated.
\subsection{Off-Policy Action-Value Methods for GVFs}
\label{subsec:offpolicyGVFs}
The first method to learn about GVFs, from off-policy experiences, is to use action-value functions. Let $G_t$ be the complete return from state $S_t$ at time $t$, then the sum of the rewards (transient plus
terminal) until termination at time $T$ is:
\[G_t = \sum_{k=t+1}^T r(S_{k}) + z(S_T).\]
The action-value function is:
\[Q^\pi(s,a) = \mathbb{E}(G_t|S_t = s, A_t = a, A_{t+1:T-1}\sim \pi, T \sim \gamma),\]
where, $Q^\pi:\mathcal{S}\times
\mathcal{A}\rightarrow \mathbb{R}$. This is the expected return for a trajectory started from state $s$,
and action $a$, and selecting actions according to the policy $\pi$, until termination occurs
with $\gamma$. We approximate the action-value function with $\hat{Q}:\mathcal{S}\times
\mathcal{A}\rightarrow \mathbb{R}$. Therefore, the action-value function is a precise grounded
question, while the approximate action-value function offers the numerical answer. The complete algorithm for Greedy-GQ($\lambda$) with linear function approximation for GVFs learning is as shown in Algorithm (\ref{alg:gredyGqLambda}).
\begin{algorithm}
\begin{algorithmic}[1]
\State \textbf{Initialize} $w_0$ to $0$, and $\theta_0$ arbitrary.
\State \textbf{Choose} proper (small) positive values for $\alpha_\theta$, $\alpha_w$, and set
values for $\gamma(.) \in (0,1]$, $\lambda(.) \in [0, 1]$.
\Repeat
\State \textbf{Initialize} $e=0$.
\State \textbf{Take} $A_t$ from $S_t$ according to $\pi_b$, and arrive at $S_{t+1}$.
\State \textbf{Observe} sample, ($S_t, A_t,r(S_{t+1}),z(S_{t+1}), S_{t+1},$) at time step $t$ (with their
corresponding state-action feature vectors), where $\hat{\phi}_{t+1} = \phi(S_{t+1}, A_{t+1}^*),
A_{t+1}^* = \operatornamewithlimits{argmax}_b {\bf \theta}_t^\mathrm{T} \phi(S_{t+1}, b)$.
\For{each observed sample}
\State $\delta_t \leftarrow r(S_{t+1}) + (1-\gamma(S_{t+1}))z(S_{t+1}) +
\gamma(S_{t+1})
\theta_t^\mathrm{T}
\hat{\phi}_{t+1} - \theta_t^\mathrm{T} \phi_{t}$.
\State \textbf{If} {$A_t = A_t^*$}, \textbf{then} $\rho_t \leftarrow
\frac{1}{\pi_b(A_t^*|S_t)}$; \textbf{otherwise} $\rho_t \leftarrow 0$.
\State $e_t \leftarrow I_t \phi_t + \gamma(S_t)\lambda(S_t)\rho_t e_{t-1}$.
\State $\theta_{t+1} \leftarrow \theta_t + \alpha_\theta[\delta_t e_t - \gamma(S_{t+1})(1
-
\lambda(S_{t+1}))(w_t^\mathrm{T} e_t) \hat{\phi}_{t+1}]$.
\State $w_{t+1} \leftarrow w_t + \alpha_w [\delta_t e_t - (w_t^\mathrm{T} \phi_t)
\phi_t]$.
\EndFor
\Until{ each episode.}
\end{algorithmic}
\caption{Greedy-GQ($\lambda$) with linear function approximation for GVFs learning \cite{MaeiHRPhdThesis2011}.}
\label{alg:gredyGqLambda}
\end{algorithm}
The GVFs are defined over four functions: $\pi, \gamma, r,\mbox{and }z$. The functions $r\mbox{ and
}z$ act as pseudo-reward and pseudo-terminal-reward functions respectively. Function $\gamma$ is also in
pseudo form. However, the $\gamma$ function is more substantive than the reward functions, as termination interrupts
the normal flow of state transitions. In pseudo termination, the standard termination is omitted. In
robotic soccer, the base problem can be defined as the time until a goal is scored by either the home or the opponent team. We can consider that a pseudo-termination has occurred when the striker is changed.
The GVF with respect to a state-action function is defined as: \[Q^{\pi,\gamma, r,z}(s,a)= \mathbb{E}(G_t|S_t =s, A_t =a,
A_{t+1:T-1}\sim \pi, T
\sim \gamma).\]
The four functions, $\pi, \gamma, r,\mbox{and }z$, are the \textit{question functions}
to GVFs, which in turn define the general value function's semantics. The RL agent learns an approximate
action-value function, $\hat{Q}$, using the four auxiliary functions
$\pi,\gamma, r$ and $z$. We assume that the state space is continuous and the action space is
discrete. We approximate the action-value function using a linear function approximator. We use a
feature extractor $\phi: \mathcal{S} \times \mathcal{A} \rightarrow \{0,1\}^N, N \in \mathbb{N}$, built on tile coding
\cite{sutton98a} to generate feature vectors from state variables and actions. This is a sparse
vector with a constant number of ``1'' features and hence a constant norm. In addition, tile coding has
the key advantage of enabling real-time learning and computationally efficient algorithms for
learning approximate value functions. In linear function approximation, there exists a weight vector,
$\theta \in \mathbb{R}^N, N \in \mathbb{N}$, to be learned. Therefore, the approximate GVFs are defined as:
\[\hat{Q}(s,a,\theta)=\theta^\mathrm{T}\phi(s,a),\] such that, $\hat{Q}:\mathcal{S} \times \mathcal{A}
\times \mathbb{R}^N \rightarrow \mathbb{R}$. Weights are learned using the gradient-descent temporal-difference
Algorithm (\ref{alg:gredyGqLambda}) \cite{MaeiHRPhdThesis2011}. The algorithm learns stably and efficiently with linear
function approximation from \textit{off-policy} experiences. Off-policy experiences are generated
by a \textit{behavior policy}, $\pi_b$, that is different from the policy being learned about, called
the \textit{target policy}, $\pi$. Therefore, one can learn multiple target policies from the same
behavior policy.
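For illustration, the following Python sketch shows how such a hashed tile-coding feature extractor and the linear estimate $\theta^\mathrm{T}\phi(s,a)$ could look. The scaling of the state variables to $[0,1]$, the use of Python's built-in \texttt{hash}, and the helper names are assumptions of this sketch rather than the actual implementation; the tiling and hashing parameters anticipate the answer functions given in Section (\ref{sec:GVFQandA}).
\begin{verbatim}
import numpy as np

def tile_indices(state, action, n_tilings=16, tiles_per_dim=16,
                 hash_size=10**6 + 1):
    """Indices of the active binary features: one tile per tiling plus a bias."""
    state = np.asarray(state, dtype=float)   # assumed scaled to [0, 1]
    active = []
    for t in range(n_tilings):
        offset = t / float(n_tilings)        # each tiling is shifted slightly
        coords = np.floor((state + offset / tiles_per_dim) * tiles_per_dim)
        key = (t, action) + tuple(coords.astype(int))
        active.append(hash(key) % (hash_size - 1))
    active.append(hash_size - 1)             # bias feature, always active
    return active

def q_value(theta, state, action):
    """Linear estimate theta^T phi(s, a) over the sparse active features."""
    return sum(theta[i] for i in tile_indices(state, action))
\end{verbatim}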
\subsection{Off-Policy Policy Gradient Methods for GVFs}
The second method to learn GVFs uses off-policy policy gradient methods with an actor-critic architecture based on a state-value function. The state-value GVF is defined as:
\[
V^{\pi,\gamma, r,z}(s) = \mathbb{E}(G_t|S_t =s,A_{t:T-1}\sim \pi, T
\sim \gamma),
\]
where $V^{\pi,\gamma, r,z}(s)$ is the true state-value function, and the approximate GVF is defined as:
\[\hat{V}(s,v)=v^\mathrm{T}\phi(s),\]
where the functions $\pi,\gamma, r, \mbox{ and }z$ are defined as in Subsection (\ref{subsec:offpolicyGVFs}). Since the target policy $\pi$ is stochastic over a discrete action set, we use a Gibbs distribution of the form:
\[
\pi(a | s) = \frac{e^{u^\mathrm{T} \phi(s, a)}}{\sum_{b}e^{u^\mathrm{T} \phi(s, b)}},
\]
where $\phi(s,a)$ are state-action features for state $s$ and action $a$, which are in general unrelated to the state features $\phi(s)$ used in the state-value function approximation. $u \in \mathbb{R}^{N_u}, N_u \in \mathbb{N}$, is a weight vector, which is modified by the actor to learn the stochastic target policy. The log-gradient of the policy at state $s$ and action $a$ is:
\[
\frac{\nabla_u \pi(a | s)}{\pi(a | s)} = \phi(s,a) - \sum_b \pi(b|s) \phi(s,b).
\]
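These two expressions translate directly into code. The following sketch assumes a helper \texttt{phi\_sa} that returns a dense state-action feature vector; this helper and the dense representation are illustrative assumptions.
\begin{verbatim}
import numpy as np

def gibbs_probs(u, phi_sa, state, actions):
    """pi(a|s) proportional to exp(u^T phi(s,a)) over the discrete actions."""
    prefs = np.array([u.dot(phi_sa(state, a)) for a in actions])
    prefs -= prefs.max()                  # numerical stabilisation only
    e = np.exp(prefs)
    return e / e.sum()

def log_gradient(u, phi_sa, state, actions, a):
    """grad_u log pi(a|s) = phi(s,a) - sum_b pi(b|s) phi(s,b)."""
    probs = gibbs_probs(u, phi_sa, state, actions)
    expected = sum(p * phi_sa(state, b) for p, b in zip(probs, actions))
    return phi_sa(state, a) - expected
\end{verbatim}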
The complete algorithm for Off-PAC with linear function approximation for GVFs learning is shown in Algorithm (\ref{alg:offPACAlgorithm}).
\begin{algorithm}
\begin{algorithmic}[1]
\State \textbf{Initialize} $w_0$ to $0$, and $v_0$ and $u_0$ arbitrary.
\State \textbf{Choose} proper (small) positive values for $\alpha_v$, $\alpha_w$, $\alpha_u$, and set
values for $\gamma(.) \in (0,1]$, $\lambda(.) \in [0, 1]$.
\Repeat
\State \textbf{Initialize} $e^v=0, \mbox{and } e^u = 0$.
\State \textbf{Take} $A_t$ from $S_t$ according to $\pi_b$, and arrive at $S_{t+1}$.
\State \textbf{Observe} sample, ($S_t, A_t,r(S_{t+1}),z(S_{t+1}), S_{t+1}$) at time step $t$ (with their
corresponding state ($\phi_t, \phi_{t+1}$) feature vectors, where $\phi_t = \phi(S_t)$).
\For{each observed sample}
\State $\delta_t \leftarrow r(S_{t+1}) + (1-\gamma(S_{t+1}))z(S_{t+1}) + \gamma(S_{t+1}) v_t^\mathrm{T} \phi_{t+1} - v_t^\mathrm{T} \phi_{t}$.
\State $\rho_t \leftarrow \frac{\pi(A_t |S_t)}{\pi_b(A_t|S_t)}$.
\State Update the critic (GTD($\lambda$) algorithm for GVFs).
\State \hspace{5mm} $e^v_t \leftarrow \rho_t(\phi_t + \gamma(S_t)\lambda(S_t)e^v_{t-1})$.
\State \hspace{5mm} $v_{t+1} \leftarrow v_t + \alpha_v[\delta_t e^v_t - \gamma(S_{t+1})(1
-
\lambda(S_{t+1}))({e^v_t}^\mathrm{T} w_t) \phi_{t+1}]$.
\State \hspace{5mm} $w_{t+1} \leftarrow w_t + \alpha_w [\delta_t e^v_t - (w_t^\mathrm{T} \phi_t)\phi_t]$.
\State Update the actor.
\State \hspace{5mm} $e^u_t \leftarrow \rho_t \left[ \frac{\nabla_u \pi (A_t | S_t)}{\pi(A_t | S_t)} + \gamma(S_t) \lambda(S_{t+1}) e^u_{t-1}\right]$.
\State \hspace{5mm} $u_{t+1} \leftarrow u_t + \alpha_u \delta_t e^u_t$.
\EndFor
\Until{all episodes are processed.}
\end{algorithmic}
\caption{Off-PAC with linear function approximation for GVFs learning \cite{MaeiHRPhdThesis2011,DBLP:journals/corr/abs-1205-4839}.}
\label{alg:offPACAlgorithm}
\end{algorithm}
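Analogously to the sketch for Algorithm (\ref{alg:gredyGqLambda}), the following Python fragment spells out one Off-PAC transition update with dense vectors. The \texttt{log\_grad} argument is the log-gradient of the Gibbs policy from the previous sketch; the dense representation and the function signature are again our own simplifications.
\begin{verbatim}
import numpy as np

def offpac_update(v, w, u, ev, eu, phi, phi_next, r, z,
                  gamma_t, gamma_next, lam_t, lam_next, rho, log_grad,
                  alpha_v, alpha_w, alpha_u):
    """One Off-PAC update: GTD(lambda) critic followed by the actor step."""
    delta = (r + (1.0 - gamma_next) * z
             + gamma_next * v.dot(phi_next) - v.dot(phi))
    # critic (GTD(lambda) for GVFs)
    ev = rho * (phi + gamma_t * lam_t * ev)
    v = v + alpha_v * (delta * ev
                       - gamma_next * (1.0 - lam_next) * ev.dot(w) * phi_next)
    w = w + alpha_w * (delta * ev - w.dot(phi) * phi)
    # actor; trace decay mirrors gamma(S_t)*lambda(S_{t+1}) as in the pseudocode
    eu = rho * (log_grad + gamma_t * lam_next * eu)
    u = u + alpha_u * delta * eu
    return v, w, u, ev, eu
\end{verbatim}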
We are interested in finding optimal policies for the dynamic role assignment, and hence we use
Algorithms (\ref{alg:gredyGqLambda}) and (\ref{alg:offPACAlgorithm}) for control purposes\footnote{We use a C++ implementation of Algorithms (\ref{alg:gredyGqLambda}) and (\ref{alg:offPACAlgorithm}) in all of our experiments. An implementation is available at \url{https://github.com/samindaa/RLLib}}. We
use linear function approximation for continuous state spaces, and discrete actions are used within
options. To summarize, the definitions of the question functions and the answer functions are given as:
\begin{definition}
The question functions are defined by:
\begin{enumerate}
\item $\pi:S_t \times A_t \rightarrow [0, 1]$ \tabto{35mm} (target policy
is greedy w.r.t. learned value function);
\item $\gamma:S_t \rightarrow [0, 1]$ \tabto{35mm} (termination function);
\item $r:S_{t+1} \rightarrow \mathbb{R}$ \tabto{35mm} (transient reward function); and
\item $z:S_{t+1} \rightarrow \mathbb{R}$ \tabto{35mm} (terminal reward function).
\end{enumerate}
\end{definition}
\begin{definition}
The answer functions are defined by:
\begin{enumerate}
\item $\pi_b: S_t \times A_t \rightarrow [0, 1]$ \tabto{35mm} (behavior policy);
\item $I_t:S_t \times A_t \rightarrow [0, 1]$ \tabto{35mm} (interest function);
\item $\phi:S_t \times A_t \rightarrow \mathbb{R}^N$ \tabto{35mm} (feature-vector function); and
\item $\lambda:S_t \rightarrow [0, 1]$ \tabto{35mm} (eligibility-trace decay-rate function).
\end{enumerate}
\end{definition}
\section{Dynamic Role Assignment}
\label{sec:DynamicRoleAssignment}
A \textit{role} is a specification of an internal or an external behavior of an agent. In our soccer
domain, roles select behaviors of agents based on different reference criteria: for example, the agent closest
to the ball becomes the striker. Given a role space, $\mathcal{R}=\{r_1, \ldots, r_n\}$, of size $n$, the
collaboration among $m \leq n$ agents, $\mathcal{A}=\{a_1, \dots, a_m\}$, is obtained through
\textit{formations}. The role space consists of active and reactive roles. For example, the striker is an
active role and the defender could be a reactive role. Given a reactive role, there is a function, $R
\mapsto T$, that maps roles to target positions, $T$, on the field. These target positions are
calculated with respect to a reference pose (e.g., ball position) and other auxiliary criteria
such as crowd management rules. A role assignment function, $R \mapsto A$, provides a
mapping from role space to agent space, while maximizing some reward function. The role assignment
function can be static or dynamic. Static role assignments often provide inferior performance in
robot soccer \cite{AAMAS12-MacAlpine}. Therefore, we learn a dynamic role assignment function
within the RL framework using off-policy control.
\begin{figure}[!h]
\centering
\includegraphics[width=.6\textwidth]{roles2}
\caption{Primary formation, \protect\cite{StoeckerV11}}
\label{fig:primaryFormation}
\end{figure}
\subsection{Target Positions with the Primary Formation}
Within our framework, an agent can choose one role among thirteen roles. These roles are part of a
primary formation, and an agent calculates the respective target positions according to its belief of
the absolute ball position and the rules imposed by the 3D soccer simulation
server. We have labeled
the role space in order to describe the behaviors associated with them. Figure
(\ref{fig:primaryFormation}) shows the target positions for the role space before the kickoff state.
The agent closest to the ball takes the striker role ({\sf SK}), which is the only active role.
Let us assume that the agent's belief of the absolute ball position is given by $(x_b,y_b)$. Forward left ({\sf FL})
and forward right ({\sf FR}) target positions are offset by $(x_b,y_b) \pm (0,2)$. The extended
forward left ({\sf EX1L}) and extended forward right ({\sf EX1R}) target positions are offset by
$(x_b,y_b) \pm (0,4)$. The stopper ({\sf ST}) position is given by $(x_b-2.0,y_b)$. The extended
middle ({\sf EX1M}) position is used as a blocking position and it is calculated based on the
closest opponent to the current agent. The other target positions, wing left ({\sf WL}), wing right
({\sf WR}), wing middle ({\sf WM}), back left ({\sf BL}), back right ({\sf BR}), and back middle
({\sf BM}) are calculated with respect to the vector from the middle of the home goal to the ball and offset
by a factor which increases close to the home goal. When the ball is within the reach of the goal keeper, the goal keeper
({\sf GK}) role is changed to the goal keeper striker ({\sf GKSK}) role. We slightly change the positions when
the ball is near the side lines, home goal, and opponent goal. These adjustments are made in order
to keep the target positions inside the field. We allow target positions to overlap. The
dynamic role assignment function may assign the same role to multiple agents during the learning period. In order to
avoid position conflicts, an offset is added; in addition, the feedback provides negative rewards for such
situations.
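For concreteness, the following sketch shows how the simple offset-based target positions described above could be computed from the believed ball position $(x_b,y_b)$. Which sign of the $\pm(0,2)$ offset corresponds to ``left'' depends on the coordinate convention, and the clipping near the field boundaries is omitted; both are assumptions of this illustration.
\begin{verbatim}
def passive_targets(x_b, y_b):
    """Offset-based target positions; y is assumed to increase to the left."""
    return {
        "FL":   (x_b, y_b + 2.0),   # forward left
        "FR":   (x_b, y_b - 2.0),   # forward right
        "EX1L": (x_b, y_b + 4.0),   # extended forward left
        "EX1R": (x_b, y_b - 4.0),   # extended forward right
        "ST":   (x_b - 2.0, y_b),   # stopper
        # EX1M depends on the closest opponent; WL, WR, WM, BL, BR, BM depend
        # on the home-goal-to-ball vector and are omitted in this sketch.
    }
\end{verbatim}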
\subsection{Roles to RL Action Mapping}
The agent closest to the ball becomes the striker, and only one agent is allowed to become the
striker. The other agents except the goalie are allowed to choose from twelve roles.
We map the available roles to discrete actions of the RL algorithm. In order to use Algorithm
\ref{alg:gredyGqLambda}, an agent must formulate a question function using a
value function, and the answer function provides the solution as an approximate value function. All
the agents formulate the same question: \textit{What is my role in this formation in order to
maximize future rewards?} All agents learn independently according to the question, while
collaboratively aiding each other to maximize their future reward. We make the assumption that the
agents do not communicate their current role. Therefore, at a specific step, multiple agents may
commit to the same role. We discourage this condition by modifying the question as \textit{What is
my role in this formation in order to maximize future rewards, while maintaining a completely
different role from all teammates in all time steps?}
\begin{figure}[!b]
\centering
\includegraphics[width=.8\textwidth]{state_var_reps}
\caption{State variable representation and the primary function. Some field lines are omitted due to clarity.}
\label{fig:featureRepresentation}
\end{figure}
\subsection{State Variables Representation}
Figure \ref{fig:featureRepresentation} shows the schematic diagram of the state variable
representation. All points and vectors in Figure \ref{fig:featureRepresentation} are defined with
respect to a global coordinate system. $h$ is the middle point of the home goal, while $o$ is the
middle point of the opponent goal. $b$ is the ball position. $\parallel\cdot\parallel$ denotes the
vector length, while $\angle pqr$ denotes the angle formed by the three points $p,~q,\mbox{ and }r$ with its
vertex at $q$. $a_i$ represents
the self-localized point of the $i=1,\ldots,11$ teammate agent. $y_i$ is some point in the direction
of the robot orientation of teammate agents. $c_j$, $j=1,\ldots,11$, represents the mid-point of the
tracked opponent agent. $x$ represents a point on a vector parallel to unit vector $e_x$. Using
these labels, we define the state variables as:
\begin{eqnarray*}
\{\parallel \vec{v}_{hb} \parallel, \parallel \vec{v}_{bo} \parallel, \angle hbo,
\{ \parallel \vec{v}_{a_ib} \parallel, \angle y_i a_i b, \angle a_i b x \}_{i=n_{start}}^{n_{end}},
\{ \parallel \vec{v}_{c_jb} \parallel, \angle c_j b x \}_{j=1}^{m_{max}}\}.
\label{eqn:stateVariables}
\end{eqnarray*}
$n_{start}$ is the teammate starting id and $n_{end}$ the ending id. $m_{max}$ is the number of
opponents considered. Angles are normalized to [$-\frac{\pi}{2}, \frac{\pi}{2}$].
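The following sketch computes these state variables with NumPy, assuming that all points are two-dimensional arrays in the global frame. The angle helper returns unsigned angles in $[0,\pi]$, so the normalization mentioned above is omitted, and anchoring the point $x$ at the ball is an assumption of this sketch.
\begin{verbatim}
import numpy as np

def angle_at(p, q, r):
    """Angle formed by the points p, q, r with its vertex at q."""
    v1, v2 = p - q, r - q
    c = v1.dot(v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.arccos(np.clip(c, -1.0, 1.0))

def state_variables(h, o, b, teammates, headings, opponents,
                    e_x=np.array([1.0, 0.0])):
    """h, o, b: home goal, opponent goal, ball; teammates/opponents: lists."""
    x = b + e_x                               # point along e_x (anchored at b)
    s = [np.linalg.norm(b - h), np.linalg.norm(o - b), angle_at(h, b, o)]
    for a_i, y_i in zip(teammates, headings):  # y_i: point along robot heading
        s += [np.linalg.norm(b - a_i), angle_at(y_i, a_i, b), angle_at(a_i, b, x)]
    for c_j in opponents:                      # tracked opponent mid-points
        s += [np.linalg.norm(b - c_j), angle_at(c_j, b, x)]
    return np.array(s)
\end{verbatim}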
\section{Question and Answer Functions}
\label{sec:GVFQandA}
There are twelve actions available in each state. We have left out the striker role from the action set.
The agent nearest to the ball becomes the striker. All agents communicate their belief to other
agents. Based on their belief, all agents calculate a cost function and assign the closest agent as
the striker. We have formulated a cost function based on relative distance to the ball, angle of the
agent, number of teammates and opponents within a region near the ball, and whether the agents are
active. In our formulation, there is a natural termination condition: scoring goals.
With respect to the striker role assignment procedure, we define a pseudo-termination condition. When
an agent becomes a striker, a pseudo-termination occurs, and the striker agent does not participate
in the learning process unless it chooses another role. We define the question and answer
functions as follows:
\subsection{GVF Definitions for State-Action Functions}
\textit{Question functions:}
\begin{enumerate}
\item $\pi=$ greedy w.r.t. $\hat{Q}$,
\item $\gamma(.)=0.8$,
\item $r(.)=$
\begin{inparaenum}
\item the change of $x$ value of the absolute ball position;
\item a small negative reward of $0.01$ for each cycle;
\item a negative reward of $5$ is given to all agents within a radius of 1.5 meters;
\end{inparaenum}
\item $z(.)=$
\begin{inparaenum}
\item $+100$ for scoring against opponent;
\item $-100$ for opponent scoring; and
\end{inparaenum}
\item $\mbox{time step}= 2$ seconds.
\end{enumerate}
\textit{Answer functions:}
\begin{enumerate}
\item $\pi_b=$ $\epsilon$-greedy w.r.t. target state-action function,
\item $\epsilon=0.05$,
\item $I_t(.)=1$,
\item $\phi(., .)=$
\begin{inparaenum}
\item we use tile coding to formulate the feature vector.
$n_{start}=2$ and $n_{end}=3,5,7$. $m_{max}=3,5,7$. Therefore, there are $18,28,38$ state variables.
\item each state variable is independently tiled with 16 tilings, each with approximately
$\frac{1}{16}$ generalization. Therefore, there are $288+1,448+1,608+1$ active tiles (i.e.,
tiles with feature 1) hashed to a binary vector of dimension $10^6+1$. The bias feature is always
active, and
\end{inparaenum}
\item $\lambda(.)=0.8$.
\end{enumerate}
Parameters:\\
\begin{inparaenum}
\tabto{6mm}\item $\parallel{\bf{\theta}}\parallel=\parallel {\bf w} \parallel = 10^6+1$;
\item $\parallel {\bf e} \parallel=2000$ (efficient trace implementation);
\tabto{6mm} \item $\alpha_\theta=\frac{0.01}{289},\frac{0.01}{449},\frac{0.01}{609}$; and
\item $\alpha_w=0.001\times \alpha_\theta$.
\end{inparaenum}
\subsection{GVF for Gradient Descent Functions}
\textit{Question functions:}
\begin{enumerate}
\item $\pi=$ Gibbs distribution,
\item $\gamma(.)=0.9$,
\item $r(.)=$
\begin{inparaenum}
\item the change of $x$ value of the absolute ball position;
\item a small negative reward of $0.01$ for each cycle;
\item a negative reward of $5$ is given to all agents within a radius of 1.5 meters;
\end{inparaenum}
\item $z(.)=$
\begin{inparaenum}
\item $+100$ for scoring against opponent;
\item $-100$ for opponent scoring; and
\end{inparaenum}
\item $\mbox{time step}= 2$ seconds.
\end{enumerate}
\textit{Answer functions:}
\begin{enumerate}
\item $\pi_b=$ the learned Gibbs distribution is used with a small perturbation. In order to provide exploration, with probability $0.01$, the Gibbs distribution is perturbed using some $\beta$ value. In our experiments, we use $\beta=0.5$. Therefore, we use the behavior policy: $\frac{e^{u^\mathrm{T} \phi(s, a) + \beta}}{\sum_{b}e^{u^\mathrm{T} \phi(s, b) + \beta}}$
\item $\phi(.)=$
\begin{inparaenum}
\item for the representation of the state-value function, we use tile coding to formulate the feature vector.
$n_{start}=2$ and $n_{end}=3,5,7$. $m_{max}=3,5,7$. Therefore, there are $18,28,38$ state variables.
\item each state variable is independently tiled with 16 tilings, each with approximately
$\frac{1}{16}$ generalization. Therefore, there are $288+1,448+1,608+1$ active tiles (i.e.,
tiles with feature 1) hashed to a binary vector of dimension $10^6+1$. The bias feature is always
set to active;
\end{inparaenum}
\item $\phi(., .)=$
\begin{inparaenum}
\item for the representation of the Gibbs distribution, we use tile coding to formulate the feature vector.
$n_{start}=2$ and $n_{end}=3,5,7$. $m_{max}=3,5,7$. Therefore, there are $18,28,38$ state variables.
\item each state variable is independently tiled with 16 tilings, each with approximately
$\frac{1}{16}$ generalization. Therefore, there are $288+1,448+1,608+1$ active tiles (i.e.,
tiles with feature 1) hashed to a binary vector of dimension $10^6+1$. The hashing also takes the given action into account. The bias feature is always set to active; and
\end{inparaenum}
\item $\lambda_{\mbox{critic}}(.)=\lambda_{\mbox{actor}}(.)=0.3$.
\end{enumerate}
Parameters:\\
\begin{inparaenum}
\tabto{6mm}\item $\parallel{\bf{u}}\parallel=10^6+1$;
\item $\parallel{\bf{\theta}}\parallel=\parallel {\bf w} \parallel = 10^6+1$;
\tabto{6mm}\item $\parallel {\bf e^v} \parallel=\parallel {\bf e^u} \parallel=2000$ (efficient trace implementation);
\tabto{6mm}\item $\alpha_v=\frac{0.01}{289},\frac{0.01}{449},\frac{0.01}{609}$;
\item $\alpha_w=0.0001\times \alpha_v$; and
\item $\alpha_u=\frac{0.001}{289},\frac{0.001}{449},\frac{0.001}{609}$.
\end{inparaenum}
\section{Experiments}
\label{sec:Experiments}
We conducted experiments against the teams {\sf Boldhearts} and {\sf MagmaOffenburg}, both semi-finalists of the RoboCup 3D Soccer Simulation competition in Mexico 2012\footnote{The published binary of the team {\sf UTAustinVilla} showed unexpected behaviors in our tests and is therefore omitted.}. We conducted knowledge learning according to the configuration given in
Section (\ref{sec:GVFQandA}). Subsection (\ref{subsec:expStateActionGVFs}) describes the performance of Algorithm (\ref{alg:gredyGqLambda}), and Subsection (\ref{subsec:expGradientDescentGVFs}) describes the performance of Algorithm (\ref{alg:offPACAlgorithm}) for this experimental setup.
\subsection{GVFs with Greedy-GQ($\lambda$)}
\label{subsec:expStateActionGVFs}
The first experiments were done using a team size of five with the RL agents playing against {\sf Boldhearts}. After 140 games, our RL agents increased the chance of winning from 30\% to 50\%. This number does not increase further in subsequent games, but after 260 games the number of lost games (initially approximately 35\%) is reduced to 15\%.
In the further experiments, we used the goal difference to compare the performance of the RL agents.
Figure (\ref{fig:goaldiffs}) shows the average goal differences that the hand-tuned role assignment and the RL agents achieve in games against {\sf Boldhearts} and {\sf MagmaOffenburg} using different team sizes. With only three agents per team, the RL agent needs just 40 games to learn a policy that outperforms the hand-coded role selection (Figure (\ref{fig:goaldiff3})).
Also with five agents per team, the learning agent is able to increase the goal difference against both opponents (Figure (\ref{fig:goaldiff5})). However, it does not reach the performance of the manually tuned role selection. Nevertheless, considering the amount of time spent on fine-tuning the hand-coded role selection, these results are promising.
Furthermore, the outcome of the games depends a lot on the underlying skills of the agents, such as walking or dribbling. These skills are noisy; thus, the results need to be averaged over many games (std. deviations in Figure (\ref{fig:goaldiffs}) are between 0.5 and 1.3).
\begin{figure}[!b]
\centering
\subfigure[Three vs three agents.] {\includegraphics[width=0.4\textwidth]{goaldiff3} \label{fig:goaldiff3} }
\subfigure[Five vs five agents.] {\includegraphics[width=0.4\textwidth]{goaldiff5} \label{fig:goaldiff5} }
\subfigure[Seven vs seven agents.] {\includegraphics[width=0.4\textwidth]{goaldiff7} \label{fig:goaldiff7} }
\caption{Goal difference in games with (a) three; (b) five; and (c) seven agents per team using Greedy-GQ($\lambda$) algorithm.}
\label{fig:goaldiffs}
\end{figure}
The results in Figure (\ref{fig:goaldiff7}) show a bigger gap between the RL and the hand-coded agent. However, using seven agents the goal difference is generally decreased, since the defense is easily improved by increasing the number of agents. Also the hand-coded role selection results in a smaller goal difference.
Furthermore, with seven agents in each team the state space is increased significantly, and 200 games do not seem to be sufficient to learn a good policy. Sometimes the RL agents reach a positive goal difference, but it stays below that of the hand-coded role selection.
In Section \ref{sec:ConclusionAndFutureWork}, we discuss some of the reasons for this inferior performance for team size seven. Even though the RL agent did not perform well when considering only the goal difference, it has learned a moderately satisfactory policy. After 180 games, the share of games won increased slightly from initially 10\% to approximately 20\%.
\subsection{GVFs with Off-PAC}
\label{subsec:expGradientDescentGVFs}
With Off-PAC, we used a similar environment to that of Subsection (\ref{subsec:expStateActionGVFs}), but with a different learning setup. Instead of learning individual policies for the teams separately, we learned a single policy for both teams. We ran the opponent teams in a round robin fashion for 200 games and repeated complete runs multiple times. The first experiments were done using a team size of three with RL agents against both teams. Figure (\ref{fig:offpac3}) shows the results of bins of 20 games averaged over two trials. After 20 games, the RL agents have learned a stable policy compared to the hand-tuned policy, but the performance of the learned policy remains bounded above by that of the hand-tuned role assignment function. The second experiments were done using a team size of five with the RL agents against the opponent teams. Figure (\ref{fig:offpac5}) shows the results of bins of 20 games averaged over three trials. After 100 games, our RL agents increased the chance of winning to 50\%. This number does not increase further in
subsequent games. As Figures (\ref{fig:offpac3}) and (\ref{fig:offpac5}) show, both the three and the five agents per team settings are able to increase the goal difference against both opponents. However, they do not reach the performance of the manually tuned role selection. Similar to Subsection (\ref{subsec:expStateActionGVFs}), considering the amount of time spent on fine-tuning the hand-coded role selection, these results are promising, and the outcome of the experiment heavily depends on the underlying skills of the agents.
\begin{figure}[!h]
\centering
\subfigure[Three vs three agents.] { \includegraphics[width=0.4\textwidth]{offpac3} \label{fig:offpac3} }
\subfigure[Five vs five agents.] { \includegraphics[width=0.4\textwidth]{offpac5} \label{fig:offpac5} }
\subfigure[Seven vs seven agents.] { \includegraphics[width=0.4\textwidth]{offpac7} \label{fig:offpac7} }
\caption{Goal difference in games with (a) three; (b) five; and (c) seven agents per team using Off-PAC algorithm.}
\label{fig:goaldiffsOffPAC}
\end{figure}
The final experiments were done using a team size of seven with the RL agents against the opponent teams. Figure (\ref{fig:offpac7}) shows the results of bins of 20 games averaged over two trials.
Similar to Subsection (\ref{subsec:expStateActionGVFs}), with seven agents per team, the results in Figure (\ref{fig:offpac7}) show a bigger gap between the RL and the hand-tuned agent. However, using seven agents the goal difference is generally decreased, since the defense is easily improved by increasing the number of agents. Also the hand-tuned role selection results in a smaller goal difference. Figure \ref{fig:offpac7} shows an increasing trend of winning games. As mentioned earlier, 200 games do not seem to be sufficient to learn a good policy. Even though the RL agents reach a positive goal difference, it stays below that of the hand-tuned role selection method. Within the given setting, the RL agents have learned a moderately satisfactory policy. Whether the learned policy is satisfactory against other teams needs to be investigated further.
The RoboCup 3D soccer simulation is an inherently dynamic and stochastic environment. There is only a small chance that a given situation (state) reoccurs across many games. Therefore, it is of paramount importance that the learning algorithms extract as much information as possible from the training examples. We use the algorithms in the on-line incremental setting, and once the experience is consumed it is discarded. Since we learn from off-policy experiences, we could also save the tuples, $(S_t, A_t, S_{t+1},r(S_{t+1}),\rho_t, z(S_{t+1}))$, and learn the policy off-line. Greedy-GQ($\lambda$) learns a deterministic greedy policy. This may not be suitable for complex and dynamic environments such as
the RoboCup 3D soccer simulation environment. The Off-PAC algorithm is designed for stochastic environments. The experiments show that this algorithm needs careful tuning of learning rates and feature selection, as evident from Figure (\ref{fig:offpac3}) after 160 games.
\section{Conclusions}
\label{sec:ConclusionAndFutureWork}
We have designed and experimented with RL agents that learn to assign roles in order to maximize expected
future rewards. All the agents in the team ask the question ``What is my role
in this formation in order to maximize future rewards, while maintaining a completely different role
from all teammates in all time steps?''. This is a goal-oriented question. We use
Greedy-GQ($\lambda$) and Off-PAC to learn experientially grounded knowledge encoded in GVFs. The dynamic role
assignment function is abstracted from all other low-level components such as the walking engine,
obstacle avoidance, and object tracking. If the role assignment function selects a passive role and
assigns a target location, the lower layers handle this request. If the lower layers fail to
comply with this request, for example by being reactive, this feedback is not provided to the role
assignment function. If this information needs to be included, it should become a part of the state
representation, and the reward signal should be modified accordingly. The target positions for
passive roles are created w.r.t. the absolute ball location and the rules imposed by the 3D soccer
simulation league. When the ball moves relatively quickly, the target locations
change more quickly. We have given positive rewards only for forward ball movements. In order to
encourage more agents to position themselves within an area close to the ball, we need to provide appropriate
rewards. These are part of reward shaping \cite{Ng:1999:PIU:645528.657613}. Reward shaping should be
handled carefully as the agents may learn sub-optimal policies not contributing to the overall goal.
The experimental evidence shows that agents learn competitive role assignment functions for
defending and attacking. We have to emphasize that the behavior policy is either $\epsilon$-greedy with a
relatively small exploration rate or slightly perturbed around the target policy. It is not a uniformly distributed policy as used in
\cite{DBLP:conf/atal/SuttonMDDPWP11}. The main reason for this decision is that when an adversary is
present with the intention of maximizing its objectives, practically the learning agent may have to
run for a long period to observe positive samples. Therefore, we have used the
off-policy Greedy-GQ($\lambda$) and Off-PAC algorithms for learning goal-oriented GVFs within an on-policy control
setting. Our hypothesis is that with improvements to the functionality of the lower layers, the role
assignment function would find better policies for the given question and answer functions. Our
next step is to let the RL agents learn policies against other RoboCup 3D soccer simulation league teams. Besides
the role assignment, we also contributed by testing off-policy learning in high-dimensional state
spaces in a competitive adversarial environment. We have conducted experiments with three, five, and seven
agents per team. The full game consists of eleven agents. The next step is to extend learning
to consider all agents, and to include methods that select informative state variables and features.
\bibliographystyle{splncs03}
\section{Introduction}
Given a set of alternatives and binary non-transitive preferences over these alternatives, how can we consistently choose the ``best'' elements from any feasible subset of alternatives? This question has been studied in detail in the literature on tournament solutions \citep[see, \eg][]{Lasl97a,Hudr09a,Mose15a,BBH15a}. The lack of transitivity is typically attributed to the independence of pairwise comparisons as they arise in sports competitions, multi-criteria decision analysis, and preference aggregation.\footnote{Due to their generality, tournament solutions have also found applications in unrelated areas such as biology \citep{Schj22a,Land51a,Slat61a,AlLe11a}.}
In particular, the pairwise majority relation of a profile of transitive individual preference relations often forms the basis of the study of tournament solutions. This is justified by a theorem due to \citet{McGa53a}, which shows that every tournament can be induced by some underlying preference profile. Many tournament solutions therefore correspond to well-known social choice functions such as Copeland's rule, Slater's rule, the Banks set, and the bipartisan set.
Over the years, many desirable properties of tournament solutions have been proposed. Some of these properties, so-called \emph{choice consistency conditions}, make no reference to the actual tournament but only relate choices from different subtournaments to each other. An important choice consistency condition, that goes under various names, requires that the choice set is invariant under the removal of unchosen alternatives. In conjunction with a dual condition on expanded feasible sets, this property is known as \emph{stability} \citep{BrHa11a}. Stability implies that choices are made in a robust and coherent way. Furthermore, stable choice functions can be rationalized by a preference relation on \emph{sets} of alternatives.
Examples of stable tournament solutions are the \emph{top cycle}, \emph{the minimal covering set}, and the \emph{bipartisan set}. The latter is elegantly defined via the support of the unique mixed maximin strategies of the zero-sum game given by the tournament's skew-adjacency matrix.
Curiously, for some tournament solutions, including the \emph{tournament equilibrium set} and the \emph{minimal extending set}, proving or disproving stability turned out to be exceedingly difficult. As a matter of fact, whether the tournament equilibrium set satisfies stability was open for more than two decades before the existence of counterexamples with about $10^{136}$ alternatives was shown using the probabilistic method.
\citet{Bran11b} systematically constructed stable tournament solutions by applying a well-defined operation to existing (non-stable) tournament solutions. \citeauthor{Bran11b}'s study was restricted to a particular class of generating tournament solutions, namely tournament solutions that can be defined via qualified subsets (such as the \emph{uncovered set} and the \emph{Banks set}). For any such generator, \citet{Bran11b} gave sufficient conditions for the resulting tournament solution to be stable.
Later, \citet{BHS15a} showed that for one particular generator, the Banks set, the sufficient conditions for stability are also necessary.
In this paper, we show that \emph{every} stable choice function is generated by a unique underlying simple choice function, which never excludes more than one alternative.
We go on to prove a general characterization of stable tournament solutions that is not restricted to generators defined via qualified subsets. As a corollary, we obtain that the sufficient conditions for generators defined via qualified subsets are also necessary. Finally, we prove a strong connection between stability and a new property of tournament solutions called \emph{local reversal symmetry}. Local reversal symmetry requires that an alternative is chosen if and only if it is unchosen when all its incident edges are inverted. This result allows us to settle two important problems in the theory of tournament solutions.
We provide the first concrete tournament---consisting of 24 alternatives---in which the tournament equilibrium set violates stability. Secondly, we prove that there is no more discriminating stable tournament solution than the bipartisan set. We also axiomatically characterize the bipartisan set by only using properties that have been previously proposed in the literature. We believe that these results serve as a strong argument in favor of the bipartisan set if choice consistency is desired.
\section{Stable Sets and Stable Choice Functions}
\label{sec:stability}
Let $U$ be a universal set of alternatives. Any finite non-empty subset of $U$ will be called a \emph{feasible set}.
Before we analyze tournament solutions in \secref{sec:tsolutions}, we first consider a more general model of choice which does not impose any structure on feasible sets.
A \emph{choice function} is a function that maps every feasible set $A$ to a non-empty subset of $A$ called the \emph{choice set} of $A$. For two choice functions $S$ and $S'$, we write $S'\subseteq S$, and say that $S'$ is a \emph{refinement} of~$S$ and~$S$ a \emph{coarsening} of~$S'$, if $S'(A)\subseteq S(A)$ for all feasible sets~$A$.
A choice function $S$ is called \emph{trivial} if $S(A)=A$ for all feasible sets $A$.
\citet{Bran11b} proposed a general method for refining a choice function~$S$ by defining minimal sets that satisfy internal and external stability criteria with respect to~$S$, similar to von-Neumann--Morgenstern stable sets in cooperative game theory.\footnote{This is a generalization of earlier work by \citet{Dutt88a}, who defined the minimal covering set as the unique minimal set that is internally and externally stable with respect to the uncovered set (see \secref{sec:tsolutions}).}
A subset of alternatives $X\subseteq A$ is called $S$-\emph{stable} within feasible set $A$ for choice function $S$ if it consists precisely of those alternatives that are chosen in the presence of all alternatives in $X$. Formally, $X$ is $S$-stable in $A$ if
\[
X=\{a\in A \colon a\in S(X\cup \{a\})\}\text.
\]
Equivalently, $X$ is $S$-stable if and only if
\begin{gather}
S(X)=X \text{, and} \tag{internal stability}\\
a \notin S(X\cup\{a\}) \text{ for all }a\in A\setminus X\text. \tag{external stability}
\end{gather}
The intuition underlying this formulation is that there should be no reason to restrict the choice set by excluding some alternative from it (internal stability) and there should be an argument against each proposal to include an outside alternative into the choice set (external stability).
An $S$-stable set is \emph{inclusion-minimal} (or simply \emph{minimal}) if it does not contain another $S$-stable set. $\widehat{S}(A)$ is defined as the union of all minimal $S$-stable sets in $A$.
$\widehat{S}$ defines a choice function whenever every feasible set admits at least one $S$-stable set. In general, however, neither the existence of $S$-stable sets nor the uniqueness of minimal $S$-stable sets is guaranteed. We say that $\widehat{S}$ is \emph{well-defined} if every feasible set admits exactly one minimal $S$-stable set. We can now define the central concept of this paper.
\begin{definition}
A choice function $S$ is \emph{stable} if $\widehat{S}$ is well-defined and $S=\widehat{S}$.
\end{definition}
Stability is connected to rationalizability and non-manipulability. In fact, every stable choice function can be rationalized via a preference relation on \emph{sets} of alternatives \citep{BrHa11a} and, in the context of social choice, stability and monotonicity imply strategyproofness with respect to Kelly's preference extension \citep{Bran11c}.
The following example illustrates the preceding definitions. Consider universe $U=\{a,b,c\}$ and choice function $S$ given by the table below (choices from singleton sets are trivial and therefore omitted).
\[
\begin{array}{ccc}
X & S(X) & \widehat{S}(X)\\\midrule
\set{a,b} & \set{a} & \set{a}\\
\set{b,c} &\set{b} & \set{b}\\
\set{a,c} &\set{a} & \set{a}\\
\set{a,b,c} &\set{a,b,c} & \set{a}\\
\end{array}
\]
The feasible set $\{a,b,c\}$ admits exactly two $S$-stable sets, $\{a,b,c\}$ itself and $\{a\}$. The latter holds because $S(\{a\})=\{a\}$ (internal stability) and $S(\{a,b\})=S(\{a,c\})=\{a\}$ (external stability).
All other feasible sets $X$ admit unique $S$-stable sets, which coincide with $S(X)$. Hence, $\widehat{S}$ is well-defined and given by the entries in the rightmost column of the table. Since $S\neq \widehat{S}$, $S$ fails to be stable. $\widehat{S}$, on the other hand, satisfies stability.
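The example can also be verified computationally. The following brute-force Python sketch represents a choice function as a table from feasible sets to choice sets (singleton choices being trivial) and enumerates all $S$-stable sets; it is exponential and purely illustrative.
\begin{verbatim}
from itertools import combinations

def subsets(A):
    A = sorted(A)
    return [frozenset(c) for r in range(1, len(A) + 1)
            for c in combinations(A, r)]

def choose(S, X):                      # singletons are chosen trivially
    return X if len(X) == 1 else S[X]

def is_stable(S, X, A):                # internal and external stability
    return (choose(S, X) == X and
            all(a not in choose(S, X | {a}) for a in A - X))

def S_hat(S, A):                       # union of all minimal S-stable sets
    stable = [X for X in subsets(A) if is_stable(S, X, A)]
    minimal = [X for X in stable if not any(Y < X for Y in stable)]
    return frozenset().union(*minimal)

# the example choice function from the table above
S = {frozenset("ab"): frozenset("a"), frozenset("bc"): frozenset("b"),
     frozenset("ac"): frozenset("a"), frozenset("abc"): frozenset("abc")}
assert S_hat(S, frozenset("abc")) == frozenset("a")
\end{verbatim}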
Choice functions are usually evaluated by checking whether they satisfy choice consistency conditions that relate choices from different feasible sets to each other.
The following two properties, $\widehat\alpha$ and $\widehat\gamma$, are set-based variants of Sen's $\alpha$ and $\gamma$ \citep{Sen71a}.
$\widehat{\alpha}$ is a rather prominent choice-theoretic condition, also known as \citeauthor{Cher54a}'s \emph{postulate~$5^*$} \citep{Cher54a}, the \emph{strong superset property} \citep{Bord79a}, \emph{outcast}~\citep{AiAl95a}, and the \emph{attention filter axiom} \citep{MNO12a}.\footnote{We refer to \citet{Monj08a} for a more thorough discussion of the origins of this condition.}
\begin{figure}[tb]
\[
\scalebox{1}{
\begin{tikzpicture}[scale=1]
\draw ( 0:9pt) node[ellipse,inner xsep=30pt,inner ysep=25pt,draw,dotted](B){} ++(220:57pt) node(){$B$};
\draw (180:9pt) node[ellipse,inner xsep=30pt,inner ysep=25pt,draw](C){} ++(-40:57pt) node(){$C$};
\draw ( 0:0pt) node[ellipse,inner xsep=17pt,inner ysep=11pt,draw](S){\mathwordbox{\scalebox{1}[1]{$S(B)$}}{}} ;
\end{tikzpicture}
}
\qquad
\scalebox{1}{
\begin{tikzpicture}[scale=1]
\draw ( 0:9pt) node[ellipse,inner xsep=30pt,inner ysep=25pt,draw](B){} ++(220:57pt) node(){$B$};
\draw (180:9pt) node[ellipse,inner xsep=30pt,inner ysep=25pt,draw,dotted](C){}++(-40:57pt) node(){$C$};
\draw ( 0:0pt) node[ellipse,inner xsep=17pt,inner ysep=11pt,draw](S){\mathwordbox{\scalebox{1}[1]{$S(C)$}}{}};
\end{tikzpicture}
}
\qquad
\scalebox{1}{
\begin{tikzpicture}[scale=1]
\draw ( 0:9pt) node[ellipse,inner xsep=30pt,inner ysep=25pt,draw,fill=white](B){} ;
\draw (180:9pt) node[ellipse,inner xsep=30pt,inner ysep=25pt,draw,fill=white](C){}++(-40:57pt) node(){$\mathwordbox[l]{B\cup C}{C}$} ;
\draw (0:0pt) node[ellipse,draw=white,fill=white,inner xsep=30pt, inner ysep=24.23pt]{};
\draw (180:9pt) node[ellipse,inner xsep=30pt,inner ysep=25pt,draw,dotted](B){} ;
\draw ( 0:9pt) node[ellipse,inner xsep=30pt,inner ysep=25pt,draw,dotted](C){} ;
\draw ( 0:0pt) node[ellipse,inner xsep=17pt,inner ysep=11pt,draw](S){\mathwordbox{\scalebox{.9}[1]{$S(B\cup C)$}}{}} ;
\end{tikzpicture}
}
\]
\caption{Visualization of stability.
A stable choice function~$S$ chooses a set from both~$B$ (left) and~$C$ (middle) if and only if it chooses the same set from~$B\cup C$ (right). The direction from the left and middle diagrams to the right diagram corresponds to $\widehat{\gamma}$ while the converse direction corresponds to $\widehat{\alpha}$.
}
\label{fig:stability-illustration}
\end{figure}
\begin{definition}
A choice function~$S$ satisfies $\widehat\alpha$ if for all feasible sets~$B$ and $C$,
\[
\tag{$\widehat \alpha$}
\text{
$S(B)\subseteq C\subseteq B$ implies $S(C)= S(B)$.
}
\]
A choice function~$S$ satisfies $\widehat\gamma$ if for all feasible sets~$B$ and $C$,
\[
\tag{$\widehat\gamma$}
\text{
$S(B)=S(C)$ implies $S(B)=S(B\cup C)$.
}
\]
\end{definition}
It has been shown that stability is equivalent to the conjunction of $\widehat\alpha$ and $\widehat\gamma$.
\begin{theorem}[\citealp{BrHa11a}]\label{thm:BrHa}
A choice function is stable if and only if it satisfies $\widehat\alpha$ and $\widehat\gamma$.
\end{theorem}
Hence, a choice function~$S$ is stable if and only if for all feasible sets~$B$, $C$, and~$X$ with $X\subseteq B\cap C$,
\[
\text{
$X=S(B)$ and $X=S(C)$
\quad if and only if \quad
$X=S(B\cup C)$.
}
\]
Stability, $\widehat{\alpha}$, and $\widehat{\gamma}$ are illustrated in \figref{fig:stability-illustration}.
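In the same brute-force spirit as the earlier sketch, the two conditions can be checked mechanically for small examples; the table below is the $\widehat{S}$ column of the three-alternative example above, and the helper names are illustrative.
\begin{verbatim}
from itertools import combinations

def subsets(A):
    A = sorted(A)
    return [frozenset(c) for r in range(1, len(A) + 1)
            for c in combinations(A, r)]

def choose(S, X):
    return X if len(X) == 1 else S[X]

def alpha_hat(S, U):                   # S(B) <= C <= B  implies  S(C) == S(B)
    return all(choose(S, C) == choose(S, B)
               for B in subsets(U) for C in subsets(B) if choose(S, B) <= C)

def gamma_hat(S, U):                   # S(B) == S(C)  implies  S(B u C) == S(B)
    return all(choose(S, B | C) == choose(S, B)
               for B in subsets(U) for C in subsets(U)
               if choose(S, B) == choose(S, C))

S_hat = {frozenset("ab"): frozenset("a"), frozenset("bc"): frozenset("b"),
         frozenset("ac"): frozenset("a"), frozenset("abc"): frozenset("a")}
assert alpha_hat(S_hat, frozenset("abc")) and gamma_hat(S_hat, frozenset("abc"))
\end{verbatim}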
For a finer analysis, we split $\widehat{\alpha}$ and $\widehat{\gamma}$ into two conditions \citep[][Remark 1]{BrHa11a}.
\begin{definition}\label{def:greek-letter-properties}
A choice function $S$ satisfies
\begin{itemize}
\item $\widehat{\alpha}_{_\subseteq}$ if for all $B,C$, it holds that $S(B)\subseteq C\subseteq B$ implies $S(C)\subseteq S(B)$,\footnote{$\widehat\alpha_{_\subseteq}$ has also been called the \emph{A\"izerman property} or the \emph{weak superset property} \citep[\eg][]{Lasl97a,Bran11b}.}
\item $\widehat{\alpha}_{_\supseteq}$ if for all $B,C$, it holds that $S(B)\subseteq C\subseteq B$ implies $S(C)\supseteq S(B)$,
\item $\widehat{\gamma}_{_\subseteq}$ if for all $B,C$, it holds that $S(B)=S(C)$ implies $S(B)\subseteq S(B\cup C)$, and
\item $\widehat{\gamma}_{_\supseteq}$ if for all $B,C$, it holds that $S(B)=S(C)$ implies $S(B)\supseteq S(B\cup C)$.
\end{itemize}
\end{definition}
\begin{figure}[tb]
\centering
\begin{tikzpicture}[node distance=3em]
\tikzstyle{pfeil}=[latex-latex, shorten >=1pt,draw]
\tikzstyle{onlytext}=[]
\node[onlytext] (SS) at (0,0) {stability};
\node[onlytext] (ahat) [below=of SS,yshift=-1em,xshift=-6em] {$\widehat\alpha$};
\node[onlytext] (ahat-incl) [below=of ahat,xshift=-3em] {$\widehat\alpha_{_\subseteq}$};
\node[onlytext] (ahat-sup) [below=of ahat,xshift=3em] {$\widehat\alpha_{_\supseteq}$};
\node[onlytext] (ghat) [below=of SS,yshift=-1em,xshift=6em] {$\widehat\gamma$};
\node[onlytext] (ghat-incl) [below=of ghat,xshift=-3em] {$\widehat\gamma_{_\subseteq}$};
\node[onlytext] (ghat-sup) [below=of ghat,xshift=3em] {$\widehat\gamma_{_\supseteq}$};
\node[onlytext,node distance=2em] (ide) [below=of ahat-sup] {idempotency};
\draw[pfeil] (SS) to[out=270,in=90] (ahat);
\draw[pfeil] (ahat) to[out=270,in=90] (ahat-incl);
\draw[pfeil] (ahat) to[out=270,in=90] (ahat-sup);
\draw[pfeil] (SS) to[out=270,in=90] (ghat);
\draw[pfeil] (ghat) to[out=270,in=90] (ghat-incl);
\draw[pfeil] (ghat) to[out=270,in=90] (ghat-sup);
\draw[pfeil,-latex] (ahat-sup) to[out=270,in=90] (ide);
\end{tikzpicture}
\caption{Logical relationships between choice-theoretic properties.}
\label{fig:stability-properties}
\end{figure}
Obviously, for any choice function $S$ we have
\begin{align*}
S \text{ satisfies }\widehat{\alpha} \quad&\text{ if and only if }\quad S \text{ satisfies }\widehat{\alpha}_{_\subseteq} \text{ and } \widehat{\alpha}_{_\supseteq} \text{, and} \\
S \text{ satisfies }\widehat{\gamma} \quad&\text{ if and only if }\quad S \text{ satisfies } \widehat{\gamma}_{_\subseteq} \text{ and } \widehat{\gamma}_{_\supseteq}.
\end{align*}
A choice function is \emph{idempotent} if the choice set is invariant under repeated application of the choice function,
\ie $S(S(A))=S(A)$ for all feasible sets~$A$.
It is easily seen that $\widehat{\alpha}_{_\supseteq}$ is stronger than idempotency since $S(S(A))\supseteq S(A)$ implies $S(S(A))=S(A)$.
\figref{fig:stability-properties} shows the logical relationships between stability and its weakenings.
\section{Generators of Stable Choice Functions}
\label{sec:generators}
We say that a choice function $S'$ \emph{generates} a stable choice function $S$ if $S=\widehat{S'}$. Understanding stable choice functions can be reduced to understanding their generators.
It turns out that important generators of stable choice functions are \emph{simple} choice functions, \ie choice functions $S'$ with $|S'(A)|\ge |A|-1$ for all $A$. In fact, every stable choice function $S$ is generated by a unique \simple choice function.
To this end, we define the \emph{\rootterm} of a choice function $S$ as
\[
\rootsym[S](A)=\begin{cases}S(A) &\mbox{if } |S(A)|=|A|-1\text{,} \\ A &\mbox{otherwise.}\end{cases}
\]
Not only does $\rootsym[S]$ generate $S$, but any choice function sandwiched between $S$ and $\rootsym[S]$ is a generator of $S$.
\begin{theorem}
\label{thm:generatorS}
Let $S$ and $S'$ be choice functions such that $S$ is stable and $S\subseteq S'\subseteq \rootsym[S]$. Then, $\widehat{S'}$ is well-defined and $\widehat{S'}=S$.
In particular, $S$ is generated by the simple choice function $\rootsym[S]$.
\end{theorem}
\begin{proof}
We first show that any $S$-stable set is also $S'$-stable. Suppose that a set $X\subseteq A$ is $S$-stable in $A$. Then $S(X)=X$, and $a\not\in S(X\cup\{a\})$ for all $a\in A\backslash X$. Since $S$ satisfies~$\widehat{\alpha}$, we have $S(X\cup\{a\})=X$ and therefore $\rootsym[S](X\cup\{a\})=X$ for all $a\in A\backslash X$. Using the inclusion relationship $S\subseteq S'\subseteq \rootsym[S]$, we find that $S'(X)=X$ and $S'(X\cup\{a\})=X$ for all $a\in A\backslash X$. Hence, $X$ is $S'$-stable in $A$.
Next, we show that every $S'$-stable set contains an $S$-stable set. Suppose that a set $X\subseteq A$ is $S'$-stable in $A$. Then $S'(X)=X$ and $a\not\in S'(X\cup\{a\})$ for all $a\in A\backslash X$. Using the relation $S\subseteq S'$, we find that $a\not\in S(X\cup\{a\})$ for all $a\in A\backslash X$. We will show that $S(X)\subseteq X$ is $S$-stable in $A$. Since $S$ satisfies $\widehat{\alpha}$, we have $S(S(X))=S(X)$ and $S(X\cup\{a\})=S(X)$ for all $a\in A\backslash X$. It remains to show that $b\not\in S(S(X)\cup\{b\})$ for all $b\in A\backslash S(X)$. If $b\in A\backslash X$, we already have that $S(X\cup\{b\})=S(X)$ and therefore $S(S(X)\cup\{b\})=S(X)$ by $\widehat{\alpha}$. Otherwise, if $b\in X\backslash S(X)$, $\widehat{\alpha}$ again implies that $S(S(X)\cup\{b\})=S(X)$.
Since $S$ is stable, for any feasible set $A$ there exists a unique minimal $S$-stable set in $A$, which is given by $S(A)=\widehat{S}(A)$. From what we have shown, this set is also $S'$-stable, and moreover any $S'$-stable set contains an $S$-stable set which in turn contains $S(A)$. Hence $S(A)$ is also the unique minimal $S'$-stable set in $A$. This implies that $\widehat{S'}$ is well-defined and $\widehat{S'}=\widehat{S}=S$.
\end{proof}
\thmref{thm:generatorS} entails that in order to understand stable choice functions, we only need to understand the circumstances under which a single alternative is discarded.\footnote{Together with \thmref{thm:ShatdirectedMSSP}, \thmref{thm:generatorS} also implies that, for any stable tournament solution $S$, $\rootsym[S]$ is a coarsest generator of $S$. When only considering generators that satisfy $\widehat{\alpha}_{_\subseteq}$, $\rootsym[S]$ is also \emph{the} coarsest generator of $S$. In addition, since simple choice functions trivially satisfy $\widehat{\alpha}_{_\subseteq}$, the two theorems imply that $\rootsym[S]$ is the unique simple choice function generating $S$.}
An important question is which simple choice functions are \rootterms of stable choice functions. It follows from the definition of \rootterm functions that any \rootterm of a stable choice function needs to satisfy $\widehat{\alpha}$. This condition is, however, not sufficient as it is easy to construct a simple choice function $S$ that satisfies $\widehat{\alpha}$ such that $\widehat{S}$ violates $\widehat{\alpha}$. Nevertheless, the theorem implies that the number of stable choice functions can be bounded by counting the number of simple choice functions that satisfy $\widehat{\alpha}$. The number of simple choice functions for a universe of size $n\ge 2$ is only $\prod_{i=2}^n (i+1)^{\binom{n}{i}}$, compared to $\prod_{i=2}^n (2^i-1)^{\binom{n}{i}}$ for arbitrary choice functions.
In order to give a complete characterization of choice functions that generate stable choice functions, we need to introduce a new property.
A choice function $S$ satisfies local $\widehat{\alpha}$ if minimal $S$-stable sets are invariant under removing outside alternatives.\footnote{It can be checked that we obtain an equivalent condition even if we require that \emph{all} outside alternatives have to be removed. When defining local $\widehat{\alpha}$ in this way, it can be interpreted as some form of transitivity of stability: stable sets of minimally stable sets are also stable within the original feasible set \citep[cf.][Lem.~3]{Bran11b}.}
\begin{definition}
A choice function $S$ satisfies \emph{local} $\widehat{\alpha}$ if for any sets $X\subseteq Y\subseteq Z$ such that $X$ is minimally $S$-stable in $Z$, we have that $X$ is also minimally $S$-stable in $Y$.
\end{definition}
Recall that a choice function $S$ satisfies $\widehat{\alpha}_{_\subseteq}$ if for any sets $A,B$ such that $S(A)\subseteq B\subseteq A$, we have $S(B)\subseteq S(A)$. In particular, every simple choice function satisfies $\widehat{\alpha}_{_\subseteq}$. We will provide a characterization of choice functions $S$ satisfying $\widehat{\alpha}_{_\subseteq}$ such that $\widehat{S}$ is stable. First we need the following (known) lemma.
\begin{lemma}[\citealp{BrHa11a}]
\label{lemma:Shatgammahat}
Let $S$ be a choice function such that $\widehat{S}$ is well-defined. Then $\widehat{S}$ satisfies $\widehat{\gamma}$.
\end{lemma}
\begin{theorem}
\label{thm:ShatdirectedMSSP}
Let $S$ be a choice function satisfying $\widehat{\alpha}_{_\subseteq}$. Then $\widehat{S}$ is stable if and only if $\widehat{S}$ is well-defined and $S$ satisfies local $\widehat{\alpha}$.
\end{theorem}
\begin{proof}
For the direction from right to left, suppose that $\widehat{S}$ is well-defined and $S$ satisfies local $\widehat{\alpha}$. Then Lemma \ref{lemma:Shatgammahat} implies that $\widehat{S}$ satisfies $\widehat{\gamma}$. Moreover, it follows directly from local $\widehat{\alpha}$ and the fact that $\widehat{S}$ is well-defined that $\widehat{S}$ satisfies $\widehat{\alpha}$. Hence, $\widehat{S}$ is stable.
For the converse direction, suppose that $\widehat{S}$ is stable. We first show that $\widehat{S}$ is well-defined.
Every feasible set $A$ contains at least one $S$-stable set because otherwise $\widehat{S}$ is not a choice function.
Next, suppose for contradiction that there exists a feasible set that contains two distinct minimal $S$-stable sets. Consider such a feasible set $A$ of minimum size, and pick any two distinct minimal $S$-stable sets in $A$, which we denote by $B$ and $C$. If $|B\backslash C|=|C\backslash B|=1$, then $\widehat{\alpha}_{_\subseteq}$ implies $S(B\cup C)=B=C$, a contradiction. Otherwise, assume without loss of generality that $|C\backslash B|\geq 2$, and pick $x,y\in C\backslash B$ with $x\neq y$. Then $A\backslash\{x\}$ contains a unique minimal $S$-stable set. As $B$ is also $S$-stable in $A\backslash\{x\}$, it follows that $\widehat{S}(A\backslash\{x\})\subseteq B$. Since $\widehat{S}$ satisfies $\widehat{\alpha}$, we have $\widehat{S}(A\backslash\{x\})=\widehat{S}(A\backslash\{x,y\})$. Similarly, we have $\widehat{S}(A\backslash\{y\})=\widehat{S}(A\backslash\{x,y\})$. But then $\widehat{\gamma}$ implies that $\widehat{S}(A)=\widehat{S}(A\backslash\{x,y\})\subseteq B$, which contradicts the fact that $C$ is a minimal $S$-stable set in $A$ and hence $C\subseteq\widehat{S}(A)$.
We now show that $S$ satisfies local $\widehat{\alpha}$. Since $\widehat{S}$ is well-defined, minimal $S$-stable sets are unique and given by $\widehat{S}$. Since $\widehat{S}$ satisfies $\widehat{\alpha}$, minimal $S$-stable sets are invariant under deleting outside alternatives. Hence, $S$ satisfies local $\widehat{\alpha}$, as desired.
\end{proof}
\begin{remark}
Theorem \ref{thm:ShatdirectedMSSP} does not hold without the condition that $S$ satisfies $\widehat{\alpha}_{_\subseteq}$. To this end, let $U=\{a,b,c\}$, $S(\{a,b,c\})=\{b\}$, and $S(X)=X$ for all other feasible sets $X$. Then both $\{a,b\}$ and $\{b,c\}$ are minimally $S$-stable in $\{a,b,c\}$, implying that $\widehat{S}$ is not well-defined. On the other hand, $\widehat{S}$ is trivial and therefore also stable. This example also shows that a generator of a stable choice function needs not be sandwiched between the choice function and its root.
\end{remark}
Combining Theorem \ref{thm:ShatdirectedMSSP} with \thmref{thm:BrHa}, we obtain the following characterization.
\begin{corollary}
Let $S$ be a choice function satisfying $\widehat{\alpha}_{_\subseteq}$. Then,
\begin{align*}
\widehat{S} \text{ is stable} &\text{ if and only if } \widehat{\widehat{S}} \text{ is well-defined and } \widehat{\widehat{S}}=\widehat{S} \\
&\text{ if and only if } \widehat{S} \text{ satisfies } \widehat{\alpha} \text{ and } \widehat{\gamma} \\
&\text{ if and only if } \widehat{S} \text{ is well-defined and } S \text{ satisfies local $\widehat{\alpha}$}.
\end{align*}
\end{corollary}
Since simple choice functions trivially satisfy $\widehat{\alpha}_{_\subseteq}$, this corollary completely characterizes which simple choice functions generate stable choice functions.
\section{Tournament Solutions}
\label{sec:tsolutions}
We now turn to the important special case of choice functions whose output depends on a binary relation.
\subsection{Preliminaries}
\label{sec:prelims}
A \emph{tournament $T$} is a pair $(A,{\succ})$, where $A$ is a feasible set and~$\succ$ is a connex and asymmetric (and thus irreflexive) binary relation on $A$, usually referred to as the \emph{dominance relation}.
Intuitively, $a\succ b$ signifies that alternative~$a$ is preferable to alternative~$b$. The dominance relation can be extended to sets of alternatives by writing $X\succ Y$ when $a\succ b$ for all $a\in X$ and $b\in Y$.
For a tournament $T=(A,\succ)$ and an alternative $a\in A$,
we denote by \[\dom(a)=\{\,x\in A\mid x \succ a\,\}\] the \emph{dominators} of~$a$
and by \[D(a)=\{\,x\in A\mid a \succ x\,\}\] the \emph{dominion} of~$a$. When varying the tournament, we will refer to $\dom_{T'}(a)$ and $D_{T'}(a)$ for some tournament $T'=(A',\succ')$.
An alternative $a$ is said to \emph{cover} another alternative $b$ if $D(b)\subseteq D(a)$.
It is said to be a \emph{Condorcet winner} if it dominates all other alternatives, and a \emph{Condorcet loser} if it is dominated by all other alternatives.
The order of a tournament $T=(A,\succ)$ is denoted by $|T|=|A|$. A tournament is \emph{regular} if the dominator set and the dominion set of each alternative are of the same size, \ie for all $a\in A$ we have $|D(a)|=|\dom(a)|$. It is easily seen that regular tournaments are always of odd order.
A \emph{tournament solution} is a function that maps a tournament to a nonempty subset of its alternatives.
We assume that tournament solutions are invariant under tournament isomorphisms. For every fixed tournament, a tournament solution yields a choice function. A tournament solution is \emph{trivial} if it returns all alternatives of every tournament.
Three common tournament solutions are the top cycle, the uncovered set, and the Banks set. For a given tournament $(A,{\succ})$, the \emph{top cycle} (\tc) is the (unique) smallest set $B\subseteq A$ such that $B\succ A\setminus B$, the \emph{uncovered set} (\uc) contains all alternatives that are not covered by another alternative, and the \emph{Banks set} (\ba) contains all alternatives that are Condorcet winners of inclusion-maximal transitive subtournaments.
For two tournament solutions $S$ and $S'$, we write $S'\subseteq S$, and say that $S'$ is a \emph{refinement} of~$S$ and~$S$ a \emph{coarsening} of~$S'$, if $S'(T)\subseteq S(T)$ for all tournaments~$T$. The following inclusions are well-known:
\[
\ba \subseteq \uc \subseteq \tc\text.
\]
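For small tournaments these solutions are easy to compute by brute force. The following Python sketch, with the dominance relation given as a set of ordered pairs, merely illustrates the definitions of \tc and \uc; \ba would additionally require enumerating maximal transitive subtournaments and is omitted.
\begin{verbatim}
from itertools import combinations

def top_cycle(A, succ):
    # smallest B with B > A \ B, found by increasing size
    for r in range(1, len(A) + 1):
        for B in combinations(sorted(A), r):
            rest = A - set(B)
            if all((b, a) in succ for b in B for a in rest):
                return set(B)

def uncovered_set(A, succ):
    def dominion(x):
        return {y for y in A if (x, y) in succ}
    # a covers b iff D(b) is a subset of D(a)
    covered = {b for a in A for b in A if a != b and dominion(b) <= dominion(a)}
    return A - covered

# a 3-cycle a > b > c > a: both solutions select all three alternatives
A = {"a", "b", "c"}
succ = {("a", "b"), ("b", "c"), ("c", "a")}
assert top_cycle(A, succ) == A and uncovered_set(A, succ) == A
\end{verbatim}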
To simplify notation, we will often identify a (sub)tournament by its set of alternatives when the dominance relation is clear from the context. For example, for a tournament solution $S$ and a subset of alternatives $X\subseteq A$ in a tournament $T=(A,\succ)$ we will write $S(X)$ for $S(T|_{X})$.
The definitions of stability and other choice consistency conditions directly carry over from choice functions to tournament solutions by requiring that the given condition should be satisfied by every choice function induced by the tournament solution and a tournament.
We additionally consider the following desirable properties of tournament solutions, all of which are standard conditions in the literature.
Monotonicity requires that a chosen alternative will still be chosen when its dominion is enlarged, while leaving everything else unchanged.
\begin{definition}
A tournament solution is \emph{monotonic} if for all $T=(A,{\succ})$, $T'=(A,{\succ'})$, $a \in A$ such that ${\succ}_{A\setminus\{a\}} = {\succ'}_{A\setminus\{a\}}$ and for all $b\in A\setminus \{a\}$, $a\succ' b$ whenever $a \succ b$, \[a\in S(T) \quad\text{implies}\quad a\in S(T')\text.\]
\end{definition}
Regularity requires that all alternatives are chosen from regular tournaments.
\begin{definition}
A tournament solution is \emph{regular} if $S(T)=A$ for all regular tournaments $T=(A,\succ)$.
\end{definition}
Even though regularity is often considered in the context of tournament solutions, it does not possess the normative appeal of other conditions.
Finally, we consider a structural invariance property that is based on components of similar alternatives and, loosely speaking, requires that a tournament solution chooses the ``best'' alternatives from the ``best'' components.
A \emph{component} is a nonempty subset of alternatives $B\subseteq A$ that bear the same relationship to any alternative not in the set, i.e., for all $a\in A\backslash B$, either $B\succ\{a\}$ or $\{a\}\succ B$. A \emph{decomposition} of $T$ is a partition of $A$ into components.
For a given tournament $\tilde{T}$, a new tournament $T$ can be constructed by replacing each alternative with a component. Let $B_1,\dots,B_k$ be pairwise disjoint sets of alternatives and consider tournaments $T_1=(B_1,\succ_1),\dots,T_k=(B_k,\succ_k)$, and $\tilde{T} = (\{1,\dots,k\}, \tilde{\succ})$. The \emph{product} of $T_1,\dots,T_k$ with respect to $\tilde{T}$, denoted by $\prod(\tilde{T},T_1,\dots,T_k)$, is the tournament $T=(A,\succ)$ such that $A=\bigcup_{i=1}^kB_i$ and for all
$b_1\in B_i,b_2\in B_j$,
\[b_1 \succ b_2 \text{ \hspace{0.1cm} if and only if \hspace{0.1cm} } i = j \text{ and } b_1\succ_i b_2, \text{ or } i \neq j \text{ and } i \mathrel{\tilde{\succ}} j.\]
Here, $\tilde{T}$ is called the \emph{summary} of $T$ with respect to the above decomposition.
\begin{definition}
A tournament solution is \emph{composition-consistent} if for all tournaments $T,T_1,\dots,T_k$ and $\tilde{T}$ such that $T=\prod(\tilde{T},T_1,\dots,T_k)$,
\[S(T)=\bigcup_{i\in S(\tilde{T})}S(T_i).\]
\end{definition}
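The product construction can be assembled mechanically from the summary and the components. The sketch below (purely illustrative, with dominance relations encoded as dictionaries over ordered pairs) mirrors the definition: within a component the component relation decides, across components the summary relation does.
\begin{verbatim}
def product_tournament(summary_beats, components):
    # summary_beats[(i, j)] is True iff i beats j in the summary tournament;
    # components[i] is a pair (alternatives_i, beats_i) describing B_i
    beats = {}
    for i, (alts_i, beats_i) in enumerate(components):
        for j, (alts_j, _) in enumerate(components):
            for b1 in alts_i:
                for b2 in alts_j:
                    if b1 == b2:
                        continue
                    beats[(b1, b2)] = (beats_i[(b1, b2)] if i == j
                                       else summary_beats[(i, j)])
    return beats
\end{verbatim}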
All of the three tournament solutions we briefly introduced above satisfy monotonicity. \tc and \uc are regular, \uc and \ba are composition-consistent, and only \tc is stable.
For more thorough treatments of tournament solutions, see \citet{Lasl97a} and \citet{BBH15a}.
\subsection{The Bipartisan set and the Tournament Equilibrium Set}
\label{sec:bpandteq}
We now define two tournament solutions that are central to this paper. The first one, the bipartisan set, generalizes the notion of a Condorcet winner to probability distributions over alternatives.
The \emph{skew-adjacency matrix} $G(T)=(g_{ab})_{a,b\in A}$ of a tournament $T$ is defined by letting
\[
g_{ab} = \begin{cases}
1 & \text{if $a\succ b$}\\
-1 & \text{if $b\succ a$}\\
0 & \text{if $a=b$.}
\end{cases}
\]
The skew-adjacency matrix can be interpreted as a symmetric zero-sum game in which there are two players, one choosing rows and the other choosing columns, and in which the matrix entries are the payoffs of the row player. \citet{LLL93b} and \citet{FiRy95a} have shown independently that every such game admits a unique mixed Nash equilibrium, which moreover is symmetric. Let $p_T\in \Delta(A)$ denote the mixed strategy played by both players in equilibrium. Then, $p_T$ is the unique probability distribution such that
\[
\sum_{a,b\in A} p_T(a)q(b)g_{ab}\ge 0 \quad\text{ for all }q\in\Delta(A)\text{.}
\]
In other words, there is no other probability distribution that is more likely to yield a better alternative than $p_T$.
\citet{LLL93b} defined the bipartisan set~$\bp(T)$ of~$T$ as the support of $p_T$.\footnote{The probability distribution $p_T$ was independently analyzed by \citet{Krew65a}, \citet{Fish84a}, \citet{FeMa92a}, and others. An axiomatic characterization in the context of social choice was recently given by \citet{Bran13a}.}
\begin{definition}
The \emph{bipartisan set} ($\bp$) of a given tournament $T=(A,\succ)$ is defined as
\[ \bp(T) = \{ a\in A \mid p_T(a)>0 \}\text{.}\]
\end{definition}
\bp satisfies stability, monotonicity, regularity, and composition-consistency. Moreover, $\bp\subseteq \uc$ and $\bp$ can be computed in polynomial time by solving a linear feasibility problem.
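To make the computational remark concrete, the sketch below (using \texttt{numpy} and \texttt{scipy}; not an implementation from the literature) solves the linear feasibility problem $p^{\top}G(T)\ge 0$, $p\in\Delta(A)$, and returns the support of the solution. Since $p_T$ is the unique distribution satisfying these constraints, any feasible point returned by the solver equals $p_T$ up to numerical tolerance.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def bipartisan_set(G, tol=1e-9):
    # G: skew-symmetric skew-adjacency matrix with entries in {-1, 0, 1}
    n = G.shape[0]
    res = linprog(
        c=np.zeros(n),                     # pure feasibility problem
        A_ub=-G.T, b_ub=np.zeros(n),       # (p^T G)_b >= 0 for every b
        A_eq=np.ones((1, n)), b_eq=[1.0],  # p is a probability distribution
        bounds=[(0, 1)] * n, method="highs")
    p = res.x
    return {a for a in range(n) if p[a] > tol}

# a 3-cycle has p_T = (1/3, 1/3, 1/3), so BP contains all alternatives
G = np.array([[0, 1, -1], [-1, 0, 1], [1, -1, 0]])
print(bipartisan_set(G))   # {0, 1, 2}
\end{verbatim}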
The next tournament solution, the tournament equilibrium set, was defined by \citet{Schw90a}.
Given a tournament $T=(A,\succ)$ and a tournament solution $S$, a nonempty subset of alternatives $X\subseteq A$ is called $S$-\emph{retentive} if $S(\dom(x))\subseteq X$ for all $x \in X$ such that $\dom(x)\neq \emptyset$.
\begin{definition}
The \emph{tournament equilibrium set} ($\teq$) of a given tournament $T=(A,\succ)$ is defined recursively as the union of all inclusion-minimal $\teq$-retentive sets in $T$.
\end{definition}
This is a proper recursive definition because the cardinality of the set of dominators of an alternative in a particular set is always smaller than the cardinality of the set itself. \bp and \teq coincide on all tournaments of order 5 or less \citep{BDS13a}.\footnote{It is open whether there are tournaments in which \bp and \teq are disjoint.}
\citet{Schw90a} showed that $\teq\subseteq \ba$ and conjectured that every tournament contains a \emph{unique} inclusion-minimal $\teq$-retentive set, which was later shown to be equivalent to $\teq$ satisfying any one of a number of desirable properties for tournament solutions including stability and monotonicity \citep{LLL93a,Houy09a,Houy09b,BBFH11a,Bran11b,BrHa11a,Bran11c}.
This conjecture was disproved by \citet{BCK+11a}, who have non-constructively shown the existence of a counterexample with about $10^{136}$ alternatives using the probabilistic method. Since it was shown that $\teq$ satisfies the above mentioned desirable properties for all tournaments that are smaller than the smallest counterexample to Schwartz's conjecture, the search for smaller counterexamples remains an important problem. In fact, the counterexample found by \citet{BCK+11a} is so large that it has no practical consequences whatsoever for $\teq$. Apart from concrete counterexamples, there is ongoing interest in why and under which circumstances $\teq$ and a related tournament solution called the \emph{minimal extending set} $\me=\widehat{\ba}$ violate stability \citep{MSY15a,BHS15a,Yang16a}.
Computing the tournament equilibrium set of a given tournament was shown to be NP-hard and consequently there does not exist an efficient algorithm for this problem unless P equals NP \citep{BFHM09a}.
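The recursive definition nonetheless yields a straightforward brute-force procedure, feasible only for very small tournaments, which is consistent with the hardness result. The sketch below (purely illustrative) enumerates retentive sets recursively; its running time is exponential in the number of alternatives.
\begin{verbatim}
from itertools import combinations

def teq(alternatives, beats):
    # beats[(a, b)] is True iff a dominates b
    cache = {}
    def dominators(x, inside):
        return frozenset(y for y in inside if y != x and beats[(y, x)])
    def solve(inside):
        if inside in cache:
            return cache[inside]
        if len(inside) == 1:
            return inside
        def retentive(X):
            for x in X:
                d = dominators(x, inside)
                if d and not solve(d) <= X:
                    return False
            return True
        minimal = []   # inclusion-minimal retentive subsets, by increasing size
        for k in range(1, len(inside) + 1):
            for X in map(frozenset, combinations(list(inside), k)):
                if not any(Y <= X for Y in minimal) and retentive(X):
                    minimal.append(X)
        cache[inside] = frozenset().union(*minimal)
        return cache[inside]
    return solve(frozenset(alternatives))
\end{verbatim}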
\section{Stable Tournament Solutions and Their Generators}
Tournament solutions comprise an important subclass of choice functions. In this section, we examine the consequences of the findings from Sections~\ref{sec:stability} and \ref{sec:generators}, in particular Theorems~\ref{thm:BrHa}, \ref{thm:generatorS}, and \ref{thm:ShatdirectedMSSP}, for tournament solutions.
Stability is a rather demanding property which is satisfied by only a few tournament solutions.
Three well-known tournament solutions that satisfy stability are the top cycle \tc, the minimal covering set \mc defined by $\mc=\widehat{\uc}$, and the bipartisan set \bp, which is a refinement of \mc.
By virtue of \thmref{thm:generatorS}, any stable tournament solution is generated by its \rootterm $\rootsym[S]$.
For example, $\rootsym[\tc]$ is a tournament solution that excludes an alternative if and only if it is the only alternative not contained in the top cycle (and hence a Condorcet loser). Similarly, one can obtain the \rootterms of other stable tournament solutions such as \mc and \bp. In some cases, the generator that is typically considered for a stable tournament solution is different from its \rootterm; for example, $\mc$ is traditionally generated by \uc, a refinement of $\rootsym[\mc]$.
Since tournament solutions are invariant under tournament isomorphisms, a simple tournament solution may only exclude an alternative $a$ if every automorphism of $T$ maps $a$ to itself.
Note that if a tournament solution $S$ is stable, $\rootsym[S]$ is different from $S$ unless $S$ is the trivial tournament solution.
It follows from \thmref{thm:BrHa} that stable tournament solutions satisfy both $\widehat{\alpha}$ and $\widehat{\gamma}$.
It can be shown that $\widehat{\alpha}$ and $\widehat{\gamma}$ are independent from each other even in the context of tournament solutions.
\begin{remark}\label{rem:alphagamma}
There are tournament solutions that satisfy only one of $\widehat{\alpha}$ and $\widehat{\gamma}$. Examples are given in Appendix~\ref{app:alphagamma}.
\end{remark}
We have shown in \thmref{thm:generatorS} that stable tournament solutions are generated by unique simple tournament solutions.
If we furthermore restrict our attention to \emph{monotonic} stable tournament solutions, the following theorem shows that we only need to consider \rootterm solutions that are monotonic.
\begin{theorem}
\label{thm:mon}
A stable tournament solution $S$ is monotonic if and only if $\rootsym[S]$ is monotonic.
\end{theorem}
\begin{proof}
First, note that monotonicity is equivalent to requiring that unchosen alternatives remain unchosen when they are weakened.
Now, for the direction from left to right, suppose that $S$ is monotonic. Let $T=(A,{\succ})$, $B=\rootsym[S](T)$, and $a\in A\setminus B$. Since $\rootsym[S]$ is simple, we have $B=A\setminus\{a\}$, and therefore $S(T)=B$. Using the fact that $S$ is stable and thus satisfies $\widehat{\alpha}$, we find that $S(T|_{B})=B$. Let $T'$ be a tournament obtained by weakening $a$ with respect to some alternative in $B$. Monotonicity of $S$ entails that $a\not\in S(T')$. Since $T|_{B}=T'|_{B}$, we have $S(T'|_{B})=B$, and $\widehat{\alpha}$ implies that $S(T')=B$ and $\rootsym[S](T')=B$ as well. This means that $a$ remains unchosen by $\rootsym[S]$ in $T'$, as desired.
The converse direction even holds for all generators of $S$ \citep[see][Prop.~5]{Bran11b}.
\end{proof}
Analogous results do \emph{not} hold for composition-consistency or regularity.
Theorem~\ref{thm:ShatdirectedMSSP} characterizes stable choice functions $\widehat{S}$ using well-definedness of $\widehat{S}$ and local $\widehat{\alpha}$ of $S$. These two properties are independent from each other (and therefore necessary for the characterization) even in the context of tournament solutions.
\begin{remark}\label{rem:localalpha}
There is a tournament solution $S$ that satisfies local $\widehat{\alpha}$, but $\widehat{S}$ violates~$\widehat{\alpha}$. There is a tournament solution $S$ for which $\widehat{S}$ is well-defined, but $\widehat{S}$ is not stable.
Examples are given in Appendix~\ref{app:localalpha}.
\end{remark}
\thmref{thm:ShatdirectedMSSP} generalizes previous statements about stable tournament solutions.
\citet{Bran11b} studied a particular class of generators defined via qualified subsets and showed the direction from right to left of \thmref{thm:ShatdirectedMSSP} for these generators \citep[][Thm.~4]{Bran11b}.\footnote{\citeauthor{Bran11b}'s proof relies on a lemma that essentially shows that the generators he considers always satisfy local $\widehat{\alpha}$.}
Later, \citet{BHS15a} proved \thmref{thm:ShatdirectedMSSP} for one particular generator \ba \citep[][Cor.~2]{BHS15a}.
\section{Local Reversal Symmetry}
In this section, we introduce a new property of tournament solutions called local reversal symmetry (\lrs).\footnote{The name of this axiom is inspired by a social choice criterion called \emph{reversal symmetry}. Reversal symmetry prescribes that a uniquely chosen alternative has to be unchosen when the preferences of all voters are reversed \citep{SaBa03a}. A stronger axiom, called \emph{ballot reversal symmetry}, which demands that the choice set is inverted when all preferences are reversed was recently introduced by \citet{DHLP+14a}.}
While intuitive by itself, \lrs is strongly connected to stability and can be leveraged to disprove that $\teq$ is stable and to prove that no refinement of $\bp$ is stable.
For a tournament $T$, let $T^a$ be the tournament whose dominance relation is \emph{locally reversed} at alternative $a$, \ie $T^a=(A,\succ^a)$ with
\[
i \succ^a j \quad \text{if and only if} \quad
(i \succ j \text{ and } a \notin \{i,j\}) \text{ or }
(j \succ i \text{ and } a \in \{i,j\}).
\]
The effect of local reversals is illustrated in \figref{fig:lrs-def}. Note that $T=\left(T^a\right)^a$ and $\left(T^a\right)^b=\left(T^b\right)^a$ for all alternatives $a$ and $b$.
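In terms of the skew-adjacency matrix from \secref{sec:bpandteq}, a local reversal at $a$ simply negates row $a$ and column $a$ of $G(T)$; the short sketch below (purely illustrative) makes this explicit, and applying it twice recovers the identity $T=(T^a)^a$.
\begin{verbatim}
import numpy as np

def local_reversal(G, a):
    # reversing all edges incident to a negates row a and column a of G;
    # the diagonal entry stays 0, and applying the map twice restores G
    G2 = G.copy()
    G2[a, :] *= -1
    G2[:, a] *= -1
    return G2
\end{verbatim}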
\begin{figure}[htb]
\centering
\begin{tikzpicture}[]
\node (a) at (0,0) {$a$};
\node (b) [right of=a] {$b$};
\node (c) [below of=b] {$c$};
\node (d) [left of=c] {$d$};
\foreach \x / \y in {a/b,a/c,b/c,b/d,c/d,d/a}{
\draw[-latex] (\x) to (\y);
};
\node (caption) [node distance = 0.5*\nd,below of=d, xshift=0.5*\nd] {$T$};
\end{tikzpicture}
\qquad\qquad
\begin{tikzpicture}[]
\node (a) at (0,0) {$a$};
\node (b) [right of=a] {$b$};
\node (c) [below of=b] {$c$};
\node (d) [left of=c] {$d$};
\foreach \x / \y in {b/a,c/a,b/c,b/d,c/d,a/d}{
\draw[-latex] (\x) to (\y);
};
\node (caption) [node distance = 0.5*\nd,below of=d, xshift=0.5*\nd] {$T^a$};
\end{tikzpicture}
\caption{Local reversals of tournament $T$ at alternative $a$ result in $T^a$. $\bp(T)=\teq(T)=\{a,b,d\}$ and $\bp(T^a)=\teq(T^a)=\{b\}$.}
\label{fig:lrs-def}
\end{figure}
\begin{definition}
A tournament solution $S$ satisfies \emph{local reversal symmetry} (\lrs) if for all tournaments $T$ and alternatives $a$,
\[
a\in S(T) \text{ if and only if } a \notin S(T^a).
\]
\end{definition}
\lrs can be naturally split into two properties, \lrsin and \lrsout. \lrsin corresponds to the direction from right to left in the above equivalence and requires that unchosen alternatives are chosen when all incident edges are reversed. \lrsout corresponds to the direction from left to right and requires that chosen alternatives are not chosen when all incident edges are reversed.
It follows directly from the definition that \lrsin (resp.~\lrsout) of a tournament solution $S$ carries over to any tournament solution that is a coarsening (resp.~refinement) of $S$.
\begin{lemma}\label{lem:lrs-inheritance}
Let $S$ and $S'$ be two tournament solutions such that $S\subseteq S'$. If $S$ satisfies \lrsin, then so does $S'$. Conversely, if $S'$ satisfies \lrsout, then so does $S$.
\end{lemma}
There is an unexpected strong relationship between the purely choice-theoretic condition of stability and \lrs.
\begin{theorem}\label{thm:selfstable-lrsin}
Every stable tournament solution satisfies \lrsin.
\end{theorem}
\begin{proof}
Suppose for contradiction that $S$ is stable but violates \lrsin. Then there exists a tournament $T=(A,\succ)$ and an alternative $a\in A$ such that $a\notin S(T)$ and $a\notin S(T^a)$.
Let $T'=(A',\succ')$, where $A'=X\cup Y$ and each of $T'|_X$ and $T'|_Y$ is isomorphic to $T|_{A\setminus\{a\}}$. Also, partition $X=X_1\cup X_2$ and $Y=Y_1\cup Y_2$, where $X_1$ and $Y_1$ consist of the alternatives that are mapped to $\dom_T(a)$ by the isomorphism. To complete the definition of $T'$, we add the relations $X_1\succ'Y_2$, $Y_2\succ'X_2$, $X_2\succ'Y_1$, and $Y_1\succ'X_1$. The structure of tournament $T'$ is depicted in \figref{fig:selfstable-lrs}.
We claim that both $X$ and $Y$ are externally $S$-stable in $T'$. To this end, we note that for every alternative $x\in X$ (resp. $y\in Y$) the subtournament $T'|_{Y\cup\{x\}}$ (resp. $T'|_{X\cup\{y\}}$) is isomorphic either to $T$ or to $T^a$, with $x$ (resp. $y$) being mapped to $a$. By assumption, $a$ is neither chosen in $T$ nor in $T^a$, and therefore $X$ and $Y$ are both externally $S$-stable in $T'$.
Now, suppose that $S(X\cup\{y\})=X'\subseteq X$ for some $y\in Y$. Note that $X'\neq\emptyset$ because tournament solutions always return non-empty sets. Since $S$ satisfies $\widehat\alpha$, we have $S(X)=X'$. Hence, $S(X)=X'=S(X\cup\{y\})$ for all $y\in Y$. Since $S$ satisfies $\widehat\gamma$, we also have $S(X\cup Y)=X'$. Similarly, we can deduce that $S(X\cup Y)=Y'$ for some $\emptyset\neq Y'\subseteq Y$. This yields the desired contradiction.
\begin{figure}[htbp]
\centering
\begin{tikzpicture}
[dom/.style={-latex, shorten >=1mm, shorten <=1mm}
]
\node[draw, ellipse split, minimum height=6em] (T1){$X_1$ \nodepart{lower}$X_2$};
\node[draw, ellipse split, minimum height=6em, node distance=8em, right of=T1] (T2) {$Y_1$ \nodepart{lower} $Y_2$};
\node[below of=T1] {$X$};
\node[below of=T2] {$Y$};
\draw[dom] (T1.north east) to (T2.south west);
\draw[dom] (T2.south west) to[bend left=10] (T1.south east);
\draw[dom] (T1.south east) to (T2.north west);
\draw[dom] (T2.north west) to[bend right=10] (T1.north east);
\end{tikzpicture}
\caption{Construction of a tournament $T'$ with two $S$-stable sets $X$ and $Y$ used in the proof of \thmref{thm:selfstable-lrsin}.}
\label{fig:selfstable-lrs}
\end{figure}
\end{proof}
\subsection{Disproving Stability}
As discussed in \secref{sec:tsolutions}, disproving that a tournament solution satisfies stability can be very difficult. By virtue of \thmref{thm:selfstable-lrsin}, it now suffices to show that the tournament solution violates \lrsin. For $\teq$, this leads to the first concrete tournament in which $\teq$ violates stability. With the help of a computer, we found a minimal tournament in which \teq violates \lrsin using exhaustive search. This tournament is of order $13$ and thereby lies exactly at the boundary of the class of tournaments for which exhaustive search is possible.
Using the construction explained in the proof of \thmref{thm:selfstable-lrsin}, we thus obtain a tournament of order $24$ in which $\teq$ violates $\widehat{\gamma}$. This tournament consists of two disjoint isomorphic subtournaments of order $12$ both of which are $\teq$-retentive.
\begin{theorem}
\teq violates \lrsin.
\end{theorem}
\begin{proof}
We define a tournament $T=(\{x_1,\dots,x_{13}\},\succ)$ such that $x_{13}\not\in \teq(T)$ and $x_{13}\not\in \teq(T^{x_{13}})$.
The dominator sets in $T$ are defined as follows:
\[
\begin{array}{lcllcl}
\dom(x_1) &=& \{x_4,x_5,x_6,x_8,x_9,x_{12} \}\text{, } &
\dom(x_2) &=& \{x_1,x_6,x_7,x_{10},x_{12} \}\text{, }\\
\dom(x_3) &=& \{x_1,x_2,x_6,x_7,x_9,x_{10} \}\text{, } &
\dom(x_4) &=& \{x_2,x_3,x_7,x_8,x_{11} \}\text{, }\\
\dom(x_5) &=& \{x_2,x_3,x_4,x_8,x_{10},x_{11} \}\text{, } &
\dom(x_6) &=& \{x_4,x_5,x_9,x_{11},x_{12} \}\text{, }\\
\dom(x_7) &=& \{x_1,x_5,x_6,x_{11},x_{12},x_{13} \}\text{, } &
\dom(x_8) &=& \{x_2,x_3,x_6,x_7,x_{12},x_{13} \}\text{, }\\
\dom(x_9) &=& \{x_2,x_4,x_5,x_7,x_8,x_{13} \}\text{, } &
\dom(x_{10}) &=& \{x_1,x_4,x_6,x_7,x_8,x_9,x_{13} \}\text{, }\\
\dom(x_{11}) &=& \{x_1,x_2,x_3,x_8,x_9,x_{10},x_{13} \}\text{, } &
\dom(x_{12}) &=& \{x_3,x_4,x_5,x_9,x_{10},x_{11},x_{13} \}\text{, }\\
\dom(x_{13}) &=& \{x_1,x_2,x_3,x_4,x_5,x_6\}\text{.}
\end{array}
\]
A rather tedious check reveals that
\[
\begin{array}{lcllcl}
\teq(\dom(x_1)) &=& \{x_4,x_8,x_{12}\} \text{, } &
\teq(\dom(x_2)) &=& \{x_6,x_{10},x_{12}\} \text{, }\\
\teq(\dom(x_3)) &=& \{x_6,x_7,x_9\} \text{, } &
\teq(\dom(x_4)) &=& \{x_2,x_7,x_{11}\} \text{, }\\
\teq(\dom(x_5)) &=& \{x_2,x_8,x_{10}\} \text{, } &
\teq(\dom(x_6)) &=& \{x_4,x_9,x_{11}\} \text{, }\\
\teq(\dom(x_7)) &=& \{x_1,x_5,x_{11}\} \text{, } &
\teq(\dom(x_8)) &=& \{x_3,x_6,x_{12}\} \text{, }\\
\teq(\dom(x_9)) &=& \{x_2,x_5,x_{7}\} \text{, } &
\teq(\dom(x_{10}))&=& \{x_4,x_6,x_7\} \text{, }\\
\teq(\dom(x_{11}))&=& \{x_1,x_2,x_8\} \text{, and } &
\teq(\dom(x_{12}))&=& \{x_3,x_4,x_9\} \text{.}\\
\end{array}
\]
It can then be checked that $\teq(T)=\teq(T^{x_{13}})=\{x_1,\dots,x_{12}\}$.
\end{proof}
Let $\nteq$ be the greatest natural number $n$ such that all tournaments of order $n$ or less admit a unique inclusion-minimal $\teq$-retentive set.
Together with earlier results by \citet{BFHM09a} and \citet{Yang16a}, we now have that $14 \leq \nteq \leq 23$.
The tournament used in the preceding proof does not show that $\me$ (or $\ba$) violates \lrsin. A computer search for such tournaments was unsuccessful. While it is known that $\me$ violates stability, a concrete counterexample thus remains elusive.
\subsection{Most Discriminating Stable Tournament Solutions}
An important property of tournament solutions that is not captured by any of the axioms introduced in \secref{sec:prelims} is discriminative power.\footnote{To see that discriminative power is not captured by the axioms, observe that the trivial tournament solution satisfies stability, monotonicity, regularity, and composition-consistency.}
It is known that $\ba$ and $\mc$ (and by the known inclusions also $\uc$ and $\tc$) almost always select all alternatives when tournaments are drawn uniformly at random and the number of alternatives goes to infinity \citep{Fey08a,ScFe11a}.\footnote{However, these analytic results stand in sharp contrast to empirical observations that Condorcet winners are likely to exist in real-world settings, which implies that tournament solutions are much more discriminative than results for the uniform distribution suggest \citep{BrSe15a}.}
Experimental results suggest that the same is true for $\teq$. Other tournament solutions, which are known to return small choice sets, fail to satisfy stability and composition-consistency. A challenging question is how discriminating tournament solutions can be while still satisfying desirable axioms.
\lrs reveals an illuminating dichotomy in this context for common tournament solutions. We state without proof that discriminating tournament solutions such as Copeland's rule, Slater's rule, and Markov's rule satisfy \lrsout and violate \lrsin. On the other hand, coarse tournament solutions such as \tc, \uc, and \mc satisfy \lrsin and violate \lrsout. The bipartisan set hits the sweet spot because it is the only one among the commonly considered tournament solutions that satisfies \lrsin \emph{and} \lrsout (and hence \lrs).
\begin{theorem}\label{thm:bp-lrs}
\bp satisfies \lrs.
\end{theorem}
\begin{proof}
Since \bp is stable, \thmref{thm:selfstable-lrsin} implies that $\bp$ satisfies \lrsin.
Now, assume for contradiction that \bp violates \lrsout, \ie there is a tournament $T=(A,\succ)$ and an alternative $a$ such that $a \in \bp(T)$ and $a\in \bp(T^a)$. For a probability distribution~$p$ and a subset of alternatives $B\subseteq A$, let $p(B) = \sum_{x\in B} p(x)$. Consider the optimal mixed strategy $p_{T|_{A\setminus\{a\}}}$ in tournament $T|_{A\setminus\{a\}}$.
It is known from \citet[][Prop.~6.4.8]{Lasl97a} that $a\in\bp(T)$ if and only if $p_{T|_{A\setminus\{a\}}}(D(a))>p_{T|_{A\setminus\{a\}}}(\dom(a))$. For $T^a$, we thus have that
$p_{T^a|_{A\setminus\{a\}}}(D(a))>p_{T^a|_{A\setminus\{a\}}}(\dom(a))$. This is a contradiction because $D_T(a)=\dom_{T^a}(a)$ and $\dom_T(a)=D_{T^a}(a)$.
\end{proof}
The relationship between \lrs and the discriminative power of tournament solutions is no coincidence. To see this, consider all \emph{labeled} tournaments of fixed order and an arbitrary alternative $a$. \lrsin demands that $a$ is chosen in \emph{at least} one of $T$ and $T^a$ while \lrsout requires that $a$ is chosen in \emph{at most} one of $T$ and $T^a$. We thus obtain the following consequences.
\begin{itemize}
\item A tournament solution satisfying \lrsin chooses on average at least half of the alternatives.
\item A tournament solution satisfying \lrsout chooses on average at most half of the alternatives.
\item A tournament solution satisfying \lrs chooses on average half of the alternatives.
\end{itemize}
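These bounds follow from a short counting argument. Write $\mathcal{T}_n$ for the set of labeled tournaments of order $n$ and fix an alternative $a$. The map $T\mapsto T^a$ is an involution, hence a bijection, on $\mathcal{T}_n$, and \lrsin guarantees that $\mathbf{1}[a\in S(T)]+\mathbf{1}[a\in S(T^a)]\ge 1$ for every $T$. Therefore,
\[
\sum_{T\in\mathcal{T}_n}|S(T)| \;=\; \frac{1}{2}\sum_{a}\sum_{T\in\mathcal{T}_n}\big(\mathbf{1}[a\in S(T)]+\mathbf{1}[a\in S(T^a)]\big) \;\ge\; \frac{n\,|\mathcal{T}_n|}{2}\text{,}
\]
so the average choice set size is at least $n/2$; the argument for \lrsout is identical with the inequalities reversed.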
Hence, the well-known fact that \bp chooses on average half of the alternatives \citep{FiRe95a} follows from \thmref{thm:bp-lrs}.
Also, all coarsenings of $\bp$ such as \mc, \uc, and \tc satisfy \lrsin by virtue of \lemref{lem:lrs-inheritance}. On the other hand, since these tournament solutions are all different from \bp, they choose on average more than half of the alternatives and hence cannot satisfy \lrsout.
These results already hint at $\bp$ being perhaps a ``most discriminating'' stable tournament solution. In order to make this precise, we formally define the discriminative power of a tournament solution. For two tournament solutions $S$ and $S'$, we say that $S$ is \emph{more discriminating} than $S'$ if there is $n\in \mathbb{N}$ such that the average number of alternatives chosen by $S$ is lower than that of $S'$ over all labeled tournaments of order $n$. Note that this definition is very weak because we only have an existential, not a universal, quantifier for $n$. It is therefore even possible that two tournament solutions are more discriminating than each other. However, this only strengthens the following results.
Combining Theorems \ref{thm:selfstable-lrsin} and \ref{thm:bp-lrs} immediately yields the following theorem.
\begin{theorem}
\label{thm:stablelrs}
A stable tournament solution satisfies \lrs if and only if there is no more discriminating stable tournament solution.
\end{theorem}
\begin{proof}
First consider the direction from left to right. Let $S$ be a tournament solution that satisfies \lrs.
Due to the observations made above, $S$ chooses on average half of the alternatives. Since any stable tournament solution satisfies \lrsin by \thmref{thm:selfstable-lrsin}, it chooses on average at least half of the alternatives and therefore cannot be more discriminating than $S$.
Now consider the direction from right to left. Let $S$ be a most discriminating stable tournament solution.
Again, since any stable tournament solution satisfies \lrsin, $S$ chooses on average at least half of the alternatives. On the other hand, \bp is a stable tournament solution that chooses on average exactly half of the alternatives. This means that $S$ must also choose on average half of the alternatives, implying that it also satisfies \lrsout and hence \lrs.
\end{proof}
\begin{corollary}
\label{thm:nostablerefinementbp}
There is no more discriminating stable tournament solution than \bp. In particular, there is no stable refinement of \bp.
\end{corollary}
Given Corollary \ref{thm:nostablerefinementbp}, a natural question is whether every stable tournament solution that satisfies mild additional properties such as monotonicity is a coarsening of \bp. We give an example in Appendix \ref{app:s7hat} which shows that this is not true.
Finally, we provide two axiomatic characterizations of \bp by leveraging other traditional properties. These characterizations rely on the following lemma, which entails that, in order to show that two stable tournament solutions satisfying \lrs are identical, it suffices to show that one of their \rootterms is contained in the other.
\begin{lemma}
\label{lem:stablelrsequal}
Let $S$ and $S'$ be two stable tournament solutions satisfying \lrs. Then $\rootsym[S]\subseteq\rootsym[S']$ if and only if $S=S'$.
\end{lemma}
\begin{proof}
Suppose that $\rootsym[S]\subseteq\rootsym[S']$, and consider any tournament $T$. We will show that $S(T)\subseteq S'(T)$. If $S'(T)=T$, this is already the case. Otherwise, we have $S'(S'(T)\cup\{a\})=S'(T)$ for each $a\not\in S'(T)$. By definition of the root function, $\rootsym[S'](S'(T)\cup\{a\})=S'(T)$. Since the root function excludes at most one alternative from any tournament, we also have $\rootsym[S](S'(T)\cup\{a\})=S'(T)$ by our assumption $\rootsym[S]\subseteq\rootsym[S']$. Hence $S(S'(T)\cup\{a\})=S'(T)$ as well. Using $\widehat{\gamma}$ of $S$, we find that $S(T)=S'(T)$. So $S(T)\subseteq S'(T)$ for every tournament $T$. However, since $S$ and $S'$ satisfy \lrs, and therefore choose on average half of the alternatives, we must have $S=S'$.
Finally, if $S=S'$, then clearly $\rootsym[S]=\rootsym[S']$ and so $\rootsym[S]\subseteq\rootsym[S']$.
\end{proof}
\begin{theorem}
\label{thm:BPcharLRS}
\bp is the only tournament solution that satisfies stability, composition-consistency, monotonicity, regularity, and \lrs.
\end{theorem}
\begin{proof}
Let $S$ be a tournament solution satisfying the five aforementioned properties. Since $S$ and \bp are stable and satisfy \lrs, by \lemref{lem:stablelrsequal} it suffices to show that $\rootsym[S]\subseteq\rootsym[\bp]$. This is equivalent to showing that when $\rootsym[\bp]$ excludes an alternative from a tournament, then $\rootsym[S]$ excludes the same alternative. In other words, we need to show that when \bp excludes exactly one alternative $a$, then $S$ also only excludes $a$.
Let $T$ be a tournament in which \bp excludes exactly one alternative $a$.
As defined in \secref{sec:bpandteq}, $\bp(T)$ corresponds to the support of the unique Nash equilibrium of $G(T)$. \citet{LLL93b} and \citet{FiRy95a} have shown that this support is always of odd size and that the equilibrium weights associated to the alternatives in $\bp(T)$ are odd numbers.
Hence, using composition-consistency of \bp and the fact that the value of a symmetric zero-sum game is zero, $T$ can be transformed into a new (possibly larger) tournament $T_1=(A,\succ)$ by replacing each alternative except $a$ with a regular tournament of odd order such that $T_1|_{A\setminus\{a\}}$ is regular. Moreover, in $T_1$, $|\dom(a)|\ge\frac{|A|}{2}$.
We will now show that $a\not\in S(T_1)$. Since $S$ is monotonic, it suffices to prove this when we strengthen $a$ arbitrarily against alternatives in $T_1$ until $|\dom(a)|=\frac{|A|}{2}$.
Let $X=D(a)$ and $Y=\dom(a)$, and let $T_2$ be a tournament obtained by adding a new alternative $b$ to $T_1$ so that $X\succ \{b\}\succ Y$ and $a\succ b$. Note that $T_2$ is again a regular tournament, so $S(T_2)=A\cup \{b\}$. In particular, $b\in S(T_2)$.
\begin{figure}[htb]
\centering
\begin{tikzpicture}[]
\node (V) at (0,0) {$X$};
\node (W) [right of=V] {$Y$};
\node (y) [below of=W] {};
\node[vertex,dashed] (x) [left of=y] {$a$};
\foreach \x / \y in {x/V,W/x}{
\draw[-latex] (\x) to (\y);
};
\node (caption) [node distance = 0.5*\nd,below of=d, xshift=0.5*\nd] {$T_1$};
\end{tikzpicture}
\qquad\qquad
\begin{tikzpicture}[]
\node (V) at (0,0) {$X$};
\node (W) [right of=V] {$Y$};
\node[vertex] (y) [below of=W] {$b$};
\node[vertex] (x) [left of=y] {$a$};
\foreach \x / \y in {x/V,x/y,y/W,V/y,W/x}{
\draw[-latex] (\x) to (\y);
};
\node (caption) [node distance = 0.5*\nd,below of=d, xshift=0.5*\nd] {$T_2$};
\end{tikzpicture}
\qquad\qquad
\begin{tikzpicture}[]
\node (V) at (0,0) {$X$};
\node (W) [right of=V] {$Y$};
\node[vertex,dashed] (y) [below of=W] {$b$};
\node[vertex,dashed] (x) [left of=y] {$a$};
\foreach \x / \y in {x/V,y/x,y/V,W/x,W/y}{
\draw[-latex] (\x) to (\y);
};
\node (caption) [node distance = 0.5*\nd,below of=d, xshift=0.5*\nd] {$T_3 = (T_2)^b$};
\end{tikzpicture}
\caption{Illustration of the proof of \thmref{thm:BPcharLRS}. Circled alternatives are contained in the choice set $S(\cdot)$. Alternatives circled with a dashed line are not contained in the choice set $S(\cdot)$.}
\label{fig:BPproof}
\end{figure}
Let $T_3=(T_2)^b$ be the tournament obtained from $T_2$ by reversing all edges incident to $b$. By \lrsout, we have $b\not\in S(T_3)$. If it were the case that $a\in S(T_3)$, then it should remain chosen when we reverse the edge between $a$ and $b$. However, alternative $a$ in the tournament after reversing the edge is isomorphic to alternative $b$ in $T_3$, and so we must have $b\in S(T_3)$, a contradiction. Hence $a\not\in S(T_3)$. Since $S$ satisfies stability and thus $\widehat{\alpha}$, we also have $a\not\in S(T_1)$, as claimed. See \figref{fig:BPproof} for an illustration.
Now, $\widehat{\alpha}$ and regularity of $S$ imply that $S(T_1)=S(T_1|_{A\backslash\{a\}})=A\backslash\{a\}$. Since $S$ satisfies composition-consistency, we also have that $S$ returns all alternatives except $a$ from the original tournament $T$, completing our proof.
\end{proof}
Based on Theorems \ref{thm:stablelrs} and \ref{thm:BPcharLRS}, we obtain another characterization that does not involve \lrs and hence only makes use of properties previously considered in the literature.
\begin{corollary}
\label{thm:BPcharsize}
\bp is the unique most discriminating tournament solution that satisfies stability, composition-consistency, monotonicity, and regularity.
\end{corollary}
\begin{proof}
Suppose that a tournament solution $S$ satisfies stability, composition-consistency, monotonicity, and regularity and is as discriminating as \bp. By \thmref{thm:selfstable-lrsin}, $S$ satisfies \lrsin. Since $S$ chooses on average half of the alternatives, it satisfies \lrsout and hence \lrs as well. \thmref{thm:BPcharLRS} then implies that $S=\bp$.
\end{proof}
The only previous characterization of \bp that we are aware of was obtained by \citet[][Thm.~6.3.10]{Lasl97a} and is based on a rather contrived property called \emph{Copeland-dominance}. According to \citet[][p.~153]{Lasl97a}, ``this axiomatization of the Bipartisan set does not add much to our knowledge of the concept because it is merely a re-statement of previous propositions.''
\coref{thm:BPcharsize} essentially shows that, for most discriminating stable tournament solutions, Laslier's Copeland-dominance is implied by monotonicity and regularity.
We now address the independence of the axioms used in \thmref{thm:BPcharLRS}.
\begin{remark}
\lrs is not implied by the other properties. In fact, the trivial tournament solution satisfies stability, composition-consistency, monotonicity, and regularity.
\end{remark}
\begin{remark}
Monotonicity is not implied by the other properties. In fact, the tournament solution that returns $\bp(\overline{T})$, where $\overline{T}$ is the tournament in which all edges in $T$ are reversed, satisfies stability, composition-consistency, regularity, and \lrs.
\end{remark}
The question of whether stability, composition-consistency, and regularity are independent in the presence of the other axioms is quite challenging. We can only provide the following partial answers.
\begin{remark}\label{rem:pos}
Neither stability nor composition-consistency is implied by \lrs, monotonicity, and regularity. In fact, there is a tournament solution that satisfies \lrs, monotonicity, and regularity, but violates stability and composition-consistency (see Appendix~\ref{app:pos}).
\end{remark}
\begin{remark}\label{rem:s7hat}
Neither regularity nor composition-consistency is implied by stability and monotonicity. In fact, there is a tournament solution that satisfies stability and monotonicity, but violates regularity and composition-consistency (see Appendix~\ref{app:s7hat}).
\end{remark}
\citet{BHS15a} brought up the question of whether stability implies regularity (under mild assumptions) because all stable tournament solutions studied prior to this paper were regular.\footnote{We checked on a computer that the stable tournament solution \tcring \citep[see][]{BBFH11a,BBH15a} satisfies regularity for all tournaments of order 17 or less.} \remref{rem:s7hat} shows that this does not hold without making assumptions that go beyond monotonicity.
Given the previous remarks, it is possible that composition-consistency and regularity are not required for \thmref{thm:BPcharLRS} and \coref{thm:BPcharsize}.
Indeed, our computer experiments have shown that the only stable and monotonic tournament solution satisfying \lrs for all tournaments of order up to 7 is \bp. This may, however, be due to the large number of automorphisms in small tournaments, and composition-consistency and regularity could be required for larger tournaments.
It is also noteworthy that the proof of \thmref{thm:BPcharLRS} only requires a weak version of composition-consistency, where
all components are tournaments in which all alternatives are returned due to automorphisms.
Since stability is implied by Samuelson's weak axiom of revealed preference or, equivalently, by transitive rationalizability, \coref{thm:BPcharsize} can be seen as an escape from Arrow's impossibility theorem where the impossibility is turned into a possibility by weakening transitive rationalizability and (significantly) strengthening the remaining conditions \citep[see, also,][]{BrHa11a}.
\section*{Acknowledgements}
This material is based on work supported by Deutsche Forschungsgemeinschaft under grants {BR~2312/7-1} and {BR~2312/7-2}, by a Feodor Lynen Research Fellowship of the Alexander von Humboldt Foundation, by ERC Starting Grant 639945, by a Stanford Graduate Fellowship, and by the MIT-Germany program.
The authors thank Christian Geist for insightful computer experiments and Paul Harrenstein for helpful discussions and preparing \figref{fig:stability-illustration}.
\section{Introduction}
\textit{Our [the Los Angeles Lakers'] collective success having forged some kind of unity in this huge and normally fragmented metropolis, it cuts across cultural and class lines.}
\\[5pt]
\rightline{{\rm --- Kareem Abdul-Jabbar, an NBA Hall of Famer.}}\\[3pt]
Professional sports not only involve competitions among athletes, but also attract fans to attend the games, watch broadcasts, and consume related products \cite{wenner1989media}.
For instance, the 2017 final game of the National Basketball Association (NBA) %
attracted 20 million TV viewers \cite{statista:nba}; a 30-second commercial during the Super Bowl cost around 4.5 million dollars in 2015 and these commercials have become an integral part of American culture \cite{wiki:NFL}.\footnote{Most influential Super Bowl commercials: \url{http://time.com/4653281/super-bowl-ads-commercials-most-influential-time/}.}
Fans of sports teams can be very emotionally invested and treat
fans of rival teams almost as enemies, which can even lead to violence \cite{roadburg1980factors}.
Such excitement towards professional sports extends to online communities.
A notable example is \communityname{/r/NBA} on Reddit, which attracts over a million subscribers and has become one of the most active subreddits on Reddit, a popular community-driven website
\cite{redditlist}.
\citeauthor{rnba}, a sports writer, has suggested that online fan communities are gradually replacing the need for sports blogs and even larger media outlets altogether \cite{rnba}.
The growth of online fan communities thus provides exciting opportunities for studying fan behavior in professional sports at a large scale.
It is important to recognize that fan behavior is driven by sports events, including sports games, player transfers between teams, and even a comment from a team manager.
The dynamic nature of sports games indicates that discussions in online fan communities may echo the development in games, analogous to the waves of excitement in a stadium.
Therefore, our goal in this paper is to characterize {\em online} fan communities in the context of {\em offline} games and team performance.
To do that, we build a large-scale dataset of online fan communities from Reddit with 479K users, 1.5M posts, and 43M comments, as well as statistics that document offline games and team performance.\footnote{The dataset is available at \url{http://jasondarkblue.com/papers/CSCW2018NBADataset_README.txt}.}
We choose Reddit as a testbed because 1) Reddit has explicit communities for every NBA team, which allows us to compare the differences between winning teams and losing teams; and
2) Reddit is driven by fan communities, e.g., the ranking of posts is determined by upvotes and downvotes of community members.
In comparison, team officials can have a great impact on a team's official Twitter account and Facebook page.
\para{Organization and highlights.} We first summarize related work (\secref{sec:related}) and then provide an overview of the NBA fan communities on Reddit as well as necessary background knowledge regarding the NBA games (\secref{sec:data}).
We demonstrate the seasonal patterns in online fan communities and how they align with the NBA season in the offline world.
We further characterize the discussions in these NBA fan communities using topic modeling.
We investigate three research questions in the rest of the paper.
First, we study the relation between team performance in a game and this game's associated fan activity in online fan communities.
We are able to identify game threads that are posted to facilitate discussions during NBA games.
These game threads allow us to examine the short-term impact of team performance on fan behavior.
We demonstrate intriguing contrasts between top teams and bottom teams: user activity increases when top teams lose and bottom teams win.
Furthermore, close games with small point differences are associated with higher user activity levels.
Second, we examine how team performance influences fan loyalty in online communities beyond a single game.
It is important for professional teams to acquire and maintain a strong fan
base that provides consistent support and consumes team-related products.
Understanding fan loyalty is thus a central research question in the literature of sports management \cite{dwyer2011divided,stevens2012influence, yoshida2015predicting,doyle2017there}.
For instance, ``bandwagon fan'' refers to a person who follows the tide and supports teams with recent success.
Top teams may have lower fan loyalty due to the existence of many bandwagon fans.
Our results validate this hypothesis by using user retention to measure fan loyalty.
We also find that a team's fan loyalty is correlated with the team's improvement over a season and with the average age of
the roster.
Third, we turn to the content in online fan communities to understand the impact of team performance on the topics of discussion.
Prior studies show that a strong fan base can minimize the effect of a team's short-term (poor) performance on its long-term success
\cite{sutton1997creating,shank2014sports}.
To foster fan identification in teams with poor performance,
fans may shift the focus from current failure to future success and ``\textit{trust the process}''\footnote{A mantra that reflects the Philadelphia 76ers' identity \cite{trusttheprocess}. The 76ers went through a streak of losing seasons to get top talents in the draft lottery and rebuild the team.} \cite{doyle2017there,campbell2004beyond,jones2000model}.
Discussions in online fan communities enable quantitative analysis of such a hypothesis.
We show that fans of the top teams are more likely to discuss \topicname{season prospects,}
while fans of the bottom teams are more likely to discuss \topicname{future.}
Here \topicname{future} refers to the assets that a team has, including talented young players,
draft picks, and salary space,
which can potentially prepare the team for future success in the following seasons.
We offer concluding discussions in \secref{sec:conclusion}.
Our work takes the first step towards studying fan behavior in professional sports using online fan communities and provides implications for online communities and sports management.
For online communities, our results highlight the importance of understanding online behavior in the offline context.
Such offline context can influence the topics of discussion, the activity patterns, and users' decisions to stay or leave.
For sports management, our work reveals strategies for developing a strong fan base such as shifting the topics of discussion and leveraging unexpected wins and potential future success.
\section{Related Work}
\label{sec:related}
In this section, we survey prior research mainly in two areas related to the work presented in this paper:
online communities and sports fan behavior.
\subsection{Online Communities}
The proliferation of online communities has enabled a rich body of research in understanding group formation and community dynamics \cite{Backstrom:2006:GFL:1150402.1150412,Ren:07,Kairam:2012:LDO:2124295.2124374,Kim:2000:CBW:518514}.
Most relevant to our work are studies that investigate how external factors affect user behavior in online communities \cite{palen2016crisis,starbird2010chatter,romero2016social,zhang2017shocking}.
\citet{palen2016crisis} provide an overview of studies on social media behavior in response to natural disasters and point out limits of social media data for addressing emergency management.
\citet{romero2016social} find that communication networks between traders ``turtle'' up during shocks in stock price and reveal relations between social network structure and collective behavior.
Other studies of offline events include the dynamics of breaking news \cite{keegan2013hot,keegan2012staying,leavitt2014upvoting}, celebrity death \cite{keegan2015coordination,gach2017control}, and Black Lives Matter \cite{twyman2017black,Stewart:2017:DLC:3171581.3134920}.
This literature illustrates that online communities do not only exist in the virtual world.
They are usually deeply embedded in the offline context in our daily life.
Another relevant line of work examines user engagement in multiple communities and in particular, user loyalty
\cite{tan2015all,zhang2017community,hamilton2017loyalty}.
\citet{hamilton2017loyalty} operationalize loyalty in the context of multi-community engagement and consider users loyal to a community if they consistently prefer the community over all others.
They show that loyal users employ language that signals collective identity and their loyalty can be predicted from their first interactions.
Reddit has attracted significant interest from researchers in the past few years due to its growing importance.
Many aspects and properties of Reddit have been extensively studied, including user and subreddit lifecycle in online
platforms \cite{tan2015all,newell2016user}, hate speech \cite{chandrasekharan2017you,chandrasekharan2017bag,saleemweb},
interaction and conflict between subreddits \cite{kumar2018community,tan:18},
and its relationship with other web sources \cite{vincent2018examining,newell2016user}.
Studies have also explored the impacts of certain Reddit evolutions and policy changes
on user behaviors.
Notable events include subreddits becoming defaults \cite{lin2017better} and
the Reddit unrest \cite{newell2016user,chandrasekharan2017you,matias2016going}.
Our work examines a special set of online communities that derived from professional sports teams.
As a result, regular sports games and team performance are central for understanding these communities and user loyalty in these communities.
Different from prior studies, we focus on the impact of team performance on user behavior in online fan communities.
\subsection{Sports Fan Behavior}
As it is crucial for a sports team to foster a healthy and strong fan base, extensive studies in sports management have studied fan behavior.
Researchers have studied factors that affect purchasing behavior of sports fans
\cite{smith2007travelling, wann2008motivational, trail2001motivation}, including psychometric properties and fan motivation.
A few studies also build predictive models of fan loyalty \cite{bee2010exploring, yoshida2015predicting}.
\citet{bee2010exploring} suggest that fan attraction, involvement, psychological commitment, and resistance can be predictors of fan behavioral loyalty.
\citet{dolton12018football} estimate
that the happiness that fans feel when their team
wins is outweighed by the sadness that strikes
when their team loses, by a factor of two.
\citet{yoshida2015predicting} build regression models based on attitudinal processes to predict behavioral loyalty.
The potential influence of mobile technology on sports spectators is also examined
from different angles \cite{torrez2012look,ludvigsen2010designing,jacucci2005supporting}.
\citet{torrez2012look} describes survey results that suggest the current usage of mobile technology
among college sports fans. The work by \citet{ludvigsen2010designing} examines the potential
of interactive technologies for active spectating at sporting events.
Most relevant to our work are studies related to fan identification \cite{campbell2004beyond,doyle2017there,dwyer2011divided,hirt1992costs,hyatt2015using,jones2000model,stevens2012influence,sutton1997creating,wann2002using} and we have discussed them to formulate our hypotheses.
These studies usually employ qualitative methods through interviews or small-scale surveys.
It is worth noting that fan behavior can differ depending on the environment.
\citet{cottingham2012interaction} demonstrates the difference in emotional energy between fans in sports bars and those attending the game in the stadium.
In our work, we focus on online communities, which are an increasingly important platform for sports fans.
These online fan communities also allow us to study team performance and fan behavior at a much larger scale than all existing studies.
\section{An Overview of NBA Fan Communities on Reddit}
\label{sec:data}
Our main dataset is derived from NBA-related communities on Reddit, a popular website organized by communities where users can submit posts and make comments.
A community on Reddit is referred to as a \textit{subreddit}. We use community and subreddit interchangeably in this paper.
We first introduce the history of NBA-related subreddits and then provide an overview of activity levels and discussions in these subreddits.
\subsection{NBA-related Subreddits}
\begin{table}[]
\centering
\begin{tabular}{lrr}
\toprule
& \communityname{/r/NBA} & Average of team subreddits (std) \\
%
\midrule
\#users & 400K & 13K (8K) \\
\#posts & 847K & 24K (16K) \\
\#comments & 33M & 328K (282K) \\
\bottomrule
\end{tabular}
\caption{Dataset Statistics. There are in total 30 teams in the NBA league.
\#users refers to the number of unique users who posted/commented in the subreddit.}
\label{tab:stats}
\end{table}
On Reddit, the league subreddit \communityname{/r/NBA} is for NBA fans to discuss
anything that
happened in the entire league, ranging from a game to gossip related to a player.
There are 30 teams in total in the NBA league, and
each team's subreddit is for fans to discuss team-specific topics.
Each subreddit has multiple moderators to make sure posts are relevant to the subreddit's theme.
We collected posts and comments in these 31 subreddits (\communityname{/r/NBA} + 30 NBA team subreddits)
from pushshift.io~\cite{pushshift}
from the beginning of each subreddit until October 2017.~\footnote{A small amount of data is missing
due to scraping errors and other unknown reasons with this dataset \cite{gaffney2018caveat}.
We checked the sensitivity of our results to missing posts with a dataset provided by J.Hessel;
our results in this paper do not change after accounting for them. }
The overall descriptive statistics of our dataset are shown in Table~\ref{tab:stats}.
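For reference, statistics of this kind can be aggregated with a few lines of code. The sketch below (not the exact pipeline used for this paper) assumes newline-delimited JSON dumps of posts or comments with \texttt{subreddit} and \texttt{author} fields, a common format for Reddit data; the field names are an assumption on our part.
\begin{verbatim}
import json
from collections import defaultdict

def subreddit_stats(path):
    # path: newline-delimited JSON dump of posts or comments
    users, counts = defaultdict(set), defaultdict(int)
    with open(path) as f:
        for line in f:
            item = json.loads(line)
            sub = item["subreddit"].lower()
            counts[sub] += 1
            if item.get("author") not in (None, "[deleted]"):
                users[sub].add(item["author"])
    return {s: {"users": len(users[s]), "items": counts[s]} for s in counts}
\end{verbatim}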
\noindent\textbf{A brief history of the NBA-related subreddits on Reddit.}
NBA-related subreddits have thrived since January 2008, when Reddit released a new policy to allow users to create their own subreddit.
The Lakers' and the Celtics' subreddits were created by fans in 2008, and they are the first two NBA teams to have their team subreddits.
These two teams are also widely acknowledged as the most successful franchises
in the history of the NBA league~\cite{mostsucessful}.
It is also worth
noting
that these two teams' subreddits were created even before \communityname{/r/NBA},
which
was created
at the end of 2008.
For the remaining 28 teams, 14 of their subreddits were created by users in 2010, and the other 14 were created in 2011.
Moreover, three teams' subreddit names have changed.
The Pelicans' subreddit changed their subreddit name from \communityname{/r/Hornets} to \communityname{/r/Nolapelicans} and
the Hornets' subreddit from \communityname{/r/Charlottebobcats} to \communityname{/r/Charlottehornets} because these two
teams changed their official team names. Additionally, the Rockets' subreddit shortened its name from \communityname{/r/Houstonrockets} to \communityname{/r/Rockets}.
To rebuild each team's complete subreddit history,
we combined posts and comments in these three teams' old and new subreddits.
Figure~\ref{fig:user} presents the number of users that post and comment in each team subreddit.
\begin{figure}
\begin{subfigure}[t]{0.46\textwidth}
\includegraphics[width=\textwidth]{overall_num_of_users_by_subreddit}
\caption{Number of users in team subreddits.}
\label{fig:user}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.46\textwidth}
\includegraphics[width=\textwidth]{num_of_comments_by_month_in_NBA}
\caption{User activity in \communityname{/r/NBA} by month.}
\label{fig:comment}
\end{subfigure}
\caption{Figure~\ref{fig:user} shows the number of users that post and comment in team subreddits. \communityname{/r/Warriors}, \communityname{/r/Lakers}, \communityname{/r/Cavaliers}
are the top three subreddits in both posting and commenting.
The average number of users across all teams is 4,166 for posting
and 11,320 for commenting.
Figure~\ref{fig:comment} shows user activity level in \communityname{/r/NBA} by month. During the off season (July-mid October), user activity decreases sharply, as no games are played during this period.
Then in the regular season (late October to next March), user activity increases steadily. The activity of \communityname{/r/NBA}
%
peaks in May and June, as the championship games happen in these two months.
}
\end{figure}
\subsection{NBA Season Structure Reflected in Reddit Activity}
As discussed in the introduction, fan behavior in NBA-related subreddits is influenced by offline events.
In particular, the NBA runs in seasons, and seasonal patterns are reflected on Reddit.
To show that, we start with a brief introduction of the NBA.
The 30 teams in the NBA are divided into two conferences (East and West).
In each season, teams play against each other to compete for the final championship.
The following three time segments make up a complete NBA season:
\footnote{Please see the NBA's official description for more details \cite{NBARule}.}
\begin{itemize}
\item \textbf{Off season}: from July to mid October.
There are no games in this
period.\footnote{There are Summer League games and preseason games played in this period,
but the results do not count toward the season record.}
Every team is allowed to draft young players,
sign free agents, and trade players with other teams.
The bottom teams in the last season get the top positions when drafting young players, which hopefully leads to a long-term balance between teams in the league.
The goal of the off season for each team is to improve its overall competitiveness for the coming season.
\item \textbf{Regular season}: from late October to middle of April.
Regular season games occur in this period.
Every team has 82 games scheduled during this time,
41 home games and 41 away games.
A team's regular season record is used for playoff qualification and seeding.
\item \textbf{Playoff season}: from the end of regular season to June.
Sixteen teams
(the top 8 from the Western Conference and the top 8 from the Eastern Conference)
play knockout series within each conference
and compete for the conference championship.
The champions of the Western Conference and the Eastern Conference
then play the final games to win the final championship.
\end{itemize}
Given the structure, a complete NBA season spans two calendar years.
In this paper, for simplicity and clarity, we refer to a specific season by the calendar year when it ends.
For instance, the official 2016-2017 NBA season is referred to as {\em the 2017 season} throughout the paper.
User activity in NBA-related subreddits is driven by the structure of the NBA season.
As an example, Figure~\ref{fig:comment} shows user activity in \communityname{/r/NBA} by month in the 2016 and 2017 season.
From July to September, user activity decreases sharply as
there are no games during this period.
Then from October to the next March, the number of comments increases steadily as the regular season unfolds.
According to the NBA rules, every game in the regular season carries the same weight for playoff qualification, so the games in October should be just as important as the games in March.
However,
fans are much more active on Reddit as it gets closer to the end of the regular season because they deem these games ``more critical.''
This may be explained by the ``deadline pressure'' phenomenon in psychology~\cite{ariely2008predictably}.
\added{This circumstance has also been observed in other sports.
For instance, \citet{paton2005attendance} illustrate that the attendance of domestic cricket leagues
in England and Wales is much higher in the later segment of the season than in the earlier segment.
\citet{hogan2017analysing} find that the possibility of the home team reaching the knock-out stage
had a significant positive impact on attendance in the European Rugby Cup.}
We also find that user activity drops a little bit in April in both the 2016 and 2017 season.
One possible explanation is that as the regular season is ending,
fans of the bottom teams that clearly cannot make the playoffs
reduce their activity during this period.
After that,
the volume of comments increases dramatically during the playoff games.
The activity of \communityname{/r/NBA} peaks in May and June, when the conference championship and final championship games happen.
\subsection{Topic analysis}
To understand what fans are generally talking about in NBA-related subreddits,
we use Latent Dirichlet Allocation (LDA)~\cite{blei2003latent},
a widely used topic modeling method, to analyze user comments.
We treat each comment as a document and use all the comments in \communityname{/r/NBA} to train an LDA model
with the Stanford Topic Modeling Toolbox~\cite{stanfordlda}.
We choose the number of topics based on perplexity scores~\cite{wallach2009evaluation}.
The perplexity score drops significantly \added{when the topic number increases} from 5 to 15,
but
does not change much from 15 to 50, remaining within the 1370--1380 range.
Therefore, we use 15 topics in this paper.
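The analysis reported here was run with the Stanford Topic Modeling Toolbox; the sketch below illustrates the same perplexity-based selection step using gensim as a stand-in, so the library calls and candidate topic numbers are assumptions for illustration rather than our exact pipeline.
\begin{verbatim}
# Sketch: choosing the number of LDA topics by perplexity (gensim used as a
# stand-in for the Stanford Topic Modeling Toolbox).
import numpy as np
from gensim.corpora import Dictionary
from gensim.models import LdaModel

def perplexity_by_num_topics(tokenized_comments, candidates=(5, 10, 15, 25, 50)):
    # one document per comment, already lower-cased and lemmatized
    dictionary = Dictionary(tokenized_comments)
    corpus = [dictionary.doc2bow(doc) for doc in tokenized_comments]
    scores = {}
    for k in candidates:
        lda = LdaModel(corpus, num_topics=k, id2word=dictionary,
                       passes=5, random_state=0)
        # log_perplexity returns the per-word likelihood bound (base 2);
        # perplexity = 2 ** (-bound)
        scores[k] = np.exp2(-lda.log_perplexity(corpus))
    # pick the smallest k after which perplexity stops improving noticeably
    return scores
\end{verbatim}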
Table~\ref{tab:topic} shows the top five topics with the greatest average topic weight and the
top ten weighted words in each topic.
\added{Two authors, who are NBA fans and active users on \communityname{/r/NBA}, manually assigned a label to each of the five most frequent topics based on the top words in each topic.}
\replaced{Each label}{We name each topic} in Table~\ref{tab:topic}
\replaced{summarizes}{to summarize} the topic's gist, and
the five \replaced{labels}{names} are \topicname{personal opinion,}
\topicname{game strategy,} \topicname{season prospects,} \topicname{future,} and \topicname{game stats.}
We describe our preprocessing procedure and present the other ten topics in \secref{sec:appendix_topics}.
\begin{table}[]
\centering
\small
\begin{tabular}{llr}
\toprule
\multicolumn{1}{c}{\textbf{LDA topic}} & \multicolumn{1}{c}{\textbf{top words}} & \multicolumn{1}{c}{\textbf{average topic weight}} \\ \toprule
\topicname{personal opinion} & \begin{tabular}[c]{@{}l@{}}opinion, fact, reason, agree, understand,\\ medium, argument, talking, making, decision\end{tabular} & 0.083 \\
\midrule
\topicname{game strategy} & \begin{tabular}[c]{@{}l@{}}defense, offense, defender, defensive, shooting,\\ offensive, shoot, open, guard, post\end{tabular} & 0.082 \\ \midrule
\topicname{season prospects} & \begin{tabular}[c]{@{}l@{}}final, playoff, series, won, championship,\\ beat, winning, west, east, title\end{tabular} & 0.078 \\ \midrule
\topicname{future} & \begin{tabular}[c]{@{}l@{}}pick, trade, star, top, chance,\\ young, future, move, round, potential \end{tabular} & 0.075 \\ \midrule
\topicname{game stats} & \begin{tabular}[c]{@{}l@{}}top, number, league, stats, mvp,\\ average, career, assist, put, shooting \end{tabular} & 0.075 \\ \bottomrule
\end{tabular}
\caption{The top five topics by LDA using all the comments in \communityname{/r/NBA}.
The top ten weighted words are presented for each topic.
In preprocessing, all team and player names are removed.
The remaining words are converted to lower case and lemmatized before training the LDA model.
}
\label{tab:topic}
\end{table}
\section{Research Questions and Hypotheses}
We study three research questions to understand how team performance affects fan behavior in online fan communities.
The first one is concerned with team performance in a single game and that game's associated user activity, while the other two questions are about team performance in a season and community properties (fan loyalty and the topics of discussion).
\subsection{Team Performance and Game-level Activity}
An important feature of NBA-related subreddits is to support game-related discussion.
In practice, each game has a game thread in the home-team subreddit, the away-team subreddit, and the overall \communityname{/r/NBA}.
Team performance in each game can have a short-term impact on fans' behavior.
For instance, \citet{Leung2017Effect} show that losing games has a negative impact on the contributions to the corresponding team's Wikipedia page, but winning games does not have a significant effect.
However, it remains an open question how team performance in games relates to user activity in \textit{online sports fan communities}.
Previous studies find that fans react differently to
top
teams than to bottom teams
based on interviews and surveys \cite{doyle2017there,sutton1997creating,yoshida2015predicting}.
In particular, through interviews, \citet{doyle2017there} find that fans of teams with an overwhelming loss-to-win ratio can become insensitive to losses.
In contrast, fans that support top teams may be used to winning. The hype created by the media and other fans elevates the expectation in the fan community. As a result, losing can surprise fans of the top teams and lead to a heated discussion.
Therefore, we formulate our first hypothesis as follows:
\smallskip
\noindent\textbf{H1:} In subreddits of the top teams, fans are more active on losing days; in subreddits of the bottom teams, fans are more active on winning days.
\subsection{Team Performance and Fan Loyalty}
Researchers in sports management show that a team's recent success does not necessarily lead to a loyal fan base
\cite{campbell2004beyond,bee2010exploring,stevens2012influence}.
For instance,
``bandwagon fan'' refers to
individuals who become fans of a team simply because of its recent success.
These fans tend to have a weak attachment to the team and are ready to switch to a different team when the team starts to perform poorly.
On the contrary, in the bottom teams,
active fans that stay during adversity are probably loyal due to their deep attachment to the team
\cite{doyle2017there,campbell2004beyond}.
They are able to endure current stumbles and treat them as a necessary process for future success.
Our second hypothesis explores the relation between team performance and fan loyalty:
\smallskip
\noindent\textbf{H2:} Top team subreddits have lower fan loyalty and bottom team subreddits have higher fan loyalty.
\subsection{Team Performance and Topics of Discussion}
In addition to whether fans stay loyal, our final question examines what fans talk about in an online fan community.
As a popular sports quote says,
``{\it Winning isn't everything, it's the only thing}.''\footnote{Usually attributed to UCLA football coach Henry Russell Sanders.}
A possible hypothesis is that the discussion concentrates on winning and team success.
However, we recognize the diversity across teams depending on team performance.
Several studies find that fans of teams with poor performance may shift the focus from current failure to future success: staying optimistic can help fans endure adversity and maintain a positive group identity \cite{doyle2017there,campbell2004beyond,jones2000model}.
This is in clear contrast with the focus on winning the final championship of the top teams \cite{campbell2004beyond}.
As a result, we formulate our third hypothesis as follows:
\smallskip
\noindent\textbf{H3:} The topics of discussion in team subreddits vary depending on team performance.
Top team subreddits focus more on \topicname{season prospects}, while bottom team subreddits focus more on \topicname{future}.
\section{Method}
In this section, we first provide an overview of independent variables and then discuss dependent variables and formulate
\replaced{hierarchical regression analyses}{linear models} to test our hypothesis in each research question.
\subsection{Independent Variables}
To understand how team performance affects fan behavior in online fan communities, we need to control for factors such as a team's market value and average player age.
We collect statistics of the NBA teams from the following websites:
FiveThirtyEight,\footnote{\url{http://fivethirtyeight.com/}.}
Basketball-Reference,\footnote{\url{https://www.basketball-reference.com/}.}
Forbes,\footnote{\url{https://www.forbes.com/}.} and Wikipedia.\footnote{\url{https://www.wikipedia.org/}.}
We standardize all independent variables for linear regression models.
Table~\ref{tab:variables} provides a full list of all variables used in this paper.
In addition to control variables that capture the differences between seasons and months, the variables can be grouped in three categories: performance, game information, and team information.
\begin{table}[t]
\centering
\small
\begin{tabular}{lp{0.5\textwidth}r}
\toprule
\multicolumn{1}{c}{\textbf{Variable}} & \multicolumn{1}{c}{\textbf{Definition}} & \multicolumn{1}{c}{\textbf{Source}} \\ \midrule
\multicolumn{3}{l}{\textit{Performance}} \\
{\bf winning} & Win or lose a game. & FiveThirtyEight \\
{\bf season elo} & A team's elo rating at the end of a season. & FiveThirtyEight \\
{\bf season elo difference} & A team's elo rating difference between the end of a season and its last season. & FiveThirtyEight \\
{\bf month elo} & A team's elo rating at the end of that month. & FiveThirtyEight \\
{\bf month elo difference} & A team's elo rating difference between the end of a month and its last month. & FiveThirtyEight \\
\midrule
\multicolumn{3}{l}{\textit{Game information}} \\
team elo & A team's elo rating before the game. & FiveThirtyEight \\
opponent elo & The opponent's elo rating before the game. & FiveThirtyEight \\
point difference & Absolute point difference of the game. & FiveThirtyEight \\
rivalry or not & Whether the opponent team is a rival. & Wikipedia \\
top team & Whether a team has one of the five highest elo ratings at the end of a season. & FiveThirtyEight \\
bottom team & Whether a team has one of the five lowest elo ratings at the end of a season. & FiveThirtyEight \\
\midrule
\multicolumn{3}{l}{\textit{Team information}} \\
market value & The estimated price to buy the team on the market. & Forbes \\
average age & The average age of the roster. & Basketball-Reference \\
\added{\#star players} & The number of players selected to play the NBA All-Star Game.& Basketball-Reference \\
\added{\#unique users}& The number of users that made at least one post/comment in the team's subreddit. & N/A \\
offense & The average points scored per game. & Basketball-Reference \\
defense & The average points allowed per game. & Basketball-Reference \\
turnovers & The average turnovers per game. & Basketball-Reference \\ \midrule
\multicolumn{3}{l}{\textit{Temporal information}} \\
season & A categorical variable to indicate the season. & N/A (control variable) \\
month & A categorical variable to indicate the month. & N/A (control variable) \\
\bottomrule
\end{tabular}
\caption{List of variables and their corresponding definitions and sources. Measurements of team performance are in bold.
}
\label{tab:variables}
\end{table}
\subsubsection{Performance}
Since our research questions include both team performance in a single game and team performance over a season,
we consider performance variables both for a game and for a season.
First, to measure a team's game performance, we simply use whether this team wins or loses.
Second, to measure a team's performance over a season, we use elo ratings of the NBA teams.
The elo rating system was originally invented as
a chess rating system for calculating the relative skill levels of players.
The popular forecasting website FiveThirtyEight developed an elo rating system to
measure the skill levels of different NBA teams~\cite{elo538}.
These elo ratings are used to predict game results on FiveThirtyEight and are well received by major sports media, such as ESPN~\footnote{\url{http://www.espn.com/}.}
and CBS Sports.~\footnote{\url{https://www.cbssports.com/}.}
The FiveThirtyEight elo ratings satisfy the following properties:
\begin{itemize}
\item A team's elo rating is represented by a number that increases or decreases
depending on the outcome of a game. After a game, the winning team takes
elo points from the losing one, so the system is zero-sum.
\item The number of elo points exchanged after a game depends on the elo rating difference between two teams prior to the game,
the final score, and home-court advantage. Teams gain more elo points for unexpected wins, large point differences, and winning away games; a simplified update rule is sketched after this list.
\item The long-term average elo rating of all the teams is 1500.
\end{itemize}
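To make the update mechanics concrete, the sketch below implements a basic zero-sum elo update. It is illustrative only: FiveThirtyEight's NBA elo additionally adjusts the exchanged points for margin of victory and home-court advantage, and the K-factor used here is an assumed value.
\begin{verbatim}
# Sketch: a basic zero-sum elo update (illustrative; FiveThirtyEight's NBA
# elo additionally adjusts for margin of victory and home-court advantage).
def elo_update(rating_winner, rating_loser, k=20.0):
    # expected win probability of the eventual winner before the game
    expected = 1.0 / (1.0 + 10 ** ((rating_loser - rating_winner) / 400.0))
    # more points are exchanged for an unexpected win (low expected value)
    delta = k * (1.0 - expected)
    return rating_winner + delta, rating_loser - delta

# example: an upset moves more points than an expected win
print(elo_update(1450, 1650))   # underdog wins -> large exchange
print(elo_update(1650, 1450))   # favorite wins -> small exchange
\end{verbatim}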
To measure team performance, we use a team's elo rating at the end of a season as well as the elo difference between the end of this season and last season.
A high elo rating at the end indicates an absolute sense of strong performance;
a great elo rating difference suggests that a team has been improving.
In addition to studying how team performance over a season affects fan loyalty, we also include team performance over a month to check the robustness of the results.
\subsubsection{Game Information}
To test {\bf H1}, we need to use the interaction between game performance and top (bottom) team so that we can capture whether a top team loses or a bottom team wins.
We define {\em top team} as teams with the highest five elo ratings at the end of a season and {\em bottom team} as teams with the lowest five elo ratings at the end of a season.
We also include the following variables to characterize a single game:
1) Two team's elo ratings, which can partly measure the importance of a game;
2) (Basketball) point difference, which captures how close a game is;
3) Rivalry game, which indicates known rivalry relations in the NBA, such as the Lakers and the Celtics.
We collect all pairs of the NBA rivalry teams from Wikipedia~\cite{wiki:rivalry}.
\subsubsection{Team Information}
To capture team properties, \replaced{we include a team's market value, average age of players,
and the number of star players.}{we include both a team's market value and average age.}
Market value estimates the value of a team on the current market.
There are three key factors that impact a team's market value,
including market size, recent performance and history~\cite{marketvalue}.
We collect market values of all NBA teams from Forbes.
We scrape the average age of players on the roster from Basketball-Reference,
which computes the average age of players as of Feb 1st of that season.
The website chooses to calculate average age
on Feb 1st because it is near the player trade deadline~\citep{NBARule}, and every team has a relatively stable roster at that time.
\added{The number of star players measures the number of players selected
to play in the NBA All-Star Game~\citep{wiki:NBA_All-Star_Game} of that season
and this information is collected from Basketball-Reference.}
We further include variables that characterize a team's playing style:
1) Offense: the average points scored per game;
2) Defense: the average points allowed per game;
3) Turnovers: the average number of turnovers per game.
\added{Teams' playing style information by season is collected from Basketball-Reference.}
\subsection{Analysis for \replaced{H1}{RQ1}}
\begin{figure}
\begin{subfigure}[t]{0.48\textwidth}
\includegraphics[width=\textwidth]{NumofCommentsinGameThread}
\caption{The CDF of \#comments in game threads.}
\label{fig:CDFComment}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.48\textwidth}
\includegraphics[width=\textwidth]{NumofUniqueUsersinGameThread}
\caption{The CDF of \#unique users in game threads.}
\label{fig:CDFUser}
\end{subfigure}
\caption{\added{Figure~\ref{fig:CDFComment} shows the cumulative distribution of the number
of comments in all game threads in both \communityname{/r/NBA} and team subreddits.
Figure~\ref{fig:CDFUser} shows the cumulative distribution of the number
of unique users in all game threads in both \communityname{/r/NBA} and team subreddits.}
}
\label{fig:gamethreadsCDF}
\end{figure}
Online fan communities provide a platform for fans to discuss sports games in real time and make the game watching experience interactive with other people on the Internet.
Accordingly, every team subreddit
posts a ``Game Thread''
before the start of
a game.
Fans are encouraged to make comments related to a game in its game thread.
\added{Figure~\ref{fig:gamethreadsCDF} shows the cumulative distributions of the number of comments and the number of unique users.}
A game thread can
accumulate hundreds or thousands of comments.
The number of comments is usually significantly higher during game time than other time periods.
\replaced{Figure~\ref{fig:proportioncomments} shows the average proportion of comments
made in each team subreddit by hour on the game day of the 2017 season
(normalized based on games' starting hour).
The number of comments peaked around the game time.}
{Figure~\ref{fig:proportioncomments} shows the proportion of comments
by hour on two randomly picked consecutive
game days in \communityname{/r/Lakers} and the number of comments
peaked around game time.}
\begin{figure}
\centering
\includegraphics[width=0.65\linewidth]{ProportionofCommentsbyHour}
\caption{%
\replaced{The average proportion of comments made in each team subreddit
by hour on the game day during the 2017 season (normalized based on game's starting hour).
Error bars represent standard errors and are too small to see in the figure.
Comment activity increases and peaks at the second hour after the game starts,
as a typical NBA game takes around 2.5 hours.}
{The distribution of comments by hour on two randomly picked consecutive game days
in \communityname{/r/Lakers}.
On 2017-01-12, the game started on 17:30
and the number of comments peaked
from 17:00 to 20:00, as a typical NBA game lasts around 2.5 hours. On 2017-01-14, the game started on 12:30,
a similar peak occurred from 12:00 to 15:00.}
}
\label{fig:proportioncomments}
\end{figure}
We use the number of comments in game threads to
capture the fan activity level for a game.
Most game threads used \replaced{titles that are similar to this}{the title} format: ``[Game Thread]: team 1 @ team 2''.\footnote{If more than one game thread is created for the same game,
only the first one is kept, and the others are deleted by the moderator.}
\replaced{We}{After a careful regular expression matching, we} detected 8,596 game threads
in team subreddits and 6,277 game threads in \communityname{/r/NBA} \added{based on regular expression matching}.
Since NBA-related subreddits allow any fan to create game threads,
titles of game threads
do not follow the same
pattern, especially in the early days of team subreddits.
A detailed explanation and sanity check is presented in \secref{sec:appendix_thread}.
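The matching itself is straightforward string processing; a simplified sketch is shown below. The regular expression is illustrative only: the patterns actually used also handle the older title variants mentioned above.
\begin{verbatim}
# Sketch: flagging game threads from submission titles (simplified; the
# patterns actually used also cover older title variants).
import re

GAME_THREAD_RE = re.compile(
    r"^\s*\[?game\s*thread\]?\s*[:\-]?\s*(.+)\s+(@|vs\.?)\s+(.+)$",
    re.IGNORECASE)

def is_game_thread(title):
    return GAME_THREAD_RE.match(title) is not None

print(is_game_thread("[Game Thread]: Cavaliers @ Warriors"))   # True
print(is_game_thread("Post-game discussion"))                  # False
\end{verbatim}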
\replaced{Hierarchical}{OLS} regression analysis was used to analyze the effect of team performance
in a single game on fan activity.
Our \replaced{full}{formal} linear regression model to test {\bf H1}
is shown below:
\begin{align}
\label{eq:h1}
\variablename{\#comments in game thread} \sim & \beta_0 + \added{\beta_s\,\variablename{season} + \beta_m\,\variablename{month}
+ \beta_t\,\variablename{top team} + \beta_b\,\variablename{bottom team}} \nonumber\\
& + \beta_1\,\variablename{winning} + \beta_2\,\variablename{top team winning} + \beta_3\,\variablename{top team losing} \nonumber \\
& + \beta_4\,\variablename{bottom team winning} + \beta_5\,\variablename{bottom team losing} \nonumber \\
& + \beta_6\,\variablename{team elo} + \beta_7\,\variablename{opponent elo} + \beta_{8}\,\variablename{rivalry or not}
+ \beta_{9}\,\variablename{point difference} \nonumber \\
& + \beta_{10}\,\variablename{market value} + \beta_{11}\,\variablename{average age} + \added{\beta_{12}\,\variablename{\#star players}} + \added{\beta_{13}\,\variablename{\#unique users}} \nonumber \\
& + \beta_{14}\,\variablename{offense} + \beta_{15}\,\variablename{defense} + \beta_{16}\,\variablename{turnovers} \nonumber.\\
\end{align}
To test our hypothesis in team subreddits, all the variables in Equation~\ref{eq:h1} are included.
Unlike game threads in team subreddits, game threads in \communityname{/r/NBA} %
involve two teams and the following variables are ill-defined:
``winning,'' ``offense,'' ``defense,'' and ``turnovers.''
Therefore, these variables are removed when testing our hypothesis on game threads in \communityname{/r/NBA}.
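As a concrete illustration of how Equation~\ref{eq:h1} can be estimated, the sketch below fits the model with the statsmodels formula interface, entering season and month as categorical controls. The data-frame column names and input file are hypothetical, and the single OLS call is a simplified stand-in for the stepwise hierarchical analysis reported later.
\begin{verbatim}
# Sketch: estimating the game-level activity model with statsmodels.
# Column names and the input file are hypothetical; one OLS fit stands in
# for the stepwise hierarchical analysis reported in the results table.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("game_threads.csv")   # hypothetical input file
formula = ("comments ~ C(season) + C(month) + top_team + bottom_team"
           " + winning + top_team_winning + top_team_losing"
           " + bottom_team_winning + bottom_team_losing"
           " + team_elo + opponent_elo + rivalry + point_difference"
           " + market_value + average_age + star_players + unique_users"
           " + offense + defense + turnovers")
fit = smf.ols(formula, data=df).fit()
print(fit.summary())
\end{verbatim}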
\subsection{Analysis for \replaced{H2}{RQ2}}
Fan loyalty refers to people displaying recurring behavior and a strong
positive attitude towards a team~\citep{dwyer2011divided}.
To examine the relationship between team performance and fan loyalty in team subreddits, we first define active users as those that
post or comment in a team subreddit during a time period.
We then define two measurements of fan loyalty:
\textit{seasonly user retention} and \textit{monthly user retention}.
Seasonly user retention refers to the proportion of users that remain active in season $s+1$ among all users that are active in season $s$.
Monthly user retention refers to the proportion of users that remain active in month $m+1$ among all users that are active in month $m$.
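Concretely, given the set of users active in each period, both retention measures reduce to a set intersection; a minimal sketch is shown below, where the data layout is an assumption for illustration.
\begin{verbatim}
# Sketch: user retention from sets of active users per period (season or
# month). active[p] is the set of users active in period p of a subreddit.
def retention(active, periods):
    rates = {}
    for prev, nxt in zip(periods[:-1], periods[1:]):
        if active[prev]:
            rates[prev] = len(active[prev] & active[nxt]) / len(active[prev])
    return rates

active = {"2016": {"a", "b", "c", "d"}, "2017": {"b", "c", "e"}}
print(retention(active, ["2016", "2017"]))   # {'2016': 0.5}
\end{verbatim}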
The \replaced{full}{formal} linear regression models to test {\bf H2}
are shown below:
\begin{align}
\label{eq:h2season}
\variablename{seasonly user retention} \sim & \beta_0 + \added{\beta_s\,\variablename{season}} \nonumber \\
& + \beta_1\,\variablename{season elo} + \beta_2\,\variablename{season elo difference} \nonumber \\
& + \beta_{3}\,\variablename{market value} + \beta_{4}\,\variablename{average age} + \added{\beta_{5}\,\variablename{\#star players}} + \added{\beta_{6}\,\variablename{\#unique users}} \nonumber \\
&+ \beta_{7}\,\variablename{offense} + \beta_{8}\,\variablename{defense} + \beta_{9}\,\variablename{turnovers} \nonumber. \\
\end{align}
\begin{align}
\label{eq:h2month}
\variablename{monthly user retention} \sim & \beta_0 + \added{\beta_s\,\variablename{season}
+ \beta_m\,\variablename{month}} \nonumber \\
& + \beta_1\,\variablename{month elo} + \beta_2\,\variablename{month elo difference} \nonumber \\
& + \beta_{3}\,\variablename{market value} + \beta_{4}\,\variablename{average age} + \added{\beta_{5}\,\variablename{\#star players}} + \added{\beta_{6}\,\variablename{\#unique users}} \nonumber \\
&+ \beta_{7}\,\variablename{offense} + \beta_{8}\,\variablename{defense} + \beta_{9}\,\variablename{turnovers} \nonumber. \\
\end{align}
\subsection{Analysis for \replaced{H3}{RQ3}}
Among the five topics listed in Table~\ref{tab:topic}, \topicname{season prospects} and
\topicname{future} topics are closely related to our hypotheses about fans talking about winning and framing the future.
By applying the trained LDA model to comments in each team subreddit,
we are able to estimate the average topic distribution of each team subreddit by season.
Our \replaced{full}{formal} linear regression model to test {\bf H3}
is shown below:
\begin{align}
\label{eq:h3}
\variablename{topic weight} \sim & \beta_0 + \added{\beta_s\,\variablename{season}} \nonumber \\
& + \beta_1\,\variablename{season elo} + \beta_2\,\variablename{season elo difference} \nonumber \\
& + \beta_{3}\,\variablename{market value} + \beta_{4}\,\variablename{average age} + \added{\beta_{5}\,\variablename{\#star players}} + \added{\beta_{6}\,\variablename{\#unique users}} \nonumber \\
&+ \beta_{7}\,\variablename{offense} + \beta_{8}\,\variablename{defense} + \beta_{9}\,\variablename{turnovers} \nonumber, \\
\end{align}
where \variablename{topic weight} can be the average topic weight of either \topicname{season prospects} or \topicname{future.}
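The topic weights entering this analysis are obtained by applying the trained topic model to every comment and averaging the resulting distributions per team subreddit and season. The sketch below illustrates this aggregation step; it again uses gensim as a stand-in for the toolbox used in our pipeline, so the specific calls are assumptions for illustration.
\begin{verbatim}
# Sketch: average topic distribution per (subreddit, season), using a trained
# gensim LDA model as a stand-in for the toolbox used in the paper.
import numpy as np

def avg_topic_weights(lda, dictionary, comments_by_group):
    # comments_by_group maps (subreddit, season) -> list of tokenized comments
    result = {}
    for group, docs in comments_by_group.items():
        weights = np.zeros(lda.num_topics)
        for doc in docs:
            bow = dictionary.doc2bow(doc)
            for topic_id, w in lda.get_document_topics(bow, minimum_probability=0.0):
                weights[topic_id] += w
        result[group] = weights / max(len(docs), 1)
    return result
\end{verbatim}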
\begin{table}[t]
\centering
\small
\begin{tabular}{l|LLL|LLL}
\toprule
& \multicolumn{3}{c|}{Team subreddits} & \multicolumn{3}{c}{\communityname{/r/NBA}} \\
\multicolumn{1}{c|}{Variable} & \multicolumn{1}{c}{Reg. 1} & \multicolumn{1}{c}{Reg. 2} & \multicolumn{1}{c|}{Reg. 3} & \multicolumn{1}{c}{Reg. 1} & \multicolumn{1}{c}{Reg. 2} & \multicolumn{1}{c}{Reg. 3} \\
\midrule
\textit{Control: season} &&&&&& \\
2014 & 0.011*** & 0.012*** & 0.013*** & 0.007*** & 0.007*** & 0.005*** \\
2015 & 0.032*** & 0.032*** & 0.045*** & 0.010*** & 0.010*** & 0.006*** \\
2016 & 0.046*** & 0.046*** & 0.062*** & 0.014*** & 0.015*** & 0.012*** \\
2017 & 0.067*** & 0.068*** & 0.081*** & 0.018*** & 0.019*** & 0.016*** \\ [5pt]
\textit{Control: top/bottom team} &&&&&& \\
top team & & 0.012*** & 0.012*** & & 0.012*** & 0.020*** \\
bottom team & & -0.012*** & -0.007*** & & -0.009*** & -0.007** \\ [5pt]
\textit{Performance} &&&&&& \\
winning & & & 0.003** & & & \multicolumn{1}{c}{--} \\
top team winning & & & -0.006*** & & & -0.018*** \\
top team losing & & & 0.006*** & & & 0.018*** \\
bottom team winning & & & 0.007*** & & & 0.006*** \\
bottom team losing & & & -0.011*** & & & -0.003* \\ [5pt]
\textit{Game information} &&&&&& \\
team elo & & & 0.083*** & & & 0.091*** \\
opponent elo & & & 0.070*** & & & 0.111*** \\
rivalry or not & & & 0.010*** & & & 0.008*** \\
point difference & & & -0.017*** & & & -0.010*** \\ [5pt]
\textit{Team information} &&&&&&\\
market value & & & 0.051*** & & & 0.013*** \\
average age & & & -0.067*** & & & -0.015* \\
\added{\#star players} & & & 0.040*** & & & 0.020*** \\
\added{\#unique users}& & & 0.058*** & & & 0.017*** \\
offense & & & 0.023** & & & \multicolumn{1}{c}{--}\\
defense & & & -0.012** & & & \multicolumn{1}{c}{--} \\
turnovers & & & 0.084*** & & & \multicolumn{1}{c}{--} \\
\midrule
intercept & -0.010** & -0.011** & 0.085*** & 0.001 & -0.004** & -0.156*** \\
Adjusted $R^2$ & 0.236 & 0.286 & 0.440 & 0.302 & 0.338 & 0.644 \\
Intraclass Correlation (Season) \cite{packageICC} & 0.087 & \multicolumn{1}{c}{--} & \multicolumn{1}{c|}{--} & 0.021 & \multicolumn{1}{c}{--} & \multicolumn{1}{c}{--} \\
\bottomrule
\end{tabular}
\caption{\replaced{Hierarchical regression analyses}{Linear regression models} for game-level activity
in team subreddits and \communityname{/r/NBA}.
\added{Month is also added as a control variable for each model.}
\textbf{Throughout this paper, the number of stars indicates p-values, ***:
$p<0.001$, **: $p<0.01$, *: $p<0.05$.}
\added{We report $p$-values without the Bonferroni correction in all the regression tables.
In \secref{sec:ftest}, we report $F$-test results with the null hypothesis that adding team performance variables does not provide a significantly better fit and reject the null hypothesis after the Bonferroni correction.}
}
\label{tab:single}
\end{table}
\section{Results}
Based on the above variables, our results from \replaced{hierarchical regression analyses}
{linear regression models} by and large validate our hypotheses.
Furthermore, we find that the average age of players on the roster consistently plays an important role in fan behavior,
while it is not the case for market value \added{and the number of star players}.
\subsection{How does Team Performance Affect Game-level Activity? (\replaced{H1}{RQ1})}
Consistent with {\bf H1}, regression results show that the
top team losing and the bottom team winning correlate with higher levels of fan activity in both team subreddits and \communityname{/r/NBA}.
Table~\ref{tab:single} presents the results of our \replaced{hierarchical regression analyses}{OLS linear regression}.
The $R^2$ value is \replaced{0.40}{0.39} for team subreddits and \replaced{0.63}{0.52} for \communityname{/r/NBA},
suggesting that our \replaced{independent variables}{linear regression} can reasonably \replaced{recover}{predict} fan
activity
in game threads.
Overall, fans are more active when their team wins in team subreddits
(remember that the notion of one's team does not hold in \communityname{/r/NBA}).
The interaction terms with the top team and the bottom team show that surprise can stimulate fan activity:
both the top team losing and the bottom team winning have significantly positive coefficients.
To put this into context, in the 2017 season, the average winning percentage of the top five teams is 69\%.
Fans of the top teams may get used to their teams winning games, in which case losing becomes a surprise.
On the contrary, the average winning percentage of the bottom five teams is 31\%.
It is \replaced{invigorating}{``surprising''} for these fans to watch their team winning.
The \replaced{extra excitement}{surprise} can stimulate more comments in the game threads in both team subreddits and \communityname{/r/NBA}.
In comparison, when top teams win or bottom teams lose, fans are less active, evidenced by the negative coefficient in team subreddits (not as statistically significant in \communityname{/r/NBA}).
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{DayComments}
\caption{Average number of comments on winning, losing and non-game days
for the top three and the bottom three teams in the 2017 (left) and 2016 (right) regular season.
In all the top and bottom teams,
the average number of comments on game days is significantly higher than non-game days.
In all top teams, the average number of comments on losing days is higher than winning days,
while bottom teams show the opposite trend.
}
\label{fig:commentvolume}
\end{figure}
To further illustrate this contrast,
Figure~\ref{fig:commentvolume}
shows the average number of comments
on winning, losing, and non-game days for the top three and the bottom three teams in the 2017 and 2016 regular season.
Consistent patterns arise: 1) In all top and bottom teams,
the average number of comments on game days is significantly higher than non-game days;
2) In all top teams, the average number of comments on losing days is higher than winning days, but bottom teams show exactly the opposite trend.
Our results differ from those of \citet{Leung2017Effect}\replaced{, which finds that}{where} unexpected winning does not have a significant impact on Wikipedia page edits.
\added{One of the primary differences between our method and theirs is that
they did not specifically control for the effect of top/bottom teams.}
It \replaced{may also}{can} be explained by the fact that Wikipedia page edits
do not capture the behavior of most fans and are much more sparse than comments in online fan communities.
Online fan communities provide rich behavioral data
for understanding how team performance affects fan behavior.
\added{The number of fans involved in our dataset is much higher than that in their Wikipedia dataset.
A comparison between fans' behavior on Reddit and Wikipedia could be an interesting
direction for future research.}
In addition, game information and team information also serve as important \replaced{factors}{predictors}.
Among variables about game information, point difference is negatively correlated with game-level user activity, as
the game intensity tends to be higher when the point difference is small (a close game).
Better teams (with higher elo ratings)
playing against better teams or rivalry teams correlates with higher user activity levels.
As for team information, a team's market value\replaced{, the number of unique users, and the number of star players are}{ is}
positively correlated with the number of comments,
\replaced{since these factors are}{since market value is}
closely related to the number of fans.
Younger teams with more average points scored and
fewer points allowed per game stimulate more discussion in team subreddits.
\begin{table}[t]
\small
\centering
\begin{tabular}{l|LL|LL}
\toprule
\multicolumn{1}{l|}{} & \multicolumn{2}{c|}{Seasonly user retention} & \multicolumn{2}{c}{Monthly user retention} \\
\multicolumn{1}{c|}{Variable} & \multicolumn{1}{c}{Reg. 1} & \multicolumn{1}{c|}{Reg. 2}
& \multicolumn{1}{c}{Reg. 1} & \multicolumn{1}{c}{Reg. 2} \\ \midrule
\textit{Control: season} &&&&\\
2014 & 0.237*** & 0.237*** & 0.086*** & 0.055*** \\
2015 & 0.184*** & 0.187*** & 0.126*** & 0.126*** \\
2016 & 0.088*** & 0.116*** & 0.130*** & 0.139*** \\
2017 & 0.073*** & 0.090*** & 0.137*** & 0.141*** \\[5pt]
\textit{Performance} &&&&\\
season elo & & -0.370** & & \multicolumn{1}{c}{--} \\
season elo difference & & 0.229*** & & \multicolumn{1}{c}{--} \\
month elo & & \multicolumn{1}{c}{--} & & -0.170** \\
month elo difference & & \multicolumn{1}{c}{--} & & 0.032** \\[5pt]
\textit{Team information} &&&&\\
market value & & 0.068* & & 0.051** \\
average age & & -0.105* & & -0.021 \\
\added{\#star players} & & -0.038 & & 0.041 \\
\added{\#unique users} & & 0.181*** & & 0.111*** \\
offense & & -0.168 & & 0.004 \\
defense & & -0.077 & & -0.053 \\
turnovers & & -0.037 & & -0.041 \\
\midrule
intercept & 0.583*** & 0.629*** & 0.478*** & 0.460*** \\
Adjusted $R^2$ & 0.286 & 0.503 & 0.155 & 0.232 \\
Intraclass Correlation (Season) \cite{packageICC} & 0.396 & \multicolumn{1}{c|}{--} & 0.029 & \multicolumn{1}{c}{--} \\
\bottomrule
\end{tabular}
\caption{\replaced{Hierarchical regression analyses}{Linear regression models}
for seasonly user retention rate and monthly user retention rate in team subreddits.
\added{Month is also added as a control variable for the monthly user retention analysis.
{\em \#unique users} is counted every season for the seasonly user retention analysis
and every month for the monthly user retention analysis.}
For both dependent variables, a team's overall performance has a negative
coefficient, while short-term performance and
market value have positive coefficients.
\deleted{Average age and playing style don't have significant effects
on user retention rate.}
}
\label{tab:loyalty}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{UserRetention}
\caption{Seasonly user retention rate and average monthly user retention rate of the top three and bottom three teams in the 2017 (left) and 2016 (right) season.
Bottom teams consistently have higher user retention than top teams.}
\label{fig:retention}
\end{figure}
\subsection{How does Team Performance Relate to Fan Loyalty in Team Subreddits? (\replaced{H2}{RQ2}) }
Our findings confirm \textbf{H2}, that top teams tend to have lower fan loyalty
and bottom teams tend to have higher fan loyalty, measured by both seasonly user retention and monthly user retention.
Table~\ref{tab:loyalty} shows the \replaced{hierarchical}{linear} regression results.
In both regression \replaced{analyses}{models}, elo rating,
which measures a team's absolute performance,
has a statistically significant negative impact on user retention rate.
The coefficient of elo rating also has the greatest absolute value among all variables (except intercept).
Meanwhile, improved performance reflected by elo difference positively correlates with user retention.
Figure~\ref{fig:retention} presents the seasonly user retention rate
and average monthly user retention rate of the top 3 and bottom 3 teams in the 2017 (left) and 2016 (right) season.
It is consistent that in these two seasons,
bottom teams have higher user retention rate than top teams, both seasonly and monthly.
This may be explained by the famous ``bandwagon'' phenomenon in professional sports~\cite{wann1990hard}:
Fans may ``jump on the bandwagon'' by starting to follow the current top teams,
which provides a short cut to achievement and success for them.
In comparison, terrible team performance can serve as a loyalty filter.
After a period of poor performance,
only die-hard fans
stay active and optimistic in the team subreddits.
Our results echo the finding by \citet{hirt1992costs}:
after developing strong allegiances with a sports team, fans find it difficult
to disassociate from the team, even when the team is unsuccessful.
It is worth noting that the low fan loyalty of the top teams cannot simply be explained by the fact that they tend to have more fans.
\deleted{Although we did not explicitly include the number of users as an independent variable because market value is highly correlated with the number of users (Pearson correlation at 0.71),
our results are robust, even if we include \#users in the regression models.}
In fact, teams with higher market value \added{and more unique users} (more fans) tend to have a higher user retention rate, partly because their success depends on a healthy and strong fan community.
Similar to game-level activity, fans are more loyal to younger teams, at least in seasonly user retention \added{(the coefficient is also negative for monthly user retention with a $p$-value of 0.07)}.
Surprisingly, according to our \added{hierarchical} regression results, a
team's \added{number of star players and} playing style (offense, defense, and turnovers)
\replaced{have}{has} no significant impact on user retention.
\begin{table}[t]
\small
\centering
\begin{tabular}{l|LL|LL}
\toprule
\multicolumn{1}{l|}{} & \multicolumn{2}{c|}{\topicname{season prospects}} & \multicolumn{2}{c}{\topicname{future}} \\
\multicolumn{1}{c|}{Variable} & \multicolumn{1}{c}{Reg. 1} & \multicolumn{1}{c|}{Reg. 2}
& \multicolumn{1}{c}{Reg. 1} & \multicolumn{1}{c}{Reg. 2} \\ \midrule
\textit{Control: season} &&&&\\
2014 & 0.096*** & 0.056** & 0.059* & 0.137*** \\
2015 & 0.067** & 0.049* & 0.054* & 0.129*** \\
2016 & 0.069** & 0.053* & 0.109*** & 0.175*** \\
2017 & 0.061* & 0.048* & 0.065* & 0.160*** \\[5pt]
\textit{Performance} &&&&\\
season elo & & 0.410*** & & -0.415** \\
season elo difference & & -0.130 & & -0.099 \\[5pt]
\textit{Team information} &&&&\\
market value & & -0.065 & & 0.051 \\
average age & & 0.149*** & & -0.189*** \\
\added{\#star players} & & 0.018 & & -0.187**\\
\added{\#unique users} & & 0.071 & & -0.104 \\
offense & & -0.036 & & 0.037 \\
defense & & -0.120 & & -0.115 \\
turnovers & & 0.059 & & -0.092 \\ \midrule
intercept & 0.398*** & 0.286*** & 0.478*** & 0.814*** \\
Adjusted $R^2$ & 0.013 & 0.578 & 0.002 & 0.619 \\
Intraclass Correlation (Season) \cite{packageICC} & 0.059 & \multicolumn{1}{c|}{--} & 0.003 & \multicolumn{1}{c}{--} \\
\bottomrule
\end{tabular}
\caption{\replaced{Hierarchical regression analyses}{Linear regression models} for \topicname{season prospects} topic weight and \topicname{future} topic weight in team subreddits.
Team performance has positive correlation
with \topicname{season prospects} topic and negative correlation with \topicname{future} topic. }
\label{tab:rq2topic}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{Topic_Scatterplot}
\caption{Scatterplot of \topicname{season prospects} topic weight and \topicname{future} topic weight in all the team subreddits in the 2017 (left) and 2016 (right) season.
The top three teams and bottom three teams are represented by team logos instead of points.
Teams are ranked by elo rating at the end of each season. Fans of the top teams tend to discuss much more \topicname{season prospects} topics (lower right corner) and
fans of the bottom teams tend to discuss much more \topicname{future} topics (upper left corner).
}
\label{fig:topic}
\end{figure}
\subsection{How does Team Performance Affect Topics of Discussion in Team Subreddits? (\replaced{H3}{RQ3})}
Our final question is concerned with the relation between team performance and topics of discussion in online fan communities.
Our results validate \textbf{H3}, that better teams have more discussions on \topicname{season prospects} and worse teams tend to discuss \topicname{future.}
Table~\ref{tab:rq2topic} presents the results of \replaced{hierarchical regression analyses}
{OLS linear regression} on \topicname{future} topic weight
and \topicname{season prospects} topic weight computed with our LDA model.
In both regressions, only team performance (season elo) and average age
have statistically significant coefficients.
Both team performance and average age are positively correlated with \topicname{season prospects} and negatively correlated with \topicname{future}.
\replaced{Moreover, the number of star players has a negative correlation with \topicname{future}
but has no significant effect on \topicname{season prospects.}}{Despite having only two variables with significant coefficients, both regression models achieve predictive power with $R^2$ above 0.55.}
\added{Despite having only two or three variables (excluding control variables and the intercept) with significant coefficients,
both regression analyses achieve a good fit with adjusted $R^2$ above 0.57.}
Note that the improvement in team performance (season elo difference) does not have a significant effect.
As an example, Figure~\ref{fig:topic} further shows topic weights of \topicname{future} and \topicname{season prospects}
for all the teams in the 2017 and 2016 season.
The top 3 teams and bottom 3 teams in each season are highlighted using team logos.
The top teams are consistently in the lower right corner (high \topicname{season prospects}, low \topicname{future}),
while the bottom teams are
in the upper left corner (low \topicname{season prospects}, high \topicname{future}).
Our results echo the finding in \citet{doyle2017there}:
framing the future is an important strategy for fans of teams with poor performance to maintain a positive identity
in the absence of success.
The effect of average age reflects the promise that young talents hold for NBA teams.
Although it takes time for talented rookies who have just come out of college to develop the physical and mental strength to compete in the NBA,
fans can see great potential in them and
remain positive about their team's future, despite the team's short-term poor performance.
In contrast, veteran players are expected to bring immediate benefits to the team and compete for playoff positions and even championships.
For example, \citet{agingveteran} lists a number of veteran players
who either took a pay cut or accepted a smaller role in top teams
to chase a championship ring at the end of their career.
A team's playing style, including offense points, defense points, and turnovers, doesn't seem to influence the topic weights of these two topics.
We also run regression for the other three top topics in Table~\ref{tab:topic} and present the results in \secref{sec:appendix_topic_regression}.
Team performance plays a limited role for the other three topics, while average age is consistently significant for all three discussion topics.
\section{Concluding Discussion}
\label{sec:conclusion}
In this work, we provide the first large-scale characterization
of online fan communities of the NBA teams.
We build a unique dataset that combines user behavior in
NBA-related subreddits and statistics of team performance.
We demonstrate how team performance affects fan behavior both
at the game level and at the season level.
Fans are more active when top teams lose and bottom teams win,
which suggests that
in addition to simply winning or losing,
surprise plays an important role in driving fan activity.
Furthermore,
a team's strong performance doesn't necessarily make the fan community more loyal.
It may attract ``bandwagon fans'' and result in a low user retention rate.
We find that the bottom teams generally have higher
user retention rate than the top teams.
Finally, fans of the top teams and
bottom teams
focus on different topics of discussion.
Fans of the top teams talk more about season records, playoff seeds, and winning
the championship, while fans of the bottom teams spend more time framing
the future to compensate for the lack of recent success.
\para{Limitations.}
One key limitation of our work is the representativeness of our dataset.
First, although our study uses a dataset that spans five years, our period coincides with the rapid growth of the entire Reddit community.
We use \textit{season} and \textit{month} to try our best to account for temporal differences,
but our sample could still be skewed towards fans who joined during this period of growth.
Second, although \citet{rnba} suggests that \communityname{/r/NBA} is now playing an important role among fans,
the NBA fan communities on Reddit may not be representative of the Internet and the whole offline population.
Another limitation of our work lies in our measurement.
For game-level activity, we only consider the number of comments in the game threads.
This measurement provides a nice way to make sure that the comments are about the game, but we may have missed related comments in other threads.
We do not consider other aspects of the comments such as sentiment and passion.
In addition, our fan loyalty metric is entirely based on user retention.
A user who posts on a team subreddit certainly supports the team to a different extent from those who do not.
Our metric may fail to capture lurkers who silently support their teams.
Finally, our topics of discussion are derived from topic modeling, an unsupervised approach.
Supervised approaches could provide more accurate identification of topics, although such a deductive approach would limit us to a predefined set of topics independent of the dataset.
\para{Implications for online communities.}
First, our work clearly demonstrates that online communities do not only exist in the virtual world; they are usually
embedded in the offline context and attract people with similar offline interests.
It is an important research question to understand to what extent and how online communities relate to offline contexts as well as what fraction of online communities are entirely virtual.
Professional sports provide an interesting case,
because these online fan communities, in a way, only exist as a result of the offline sports teams and games.
Such connections highlight the necessity to combine multiple data sources to understand
how fans' usage of social media correlates with the on-going events of the topic of their interests.
Our study has the potential to serve as a window into the relationship between online social behavior
and offline professional sports. We show that subreddit activity has significant correlations
with game results and team properties.
Exploring the factors that motivate users of interest-based communities to communicate with social media
is also an important and rich area for future research.
\added{For example, a promising future direction is to study the reasons
behind fans departing a team subreddit.
Possible reasons include being disappointed
by the team performance or playing style, favorite players being traded, and being attacked by other fans in the team subreddit or \communityname{/r/NBA}.}
Second, our results show that teams with strong performance correlate with low fan loyalty.
These results relate to the multi-community perspective in online community research \cite{tan2015all,zhang2017community,hamilton2017loyalty,Zhu:2014:SEN:2556288.2557348}.
One future direction is to examine where fans migrate to and whether fans leave the NBA or Reddit altogether, and, more importantly, what factors determine such migration decisions.
Third, our findings reveal strategies for the design of sports-related online platforms.
Our results clearly demonstrate that teams in under-performing periods
are more likely to develop a more loyal fan base
that discusses more about their team's \topicname{future.}
Recognizing these loyal fans and acknowledging their contributions within the fan community can be critical for
facilitating attraction and retention of these fans.
For example, team subreddits' moderators may reward a unique flair to the users
who have been active in the community for a long time, especially during the difficult times.
\para{Implications for sports management.}
Our findings suggest that winning is not everything.
In fact, unexpected losses can stimulate fan activity.
The increase of fan activity does not necessarily happen in a good way.
For example, the fans of the Cavaliers, which won the Eastern Championship of the 2017 season, started to discuss firing the team's head coach Tyronn Lue after losing three of the first six games in the following season.
Managers may try to understand the role of expectation in fan behavior and guide the increased activity and attention towards improving the team and building a strong fan base.
We also find that the average age of the roster consistently plays an important role in fan behavior:
younger teams tend to bring more fan activity on game days and develop a more loyal fan base that discusses the \topicname{future.}
These results contribute to existing literature on the effect of age in sports management.
\citet{timmerman2000racial} finds that the average age is positively correlated with team performance, while the age diversity is negatively correlated (in other words, veterans improve team performance but are not necessarily compatible with young players).
The tradeoff between veteran players and young talents requires more research from the perspective of both team performance and fan engagement.
Finally, it is crucial for teams to maintain a strong fan base that can support them during unsuccessful times because it is difficult for sports teams to sustain winning for a long time.
This is especially true in the NBA since the draft lottery mechanism is designed
to give bottom teams opportunities to improve and compete.
Consistent with \citet{doyle2017there}, we find that framing the \topicname{future} can be an important strategy for teams with poor performance to maintain a positive group identity.
The absence of success can be a great opportunity to develop a deep attachment with loyal fans. Prior studies show that certain fan groups are willing
to persevere with the team they support through almost anything,
including years of defeat, in order to see themselves as die-hard fans.
By doing this, they feel that they will reap greater affective significance within the fan community
when the team becomes successful in the future~\cite{wann1990hard,hyatt2015using}.
It is important for managers to recognize these loyal fans and create ways to acknowledge and leverage their positions within the fan community.
For instance, teams may host ``Open Day'' and invite these loyal fans to visit facilities and
interact with star players and coaching staff.
Hosting Ask Me Anything (AMA)~\cite{wiki:AMA} interviews is another strategy to engage with online fan communities.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
Traffic jams are not only observed in vehicular traffic but also
in the crowd dynamics of mass-sport events, particularly cross-country
ski marathons. The Swedish
\emph{Vasaloppet}, a 90-km race with about 15\,000
participants, is the most prominent example (cf. Fig.~\ref{fig:photo}).
Several other races attract up to 10\,000
participants. Consequently, ``traffic jams'' among the athletes
occur regularly. They are not only a hassle for the
athletes but also pose organisational or
even safety threats.
While there are a few scientific investigations of the
traffic around such events~\cite{ahmadi2011analysis},
we are not aware of any investigations on
the crowd dynamics of the skiers
\emph{themselves}.
Unlike the athletes in running or skating events~\cite{TGF13-running}, the skiers in
Marathons for the classic style (which is required in the
Vasaloppet main race) move along fixed tracks, i.e., the traffic flow
is not only unidirectional but \emph{lane based}.
This allows us to generalize car-following and
lane changing models~\cite{TreiberKesting-Book} to formulate a microscopic model for the
motion of skiers.
Simulating the model allows event managers to improve the race
organization by identifying (and possibly eliminating) bottlenecks, determining the optimum
number of starting groups and
the maximum size of each
group, or optimizing the starting schedule~\cite{TGF13-running}.
We propose a
microscopic acceleration and track-changing model
for cross-country skiers taking into account different fitness levels,
gradients, and interactions between the athletes in all traffic
situations. After calibrating the model on
microscopic data of jam free sections of the \emph{Vasaloppet 2012},
we apply the open-source
simulator {\tt MovSim.org}~\cite{movsim} to simulate
all 15\,000 participants of the Vasaloppet during the first ten
kilometers. The simulations show that the initial jam causes a delay of
up to \unit[40]{minutes} which agrees with evidence from the data.
The next section introduces the model. In Section~\ref{sec:sim}, we
describe the calibration, the simulation, and the
results. Section~\ref{sec:concl} concludes with a discussion.
\begin{figure}
\fig{0.8\textwidth}{vasa2_cropped2.eps}
\caption{\label{fig:photo}Starting phase of the Vasaloppet 2012.}
\end{figure}
\section{\label{sec:mod}The Model}
Unlike the normal case in motorized traffic,
the ``desired'' speed (and acceleration) of a skier is
restricted essentially by his or her performance
(maximum mechanical power $P=P\sub{max}$),
and by the maximum speed $v_c$ for active propulsion ($P=0$ for
$v \ge v_c$). Since, additionally,
$P\to
0$ for $v\to
0$, it is plausible to model the usable power as a function of
the speed as a parabola,
\be
\label{pow1}
P(v,v_c)=4 P\sub{max}\frac{v}{v_c}\left(1-\frac{v}{v_c}\right) \theta(v_c-v),
\ee
where $\theta(x)=1$ if $x\ge 0$, and zero, otherwise.
While the maximum mechanical power is reached at $v_c/2$, the maximum
propulsion force $F\sub{max}=4P\sub{max}/v_c$, and the maximum acceleration
\be
a\sub{max}=\frac{4P\sub{max}}{mv_c},
\ee
is reached at zero speed.
The above formulas are valid for conventional techniques such as the
``diagonal step'' or ``double poling''. However, if the uphill
gradient (in radian) exceeds the angle $\alpha\sub{slip}=a\sub{max}/g$
(where $g=\unit[9.81]{m/s^2}$), no forward movement is possible in this
way. Instead, when $\alpha>\alpha\sub{max}/2$, athletes use the slow
but steady ``fishbone''
style described by~\refkl{pow1} with a lower maximum speed $V_{c2}$
corresponding to a higher maximum gradient $4P\sub{max}/(gmv_{c_2})$. In summary, the
propulsion force reads
\be
F(v,\alpha)=\twoCases
{P(v,v_c)/v}{\alpha<\alpha\sub{max}/2}
{P(v,v_{c2})/v}{\text{otherwise.}}
\ee
Balancing this force with the inertial, friction, air-drag, and
gravitational
forces defines the free-flow acceleration $\dot{v}\sub{free}$:
\be
m\dot{v}\sub{free} =F(v) - \frac{1}{2}c_d A \rho v^2
- mg(\mu_0+\alpha).
\ee
If the considered skier is following a leading athlete (speed $v_l$) at
a spatial gap $s$, the free-flow acceleration is complemented by the
decelerating interaction force of the intelligent-driver model
(IDM)\cite{TreiberKesting-Book} leading to the full longitudinal model
\be
\abl{v}{t}=\text{min}\left\{\dot{v}\sub{free},\,
a\sub{max}\left[1-\left(\frac{s^*(v,v_l)}{s}\right)^2\right]\right\},
\ee
where the desired dynamical gap of the IDM depends on the gap $s$ and the
leading speed $v_l$ according to
\be
s^*(v,v_l)=s_0+\max\left(0,\, vT+\frac{v (v-v_l)}
{2\sqrt{a\sub{max}b}}\right).
\ee
Besides the ski length, this model has the parameters $c_d A \rho/m$,
$\mu_0$, $P\sub{max}/m$, $v_c$
(defining $a\sub{max}$),
$v_{c2}$, $s_0$, $T$, and $b$ (see Table~\ref{tab:param}). It is
calibrated such that the maximum unobstructed speed $v\sub{max}$ on
level terrain,
defined by $F(v\sub{max},0)-c_d A \rho v\sub{max}^2/2-
mg\mu_0=0$, satisfies the observed speed distributions on level
unobstructed sections (Fig.~\ref{fig:speeds}).
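For concreteness, the following minimal sketch implements this longitudinal model in Python. Parameter values follow Table~\ref{tab:param}; the air density $\rho$ and the fishbone limit speed $v_{c2}$ are not listed there, so the values used below are assumptions for illustration.
\begin{verbatim}
# Minimal sketch of the longitudinal model.  Parameter values follow
# Table 1; the air density rho and fishbone limit speed v_c2 are not listed
# there and are assumed for illustration.
import math

m, cdA, rho, mu0, g = 80.0, 0.7, 1.2, 0.02, 9.81
P_max, v_c, v_c2    = 150.0, 6.0, 2.0
s0, T, b            = 0.3, 0.3, 1.0
a_max     = 4.0 * P_max / (m * v_c)      # maximum acceleration (at v = 0)
alpha_max = a_max / g                    # steepest gradient for normal styles

def propulsion_force(v, alpha):
    """F(v, alpha): parabolic power curve; fishbone style on steep climbs."""
    vc = v_c if alpha < alpha_max / 2.0 else v_c2
    if v >= vc:
        return 0.0
    if v <= 0.0:
        return 4.0 * P_max / vc          # limit of P(v)/v for v -> 0
    return 4.0 * P_max * (1.0 - v / vc) / vc

def acceleration(v, alpha, s=float("inf"), v_lead=0.0):
    """dv/dt: minimum of free acceleration and the IDM interaction term."""
    dv_free = (propulsion_force(v, alpha)
               - 0.5 * cdA * rho * v * v
               - m * g * (mu0 + alpha)) / m
    s_star = s0 + max(0.0,
                      v * T + v * (v - v_lead) / (2.0 * math.sqrt(a_max * b)))
    return min(dv_free, a_max * (1.0 - (s_star / s) ** 2))
\end{verbatim}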
\subsection{Lane-changing model}
We apply the general-purpose lane-changing model
MOBIL~\cite{TreiberKesting-Book}. Generally, lane changing and
overtaking are allowed on either side, and collisions are avoided far
less strictly than in vehicular traffic, so the
symmetric variant of the model
with zero politeness and rather aggressive safety settings is appropriate. Lane changing
takes place if it is both safe and advantageous. The safety criterion
is satisfied if, as a consequence of the change,
the back skier on the new track is not forced to decelerate by
more than his or her normal deceleration ability $b$:
\be
\label{safety}
\abl{v\sub{back,new}}{t} \ge -b.
\ee
A change is advantageous if, on the new track, the athlete can
accelerate more (or needs to decelerate less) than on the old track:
\be
\abl{v\sub{front,new}}{t} \ge \abl{v\sub{actual}}{t} + \Delta a,
\ee
where the only new parameter $\Delta a$ represents
some small threshold to avoid lane changing for
marginal advantages. Note that for mandatory lane changes (e.g., when a track
ends), only the safety criterion~\refkl{safety} must be satisfied.
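A minimal sketch of this track-changing decision is given below. The accelerations entering it are evaluated with the longitudinal model above; the numerical threshold $\Delta a$ used here is an assumed placeholder, not a calibrated value.
\begin{verbatim}
# Minimal sketch of the symmetric MOBIL track-changing rule.  The
# accelerations are evaluated with the longitudinal model above; the
# threshold delta_a = 0.1 m/s^2 is an assumed placeholder value.
def change_track(acc_new, acc_old, acc_back_new, b=1.0, delta_a=0.1,
                 mandatory=False):
    safe = acc_back_new >= -b        # safety: new follower not pushed below -b
    if mandatory:                    # e.g. the current track ends
        return safe
    advantageous = acc_new >= acc_old + delta_a
    return safe and advantageous
\end{verbatim}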
\begin{figure}
\fig{0.98\textwidth}{Vasaloppet2012.speed_S12.eps}
\caption{\label{fig:speeds}Speed density functions for the section
between Station~1 and~2 for each starting group. No jams were
observed in this section.}
\end{figure}
\begin{table}
\centering
\caption{\label{tab:param}Model parameters of the proposed longitudinal model}
\begin{tabular}{ll}
\hline\noalign{\smallskip}
Parameter & Typical Value ($4\sup{th}$ starting group)\\
\noalign{\smallskip}\hline\noalign{\smallskip}
ski length $l$ & \unit[2]{m} \\
Mass $m$ incl. equipment & \unit[80]{kg} \\
air-drag coefficient $c_d$ & 0.7 \\
frontal cross section $A$ & $\unit[1]{m^2}$ \\
friction coefficient $\mu_0$ & 0.02 \\ \hline
maximum mechanical power $P\sub{max}$ & \unit[150]{W}\\
limit speed for active action $v_c$ & \unit[6]{m/s} \\
time gap $T$ & \unit[0.3]{s} \\
minimum spatial gap $s_0$ & \unit[0.3]{m} \\
normal braking deceleration $b$ & $\unit[1]{m/s^2}$ \\
maximum deceleration $b\sub{max}$ & $\unit[2]{m/s^2}$ \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\section{\label{sec:sim}Simulation Results}
We have simulated all of the 15\,000 athletes of
the Vasaloppet~2012 for the first \unit[10]{km}
(cf. Fig.~\ref{fig:sim}) by implementing the model into the
open-source traffic simulator {\tt MovSim.org}. The starting field
includes 70 parallel
tracks (cf. Fig.~\ref{fig:photo}) where the 10 starting groups (plus a small elite group) are
arranged in order. Further ahead, the number of tracks decreases
gradually down to 8~tracks at the end of the uphill section for $x\ge
\unit[7]{km}$. The uphill gradients and the course geometry (cf. Fig.~\ref{fig:sim}) were
obtained using Google Earth.
\begin{figure}
\fig{0.8\textwidth}{simulation.eps}
\caption{\label{fig:sim}Screenshot of the MovSim Simulation of the
first \unit[10]{km} of the Vasaloppet~2012 (center) with an
enlargement of the diverge-merge section (left top). Also shown are
two photos of the crowd flow at
the corresponding sections (right).}
\end{figure}
\begin{figure}
\fig{0.98\textwidth}{tdistr.eps}
\caption{\label{fig:tdistr}Distribution functions of the split times
from the start to Station S1 (left), S1 to S2 (left and right), and
S6 to S7 (right), shown separately for the fastest groups (elite and
groups 1 and 2) and the remaining
groups~3 to~10. All three sections take about the same time.
Major jams occur only for the groups~3 to~10 and
only between the start and S1.}
\end{figure}
As in the real event, we simulated a mass start.
While the initial starting configuration dissolves
relatively quickly, massive jams form at the beginning of the gradient
section, particularly at the route divide (inset of
Fig.~\ref{fig:sim}).
In summary, the delays due to the jams accumulated up to \unit[40]{minutes}
for the last starting groups which agrees with the macroscopic
flow-based analysis of the split-time data (Fig.~\ref{fig:tdistr}).
\section{\label{sec:concl}Conclusion}
Using the open-source software MovSim, we have quantitatively reproduced the
congestions and stop-and-go waves on the first ten kilometers of the Vasaloppet
Race 2012. The jams, leading to delays of up to 40~minutes, are caused by a steep
uphill section and a simultaneous reduction of the number of
tracks. Further simulations have also shown that eliminating the worst
bottlenecks by locally adding a few tracks only transfers the jams to
locations further downstream. In contrast, replacing the mass start
(which is highly controversial) by
a wave start with a
five-minute delay between the starting groups would essentially
eliminate the jams without the need to reduce the total number of
participants.
\bibliographystyle{elsart-num}
\section{Introduction} \label{sec:introduction}
It is safe to say that sports are an integral part of many Americans' lives. Even with a global pandemic, sports viewership has been steadily increasing in America. \textit{It is estimated that by 2025, an average of 90 million Americans will be watching live sports at least once per month}~\footnote{https://www.statista.com/statistics/1127341/live-sport-viewership/}.
Out of all the different professional sports leagues, the NFL (National Football League) accounts for the majority of viewership, and has seen a nonstop growth every year in its audience. \textit{A projected massive total of 168 million people watched the NFL during the course of its season}; the 2021 NFL regular season averaged 17.1 million viewers, a 10\% increase from the 2020 NFL regular season~\footnote{https://bit.ly/3PgL11q};
and to top it all off, what's considered the world's most viewed sporting event, \textit{the Super Bowl, estimated a total of 112.3 million viewers}~\footnote{https://bit.ly/3PhzdvN},
which meant approximately one-third of the entire US population spent their Sunday afternoon in front of a TV screen to tune in to watch football. With viewership booming and revenue coming in from every direction, \textit{the NFL is named the most profitable sports league in the world}~\footnote{https://bit.ly/3Pm76LQ}.
The profound success of the league allows its \textbf{salary cap}, which is the upper limit on the amount of money players on a team can be paid, \textit{to increase from \$182.5 million during the 2020 COVID year to \$208.2 million}~\footnote{https://bit.ly/3Ql82Bv},
and this number is projected to continue to grow.
One of the most intriguing parts of the NFL season is the off-season, which is a period of exhilaration and anticipation for fans, as they will get to witness players in new places or stay with their current teams. Just this season, we have seen record-breaking contracts all across the league, the most notable being the Cleveland Browns paying quarterback Deshaun Watson \$230 million all in guaranteed money (the largest guaranteed contract in NFL history)~\footnote{https://bit.ly/3QhN3PX}.
\textit{Something important to note about NFL contracts is that the money is only partially guaranteed}; contract extensions instead list the maximum amount (including incentives) that a player could make based on how well they perform, making the fully guaranteed contract that Deshaun Watson earned even more incredible. This raises the question of whether all NFL teams would provide their superstar players with large contract extensions.
The answer is no, and it is all because of a particular designation that the NFL introduced almost 30 years ago, back in 1993, called the \textbf{franchise tag}, which many NFL players now dread being signed under. In short, the franchise tag allows a team to sign a player who they consider to be a "franchise" player - an integral part of the team - to a fixed, one-year deal with an appropriate amount of guaranteed money depending on the player's position. At first glance, the franchise tag appears to be mutually beneficial to the team and the player. The team gets to retain a key player that largely impacts the team's performance while not breaking the bank, and the player gets a decent amount of guaranteed money. However, in recent years, the franchise tag has seemed to become \textit{lopsided towards the team and less towards the player}, leading to an outbreak of criticism, as many people believe that the franchise tag has become an exploitative tool for teams.
\textit{It is important to also know that the franchise tag can be used multiple years up to three times}, but it is nowhere as common as the one-year franchise tag. Although the amount of guaranteed money does go up, it is not by a substantial amount. The backlash for teams tagging a player multiple times has been well circulated, and has been most well stated by Seattle Seahawks All-Pro offensive tackle Walter Jones, one of two players ever to be tagged for three consecutive years~\footnote{https://www.fieldgulls.com/2007/4/6/204056/2795}:
"Maybe when it [franchise tag] was invented, it was good…teams tell you how much you should be flattered that they think enough of you to make you their franchise guy…but it's not a thriller…it's a killer watching all the deals get signed with huge bonuses and you're not getting the big money upfront. It's a lousy system."
These criticisms raise an intriguing question: is the NFL's franchise tag fair to players? To the best of my knowledge, this question has not been systematically studied. In this paper, I make an attempt to answer it. My main contributions are to examine the basis of this criticism, that is, players on a franchise tag obtaining smaller contract extensions than players not on a franchise tag, through a statistical and economic lens, and to offer a way of bridging the socioeconomic gap~\cite{blair2011sports} between the team and players on the franchise tag.
The paper proceeds as follows. Section \ref{sec:data} discusses the data and methods used. Section \ref{sec:result} presents the results. Section \ref{sec:discussion} presents related discussions to conclude the paper.
\section{Data and Methods} \label{sec:data}
\subsection{Data}
The common criticism of the franchise tag has been the fact that players under the designation have little to no negotiation power for their contract and will now have a smaller chance of signing a large contract extension the following season, regardless of whether they are a free agent or not. With the knowledge that an NFL player's prime is considerably shorter than in other sports due to the violent nature of the game, \textit{ranging from 25-29 years old}, and the fact that the average contract extension (the contract after a player's rookie contract) takes place around that age range as well, it is imperative for players to secure what could be the money they live the rest of their life on once their rookie contract is over. Teams are inclined to have higher expectations for franchise tagged players, so they will be met with an excessive amount of pressure. In addition, given the volatile nature of football, whether injuries or performance-wise, if anything negative happens to the franchise tagged player, it is almost certain that their free agency stock will drop to extremes where they may even be out of the league the following year. Even though players can \textbf{"hold out"} (a term used by football fans for players who voluntarily decide to sit out games and practices for the purpose of seeking a more expensive contract) when under the franchise tag, they paint themselves as a self-centered agitator, which discourages teams from offering them a contract. Figure \ref{fig:table} shows a chart of the amount of money players earn under the franchise tag. \textit{Although it is fully guaranteed, it is nowhere near the hundreds of millions of potential money players could be earning through free agency.} The 2022 NFL off-season was record-breaking, with 40 players earning 50+ million dollar contract extensions, the largest amount in an NFL off-season ever, equating to a total of about 2.3 billion dollars in guaranteed money.
To support my hypothesis of players on a franchise tag obtaining smaller contract extensions than players not on a franchise tag, I separate my contract extension data into three study groups: (1) players who were free agents, (2) players under a one-year franchise tag, and (3) players under a multi-year franchise tag. The criteria I use for obtaining my data are as follows:
\begin{itemize}
\item They all had to be players from the same off-season.
\item They all have to have signed their contracts within the 25-29 age range - the prime age for an NFL player - when most NFL players sign their first contract extension.
\item The players in the free agent group all had to be "superstar" or above average players, meaning they either have made a Pro Bowl or All-Pro selection or have garnered consistent production, since franchise-tagged players tend to be among the top 10\% at their position.
\end{itemize}
I collect my data through reliable and in-depth NFL statistical websites, such as Spotrac~\footnote{https://www.spotrac.com/}, a website dedicated to contracts and the breakdowns of each one, and Pro Football Reference~\footnote{https://www.pro-football-reference.com/}, a forum that provides player statistics and information. \textit{My data consist of 38 free agents, 16 one-year franchise tagged players, and 6 multi-year franchise tagged players,} sorted by three categories of player name, guaranteed money made from signing the contract, and maximum amount of money made from signing the contract. Figure \ref{fig:table} shows the actual data I curate for analysis for the three most recent off-seasons from 2020 to 2022, in order to provide a big enough sample size to conduct a significance test and make inferences on.
\subsection{Methods}
I conduct my analysis through the following four means.
1) \textbf{Table comparison}. I put the curated data into tables that compare guaranteed money and max amount of money earned from the contract for the three study groups, where I calculate the average amount for each circumstance as well as a baseline comparison.
2) \textbf{Box plot chart}~\cite{book}.
I create two box and whisker plot charts to help better visually represent my data. For the plots, I utilize the minimum, first quartile (25th percentile), median, third quartile (75th percentile), and maximum values from the respective datasets. Figure \ref{fig:box} shows my box and whisker plots.
3) \textbf{Statistical significance test}~\cite{book}.
I conduct six two-sample t-tests for the difference between two means, comparing free agents with franchise tagged players in terms of the guaranteed money and the maximum amount of money they earn on their contract extensions, through two pooled data tests (one-year and multi-year tags combined) and four non-pooled data tests; a short computational sketch is provided after this list of methods.
Mathematically, the six tests of the same nature are defined as follows.
\begin{itemize}
\item{Test 1} – Guaranteed money of free agent ($\mu_1$) vs (pooled) franchise tagged player ($\mu_2$)
\begin{equation} \label{eqn:test1}
H_0: \mu_1 = \mu_2 ~~ vs ~~ H_a: \mu_1 > \mu_2
\end{equation}
where\\
$\mu_1$: population mean of free agents contract extension guaranteed money,\\
$\mu_2$: population mean of franchise tagged players (one + multi year) contract extension guaranteed money.
\item{Test 2} – Guaranteed money of free agent ($\mu_1$) vs one-year franchise tagged player ($\mu_2$).
\item{Test 3} – Guaranteed money of free agent ($\mu_1$) vs multi-year franchise tagged player ($\mu_2$).
\item{Test 4} – Maximum money of free agent ($\mu_1$) vs (pooled) franchise tagged player ($\mu_2$).
\item{Test 5} – Maximum money of free agent ($\mu_1$) vs one-year franchise tagged player ($\mu_2$);
\item{Test 6} – Maximum money of free agent ($\mu_1$) vs multi-year franchise tagged player ($\mu_2$).
\end{itemize}
4) \textbf{Payoff matrix}~\cite{econ}.
I create two payoff matrices that showcase the projected outcomes of the actions that the player and team takes before my proposed solution and after my proposed solution.
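As a sketch of how the significance tests in method 3) can be computed, the snippet below runs one-sided two-sample t-tests with SciPy (version 1.6 or later for the \textit{alternative} keyword); the numbers are illustrative placeholders rather than my actual data set, and ``pooled'' refers to combining the one-year and multi-year tag groups.
\begin{verbatim}
import numpy as np
from scipy import stats

# illustrative guaranteed-money figures in millions (placeholders only)
free_agents = np.array([25.2, 30.0, 18.5, 41.0, 22.3, 27.8])
one_year    = np.array([21.0, 24.5, 19.8, 23.1, 22.0])
multi_year  = np.array([20.5, 22.0, 23.0])

# Test 1: free agents vs pooled (one-year + multi-year) tagged players,
# H0: mu1 = mu2 against Ha: mu1 > mu2
pooled = np.concatenate([one_year, multi_year])
t, p = stats.ttest_ind(free_agents, pooled, alternative='greater')
print(f"Test 1: t = {t:.3f}, one-sided p = {p:.3f}")

# Tests 2 and 3: free agents vs each tagged group separately
for name, group in [("one-year", one_year), ("multi-year", multi_year)]:
    t, p = stats.ttest_ind(free_agents, group, alternative='greater')
    print(f"{name}: t = {t:.3f}, one-sided p = {p:.3f}")
\end{verbatim}
The same calls, applied to the maximum-money columns, give Tests 4 through 6.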
\section{Results} \label{sec:result}
As shown in Figure \ref{fig:table}, \textit{there is a clear pattern: the more years a player spends on the franchise tag, the lower the guaranteed and total potential money that player makes.} For all the free agent players, the total average guaranteed and total average max money was 25.15 million and 46.55 million dollars, respectively. For the one-year franchise tagged players, the total average guaranteed and total average max money from contract extension was 22.77 million and 37.2 million dollars, respectively. For the multi-year franchise tagged players, the total average guaranteed and total average max money from contract extension was 21.62 million dollars and 35.97 million dollars, respectively.
When comparing the box and whisker plots as in Figure \ref{fig:box}, for both guaranteed money and max amount of money earned from contract extensions, \textit{the variance of the multi-year franchise-tagged players is significantly less than the variance of the free agents and one-year franchise-tagged players.} The median of free agents (24.75, 40.50) is also higher than that of one-year franchise tagged players (23.5, 36) and multi-year franchise tagged players (24, 33.5) for both guaranteed and max amount of money earned from contract extensions. Overall, these statistics indicate that there is a huge advantage for free agents signing contract extensions when compared to players on any type of franchise tag.
Through my six significance tests, I obtain p-values of 0.153, 0.252, 0.161, 0.012, 0.067, and 0.016. \textit{The tests on the maximum amount of money earned (in particular Tests 4 and 6) are statistically significant at the 5\% level, while the tests on guaranteed money are not}, suggesting that the large difference in the maximum money available from contract extensions is real rather than due to chance.
However, the numbers alone do not tell the whole story. Let us examine the selected players' situations more closely for the most recent off-season, 2022. When looking at the franchise tagged players, three out of the four of them had problems: Allen Robinson had a career-worst season with the Chicago Bears, Chris Godwin tore his ACL, and Marcus Maye tore his Achilles. As for the multi-year franchise tagged players, two out of the four of them had issues: Dak Prescott suffered a near career-ending ankle injury, and Brandon Scherff tore his MCL. It is safe to presume that after seeing their superstar player go down with injury, the respective teams feel far less incentive to pay them top dollar. Combined with the fact that each additional year under the franchise tag pushes the player further through their prime, meaning that performance inevitably declines, teams are more hesitant to sign an older player coming off an injury than to simply browse free agency for younger and more productive assets.
Figure \ref{fig:pre-solution} shows game-theoretic aspects by listing the simultaneous choices of the player and the team pre-solution. \textit{There is a clear dominant strategy for the team to use the franchise tag, as the benefits massively outweigh the risks, but the player loses in every scenario.} The player is either viewed as someone that is not team-friendly and loses out on guaranteed money through holding out by not playing on the franchise tag, or faces the massive risk of earning less potential money whilst dealing with immense pressure by playing on the franchise tag.
\section{Discussions} \label{sec:discussion}
As one-sided as the franchise tag is, it should not be removed from the NFL. \textit{Is there a way that we could reach a win-win, mutually beneficial situation between the team and the player which is always the intended purpose of the franchise tag?} In fact, there are two things the NFL should implement to make the franchise tag more fair.
First, the NFL needs to completely remove the multi-year franchise tag and make the franchise tag simply one year long. Over the past three seasons, the number of multi-year franchise tagged players is minimal (less than half of all franchise tagged players), and they make less average guaranteed and max money from contract extensions compared to the other groups. The p-values of 0.161 and 0.016 (the latter significant at the 5\% level) provide evidence that the difference in money made from contract extensions is in fact real. The range of money players make from the multi-year franchise tag, as shown by the box and whisker plot, is substantially lower than that of the other two study groups. The franchise tag itself is already extremely unfair to the player since it drastically reduces the amount of money they could be making while supplying them with added pressure, so if a player has to go through the same process for two or three years, it means they lose out on tens, even hundreds of millions once they reach their contract extensions.
Second, the NFL should add a team and player option to the franchise tag to bring the mutuality aspect that the designation lacks. For the player, other than signing the tag, they are given a second option where they are allowed to test free agency and seek a contract from other teams, however, they will not be eligible to play in games and therefore will be without pay for the entire process until they settle on an agreement. For the team, if the player does end up finding a contract, they have the opportunity to match the offer and keep the player. By using these choices, it'll actually give the player a fair chance of seeking a large contract extension which the current franchise tag does not offer at all, but it will still be fair for the team since they won't have to pay the player during the process as they won't be playing in games. Once the player does go through the lengthy process of negotiating a contract, then the team still has a chance to keep their star player, but they'll have to match the price. This maintains the strategic element of using such a valuable designation like the franchise tag, but in a fair and non exploitative way. The team will have to truly decipher whether or not the player is worth keeping, instead of just keeping them for the sake of saving millions in cap space.
Figure \ref{fig:solution} shows game-theoretic aspects by listing the simultaneous choices of the player and the team post-solution. \textit{Both the team and player have a dominant strategy} - for the team it is using the tag/offering a contract - for the player it is seeking a contract extension. The Nash equilibrium would be the team using the tag/offering a contract and the player seeking a contract extension. The solution fixes the problem of the franchise tag not being mutually beneficial.
Underpaying labor workers and wealth inequality is a huge problem in today's society, so we definitely do not want to witness players, who risk their entire bodies as a career, get burdened as well and lose out on hundreds of millions of dollars. Hopefully, with the above two suggestions, the NFL can utilize the franchise tag the way it's supposed to: as a synergistic nomination rather than a one-sided tool.
\bibliographystyle{plain}
\subsection{The F1/10 Competition}
Few things focus the mind and excite the spirit like a competition.
In the early days of racing, competitors first had to build their vehicles before they could race them. It was thus as much an engineering as a racing competition. We want to rekindle that competitive spirit.
For the past three years, we have been organizing the F1/10 International Autonomous Racing Competition, the first ever event of its kind.
The inaugural race was held at the 2016 ES-Week in Pittsburgh, USA, followed by a second race held during Cyber-Physical Systems (CPS) Week in April 2018 in Porto, Portugal. The third race was held at the 2018 ES-Week in October in Turin, Italy (Figure~\ref{fig:outreach} [Right]).
Every team builds the same baseline car, following the specifications on \url{f1tenth.org}.
From there, they have the freedom to deploy any algorithms they want to complete a loop around the track in the fastest time, and to complete the biggest number of laps in a fixed duration.
Future editions of the race will feature car-vs-car racing.
So far, teams from more than 12 universities have participated in the F1/10 competition, including teams from KAIST (Korea), KTH (Sweden), Czech Technical University, University of Connecticut, Seoul National University, University of New Mexico, Warsaw University of Technology, ETH Zurich, Arizona State University, and Daftcode (a Polish venture building company).
\subsection{System Architecture}
Figure~\ref{fig:overview} shows an overview of the F1/10 platform.
The perception module interfaces and controls the various sensors including scanning LiDARs, monocular \& stereo cameras, inertial sensors, etc. The sensors provide the platform with the ability to navigate and localize in the operating environment.
The planning pipeline (in ROS) helps process the sensor data, and run mapping, and path planning algorithms to determine the trajectory of the car.
Finally, the control module determines the steering and acceleration commands to follow the trajectory in a robust manner.
\subsection{F1/10 Build}
In this section we provide a brief description of how the F1/10 autonomous race is built. Detailed instructions and assembly videos can be found at \url{f1tenth.org}.
\noindent \textbf{Chassis:} The chassis consists of two parts. The bottom chassis is a 1/10 scale race car chassis available from Traxxas~\cite{traxxasref}. The top chassis is a custom laser-cut ABS plate that our team has developed and to which all the electronic components are attached. The CAD and laser cut files for the top plate are open-sourced.
The Traxxas bottom chassis is no ordinary racing toy: it is a very realistic representation of a real car. It has 4-wheel drive and can reach a top speed of 40 mph, which is extremely fast for a car this size. Tire designs replicate the racing rubber used on tarmac circuits. The turnbuckles have broad flats that make it easy to set toe-in and camber, just like in a real race car. The bottom chassis has a high-RPM brushless DC motor to provide the drive to all the wheels, an Electronic Speed Controller (ESC) to control the main drive motor using pulse-width modulation (PWM), a servo motor for controlling the Ackermann steering, and a battery pack, which provides power to all these systems.
All the sensors and the on-board computer are powered by a separate power source (lithium-ion battery).
The F1/10 platform components are affordable and widely available across the world making it accessible for research groups at most institutions.
These components are properly documented and supported by the manufacturer and the open-source community.
\noindent \textbf{Sensors and Computation:}
The F1/10 platform uses an NVIDIA Jetson TX2~\cite{franklin2017nvidia} GPU computer. The Jetson is housed on a carrier board~\cite{orbittycarrier} to reduce the form factor and power consumption.
The Jetson computer hosts the F1/10 software stack built on Robot Operating System (ROS).
The entire software stack, compatible with the sensors listed below, is available as an image that can be flashed onto the Jetson, enabling a plug-and-play build.
The default sensor configuration includes a monocular USB web cam, a ZED depth camera, Hokuyo 10LX scanning LiDAR, and a MPU-9050 inertial measurement unit (IMU). These sensors connect to the Jetson computer over a USB3 hub. Since the underpinnings of the software stack is in ROS, many other user preferred sensors can also be integrated/replaced.
\noindent \textbf{Power Board:} In order to enable high performance driving and computing the F1/10 platform utilizes Lithium Polymer batteries. The power board is used to provide a stable voltage source for the car and its peripherals since the battery voltage varies as the vehicle is operated. The power board also greatly simplifies wiring of peripherals such as the LIDAR and wifi antennas. Lastly the power board includes a Teensy MCU in order to provide a simple interface to sensors such as wheel encoders and add-ons such as RF receivers for long range remote control.
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\textwidth]{research/images/Fig_3.png}
\caption{Planing and control research enabled by the F1/10 platform: \textit{(Bottom Left)} Reactive Obstacle Avoidance, \textit{(Top Left)} End-to-End Driving, \textit{(Top Right)} Model Predictive Control, \textit{(Bottom Right)} V2V Collaboration}
\label{fig:planning-control}
\end{figure*}
\noindent \textbf{Odometry:}
Precise odometry is critical for path planning, mapping, and localization.
Odometry is provided by the on-board VESC as an estimate of the steering angle and the position of the vehicle.
The open-source F1/10 software stack includes the custom ROS nodes, and a configuration file required to interface with the VESC and obtain the odometry information.
\noindent \textbf{Communication architecture:}
The F1/10 testbed includes a wireless access point which is used to remotely connect (ssh) into the Jetson board.
The software stack is configured to use \textit{ROS-Over-Network}, which is used both for sending commands to the car and for obtaining telemetry data from the car in real time. In addition, we have created software that provides a socket interface enabling communication between multiple F1/10 vehicles operating under different ROS master nodes.
\begin{document}
\title{F1/10: An Open-Source Autonomous Cyber-Physical Platform}
\author{\import{common/}{authors.tex}}
\renewcommand{\shortauthors}{M. O'Kelly et al.}
\begin{abstract}
\import{common/}{abstract.tex}
\end{abstract}
\maketitle
\section{Introduction}
\import{introduction/src/}{introduction.tex}
\section{F1/10 Testbed}
\import{f1tenth_build/src/}{build.tex}
\import{research/src/}{research_enabled.tex}
\section{From F1/10 to full-scale AVs}
\import{research/src/}{full_scale.tex}
\section{F1/10 Education and Competitions}
\import{class_comp/src/}{class_comp.tex}
\section{Conclusion and Discussion}
\import{conclusion_fw/src/}{conclusion.tex}
\bibliographystyle{acm}
\section{Research: Planning and Control}
The decision making systems utilized on AVs have progressed significantly in recent years; however, they remain a key challenge in enabling AV deployment \cite{shalev2017formal}. While AVs today can perform well in simple scenarios such as highway driving, they often struggle in scenarios such as merges, pedestrian crossings, roundabouts, and unprotected left turns. Conducting research in difficult scenarios using full-size vehicles is both expensive and risky. In this section we highlight how the F1/10 platform can enable research on algorithms for obstacle avoidance, end-to-end driving, model predictive control, and vehicle-to-vehicle communication.
\subsection{Obstacle avoidance}
\label{subsec:ftg}
Obstacle avoidance and forward collision assist are essential to the operation of an autonomous vehicle.
The AV is required to scan the environment for obstacles and safely navigate around them.
For this reason, many researchers have
developed interesting real-time approaches for avoiding unexpected static and dynamic obstacles~\cite{tallamraju2018decentralized, iacono2018path}.
To showcase the capability of the F1/10 testbed, we implement one such algorithm, the \textit{Follow The Gap} (FTG) method~\cite{SEZER20121123}.
The Follow the Gap method is based on the construction of a gap array around the vehicle and calculation of the best heading angle for moving the robot into the center of the maximum gap in front, while simultaneously considering its goal.
These two objectives are considered simultaneously by using a fusing function.
Figure~\ref{fig:planning-control}[Left] shows an overview and the constraints of FTG method.
The three steps involved in FTG, sketched in code after this list, are:\\
(a) Calculating the gap array using vector field histogram, and finding the maximum gap in the LIDAR point cloud using an efficient sorting algorithm,\\
(b) Calculating the center of the largest gap, and\\
(c) Calculating the heading angle to the center of the largest gap in reference to the orientation of the car, and generating a steering control value for the car.
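The following Python sketch illustrates these three steps on a raw LiDAR scan; the bubble radius and clearance threshold are illustrative values rather than the settings used on the car.
\begin{verbatim}
import numpy as np

def follow_the_gap(ranges, angles, bubble_radius=0.3, max_range=3.0):
    """Mask a safety bubble around the closest obstacle, find the widest
    remaining gap, and steer toward its center."""
    r = np.clip(np.asarray(ranges, dtype=float), 0.0, max_range)
    i_min = int(np.argmin(r))
    res = abs(angles[1] - angles[0])              # angular resolution
    n_bub = int(np.ceil(bubble_radius / max(r[i_min], 1e-3) / res))
    r[max(0, i_min - n_bub): i_min + n_bub + 1] = 0.0
    free = r > 0.5                                # 0.5 m clearance (assumed)
    best_len = best_start = cur_len = cur_start = 0
    for i, is_free in enumerate(free):
        if is_free:
            if cur_len == 0:
                cur_start = i
            cur_len += 1
            if cur_len > best_len:
                best_len, best_start = cur_len, cur_start
        else:
            cur_len = 0
    center = best_start + best_len // 2
    return angles[center]                         # steering angle command

# synthetic 180-degree scan with an obstacle roughly straight ahead
angles = np.linspace(-np.pi / 2, np.pi / 2, 181)
ranges = np.full_like(angles, 2.5)
ranges[80:100] = 0.4
print(follow_the_gap(ranges, angles))
\end{verbatim}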
\subsection{End-to-end driving}
Some recent research replaces the classic chain of perception, planning, and control with a neural network that directly maps sensor input to control output~\cite{DBLP:journals/corr/BojarskiTDFFGJM16, chi2017deep, eraqi2017end}, a methodology known as end-to-end driving.
Despite the early interest in end-to-end driving \cite{pomerleau1989alvinn}, most self-driving cars still use the perception-planning-control paradigm.
This slow development can be explained by the challenges of verifying system performance; however, new approaches based on reinforcement learning are being actively developed \cite{kendall2018learning}.
The F1/10 testbed is a well suited candidate for experimentation with end-to-end driving pipelines, from data gathering and annotation, to inference, and in some cases even training.
\noindent \textbf{Data gathering and annotation for deep learning:}
As shown in Figure~\ref{fig:planning-control}[Right], we are able to integrate a First Person View (FPV) camera and headset with the F1/10 car. We are also able to drive the car manually with a USB steering wheel and pedals instead of the RC remote controller which comes with the Traxxas car.
The setup consists of a Fat Shark FSV1204 - 700TVL CMOS Fixed Mount FPV Camera, 5.8GHz spiroNET Cloverleaf Antenna Set, 5.8GhZ ImmersionRC receiver, and Fat Shark FSV1076-02 Dominator HD3 Core Modular 3D FPV Goggles Headset.
The FPV setup easily enables teleoperation for the purposes of collecting data to train the end-to-end deep neural networks (DNNs).
Each training example consists of an input, in this case an image from the front-facing camera, and a label, a vector containing the steering angle and requested acceleration.
Practical issues arise due to the fact that the label measurements (50 Hz) must be synchronized with the acquired camera images (30 Hz). Included in this portion of the stack is a ROS node which aligns the measurements and the labels. As part of this research we are releasing over 40,000 labeled images collected from multiple builds at the University of Pennsylvania and the University of Virginia.
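The core of this alignment can be sketched as a nearest-timestamp match; the snippet below only illustrates the idea (the released ROS node operates on recorded bag files), and the matching tolerance is an assumed value.
\begin{verbatim}
import numpy as np

def align_labels(image_stamps, label_stamps, labels, max_dt=0.02):
    """Pair each ~30 Hz camera frame with the nearest ~50 Hz label and
    drop frames with no label within max_dt seconds."""
    idx = np.searchsorted(label_stamps, image_stamps)
    idx = np.clip(idx, 1, len(label_stamps) - 1)
    left, right = label_stamps[idx - 1], label_stamps[idx]
    nearest = np.where(image_stamps - left < right - image_stamps,
                       idx - 1, idx)
    dt = np.abs(label_stamps[nearest] - image_stamps)
    keep = dt <= max_dt
    return np.nonzero(keep)[0], labels[nearest[keep]]

# toy example: 30 Hz image clock, 50 Hz [steering, acceleration] labels
img_t = np.arange(0.0, 1.0, 1 / 30.0)
lab_t = np.arange(0.0, 1.0, 1 / 50.0)
lab = np.stack([np.sin(lab_t), np.zeros_like(lab_t)], axis=1)
frames, matched = align_labels(img_t, lab_t, lab)
print(len(frames), matched.shape)
\end{verbatim}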
\noindent \textbf{End-to-End driving:}
Partly inspired by the Pilotnet~\cite{DBLP:journals/corr/BojarskiTDFFGJM16} end-to-end work, we implemented a combination of an LSTM~\cite{Hochreiter:1997:LSM:1246443.1246450} and a Convolutional Neural Network (CNN)~\cite{NIPS2012_4824} cell.
These units are then used in the form of a recurrent neural network (RNN).
This setup uses the benefits of LSTMs in maintaining temporal information (critical to driving) and utilizes the ability of CNN's to extract high level features from images.
To evaluate the performance of the model we use the normalized root mean square error (NRMSE) metric between the ground truth steering value and the predicted value from the DNN.
As can be seen in the point-of-view (PoV) image in Figure~\ref{fig:planning-control}[Left], our DNN is able to accurately predict the steering angle with an NRMSE of 0.14.
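The following PyTorch sketch shows the general CNN-plus-LSTM structure described above; the layer sizes, sequence length, and image resolution are illustrative and do not correspond to the trained network reported here.
\begin{verbatim}
import torch
import torch.nn as nn

class CNNLSTMDriver(nn.Module):
    """Per-frame CNN features fed to an LSTM over an image sequence."""
    def __init__(self, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten())
        self.lstm = nn.LSTM(48 * 4 * 4, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # steering angle, acceleration

    def forward(self, x):                  # x: (batch, time, 3, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])       # predict from the last time step

# one forward pass on a dummy 5-frame clip
model = CNNLSTMDriver()
print(model(torch.randn(2, 5, 3, 120, 160)).shape)   # torch.Size([2, 2])
\end{verbatim}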
\subsection{Global \& local approaches to path planning}
\label{sec:path_planning}
AVs operate in relatively structured environments. Most scenarios an AV might face feature some static structure. Often this is the road geometry, lane connectivity, locations of traffic signals, buildings, etc. Many AVs exploit the static nature of these elements to increase their robustness to sensing errors or uncertainty.
In the context of F1/10, it may be convenient to exploit some information known \textit{a priori} about the environment, such as the track layout and floor friction.
These approaches are called \textit{static}, or \textit{global}, and they typically imply building a map of the track, simulating the car in the map, and computing offline a suitable nominal path which the vehicle will attempt to follow. Valuable data related to friction and drift may also be collected to refine the vehicle dynamics model.
More refined models can be adopted off-line to compute optimal paths and target vehicle speeds, adopting more precise optimization routines that have a higher computational complexity to minimize the lap time.
Once the desired global path has been defined, the online planner must track it. To do that, two main activities must be accomplished online, namely \textit{localization} and \textit{vehicle dynamics control}.
Once the vehicle has been properly localized within a map, a local planner is adopted to send longitudinal and transversal control signals to follow the precomputed optimal path. As the local planner needs to run in real-time, simpler controllers are adopted to decrease the control latency as much as possible. Convenient online controllers include pure pursuit path geometric tracking \cite{coulter1992implementation}. The F1/10 software distribution includes an implementation of pure pursuit, nodes for creating and loading waypoints, and path visualization tools. For the interested reader we recommend this comprehensive survey of classical planning methods employed on AVs \cite{frazzoliMPsurvey16}.
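As an illustration of the on-line tracking step, the following sketch implements a simplified pure pursuit controller; it picks the first waypoint beyond the lookahead distance instead of intersecting the lookahead circle with the path, and the wheelbase and lookahead values are indicative only.
\begin{verbatim}
import numpy as np

def pure_pursuit(pose, waypoints, lookahead=1.0, wheelbase=0.33):
    """pose = (x, y, yaw) in the map frame; waypoints is an (N, 2) array.
    Returns an Ackermann steering angle command."""
    x, y, yaw = pose
    d = np.hypot(waypoints[:, 0] - x, waypoints[:, 1] - y)
    ahead = np.nonzero(d >= lookahead)[0]
    goal = waypoints[ahead[0]] if len(ahead) else waypoints[-1]
    # transform the goal point into the vehicle frame
    dx, dy = goal[0] - x, goal[1] - y
    gx = np.cos(yaw) * dx + np.sin(yaw) * dy
    gy = -np.sin(yaw) * dx + np.cos(yaw) * dy
    curvature = 2.0 * gy / max(gx**2 + gy**2, 1e-6)
    return np.arctan(wheelbase * curvature)

# waypoints along a gently curving path, vehicle at the origin
xs = np.linspace(0.0, 5.0, 50)
waypoints = np.column_stack([xs, 0.05 * xs**2])
print(pure_pursuit((0.0, 0.0, 0.0), waypoints))
\end{verbatim}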
\subsection{Model Predictive Control}
\label{sec:mpc}
While data annotation for training end-to-end networks is relatively easy, the performance of such methods is difficult to validate empirically \cite{shalev2016sample}, especially relative to approaches which decompose functionality into interpretable modules.
In this section we outline both a local planner which utilizes a model predictive controller (MPC) and a learned approximation of the policy it generates detailing one way planning components can be replaced with efficient learned modules.
\noindent\textbf{Components:} The F1/10 platform includes a MPC written in C++ comprised of the vehicle dynamics model, an optimization routine which performs gradient descent on the spline parameters. Peripheral support nodes provide an interface to road center line information, a multi-threaded goal sampler, a 2D occupancy grid, and a trajectory evaluation module. Additionally, we include a CUDA implementation of a learned approximation of the MPC which utilizes the same interface as described above.
\noindent\textbf{Cubic Spline Trajectory Generation:} One local planner available on the F1/10 vehicle utilizes the methods outlined in
\cite{McNaughton2011} and \cite{Howard_2009_6434} and first described in \cite{nagy2001trajectory}. This approach is commonly known as \emph{state-lattice planning with cubic spline trajectory generation}.
Each execution of the planner requires the current state of the vehicle and a goal state. Planning occurs in a local coordinate frame. The vehicle state is defined in the local coordinate system; a subscript indicates a particular kind of state (i.e., a goal). In this implementation we define the state as $\vec{x}={[s_x\ s_y\ v\ \Psi\ \kappa]}^T$, where $s_x$ and $s_y$ are the x and y positions of the center of mass, $v$ is the velocity, $\Psi$ is the heading angle, and $\kappa$ is the curvature.
In this formulation, trajectories are limited to a specific class of parameterized curves known as \emph{cubic splines}, which are dense in the robot workspace. We represent a cubic spline as a function of arc length, parameterized by $\vec{p} = [s\ a\ b\ c\ d]^T$,
where $s$ is the total curve length and $(a,b,c,d)$ are equispaced knot points representing the curvature at particular arc lengths. These parameters define the curvature profile $\kappa(s)$, which can be used to steer the vehicle directly.
The local planner's objective is then to find a \emph{feasible trajectory} from the initial state defined by the tuple
$\vec{x}$ to a goal pose $\vec{x}_{g}$.
We use a gradient descent algorithm and forward simulation models which limit the ego-vehicle curvature presented in \cite{Howard_2009_6434}.
These methods ensure that the path generated is kinematically and dynamically feasible up to a specified velocity.
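The parameterization and the forward simulation can be sketched as follows; this minimal version interpolates the four curvature knots with a cubic polynomial and integrates a purely kinematic model, omitting the dynamic feasibility limits and the gradient-descent optimization of the actual planner.
\begin{verbatim}
import numpy as np

def curvature_profile(p):
    """kappa(s): cubic through the equispaced knots (a, b, c, d) over
    the arc length s_f = p[0]."""
    s_f, knots = p[0], np.asarray(p[1:])
    s_knots = np.linspace(0.0, s_f, 4)
    coeffs = np.polyfit(s_knots, knots, 3)   # exact cubic through 4 knots
    return lambda s: np.polyval(coeffs, np.clip(s, 0.0, s_f))

def rollout(p, ds=0.05):
    """Integrate the kinematic model along the spline; returns the
    terminal pose (x, y, heading) in the local planning frame."""
    kappa = curvature_profile(p)
    x = y = psi = 0.0
    for s in np.arange(0.0, p[0], ds):
        psi += kappa(s) * ds
        x += np.cos(psi) * ds
        y += np.sin(psi) * ds
    return np.array([x, y, psi])

p = [4.0, 0.0, 0.15, 0.15, 0.0]   # 4 m spline describing a gentle left arc
print(rollout(p))
\end{verbatim}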
\noindent\textbf{Learning an Approximation:} Recall that $\vec{x}$, the current state of the AV, can be expressed as the position of a moving reference frame attached to the vehicle. \textit{Offline}, a region in front of the AV is sampled, yielding a set of $M$ possible goals $\{\vec{x}_{g,i}\}_{i=1}^M$, each expressed in relative coordinates.
Then for each goal $\vec{x}_{g,i}$ the reference trajectory connecting them is computed by the original MPC algorithm.
Denote the computed reference trajectory by $\vec{p}_i = [s\ a\ b\ c\ d]^T$.
Thus we now have a \textit{training} set $\{(\vec{x}_{g,i}, \vec{p}_i)\}_{i=1}^M$.
A neural network $NN_{TP}$ is used to fit the function $\vec{x}_{g,i} \mapsto \vec{p}_i$.
\textit{Online}, given an actual target state $\vec{x}_g$ in relative coordinates, the AV computes $NN_{TP}(\vec{x}_g)$ to obtain the parameters of the reference trajectory $\vec{p}_g$ leading to $\vec{x}_g$. Our implementation utilizes a radial basis function (RBF) network architecture; the benefit of this choice is that the weights can be trained algebraically (via a pseudo-inverse) and each data point is guaranteed to be interpolated exactly. On 145,824 samples in the test set our methodology exhibits a worst-case test error of $0.1\%$ and is capable of generating over 428,000 trajectories per second.
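The algebraic training step can be sketched in a few lines; the snippet below uses a synthetic stand-in for the goal-to-parameter mapping, since the sampled training set itself is not reproduced here, and the kernel width is an assumed value.
\begin{verbatim}
import numpy as np

def fit_rbf(X, Y, centers, gamma=2.0):
    """Solve Phi W = Y with the pseudo-inverse; with one center per
    training sample the fit interpolates the data exactly."""
    sq = np.sum((X[:, None, :] - centers[None]) ** 2, axis=2)
    W = np.linalg.pinv(np.exp(-gamma * sq)) @ Y
    def predict(x):
        phi = np.exp(-gamma * np.sum((x[None] - centers) ** 2, axis=1))
        return phi @ W
    return predict

# synthetic stand-in: 2-D relative goal -> 5 spline parameters
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
Y = np.column_stack([np.linalg.norm(X, axis=1), X,
                     X[:, 0] * X[:, 1], np.sin(X[:, 0])])
predict = fit_rbf(X, Y, centers=X)
print(predict(np.array([0.3, -0.2])))
\end{verbatim}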
\subsection{Vehicle-to-Vehicle Communication, Cooperation, and Behavioral Planning}
The development of autonomous vehicles has been propelled by an idealistic notion that the technology can nearly eliminate accidents. The general public expects AVs to exhibit what can best be described as superhuman performance; however, a key component of human driving is the ability to communicate intent via visual, auditory, and motion based cues. Evidence suggests that these communication channels are developed to cope with scenarios in which the fundamental limitations of human senses restrict drivers to \textit{cautious operations} which anticipate dangerous phenomena before they can be identified or sensed.
\noindent\textbf{Components:} In order to carry out V2V communication experiments, we augment the F1/10 planning stack with ROS nodes that contain push/pull TCP clients and servers; these nodes extract user-defined state and plan information so that it may be transmitted to other vehicles.
In this research we construct an AV `roundabout' scenario where the center island obstructs the ego-vehicle's view of the other traffic participants. We implement a communication protocol that transmits an object list describing the relative positions of participating vehicles, together with a simple indicator function encoding whether, given each vehicle's preferred next action, it is safe to proceed into the roundabout. Alternative scenarios such as a high-speed merge or highway exit maneuver can also easily be constructed at significantly less cost and risk than real-world experiments. The F1/10 platform enables an intermediate step between simulation and real-world testing such that the effects of sensor noise, wireless channel degradation, and actuation error may be studied in the context of new V2V protocols.
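A much-simplified stand-in for these push/pull nodes is sketched below using plain TCP sockets and JSON; the port number and message fields are assumptions for illustration and do not describe the protocol used in the experiments.
\begin{verbatim}
import json, socket, threading

STATE_PORT = 5005   # illustrative port

def serve_state(get_state, port=STATE_PORT):
    """Answer each incoming connection with this vehicle's latest state
    (pose, speed, intended action) as a single JSON line."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", port))
    srv.listen(5)
    def loop():
        while True:
            conn, _ = srv.accept()
            with conn:
                conn.sendall((json.dumps(get_state()) + "\n").encode())
    threading.Thread(target=loop, daemon=True).start()

def pull_state(host, port=STATE_PORT, timeout=0.5):
    """Pull another vehicle's state; returns None on failure so the
    planner can fall back to a conservative behavior."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            return json.loads(s.makefile().readline())
    except OSError:
        return None

serve_state(lambda: {"x": 1.2, "y": 0.4, "v": 2.5, "enter": True})
print(pull_state("127.0.0.1"))
\end{verbatim}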
\section{Research: Perception}
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\textwidth]{research/images/Fig_4.png}
\caption{Some perception research enabled by the F1/10 platform (clockwise, starting left); (a) lane following using a monocular camera, (b) optical flow computation using Farenback's method and FlowNet 2.0 and, (c) localization and mapping}
\label{fig:perception}
\end{figure*}
In this section we highlight how the F1/10 vehicle enables a novel mode of research on perception tasks. Although there has been huge progress in low-level vision tasks such as object detection due to the effectiveness of deep learning, AVs only perform such tasks in order to enable decisions which lead to safe mobility. In this context the F1/10 vehicle is a unique tool because it allows researchers to measure not just the performance of a perception subsystem in isolation, but rather the capabilities of the whole system within its operating regime. Due to the extensive planning and state estimation capabilities already reliably enabled on the car, researchers focused on perception subsystems can compare a variety of methods on a uniform platform in the context of specific driving tasks.
\subsection{Simultaneous Localization and Mapping}
The ability for a robot to create a map of a new environment without knowing its precise location (SLAM) is a primary enabler for the use of the F1/10 platform in a variety of locations and environments. Moreover, although SLAM is a well-understood problem, it is still challenging to create reliable real-time implementations. In order to allow the vehicle to drive in most indoor environments we provide an interface to a state-of-the-art LIDAR-based SLAM package which provides loop closures, namely Google Cartographer \cite{hess2016real}. Included in our base software distribution are local and global settings which we have observed to work well empirically through many trials in the classroom and at outreach events. In addition we include a description of the robot's geometry in an appropriate format which enables plug-and-play operation.
For researchers interested primarily in new approaches to SLAM the F1/10 platform is of interest due to its non-trivial dynamics, modern sensor payload, and the ability to test performance of the algorithm in motion capture spaces (due to the small size of vehicle).
In addition to SLAM packages we also provide an interface to an efficient, parallel localization package which utilizes a GPU implementation of raymarching to simulate the observations of random particles in a known 2D map \cite{walsh17}. The inclusion of this package enables research on driving at the limits of control even without a motion capture system for state estimation.
\subsection{Computer Vision}
Our distribution of F1/10 software includes the basic ingredients necessary to explore the use of deep learning for computer vision. It includes CUDA enabled versions of PyTorch~\cite{paszke2017pytorch}, Tensorflow~\cite{abadi2016tensorflow}, and Darknet~\cite{redmon2013darknet}. We include example networks for semantic segmentation \cite{DBLP:journals/corr/abs-1803-06815}, object detection \cite{redmon2016you}, and optical flow \cite{ilg2017flownet}; we focus on efficient variants of the state-of-the-art that can run at greater than 10 FPS on the TX2.
Recently, it has come to light that many DNNs used on vision tasks are susceptible to so called \textit{adversarial examples}, subtle perturbations of a few pixels which to the human eye are meaningless but when processed by a DNN result in gross errors in classification. Recent work has suggested that such adversarial examples are \textit{not} invariant to viewpoint transformations \cite{lu2017no}, and hence \textit{not} a concern. The F1/10 platform can help to enable principled investigations into how errors in DNN vision systems affect vehicle level performance.
\subsection{Lane keep assist}
The F1/10 platform is designed to work with a wide array of sensors; among them are USB cameras, which enable the implementation of lane tracking and lane keep assist algorithms~\cite{guo2013cadas,satoh2002lane}.
Utilizing the OpenCV~\cite{bradski2000opencv} libraries, we implemented a lane tracking algorithm~\cite{ruyi2011lane} that runs in real time on the F1/10 on-board computer.
To do so, we created an image processing pipeline to capture, filter, process, and analyze the image stream using the ROS \textit{image\textunderscore transport} package, and designed a ROS node to keep track of the left and right lanes and calculate the geometric center of the lane in the current frame. The F1/10 steering controller was modified to track the lane center using a proportional-integral-derivative (PID) controller. The image pipeline detailed in Fig.~\ref{fig:perception} [Left] comprises the following tasks:\\
(a) The raw RGB camera image, in which the lane color was identified by its hue and saturation value, is converted to greyscale and subjected to a color filter designed to set the lane color to white and everything else to black,\\
(b) The masked image from the previous step is sent through a canny edge detector and then through a logical AND mask whose parameters ensured that the resulting image contains only the information about the path,\\
(c) The output from the second step is filtered using a Gaussian filter that reduces noise and is sent through a Hough transformation, resulting in the lane markings contrasting a black background.
The output of the image pipeline contains only the lane markings.
The lane center is calculated and the F1/10 current heading is compared to the lane center to generate the error in heading. The heading of the car is updated to reflect the new heading generated by the ROS node using a PID controller.
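The pipeline and the controller can be sketched as follows; the threshold values, the PID gains, and the use of the lateral offset of the detected lane center as the error signal are illustrative simplifications of the tuned on-board implementation.
\begin{verbatim}
import cv2
import numpy as np

def lane_center_error(bgr, width=640):
    """Color mask -> Canny -> Hough lines -> lane center, returned as a
    normalized error in [-1, 1] (thresholds are illustrative)."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    mask = cv2.inRange(gray, 180, 255)            # bright lane markings
    edges = cv2.Canny(cv2.GaussianBlur(mask, (5, 5), 0), 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=20,
                            minLineLength=20, maxLineGap=10)
    if lines is None:
        return 0.0
    xs = lines[:, 0, [0, 2]].ravel()              # endpoint x-coordinates
    center = 0.5 * (xs.min() + xs.max())          # midpoint between lanes
    return (center - width / 2) / (width / 2)

class PID:
    def __init__(self, kp=0.8, ki=0.0, kd=0.1):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev = 0.0, 0.0
    def __call__(self, err, dt=1 / 30):
        self.integral += err * dt
        deriv = (err - self.prev) / dt
        self.prev = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

frame = np.zeros((480, 640, 3), np.uint8)
cv2.line(frame, (150, 480), (250, 200), (255, 255, 255), 5)   # left lane
cv2.line(frame, (500, 480), (420, 200), (255, 255, 255), 5)   # right lane
print(PID()(lane_center_error(frame)))
\end{verbatim}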
\section{Research: Systems, Simulation, and Verification}
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\textwidth]{research/images/Fig_5.png}
\caption{Figure (left) shows an F1/10 car in a simulated environment generated using data from the real world, (right, top) real time scheduling of vanishing point algorithm on the F1/10 onboard computer and, (right, bottom) verifying traffic behavior}
\label{fig:systems-research}
\end{figure*}
Safety and robustness are key research areas which must make progress in order to deploy commercial AVs. In this section we highlight the tools which we are using to enable simulation, real-time systems research, and verification efforts.
\subsection{Gazebo Racing Simulators}
Why would we want to use a simulator if we have the F1/10 platform? We want to test the car's algorithms in a controlled environment before we bring them into the real world, so that we minimize the risk of crashing. For instance, if a front steering servo plastic piece were to break, it is necessary to disassemble 20 parts in order to replace it. In fact, each of the labs taught in our courses can be completed entirely in simulation first. The added benefit is that researchers and students with resource constraints can still utilize the software stack that we have built.
We use the ROS Gazebo simulator software \cite{koenig2004design}. From a high level, Gazebo loads a world as a .DAE file and loads the car. Gazebo also includes a physics engine that can determine how the F1/10 car will respond to control inputs, friction forces, and collisions with other obstacles in the environment. The F1/10 simulation package currently provides four tracks, each of which have real world counterparts.
It is also possible to create custom environments. In the F1/10 reference manual we provide a tutorial on the use of Sketchup to create simple 3D models. More advanced 3D modeling tools such as 3DS Max and Solid Works will also work. Our future work includes a cloud based simulation tool which utilizes the PyBullet \cite{coumans2016pybullet} physics engine and Kubernetes \cite{brewer2015kubernetes} containers for ease of deployment and large scale reinforcement learning experiments.
\subsection{Real-time Systems Research}
Autonomous driving is one of the most challenging engineering problems posed to modern embedded computing systems. It entails processing and interpreting a large amount of data in order to make prompt planning decisions and execute them in real time. Complex perception and planning routines impose a heavy computing workload on the embedded platform, requiring multi-core computing engines and parallel accelerators to satisfy the challenging timing requirements induced by high-speed driving. Inaccuracy in the localization of the vehicle, as well as delays in the perception and control loop, may significantly affect the stability of the vehicle and result in intolerable deviations from safe operating conditions. Due to the safety-critical nature of such failures, the F1/10 stack is an ideal platform for testing the effectiveness of new real-time scheduling and task partitioning algorithms which efficiently exploit the heterogeneous parallel engines available on the vehicle. One example of such research implemented on the F1/10 platform is the AutoV project \cite{xu2017autov}, which explores whether safety-critical vehicle control algorithms can be safely run within a virtual environment.
The F1/10 platform also enables real-time systems research which explicitly considers the problem of co-design at the application layer. Specifically, the goal is to create planning, perception, and scheduling algorithms which adapt to the context of the vehicle's operating environment. This regime was explored in a study on CPU/GPU resource allocation for camera-based perception and control \cite{pant2015power}. In the experiments performed on the F1/10 platform, the objective was to obtain energy-efficient computations for the perception and estimation algorithms used in autonomous systems by manipulating the clock of each CPU core and the portion of the computation offloaded to a GPU.
These knobs allow us to leverage a trade-off between computation time, power consumption and output quality of the perception and estimation algorithms.
In this experiment, a vanishing point algorithm is utilized to navigate a corridor.
The computation is decomposed into three sequential components, and we study how its runtime and power consumption are affected by whether each component is run on a GPU or CPU, and the frequency at which it is executed.
Results highlight CPU/GPU allocation and execution frequencies which achieve either better throughput or lower energy consumption without sacrificing control performance.
The possible set of operating points and their effect on the update rate and power consumption for the vanishing point algorithm are shown in Fig.~\ref{fig:systems-research} [Right, top].
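The sketch below illustrates how such an operating-point space can be enumerated for a pipeline of three components; the latency and power models, frequency levels, and device list are made-up placeholders rather than measurements from the F1/10 platform.
\begin{verbatim}
from itertools import product

FREQS = [0.5, 1.0, 1.5, 2.0]        # core clock in GHz (assumed values)
DEVICES = ["cpu", "gpu"]

def cost(device, freq):
    # placeholder model: the GPU is faster but draws more power
    time_ms = {"cpu": 40.0, "gpu": 12.0}[device] / freq
    power_w = {"cpu": 2.0, "gpu": 5.0}[device] * freq
    return time_ms, power_w

def operating_points(n_components=3):
    points = []
    for devices in product(DEVICES, repeat=n_components):
        for freq in FREQS:
            t, p = zip(*(cost(d, freq) for d in devices))
            points.append((devices, freq, sum(t), sum(p)))
    return points

def pareto_front(points):
    # keep points not dominated in both total latency and total power
    return [a for a in points
            if not any((b[2] <= a[2] and b[3] < a[3]) or
                       (b[2] < a[2] and b[3] <= a[3]) for b in points)]
\end{verbatim}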
\input{verification}
\subsection{Monitoring, Testing, \& Verification}
F1/10 can be used to support and demonstrate advances in formal verification and runtime monitoring.
\\
\textbf{Real-time verification}.
Reachability analysis is a technique for rigorously bounding a system's future state evolution, given that its current state $x(t)$ is known to be in some set $X(t)$.
The uncertainty about the system's current state is due to measurement noise and actuation imperfections.
Being able to ascertain, rigorously, bounds on the system state over $[t,t+T]$ despite current uncertainty allows the car to avoid unsafe plans.
Calculating the system's \textit{reach set}, however, can be computationally expensive. Various techniques have been proposed to deal with this issue, but very few are explicitly aimed at real-time operation or have been tested in a real-life setting.
The F1/10 platform enables such testing of reachability software in a real-world setup, with the code running along with other loads on the target hardware.
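As a toy illustration of the idea (and not of the production tools), the sketch below propagates an interval over-approximation of a one-dimensional kinematic state forward over a short horizon.
\begin{verbatim}
def reach_interval(x_lo, x_hi, v_lo, v_hi, horizon, dt=0.05):
    """Interval bounds on position over [t, t+horizon] for x' = v with
    v in [v_lo, v_hi]; returns one (lo, hi) pair per time step."""
    bounds = [(x_lo, x_hi)]
    for _ in range(int(horizon / dt)):
        x_lo += v_lo * dt            # slowest admissible motion
        x_hi += v_hi * dt            # fastest admissible motion
        bounds.append((x_lo, x_hi))
    return bounds

# e.g. a car somewhere in [0.0, 0.2] m moving at 1.0-1.5 m/s for 0.5 s
print(reach_interval(0.0, 0.2, 1.0, 1.5, 0.5)[-1])
\end{verbatim}
Real reachability tools replace this naive interval propagation with sound set representations and carefully optimized implementations that can share the onboard computer with the rest of the stack.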
\\
\textbf{Runtime monitoring}.
Good design practice requires the creation of \textit{runtime monitors}, which are software functions that monitor key properties of the system's behavior in real-time, report any violations, and possibly enforce fail-safe behavior.
Increased sophistication in the perception and control pipelines necessitates the monitoring of complex requirements, which range from enforcing safety and security properties to pattern matching over sensor readings to help perception~\cite{Abbas18Emsoft}.
A promising direction is to generate these complex monitors automatically from their high-level specification~\cite{havelundruntime,bartocci2018specification,rewriting-techniques,ulus2018sequential,havelund2002synthesizing,basin2018algorithms,dokhanchi2014line}.
These approaches have been implemented in standalone tools such as \cite{montre,mop-overview,basin2011monpoly,reger2015marq,Anna10staliro}.
For robotic applications, it will be necessary to develop a framework that handles specifications in a unified manner and generates efficient monitoring ROS nodes to be deployed quickly in robotic applications.
Steps in this direction appear in ROSMOP\footnote{https://github.com/Formal-Systems-Laboratory/rosmop}~\cite{mop-overview}, and in REELAY\footnote{https://github.com/doganulus/reelay}.
The F1/10 platform is ideal for testing the generated monitors' efficiency.
Its hardware architecture could guide low-level details of code generation and deployment over several processors.
The distributed nature of ROS also raises questions in distributed monitoring.
Finally, F1/10 competitions could be a proving ground for ease-of-use: based on practice laps, new conditions need to be monitored and the corresponding code needs to be created and deployed quickly before the next round.
This would be the ultimate test of user-friendliness.
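For comparison with such generated monitors, a hand-written baseline monitor implemented as a ROS node might look as follows; the topic name, message type, and speed limit are assumptions made for illustration only.
\begin{verbatim}
import rospy
from ackermann_msgs.msg import AckermannDriveStamped

SPEED_LIMIT = 5.0   # m/s, illustrative bound

def check(msg):
    # report a violation of the monitored property
    if abs(msg.drive.speed) > SPEED_LIMIT:
        rospy.logwarn("speed bound violated: %.2f m/s", msg.drive.speed)

if __name__ == "__main__":
    rospy.init_node("speed_monitor")
    rospy.Subscriber("/drive", AckermannDriveStamped, check)
    rospy.spin()
\end{verbatim}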
\noindent\textbf{Generating adversarial traffic}.
Because F1/10 cars are reduced-scale, cheaper and safer to operate than full-scale cars, they are a good option for testing new algorithms in traffic, where the F1/10 cars provide the traffic.
For example, if one has learned a dynamic model of traffic in a given area, as done in \cite{okelly2018}, then that same model can drive a fleet of F1/10 cars, thus providing a convincing setup for testing new navigation algorithms.
This fleet of cars can also allow researchers to evaluate statistical claims of safety, since it can generate more data, cheaply, than a full-scale fleet.
\label{introduction}
Sports scheduling is an attractive research area in these decades
\cite{deWerra2006,Trick2011}.
In the field of sports scheduling, the traveling tournament problem (TTP)
is a well-known benchmark problem proposed by \citeasnoun{Easton2001}.
Various approximation algorithms were proposed for the problem
in the last decade~\cite{Hoshino2012,Imahori2014,Thielen2012,Yamaguchi2011}.
In the following, some terminology and the problem are introduced.
Given a set~$T$ of $n$ teams, where $n \geq 4$ and is even,
a game is specified by an ordered pair of teams.
Each team has its home venue.
For any pair of teams $i, j \in T$, $d_{ij} \geq 0$ denotes
the distance between the home venues of $i$ and~$j$.
Throughout the paper, we assume that
triangle inequality ($d_{ij} + d_{jk} \geq d_{ik}$)
and symmetry ($d_{ij} = d_{ji}$) hold.
A double round-robin tournament is a set of games
in which every team plays every other team once
at its home venue (called {\it home} game) and once
at the home venue of the opponent (called {\it away} game).
Consequently, $2(n-1)$ slots are necessary to complete
a double round-robin tournament with $n$ teams.
Each team stays at its home venue before a tournament
and then travels to play its games at the chosen venues.
After a tournament, each team returns to its home venue
if the team plays an away game at the last slot.
When a team plays two consecutive away games,
the team goes directly from the venue of the first opponent
to that of another opponent without returning to its home venue.
The traveling distance of a team is defined as the sum of
the distances~$d_{ij}$ over all trips the team makes from the home venue
of $i$ to the home venue of $j$.
The objective of TTP is to minimize the total traveling distance,
which is the sum of traveling distances of $n$ teams.
Two types of constraints, called {\it no-repeater} and {\it at-most} constraints,
should be satisfied.
The no-repeater constraint is that, for any pair of teams $i$ and $j$,
two games of $i$ and $j$ cannot be held in two consecutive slots.
The at-most constraint is that, for a given parameter $k$,
no team plays more than $k$ consecutive home games
and more than $k$ consecutive away games.
The present paper considers the case for $k=2$;
the problem is called TTP(2), which is defined as follows.
\medskip
\noindent
{\bf Traveling Tournament Problem for $k = 2$ (TTP(2))}\\
{\bf Input:\/} A set of teams $T$ and
a distance matrix~$D=(d_{ij})$. \\
{\bf Output:\/}
A double round-robin tournament such that
\noindent
C1. No team plays more than two consecutive away games,
\noindent
C2. No team plays more than two consecutive home games,
\noindent
C3. The no-repeater constraint is satisfied for all teams,
\noindent
C4. The total distance traveled by the teams is minimized.
\medskip
For this problem, \citeasnoun{Thielen2012} proposed two types
of approximation algorithms.
The first algorithm is a 1.5 + $O(1/n)$ approximation algorithm.
The second one is a 1 + $O(1/n)$ approximation algorithm,
though it works only for the case with $n = 4m$ teams.
In this paper, we propose a 1 + $24/n$ approximation algorithm
for the case with $n = 4m + 2$ teams.
With the algorithm by \citeasnoun{Thielen2012},
we achieve an approximation ratio 1 + $O(1/n)$ for TTP(2).
\section{LOWER BOUNDS}
\label{lower bounds}
In this section, we present the independent lower bound for TTP(2) obtained by
\citeasnoun{Campbell1976} and another lower bound for analyzing schedules
generated by our approximation algorithm.
The basic idea of the independent lower bound is that the optimal trips
for a team can be obtained by computing a minimum weight perfect matching~$M$
in a complete undirected graph $G$ on the set of teams,
where the weight of the edge from team $i$ to $j$ is given as the distance $d_{ij}$
between the home venues of teams $i$ and $j$.
Let $s(i) := \sum_{j \neq i} d_{ij}$ be the sum of weights of the edges
between team $i$ and all the other teams $j$, and let $\Delta := \sum_i s(i)$.
Let $d(M)$ be the weight of a minimum weight perfect matching~$M$ in~$G$.
Then the traveling distance of team $i$ is at least $s(i) + d(M)$,
and the total traveling distance is at least
\begin{equation}
\sum_{i=1}^{n} \left( s(i) + d(M) \right) = \Delta + n \cdot d(M),
\label{lb1}
\end{equation}
which is called the independent lower bound.
We note that \citeasnoun{Thielen2012} showed that
this lower bound cannot be reached in general.
We introduce another lower bound, which is weaker
than the above independent lower bound but is useful
to analyze the solutions we generate in the next section.
Let $d(T)$ be the weight of a minimum spanning tree~$T$ in~$G$.
Then, for any team $i$, $d(T) \le s(i)$ holds.
Hence the total traveling distance (i.e., another lower bound) is at least
\begin{equation}
\sum_{i=1}^{n} \left( d(T) + d(M) \right) = n \cdot (d(T) + d(M)).
\label{lb2}
\end{equation}
It is known that Christofides algorithm \cite{Christofides1976}
for the traveling salesman problem generates a Hamilton cycle~$C$ in~$G$,
whose length is at most $d(T) + d(M)$.
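For concreteness, both bounds can be computed directly from the distance matrix; the sketch below assumes a recent version of NetworkX, whose \texttt{min\_weight\_matching} yields a minimum weight perfect matching on a complete graph with an even number of vertices.
\begin{verbatim}
import networkx as nx

def lower_bounds(D):
    n = len(D)                       # number of teams, assumed even
    G = nx.Graph()
    for i in range(n):
        for j in range(i + 1, n):
            G.add_edge(i, j, weight=D[i][j])
    d_M = sum(D[i][j] for i, j in nx.min_weight_matching(G))
    d_T = sum(w for _, _, w in
              nx.minimum_spanning_tree(G).edges(data="weight"))
    delta = sum(D[i][j] for i in range(n) for j in range(n) if i != j)
    return delta + n * d_M, n * (d_T + d_M)   # bounds (1) and (2)
\end{verbatim}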
\section{ALGORITHM}
\label{algorithm}
We propose an approximation algorithm for TTP(2)
for the case with $n = 4m + 2$ teams.
Our algorithm is similar to the 1 + $O(1/n)$ approximation algorithm
by \citeasnoun{Thielen2012} for $n = 4m$ teams.
A key concept of the algorithm is the use of a minimum weight perfect
matching, a Hamilton cycle computed by Christofides algorithm,
and the circle method to construct a single round-robin tournament.
We first compute a minimum weight perfect matching~$M$
and a Hamilton cycle~$C$ in the graph $G$.
By using Christofides algorithm, the length of
the Hamilton cycle~$C$ is at most $d(M) + d(T)$,
where $T$ is a minimum weight spanning tree in $G$.
The $n$ teams are assumed to be numbered such that
the edges $\langle 1, 2 \rangle, \langle 3, 4 \rangle, \ldots, \langle n-1, n \rangle$
form the minimum weight perfect matching~$M$ in~$G$.
Among possible numberings, we choose a numbering
with the following three properties. \smallskip \\
(a) \ $s(n-5) + s(n-4) + \cdots + s(n) \le 6\Delta/n$, \\
(b) \ $t(n-7) + t(n-6) \le 12 \Delta / \{n(n-6)\}$, \\
(c) \ teams $2, 4, \ldots, n-8$ appear in the Hamilton \\
\hspace*{5mm} cycle~$C$ in this order if the other teams are removed,
\smallskip \\
where $t(i) := d_{i, n-5} + d_{i, n-4} + \cdots + d_{in}$.
We note that the existence of a numbering with the above property (a)
comes from an equation
\[ \Delta = \sum_{i=1}^{n} s(i) = \sum_{i=1}^{n/2} \{s(2i-1) + s(2i)\}, \]
in fact, we choose the three pairs $\langle 2i-1, 2i \rangle$ with the smallest values
of $\{s(2i-1) + s(2i)\}$ as $\langle n-5, n-4 \rangle$,
$\langle n-3, n-2 \rangle$ and $\langle n-1, n \rangle$.
We use the following equation for existence of a numbering with property (b):
\[ \sum_{i=1}^{(n-6)/2} \{t(2i-1) + t(2i)\} = s(n-5) + \cdots + s(n) \le 6\Delta /n. \]
We then construct a double round-robin tournament.
As the first phase, we construct a schedule with $2n-16$ slots.
These $2n-16$ slots are divided into groups of four slots,
and we call each of them a {\it block}.
Thus, there are $n/2-4$ blocks in the first phase;
block 1 contains slots 1, 2, 3, 4, block 2 contains slots 5, 6, 7, 8 and so on.
As the second phase, we construct schedules of the last 14 slots.
We use the mirroring technique in this phase; a schedule with seven slots
is constructed and the same schedule is copied by changing the venues.
Moreover, in this phase, we construct schedules for teams $1, 2, \ldots, n-8$
and teams $n-7, n-6, \ldots, n$ independently.
Fig.~\ref{whole} shows the whole image of the schedule we construct.
\begin{figure}[tb]
\centering
\includegraphics[width=8cm]{whole.eps}\\
\caption{Whole image of the schedule we construct}
\label{whole}
\end{figure}
In the first phase, we apply a scheme inspired by the circle method,
where a similar idea was also used in \citeasnoun{Thielen2012}.
As an example, see Fig.~\ref{first1} to Fig.~\ref{first3} for
a case with $n=30$ teams.
As displayed in these figures, we put two teams ($2i-1, 2i$)
for $i = 1, 2, \ldots, n/2$ on one vertex.
\begin{figure}[htb]
\centering
\includegraphics[width=7cm]{first1.eps}\\
\caption{Games at the first block for a case with 30 teams}
\label{first1}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=7cm]{first2.eps}\\
\caption{Games at the second block for a case with 30 teams}
\label{first2}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=7cm]{first3.eps}\\
\caption{Games at the last block for a case with 30 teams}
\label{first3}
\end{figure}
Teams 1 to $n-8$ are put on black vertices in sequence.
Teams ($n-7, n-6$) are put on the bottom right gray vertex,
teams ($n-5, n-4$) are put on the top right,
teams ($n-3, n-2$) are put on the leftmost,
and teams ($n-1, n$) are put on the gray vertex
on the leftmost vertical arc.
Home away patterns of the teams are also displayed in these figures.
An arc from teams ($i, j$) to ($k, l$) represents
four games in one block for each team $i, j, k$ and $l$.
After four games, teams on the black vertices are moved to the next vertices
as shown in Fig.~\ref{first2}, and then play next four games at the second block.
This is repeated for $n/2 -4$ times, and Fig.~\ref{first3} shows the last block.
For all blocks, directions of arcs and teams on the gray vertices are fixed.
The home away pattern for each vertex is also fixed except for the last block;
at the last block all the vertices have HAAH or AHHA patterns.
This modification is done for the compatibility with the second phase.
We note that there are $n/2 - 4$ possibilities for the initial position
of teams $(1, 2)$.
We choose the best one among the $n/2 - 4$ possibilities; the importance
of choosing the best one will be analyzed in the next section.
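For reference, the plain circle method underlying this rotation can be sketched as follows; in the construction above, several vertices are kept fixed (the gray vertices) and every position is held for a block of four slots, so the sketch shows only the basic rotation idea.
\begin{verbatim}
def circle_method(n):
    """Single round-robin for n teams (n even): n-1 rounds of n/2 games."""
    teams = list(range(1, n + 1))
    rounds = []
    for _ in range(n - 1):
        rounds.append([(teams[i], teams[n - 1 - i]) for i in range(n // 2)])
        teams = [teams[0]] + [teams[-1]] + teams[1:-1]  # rotate all but one
    return rounds
\end{verbatim}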
\begin{figure}[htb]
\centering
\begin{minipage}{0.1\hsize}
\includegraphics[width=7mm]{edge1.eps}
\end{minipage}
\begin{minipage}{0.4\hsize} \ \
\begin{tabular}{r|c@{ \,\,}c@{ \,\,}c@{ \,\,}c@{ \,\,}c}
slots \\
teams \,\,\,& 1 & 2 & 3 & 4\\
\hline
1 & 3H& 4H& 3A& 4A \\
2 & 4H& 3H& 4A& 3A \\
3 & 1A& 2A& 1H& 2H \\
4 & 2A& 1A& 2H& 1H \\
\end{tabular}\\
\end{minipage}
\caption{Games for an arc with HHAA pattern}
\label{edge1}
\end{figure}
\begin{figure}[htb]
\centering
\begin{minipage}{0.1\hsize}
\includegraphics[width=7mm]{edge2.eps}
\end{minipage}
\begin{minipage}{0.4\hsize} \ \
\begin{tabular}{r|c@{ \,\,}c@{ \,\,}c@{ \,\,}c@{ \,\,}c}
slots \\
teams \,\,\,& 1 & 2 & 3 & 4\\
\hline
1 & 3H& 4A& 3A& 4H \\
2 & 4H& 3A& 4A& 3H \\
3 & 1A& 2H& 1H& 2A \\
4 & 2A& 1H& 2H& 1A \\
\end{tabular}\\
\end{minipage}
\caption{Games for an arc with HAAH pattern}
\label{edge2}
\end{figure}
\begin{figure}[htb]
\centering
\begin{minipage}{0.15\hsize}
\includegraphics[width=12mm]{edge3.eps}
\end{minipage}
\begin{minipage}{0.4\hsize} \ \
\begin{tabular}{r|c@{ \,\,}c@{ \,\,}c@{ \,\,}c@{ \,\,}c}
slots \\
teams \,\,\,& 1 & 2 & 3 & 4\\
\hline
1 & $x$H& 3H& $x$A& 3A \\
2 & 4H& $x$H& 4A& $x$A \\
3 & $y$A& 1A& $y$H& 1H \\
4 & 2A& $y$A& 2H& $y$H \\
$x$ & 1A& 2A& 1H& 2H \\
$y$ & 3H& 4H& 3A& 4A \\
\end{tabular}\\
\end{minipage}
\caption{Games for the leftmost vertical arc}
\label{edge3}
\end{figure}
\begin{figure}[htb]
\centering
\begin{minipage}{0.15\hsize}
\includegraphics[width=12mm]{edge4.eps}
\end{minipage}
\begin{minipage}{0.4\hsize} \ \
\begin{tabular}{r|c@{ \,\,}c@{ \,\,}c@{ \,\,}c@{ \,\,}c}
slots \\
teams \,\,\,& 1 & 2 & 3 & 4\\
\hline
1 & $x$H& 3A& $x$A& 3H \\
2 & 4H& $x$A& 4A& $x$H \\
3 & $y$A& 1H& $y$H& 1A \\
4 & 2A& $y$H& 2H& $y$A \\
$x$ & 1A& 2H& 1H& 2A \\
$y$ & 3H& 4A& 3A& 4H \\
\end{tabular}\\
\end{minipage}
\caption{Games for the leftmost vertical arc at the last block}
\label{edge4}
\end{figure}
Here, we describe games for each arc in Fig.~\ref{edge1} to Fig.~\ref{edge4}.
If one vertex has HHAA and the other has AAHH patterns,
each team plays games as displayed in Fig.~\ref{edge1}.
Each number in this table corresponds to the opponent,
and an away (home) game is denoted by A (H).
Fig.~\ref{edge2} shows the games for an arc with HAAH and AHHA patterns;
this kind of arc appears at the top right and the leftmost horizontal arcs and
at the last block.
We have an exceptional arc in Figs.~\ref{first1} to \ref{first3}:
the leftmost vertical arc with an intermediate gray vertex.
This kind of arc is not necessary for the case with $n=4m$ teams.
For six teams on this arc, we give four games for each team
as described in Fig.~\ref{edge3} and Fig.~\ref{edge4}.
In the second phase, every team plays the remaining games.
Let $T_1 := \{1, 2, \ldots, n-8 \}$ and
$T_2 := T \setminus T_1$.
Then, for every pair of teams $i \in T_1$ and $j \in T_2$,
team~$i$ plays with team~$j$ for two games in the first phase.
Thus we can construct second phase schedules for $T_1$ and $T_2$ independently.
\begin{figure}[htb]
\centering
\includegraphics[width=7cm]{second1.eps} \vspace*{3mm} \\
\includegraphics[width=7cm]{second2.eps} \vspace*{3mm} \\
\includegraphics[width=7cm]{second3.eps}\\
\caption{A schedule for teams $1, 2, \ldots, n-8$ in the second phase
(before the mirroring)}
\label{second}
\end{figure}
For teams belonging to $T_1$, team $2i-1$ has not played against
teams $2i-4, 2i-3, 2i-2, 2i, 2i+1, 2i+2, 2i+4$, and
team $2i$ has not played against
teams $2i-5, 2i-3, 2i-2, 2i-1, 2i+1, 2i+2, 2i+3$,
where mod $n-8$ is applied for team numbers
(more precisely,
team $n-7$ means team 1,
team 0 means team $n-8$,
team $-1$ means team $n-9$ and so on).
Fig.~\ref{second} shows how we schedule the remaining games
for teams in $T_1$. Every arc in this figure corresponds
to a game at the venue of the team at the head of the arc.
Apart from the games (of Days 1 to 4) in the gray box, the schedule is
regularly constructed and can be repeated as many times
as necessary.
We note that the number of teams on the upper column
of Fig.~\ref{second} becomes odd for any number of teams $n=4m+2$,
and we need a gray box in which the schedule can be irregular.
The schedule shown in Fig.~\ref{second} is designed
such that (1) no team plays more than two consecutive home/away
games in these seven slots,
(2) every team has an HA or AH pattern in Days (slots) 1 and 2,
and (3) every team has an HA or AH pattern in Days 6 and 7.
By the mirroring technique, we construct the same schedule
except for the venues of games.
By concatenating them, the second phase (14 slots) schedule
for teams belonging to $T_1$ is constructed.
A schedule for teams in $T_2$ is constructed by
a single round-robin tournament for eight teams
with the following properties:
(1) no team plays more than two consecutive home/away games,
(2) every team has HA or AH pattern in the first two slots,
and (3) every team has HH or AA pattern in the first
and the last slots.
One example of such single round-robin tournaments
is displayed in Fig.~\ref{second2}.
We construct 14 slots schedule for teams in $T_2$ by constructing
the same (except for the venues) schedule with the mirroring
technique and concatenating them.
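In code, the mirroring amounts to copying the seven-slot schedule with the venues flipped and concatenating the two halves; a small sketch, representing a schedule as a mapping from each team to its per-slot (opponent, venue) list, is given below.
\begin{verbatim}
def mirror(schedule):
    """schedule: {team: [(opponent, 'H' or 'A'), ...]} over 7 slots."""
    flipped = {t: [(opp, 'A' if ha == 'H' else 'H') for opp, ha in slots]
               for t, slots in schedule.items()}
    return {t: schedule[t] + flipped[t] for t in schedule}   # 14 slots
\end{verbatim}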
\begin{figure}[ht]
\centering
\noindent
\begin{tabular}{r|c@{ \,\,}c@{ \,\,}c@{ \,\,}c@{ \,\,}c@{ \,\,}c@{ \,\,}c@{ \,\,}c}
slots \\
teams \,\,\,& 1 & 2 & 3 & 4 & 5 & 6 & 7 \\
\hline
1 & 3H& 4A& 5H& 2A& 6A& 8H& 7H \\
2 & 4H& 3A& 6A& 1H& 5H& 7A& 8H \\
3 & 1A& 2H& 7H& 4A& 8A& 6H& 5A \\
4 & 2A& 1H& 8A& 3H& 7H& 5A& 6A \\
5 & 7H& 8A& 1A& 6H& 2A& 4H& 3H \\
6 & 8H& 7A& 2H& 5A& 1H& 3A& 4H \\
7 & 5A& 6H& 3A& 8H& 4A& 2H& 1A \\
8 & 6A& 5H& 4H& 7A& 3H& 1A& 2A \\
\end{tabular}
\caption{A schedule for teams $n-7, n-6, \ldots, n$ in the second phase
(before the mirroring)}
\label{second2}
\end{figure}
\section{ANALYSIS OF ALGORITHM}
In this section, we show the feasibility of the schedule we constructed
in Section~3,
and estimate the approximation ratio of the proposed algorithm.
\vspace*{-3mm}
\paragraph{Feasibility}
We first show that the schedule constructed in the previous section
is a double round-robin tournament.
It is clear that every team plays exactly one game for each slot.
We check whether every team~$i$ plays with every other team~$j$
once at its home and once at $j$'s home venues.
In the first phase, we use (a modified version of)
the circle method, which constructs a single round-robin
tournament (except for the rightmost part and
the vertical arc with an intermediate vertex).
For each arc without an intermediate vertex,
we assign games as shown in
Figs.~\ref{edge1} and \ref{edge2},
which means that teams $i$ and $j$ have
two games at $i$'s and $j$'s home venues
if they are put on the opposite vertices of an arc.
For the arc with an intermediate vertex
(see Figs.~\ref{edge3} and \ref{edge4}), some pairs of teams
play exactly two games as home and away games.
In the second phase, teams $i$ and $j$ who have not played
each other play games at $i$'s and $j$'s home venues.
Thus the schedule is a double round-robin tournament.
Next we consider the no-repeater constraint.
If team~$i$ plays with team~$j$ in the first phase,
two games between $i$ and $j$ appear in the same block.
The no-repeater constraint is satisfied for games
in a block as shown in Figs.~\ref{edge1} to \ref{edge4}.
If team~$i$ plays with team~$j$ in the second phase,
two games between $i$ and $j$ do not appear in a consecutive
two slots since we apply the mirroring technique to single
round-robin tournaments in Figs.~\ref{second} and \ref{second2}.
Finally we check whether the schedule satisfies the at-most constraint.
In each block of the first phase, every team plays two home games
and two away games. If a team plays home (away) game at the
last slot of a block, the team plays away (home) game at the
first slot of the next block. Thus, all teams have at most two
consecutive home/away games.
We note that, in the first phase,
the reason why we put two gray vertices on the rightmost positions
(two vertices for teams 23 to 26 for the case with $n=30$)
is that the teams on the rightmost black vertices
cannot play games against each other under both of
the no-repeater and at-most constraints.
The reason why we need an intermediate gray vertex
(for teams 29 and 30) in Fig.~\ref{first1} to Fig.~\ref{first3}
is that the number of teams $n = 4m + 2$ cannot be divided by four.
Every team has HA or AH pattern at the end of the first phase.
This property also holds at the beginning of the second phase.
Hence, three consecutive home/away games cannot appear
at the junction of two phases.
In the second phase, as shown in Fig.~\ref{second} and
Fig.~\ref{second2}, no team has more than two consecutive
home/away games.
Therefore, the constructed double round-robin tournament
satisfies the at-most constraint.
\vspace*{-3mm}
\paragraph{Approximation ratio}
We then estimate the total traveling distances of the teams in our
double round-robin tournament and evaluate the approximation ratio.
When a team plays two consecutive away games,
as stated in Section~\ref{introduction},
the team is able to go directly from the venue of the first opponent
to that of another opponent without returning to its home venue
(with this shortcut, the team can shorten its traveling distance).
However, in this analysis, we consider that the team returns to
its home venue unless two consecutive away games are within a block
in the first phase.
In the first phase, except for the last block, we consider three types of
arcs as shown in Figs.~\ref{edge1}, \ref{edge2} and \ref{edge3}.
Games for the arc in Fig.~\ref{edge1} are ideal;
the total traveling distance of the four teams (teams 1 to 4)
in this block is $(d_{13} + d_{14} + d_{23} + d_{24} + d_{31} + d_{32}
+ d_{41} + d_{42}) + 2 \times (d_{12} + d_{34})$,
where edges $\langle 1, 2 \rangle$ and $\langle 3, 4 \rangle$
belong to the minimum weight perfect matching~$M$.
Traveling distances for games for the arc in Fig.~\ref{edge2} are
$(d_{13} + d_{14} + d_{23} + d_{24} + d_{31} + d_{32} + d_{41} + d_{42})
+ (d_{31} + d_{32} + d_{41} + d_{42}) + 2 \times (d_{34})$,
where the second term $(d_{31} + d_{32} + d_{41} + d_{42})$ is surplus.
This kind of arc appears at the top right and leftmost in Figs.~\ref{first1}
and \ref{first2}; we assume that teams on the gray vertices
(teams 25 to 28 if $n=30$) take this surplus.
Traveling distances for games for the arc in Fig.~\ref{edge3} are
$(d_{13} + d_{1x} + d_{24} + d_{2x} + d_{31} + d_{3y} + d_{42} + d_{4y}
+d_{x1} + d_{x2} + d_{y3} + d_{y4}) + (d_{x3} + d_{x4} + d_{y1} + d_{y2})
+ (d_{12} + d_{34})$.
Teams on the gray vertex (i.e., teams 29 and 30 for $n=30$)
take the surplus $(d_{x3} + d_{x4} + d_{y1} + d_{y2})$.
At the last block of the first phase, we have different home away patterns
as shown in Fig.~\ref{edge2} and Fig.~\ref{edge4}.
Traveling distances for games for the arc in Fig.~\ref{edge2} are,
as stated in the previous paragraph,
$(d_{13} + d_{14} + d_{23} + d_{24} + d_{31} + d_{32} + d_{41} + d_{42})
+ (d_{31} + d_{32} + d_{41} + d_{42}) + 2 \times (d_{34})$,
where the second term $(d_{31} + d_{32} + d_{41} + d_{42})$ is surplus.
Traveling distances for games for the arc in Fig.~\ref{edge4} are
$(d_{13} + d_{1x} + d_{24} + d_{2x} + d_{31} + d_{3y} + d_{42} + d_{4y}
+d_{x1} + d_{x2} + d_{y3} + d_{y4}) + (d_{x3} + d_{x4})
+ (d_{1x} + d_{2x} + d_{3y} + d_{4y} + d_{13} + d_{24}) + d_{34}$.
Team $x$ (team 29 if $n=30$) takes the surplus $d_{x3} + d_{x4}$.
We now analyze the total traveling distances of the teams in the first phase.
For each edge belonging to the minimum weight perfect matching~$M$,
at most two teams use it for one block, and there are $n/2 -4$ blocks.
Hence the total traveling distances related to the matching~$M$ is
\begin{equation}
\left(\frac{n}{2} -4\right) \times 2 \times d(M)
= (n-8) \cdot d(M).
\label{eq1}
\end{equation}
We then consider the surplus for the last block;
surpluses for (normal) edges with two vertices and that for the edge
with an intermediate vertex,
namely $(d_{31} + d_{32} + d_{41} + d_{42})$ for Fig.~\ref{edge2}
and $(d_{1x} + d_{2x} + d_{3y} + d_{4y} + d_{13} + d_{24})$
for Fig.~\ref{edge4}.
As stated in Section~\ref{algorithm}, there are $n/2 - 4$ possibilities
for the initial position of teams on the black vertices.
If an edge with weight $d_{ij}$ appears as the above surpluses
for an initial position, the edge cannot appear as surpluses
for different initial positions.
Thus the total surplus for the last block of the first phase is at most
$\Delta/\{2(n/2 - 4)\}$ on average. Choosing the best initial position,
the surplus for the last block is at most
\begin{equation}
\Delta/\{2(n/2 - 4)\} = \Delta/(n - 8).
\label{eq2}
\end{equation}
In the second phase, we evaluate traveling distances of
teams 1 to $n-8$ (set $T_1$ with $n-8$ teams)
and teams $n-7$ to $n$ (set $T_2$) independently.
We first consider teams in $T_1$ with Fig.~\ref{second}.
For example,
team 7 goes to the home venues of teams 4,5,6,8,9,10 and 12.
Except for single trips from the home venue of team 7 to those of the other teams,
team 7 takes the surplus $(d_{74} + d_{75} + d_{76} + d_{78}
+ d_{79} + d_{7,10} + d_{7,12})$. With the triangle inequality,
it is at most $(d_{56} + 7 \times d_{78} + d_{9,10}) +
(d_{46} + 3 \times d_{68} + 3 \times d_{8,10} + d_{10,12})$.
Similarly, team~8 visits the home venues of teams 3,5,6,7,9,10 and 11.
Except for single trips,
team 8 takes the surplus $(d_{83} + d_{85} + d_{86} + d_{87}
+ d_{89} + d_{8,10} + d_{8,11})$, and it is at most
$(d_{34} + d_{56} + d_{78} + d_{9,10} + d_{11,12}) +
(d_{46} + 3 \times d_{68} + 3 \times d_{8,10} + d_{10,12})$.
By considering all the teams in $T_1$, the surplus for teams in $T_1$
in the second phase is at most
\begin{align}
& 14 \times (d_{12} + d_{34} + \cdots + d_{n-9,n-8}) \nonumber \\
& + 16 \times (d_{24} + d_{46} + \cdots + d_{n-10,n-8} + d_{n-8,2}) \nonumber \\
& \le 14 \times d(M) + 16 \times (d(T) + d(M)). \label{eq3}
\end{align}
Teams belonging to the set $T_2$ play a double round-robin tournament
of eight teams in the second phase.
We evaluate surpluses in this schedule except for the single trips of the teams.
Among teams in $T_2$, teams $n-5, n-4, \ldots, n$ have already taken
some surpluses in the first phase.
We can bound the surplus of each such team $i$ over both phases by $s(i)$.
For the remaining surpluses, team $n-7$ takes $t(n-7)$, team $n-6$ takes $t(n-6)$
and we have $2 \times d_{n-7,n-6}$, where edge $\langle n-7, n-6 \rangle$ belongs to $M$.
The rules (a) and (b) for the numbering of the teams give the following
inequalities:
\begin{align}
& s(n-5) + s(n-4) + \cdots + s(n) \le 6 \Delta / n, \label{eq4} \\
& t(n-7) + t(n-6) \le 12 \Delta / \{n(n-6)\}. \label{eq5}
\end{align}
Finally, we evaluate the total traveling distance of the double round-robin
tournament we constructed.
We assume that the number of teams $n$ is at least 30;
for the case with $n < 30$ teams, it is possible to enumerate all the possible
schedules and choose an optimal solution in constant time (since 30 is a constant number).
Under this assumption, we can use the following inequality
\begin{equation}
\Delta/(n-8) + 12 \Delta / \{n(n-6)\} \le 2 \Delta/n.
\label{eq6}
\end{equation}
With the sum of single trips of $n$ teams ($= \Delta$)
and equations (\ref{eq1}) to (\ref{eq6}), the total traveling distance is at most
\begin{align}
& \left(1+\frac{8}{n}\right)\Delta + (n+6) \cdot d(M) + 16 \cdot (d(T) + d(M)) \nonumber \\
\le & \left(1 + \frac{8}{n} \right) (\Delta + n \cdot d(M)) +
\frac{16}{n} \left( n \cdot (d(T) + d(M)) \right).
\end{align}
By using two kinds of lower bounds (\ref{lb1}) and (\ref{lb2}),
we have derived a 1 + $24/n$ approximation for TTP(2)
with $n = 4m + 2 \ (\ge 30)$ teams.
With the 1 + $16/n$ approximation algorithm for TTP(2)
with $n = 4m \ (\ge 12)$ teams by \citeasnoun{Thielen2012}
and a naive enumeration algorithm for the case with a constant
number of teams ($n < 30$), we attain the first $1 + O(1/n)$
approximation algorithm for TTP(2).
\section{CONCLUSIONS AND FUTURE WORK}
This paper studied an approximation algorithm for
the traveling tournament problem with constraints that
the maximum length of home/away consecutive games is two.
\citeasnoun{Thielen2012} proposed a 1 + $16/n$
approximation algorithm for the problem with $n = 4m$ teams.
In this paper we proposed a 1 + $24/n$ approximation
algorithm for the case with $n = 4m+2$ teams; this new result
completes a 1 + $O(1/n)$ approximation algorithm for TTP(2).
A remaining open problem is to reveal the complexity of the problem.
It was shown by \citeasnoun{Thielen2011} that
TTP($k$) is NP-hard for $k \ge 3$. However, the complexity
of TTP(2) is still open.
Another direction of future research is a single round-robin tournament
version of TTP(2).
It is easy to design a constant factor approximation algorithm
(because any feasible solution yields a 4-approximation),
but attaining better results (such as a 1 + $O(1/n)$ approximation)
is left for future work.
Reliable tracking of players is a critical component of any automated video-based soccer analytics solution~\cite{bornn2018soccer, stensland2014bagadus}.
Player tracks extracted from video recordings are the basis for collecting player performance data, tagging events, and generating tactical and player fitness information.
This paper presents a multi-camera multi-object tracking method suitable for tracking soccer players in long-shot videos.
Our method is intended for tracking multiple players from a few cameras (e.g. 4-6) installed at fixed positions around the playing field. An exemplary input from such a multi-camera system is shown in Fig.~\ref{fig:gt2_4cameras}.
Such a setup makes tracking players a very challenging problem.
Most of the existing multi-object tracking methods~\cite{bredereck2012data, zhang2015online, xu2016multi, wen2017multi} follow \emph{tracking-by-detection} paradigm and heavily rely on the appearance of tracked objects to link detections between consecutive video frames into individual tracks.
In our case scenario, it's very difficult to distinguish players based on their appearance.
Players from the same team wear jerseys of the same color.
Cameras are distant from the ground plane and cover a large part of the pitch. Thus, images of players are relatively small. As seen in Fig.~\ref{fig:closeup1}, players' jersey numbers and facial features are hardly visible. Frequent occlusions, when players rush towards the ball, make the problem even more challenging.
Another problem is how to reliably aggregate player detections from multiple cameras in the presence of camera calibration errors, inaccuracies in single-camera player detections and player occlusions.
\begin{figure}
\begin{center}
\includegraphics[width=1.0\linewidth, trim={0 2cm 14cm 2cm},clip]{images/Overview.pdf}
\end{center}
\caption{Overview of our tracking solution. Detection heat maps from single-camera detectors are transformed into bird's-eye view using homographies and stacked as a multi-dimensional tensor, with one dimension per camera. For visualization purposes, we color-code detection maps from different cameras (1). The tracking network uses detection heat maps (2) and existing player tracks (3) to regress new player positions (4). Finally, existing tracks are extended using regressed player positions. A simple heuristic is used to terminate tracks or initiate new tracks (5).}
\label{fig:overview}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\linewidth, trim={0 1cm 0 2.2cm},clip]{images/players_some_with_num.PNG}
\end{center}
\caption{A cropped region from a long shot camera view illustrating the difficulty of player identification using visual features. Players from the same team have a similar appearance and jersey numbers are mostly not recognizable.}
\label{fig:closeup1}
\end{figure}
This paper presents a multi-camera multi-object tracking method intended for tracking soccer players in long shot video recordings from cameras installed around the playing field.
We assume cameras are initially calibrated and homographies between a camera plane and a ground plane are estimated by manually matching distinctive points (e.g. four corners of the playing field) in each camera view with the bird's-eye view diagram of the pitch.
Our solution is an online tracking method, where time-synchronized frames from cameras are processed sequentially and detections at each timestep are used to extend existing or initiate new tracks.
In the prevailing tracking-by-detection paradigm~\cite{bredereck2012data}, detections at each timestep are linked with existing players' tracks by solving the so-called \emph{assignment problem}.
The problem is not trivial to solve in a single-camera setup but becomes much more complicated for multiple-camera installations.
Detections of the same player from multiple cameras must be first aggregated before linking them with existing tracks. This is a challenging task due to occlusions, camera calibration errors, and inaccuracies in single-camera detector outputs.
Recent methods use sophisticated techniques with explicit occlusion modeling, such as Probabilistic Occupancy Maps~\cite{fleuret2007multicamera, zhang2020multi, liang2020multi} or generative models consisting of a Convolutional Neural Network and Conditional Random Fields~\cite{baque2017deep}, to reason about objects' ground plane positions consistent with detections from multiple cameras. Such an aggregated bird's-eye view detection map is fed into the tracker.
On the contrary, our method follows \emph{tracking-by-regression} principle and does not need such preprocessing step.
It processes raw detection maps from each camera before the non-maxima suppression step.
Aggregation of multiple single-camera detector outputs is moved from the preprocessing stage into the tracking method itself and is end-to-end learnable.
This is achieved by using a homography to transform detection heat maps from multiple cameras onto the common bird's-eye view plane and stacking them as a multi-channel tensor. Each channel corresponds to one camera view.
Extracting and aggregating information from multiple detection maps is done within the tracking network itself.
In the absence of discriminative players' appearance cues, our method focuses on individual player dynamics and interactions between neighborhood players to improve the tracking performance and reduce the number of identity switches.
We model player dynamics using an LSTM-based Recurrent Neural Networks (RNN)~\cite{gers2002learning} and interaction between players using a Graph Neural Network (GNN) with message passing mechanism~\cite{gilmer2017neural}.
Our learnable tracking method, using deep networks to fuse detections from multiple cameras and model player dynamics/interactions, requires a large amount of annotated data for training.
Manual labeling of player tracks in a sufficiently large number of video recordings is prohibitively expensive.
We train our model using the synthetic data generated with Google Research Football Environment~\cite{kurach2020google} (GRF) to overcome this problem. We adapted GRF code to allow recording time-synchronized videos from four virtual cameras placed at the fixed locations around the playing field with accompanying ground truth player tracks.
An exemplary frame from a soccer game video generated using GRF is shown in Fig.~\ref{fig:gfootball1}.
To bridge the domain gap between real and synthetic data, we employ a two-step training approach. First, our model is trained on a large synthetic dataset generated using GRF environment, and then it's fine-tuned using a smaller, manually labeled real-world dataset.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\linewidth, trim={2cm 2cm 5cm 3cm},clip]{images/GF_closeup2.PNG}
\end{center}
\caption{A crop from a synthetic video sequence generated using Google Research Football Environment.}
\label{fig:gfootball1}
\end{figure}
In summary main contributions of our work are as follows. First, we present a learning-based approach for multi-camera multi-object tracking, where aggregation of multiple single-camera detector outputs is a part of the end-to-end learnable tracking network and does not need a sophisticated preprocessing.
Second, our tracking solution exploits player dynamics and interactions, allowing efficient tracking from long shot video recordings, where it's difficult to distinguish individual objects based on their visual appearance.
\section{Related work}
\textbf{Multi-object tracking}.
The majority of recent multi-object tracking methods follow the
\emph{tracking-by-detection} paradigm, which splits the problem into two separate phases. First, objects are detected in each frame using a pretrained object detector, such as YOLO~\cite{redmon2016you}.
Then, detections from consecutive frames, usually in the form of object bounding boxes, are linked together to form individual tracks.
These methods can be split into two categories: online and offline approaches.
Online methods~\cite{milan2017online, chu2017online} sequentially process incoming video frames. Detections from a new frame are compared with existing tracks and either linked with existing tracks or used to initiate a new track (the so-called \emph{data association} problem). The similarity between a detection and an existing track is computed, based on positional, visual appearance, or motion cues, using different techniques, such as Recurrent Neural Networks~\cite{milan2017online} or an attention mechanism~\cite{chu2017online}.
Offline approaches use global optimization techniques, such as graph optimization~\cite{zhang2008global, shitrit2013multi} or hierarchical methods~\cite{ristani2018features}, to find optimal tracks over large batches of frames or even entire video sequences.
\textbf{Multi-camera tracking.}
Existing Multi-Camera Multi-Object (MCMO) tracking methods can be split into two main groups.
Earlier methods track targets separately in each view, then merge tracklets recovered from multiple views to maintain target identities~\cite{kang2004tracking, wen2017multi, he2020multi}.
E.g.~\cite{wen2017multi} encodes constraints on 3D geometry, appearance, motion continuity, and trajectory smoothness among 2D tracklets using a space-time-view hyper-graph. Consistent 3D trajectories are recovered by finding dense sub-hypergraphs using a sampling-based approach.
Single-camera tracking methods are sensitive to occlusions and detection misses, which can lead to track fragmentation.
Fusion of fragmented tracklets to generate consistent trajectories maintaining identities of tracked targets is a challenging task.
The second group of methods first aggregates detections from multiple cameras at each time step, then links aggregated detections to form individual tracks using a single-camera tracking approach.
\cite{fleuret2007multicamera} constructs a Probability Occupancy Map (POM) to model the probability of a person's presence in discrete locations in a bird's-eye view plan of an observed scene. The occupancy map is created using a generative model by finding the most probable person locations given background-segmented observations from multiple cameras.
This concept is extended in ~\cite{zhang2020multi} by incorporating sparse players' identity information into the bird's-eye view occupancy map. Aggregated detections are linked into tracks using a graph-based optimization (k-Shortest Path).
\cite{baque2017deep} aggregates detections from multiple cameras using a convolutional neural network and Conditional Random Field (CRF) to model potential occlusions on a discretized ground plane. The method is trained end-to-end and outputs probabilities of an object's presence in each ground plane location. The aggregated detections are linked into tracks using a graph-based optimization.
\cite{lima2021generalizable} formulates detection fusion problem as a clique cover problem. Appearance features are exploited during the fusion using a person re-identification model.
\section{Multi-camera soccer players tracker}
Multi-Object Tracking (MOT) aims to detect multiple targets at each frame and match their identities in different frames, producing a set of trajectories over time.
Our setup uses video streams produced by a few (from 4 to 6) high-definition cameras installed around the playing pitch. See Fig.~\ref{fig:gt2_4cameras} for an exemplary input.
We assume cameras are initially calibrated. A homography between each camera plane and a ground plane is estimated by manually matching distinctive points (e.g. four corners of the playing field) in each camera view with the bird's-eye view diagram of the pitch.
A popular \emph{tracking-by-detection} paradigm requires prior aggregation of detections from multiple cameras at each time step. In our camera configuration, this is problematic due to the following factors.
The initial camera calibrations become less accurate over time due to environmental factors, such as strong wind or temperature variations altering the length of metal elements to which cameras are fixed.
A software time synchronization mechanism is not perfect, and due to the network jitter, there are inaccuracies in timestamps embedded in recorded video frames.
There are frequent occlusions as players compete for the ball.
As a result, it's difficult to reliably link detections from multiple cameras corresponding to the same player.
Instead of developing and fine-tuning a heuristic for aggregating detections of the same player from different cameras, we choose a learning-based approach based on raw outputs from multiple single-camera detectors.
Contrary to the typical approach, we do not use players' bounding boxes detected in each camera view.
We take raw detection heat maps from a pretrained player's feet detector as an input.
For this purpose, we use the FootAndBall~\cite{footandball} detector, modified and trained to detect a single class: player's feet (to be more precise, the center point between a player's two feet).
In the detector, we omit the last non-maxima suppression (NMS) and bounding box calculation steps and output raw detection heat maps for the player's feet class. A detection heat map is a single-channel tensor whose values can be interpreted as the likelihood of a player's feet presence at a given location.
The idea behind our method is illustrated in Fig.~\ref{fig:overview}.
Detection heat maps from multiple single-camera detectors at a time step $t_k$ are transformed into bird's-eye view using homographies and stacked as a multi-dimensional tensor, with one dimension per camera (1). Note that detections of the same player from different cameras, shown in different colors, sometimes have little or no overlap.
The \textbf{tracking network} uses detection heat maps (2) and existing player tracks (3) to regress new player positions at the time step $t_k$ (4).
Existing tracks are extended by appending the regressed player positions (5).
We use a simple heuristic to initiate new tracks or terminate inactive tracks.
Details of each component are given in the following sections.
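A minimal sketch of step (1) is given below; it assumes OpenCV and per-camera $3\times 3$ homographies mapping image coordinates to pixels of the bird's-eye view grid, and the $1050 \times 680$ output size (roughly ten pixels per meter of the pitch) is an assumption.
\begin{verbatim}
import cv2
import numpy as np

def birds_eye_stack(heat_maps, homographies, out_size=(1050, 680)):
    """heat_maps: list of HxW arrays (one per camera); homographies:
    list of 3x3 matrices mapping image pixels to bird's-eye pixels."""
    warped = [cv2.warpPerspective(hm.astype(np.float32), H, out_size)
              for hm, H in zip(heat_maps, homographies)]
    return np.stack(warped, axis=0)   # (num_cameras, height, width)
\end{verbatim}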
\subsection{Tracking network architecture}
The aim of the \textbf{tracking network} is to regress the new position of tracked players (at a time step $t_k$), based on their previous trajectory (up to a time step $t_{k-1}$), interaction with other players, and output from player feet detectors (at a time step $t_k$).
For this purpose, we use a Graph Neural Network~\cite{gilmer2017neural}, where each node encodes player state and interaction between neighborhood players is modeled using a message passing mechanism.
The high-level overview of the tracking network architecture is shown in Fig.~\ref{fig:high_level}.
As an input, we use detection heat maps from player's feet detectors transformed into a bird's-eye view using homographies and stacked together to form a multi-channel tensor $\mathcal{T}_D$, with each channel corresponding to one camera.
For computational efficiency, for each tracked player, we take a rectangular crop from $\mathcal{T}_D$, centered at the last known player position at a time step $t_{k-1}$. The left side of Fig.~\ref{fig:high_level} shows a color-coded example of such a crop, with each color corresponding to one channel (one camera).
For each player, we use a feed-forward neural network to compute an embedding of its corresponding crop from the detection map $\mathcal{T}_D$ and a recurrent neural network for the embedding of its previous track. We concatenate these two embeddings to form a fused player embedding $\textbf{p}_i$.
We model interactions between neighborhood players by building an undirected graph.
Each node represents a player, and its initial state is set to a fused player embedding $\textbf{p}_i$.
All players within a radius of $k=3$ meters are connected with edges, and the relative position between players (computed using their last known positions at time step $t_{k-1}$) is used as an initial edge attribute.
See the middle part of Fig.~\ref{fig:high_level} for visualization of the positions graph. Note that for simplicity, only edges originating from one node (player) are shown.
Information between neighborhood nodes in the graph is exchanged using a message passing algorithm~\cite{gilmer2017neural}, and node attributes are updated.
Finally, a new position of each player at a time step $t_k$ is regressed using its corresponding node attributes.
A player's track is extended using this regressed position.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.9\linewidth, trim={0.5cm 7.4cm 0.5cm 0.2cm},clip]{images/HighLevel.pdf}
\end{center}
\caption{High-level architecture of our tracking network.
For each player, we encode its previous trajectory and stacked crops from detections maps at time step $t_k$ centered at the last player position at $t_{k-1}$. Position graph is built, where each player corresponds to a graph node and edges connect neighborhood players (for clarity, only edges from one player are shown).
Vertex attributes are formed by concatenating trajectory and detection embeddings. Edge attributes are initialized with a relative position between players.
Information between neighborhood nodes is exchanged using a message passing algorithm, and vertex attributes are updated.
Finally, a player position at time $t_k$ is regressed based on updated vertex attributes.
}
\label{fig:high_level}
\end{figure*}
\paragraph{Detection encoding}. For each tracked player, we take a rectangular crop from the stacked bird's-eye view detection maps $\mathcal{T}_D$, centered at the last known player position at time step $t_{k-1}$. In our implementation, each crop is a 4x81x81 tensor, with the number of channels equal to the number of cameras and 81x81 spatial dimensions.
Multiple detection maps are aggregated using a convolutional layer with a 1x1 kernel, producing a one-dimensional feature map.
Then, the crop is downsampled to 32x32 size, flattened, and processed using a detection encoding network $f_D$, a multi-layer perceptron (MLP) with four layers having 1024, 512, 256, and 128 neurons and ReLU non-linearity.
This produces 128-dimensional detection embedding.
Detection encoding subnetwork is intended to extract information about probable positions of the player's feet at a time step $t_k$, taking into account detection results from all cameras.
We experimented with other architectures, such as 2D convolutions, but they produced worse results. See the ablation study section for more details.
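A PyTorch sketch of $f_D$ following the dimensions stated above is given below; details not specified in the text, such as the interpolation mode and the absence of a non-linearity after the last layer, are assumptions.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class DetectionEncoder(nn.Module):
    def __init__(self, num_cameras=4, out_dim=128):
        super().__init__()
        # 1x1 convolution aggregating the per-camera heat map channels
        self.fuse = nn.Conv2d(num_cameras, 1, kernel_size=1)
        self.mlp = nn.Sequential(
            nn.Linear(32 * 32, 1024), nn.ReLU(),
            nn.Linear(1024, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, out_dim))

    def forward(self, crops):                      # crops: (B, 4, 81, 81)
        x = self.fuse(crops)                       # (B, 1, 81, 81)
        x = F.interpolate(x, size=(32, 32),
                          mode='bilinear', align_corners=False)
        return self.mlp(x.flatten(start_dim=1))    # (B, 128)
\end{verbatim}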
\paragraph{Trajectory encoding} uses a simple recurrent neural network with a single-layer LSTM cell.
For each tracked player, at a time step $t_k$ it accepts its previous trajectory $(x_{k-n}, y_{k-n}), \ldots, (x_{k-1}, y_{k-1})$ and produces 128-dimensional trajectory embedding.
It should be noted that there's no need to process the entire player trajectory at each time step at the inference stage.
For each tracked player, at a time step $t_k$, we feed only the previous player position $(x_{k-1}, y_{k-1})$ to the trajectory encoding subnetwork and reuse the LSTM state from the prior step.
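A corresponding PyTorch sketch of the trajectory encoder is given below; the hidden size and the reuse of the LSTM state at inference follow the description above, while the remaining details are assumptions.
\begin{verbatim}
import torch
import torch.nn as nn

class TrajectoryEncoder(nn.Module):
    def __init__(self, hidden_dim=128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden_dim,
                            batch_first=True)

    def forward(self, positions, state=None):
        # positions: (B, seq_len, 2); at inference seq_len == 1 and the
        # returned (h, c) state is passed back in at the next time step
        out, state = self.lstm(positions, state)
        return out[:, -1], state          # (B, 128) embedding + new state
\end{verbatim}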
\paragraph{Position graph} models interactions between neighborhood players.
We build an undirected graph, where each node represents a tracked player.
For each player, we compute a 256-dimensional, fused player embedding $\mathbf{p}_i$ by concatenating its detection and trajectory embeddings.
Initial features $\mathbf{x}_i^{(0)}$ of $i$-th graph vertex are set to this concatenated player embedding $\mathbf{p}_i$.
To model interactions between neighborhood players, we add an edge between all pairs of vertices that are closer than $3$ meter threshold.
Edge features $\mathbf{e}_{i,j}$ of an $(i, j)$ edge are set as the relative position of the $j$-th player with respect to the $i$-th player at the previous time step $t_{k-1}$.
$\mathbf{e}_{i,j} = (x^{j}_{k-1} - x^{i}_{k-1}, y^{j}_{k-1} - y^{i}_{k-1})$, where
$x^{i}_{k-1}, y^{i}_{k-1}$ are coordinates of $i$-th player in the world reference frame at the time step $t_{k-1}$.
See middle part of Fig.~\ref{fig:high_level} for visualization of the positions graph. For readability only edges originating from one player are shown.
To model interaction between neighborhood players, we run two iterations of a message passing algorithm~\cite{gilmer2017neural}.
First, messages between pairs of neighbourhood nodes in iteration $r$, where $1 \leq r \leq 2$, are calculated using the following equation:
\begin{equation}
\mathbf{m}_{i,j}^{(r)} =
f_M
\left(
\mathbf{x}_i^{(r-1)},
\mathbf{x}_j^{(r-1)},
\mathbf{e}_{i,j}
\right) \\,
\end{equation}
where $f_M$ is a message generation function modeled using a neural network.
$f_M$ takes the features of two neighbourhood nodes $\mathbf{x}_i^{(r-1)}$, $\mathbf{x}_j^{(r-1)}$ and the edge features $\mathbf{e}_{i,j}$, concatenates them, and passes them through a two-layer MLP with 128 and 32 neurons and ReLU non-linearity. It produces a 32-dimensional message $\mathbf{m}_{i,j}^{(r)}$.
Then, messages from neighbourhood nodes are aggregated and used to update node features, using the below equation:
\begin{equation}
\mathbf{x}_i^{(r)} = f_N
\left(
\mathbf{x}_i^{(r-1)},
\frac{1}{|N(i)|}
\sum_{j \in N(i)}
\mathbf{m}_{i,j}^{(r)}
\right) \\ ,
\end{equation}
where $f_N$ is a node update function.
$f_N$ concatenates the node features $\mathbf{x}_i^{(r-1)}$ from the previous message passing iteration with the aggregated messages from neighborhood nodes and passes them through a two-layer MLP with 384 and 384 neurons and ReLU non-linearity. It produces updated node features $\mathbf{x}_i^{(r)}$.
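A plain PyTorch sketch of one round of this scheme is given below; \texttt{edges} is assumed to contain both directions of every undirected edge, node and edge features are float tensors, and, since $f_N$ changes the node feature dimension, a separate instance of the module is assumed for the second round.
\begin{verbatim}
import torch
import torch.nn as nn

class MessagePassingRound(nn.Module):
    def __init__(self, node_dim=256, msg_dim=32, out_dim=384):
        super().__init__()
        # f_M: MLP(128, 32) over [x_i, x_j, e_ij]
        self.f_M = nn.Sequential(nn.Linear(2 * node_dim + 2, 128),
                                 nn.ReLU(), nn.Linear(128, msg_dim))
        # f_N: MLP(384, 384) over [x_i, mean of incoming messages]
        self.f_N = nn.Sequential(nn.Linear(node_dim + msg_dim, out_dim),
                                 nn.ReLU(), nn.Linear(out_dim, out_dim))

    def forward(self, x, edges, e_attr):
        # x: (N, node_dim), edges: (2, E) long, e_attr: (E, 2) float
        src, dst = edges                      # message from src (j) to dst (i)
        m = self.f_M(torch.cat([x[dst], x[src], e_attr], dim=1))
        agg = torch.zeros(x.size(0), m.size(1), device=x.device)
        agg.index_add_(0, dst, m)             # sum of messages per node
        deg = torch.bincount(dst, minlength=x.size(0)).clamp(min=1)
        agg = agg / deg.unsqueeze(1).float()  # mean over the neighbourhood
        return self.f_N(torch.cat([x, agg], dim=1))
\end{verbatim}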
\paragraph{Regression of a player's position} is the final element of the processing pipeline.
The node features $\mathbf{x}_i^{(2)}$ obtained after two rounds of message passing are fed to the neural network $f_P$, which regresses the $i$-th player position $(x_i, y_i)$ in the world coordinate frame at a time step $t_k$.
Details of message generation network $f_M$, node update network $f_N$, and a position regressor $f_P$ are given in Table~\ref{jk:tab:details}.
\begin{table}
\caption{Details of the network architecture. MLP is a multi-layer perceptron with a number of neurons in each layer given in brackets and ReLU non-linearity.}
\begin{center}
\begin{tabular}{l@{\quad}l@{\quad}l}
\begin{tabular}{@{}c@{}}Subnetwork \end{tabular}
& \begin{tabular}{@{}c@{}}Function \end{tabular}
& \begin{tabular}{@{}c@{}}Details \end{tabular}
\\
[2pt]
\hline
\Tstrut
$f_D$ & detection embedding net & Conv (1x1 filter, 1 out channel) \\
& & MLP (1024, 512, 256, 128) \\
$f_T$ & trajectory encoding & single-layer LSTM \\
$f_M$ & message generation & MLP (128, 32) \\
$f_N$ & node update & MLP (384, 384) \\
$f_P$ & position regressor & MLP (128, 2) \\
[2pt]
\hline
\end{tabular}
\end{center}
\label{jk:tab:details}
\end{table}
\subsection{Network training}
Our tracker is trained end-to-end, taking as an input a bird's-eye view player detection map at a time step $t_k$; a sequence of previous raw video frames from each camera at time steps $t_{k-n}, \ldots, t_k$; and previous positions of tracked players at time steps $t_{k-n-1}, \ldots, t_{k}$.
First, we crop rectangular regions around each player from both the bird's-eye view player detection map and the sequence of previous raw video frames from each camera. These crops are centered at the previous positions of the tracked player. Crops from the bird's-eye view detection map are centered at the player position at $t_{k-1}$. Crops from previous raw video frames are centered at the player positions at time steps $t_{k-n-1}, \ldots, t_{k-1}$.
The reason is that we do not know what the current player position (at timestep $t_k$) is at the inference stage. We are going to regress it.
We need to center crops at the last known position of each tracked player (at timestep $t_{k-1}$).
We should note that processing the sequence of previous video frames is needed only during the network training.
In the inference phase, we use RNN hidden state to carry the information from previous frames.
The network predicts the new position of each tracked player at a time step $t_k$ in a world coordinate frame.
The network is trained using a mean squared error (MSE) loss defined as:
$
\mathcal{L} = \sum_{i}
\left\|
p_i^{(k)} - \hat{p}_i^{(k)}
\right\|^2
$,
where $\hat{p}_i^{(k)}$ is the regressed position of the $i$-th player at the time step $t_k$ and $p_i^{(k)}$ is the ground truth position.
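An equivalent loss computation, written as a short PyTorch-style sketch with illustrative tensor names, is:
\begin{verbatim}
# Squared Euclidean error between regressed and ground-truth positions,
# summed over the tracked players (averaging over a batch is omitted).
import torch

def tracking_loss(pred_xy, gt_xy):
    # pred_xy, gt_xy: (num_players, 2) world-frame positions at time step t_k
    return ((pred_xy - gt_xy) ** 2).sum()
\end{verbatim}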
We train our network using synthetic videos and ground truth player tracks generated using Google Research Football environment~\cite{kurach2020google} and fine-tune using a smaller real-world dataset.
The description of datasets is given in Section~\ref{sec:dataset}.
\subsection{Track initialization and termination}
The main focus of this work is the tracking network, intended to regress a new position of tracked players in each timestep, based on their previous motion trajectory, the interaction between neighborhood players, and input detection maps.
We use simple heuristics for track initialization and termination.
Tracks are initialized by extracting local maxima from aggregated detection heatmaps transformed to a bird's-eye view.
A new tracked object is initiated if there's a local maximum detected at a position different from the positions of already tracked objects.
We terminate a tracked object if no local maxima are detected in its vicinity for $k=20$ consecutive frames.
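The sketch below illustrates these heuristics; the detection threshold and the matching radius are illustrative assumptions, while the 20-frame termination window follows the description above.
\begin{verbatim}
# Illustrative sketch of track initialisation and termination.
import numpy as np

def local_maxima(heatmap, threshold=0.5):
    # return (row, col) positions of local maxima above a threshold
    peaks = []
    rows, cols = heatmap.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            v = heatmap[r, c]
            if v > threshold and v >= heatmap[r-1:r+2, c-1:c+2].max():
                peaks.append((r, c))
    return peaks

def update_tracks(tracks, heatmap, radius=3.0, max_missed=20):
    maxima = local_maxima(heatmap)
    dist = lambda a, b: np.hypot(a[0] - b[0], a[1] - b[1])
    # initialise a new track for each maximum far from all existing tracks
    for pos in maxima:
        if all(dist(pos, t['pos']) > radius for t in tracks):
            tracks.append({'pos': pos, 'missed': 0})
    # count consecutive frames with no nearby maximum for each track
    for t in tracks:
        near = any(dist(pos, t['pos']) <= radius for pos in maxima)
        t['missed'] = 0 if near else t['missed'] + 1
    # terminate tracks unseen for max_missed consecutive frames
    return [t for t in tracks if t['missed'] < max_missed]
\end{verbatim}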
\section{Experimental results}
\subsection{Datasets and evaluation methodology}
\label{sec:dataset}
\paragraph{Training dataset}
Initially, we used manually labeled data from video recordings of live events to train our system.
However, manual labeling turned out to be a very labor-intensive and error-prone task.
For each player, we need to tag its track on each of four cameras and preserve its identity across different cameras.
It's not a trivial task, as the jersey numbers are often not readable due to the player pose, occlusions, or large distance from the camera. In our initial experiments, training with a small set of manually labeled data resulted in poor generalization.
Instead, we resort to synthetic data generated using the Football Engine from Google Research Football environment~\cite{kurach2020google}.
The Football Engine is an advanced football simulator intended for reinforcement learning research that supports all the major football rules such as kickoffs, goals, fouls, cards, corner, and penalty kicks.
The original environment generates videos from one moving camera that focuses on the playfield part with a player possessing the ball.
We modified the code to generate videos from four views at fixed locations around the pitch. This simulates the real-world setup where four fixed-view cameras are mounted on poles around the playfield. Generated videos are split into episodes, where each episode is a part of the game with a continuous player trajectory.
When an event resulting in players' teleportation in the game engine happens, such as a goal, a new episode is started.
Altogether, we generated 418 episodes spanning almost 0.5 million time steps, each containing videos from four virtual cameras with accompanying ground truth player tracks in a world coordinate frame.
Fig~\ref{fig:gfootball1} shows an exemplary view from our modified Football Engine setup.
\paragraph{Evaluation dataset}
Evaluation is done using a synthetic evaluation set and two real-world evaluation sets containing manually tagged recordings of real matches from two locations.
The synthetic evaluation set, denoted as GRF, consists of 4 game episodes containing recordings from four virtual cameras.
Video rendering parameters and virtual cameras configuration are the same as in the training environment.
To verify if the model trained on the synthetic data works on real-world data, we use two manually tagged sets of recordings, named GT2 and GT3.
Each consists of 5 minutes (12 thousand frames at 30 fps) of manually tagged recordings from four cameras installed at two different stadiums.
The camera configuration at each location is similar to the configuration we use in the Football Engine virtual environment.
Four synchronized frames from the GT2 sequence are shown in Fig.~\ref{fig:gt2_4cameras}.
Evaluation is done using the first 2000 frames (corresponding to 100 seconds of the game) from each episode.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.9\linewidth]{images/gt2_4cameras_new.png}
\end{center}
\caption{Four synchronized views from the GT2 evaluation sequence. Views from cameras 3 and 4 are horizontally flipped to align with views from the other side of the pitch. Cameras on the same side (left: 1-3 and right: 2-4) show the same part of the playing field from two sides.}
\label{fig:gt2_4cameras}
\end{figure*}
\paragraph{Evaluation metrics}
We follow the same evaluation protocol as used in MOT Challenges 2019~\cite{dendorfer2019cvpr19}.
First, the correspondences between the ground truth tracks and predicted tracks are established using Kuhn-Munkres algorithm~\cite{munkres1957algorithms}.
After establishing track correspondences, we calculate MOTA, IDSW, mostly tracked, partially tracked, and mostly lost metrics.
MOTA (Multiple Object Tracking Accuracy) is a popular metric to report overall multi-object tracker accuracy, based on a number of false positives, false negatives, identity switches, and ground truth objects.
For MOTA definition, please refer to~\cite{dendorfer2019cvpr19}.
An identity switch error (IDSW) is counted if a ground truth target $i$ is matched in the new frame to a different track than in the previous frame.
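For the reader's convenience, the standard definition, in which per-frame counts of false negatives (FN), false positives (FP), and identity switches are summed over the sequence and normalised by the total number of ground truth objects (GT), is
\[
\mathrm{MOTA} = 1 - \frac{\sum_{t}\left(\mathrm{FN}_t + \mathrm{FP}_t + \mathrm{IDSW}_t\right)}{\sum_{t}\mathrm{GT}_t} \quad .
\]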
Each ground truth trajectory is classified as mostly tracked (MT), partially tracked (PT), and mostly lost (ML).
A target successfully tracked for at least 80\% of its life span is considered mostly tracked. This metric does not require that a player ID remains the same during the tracking.
If the target is tracked for less than 80\% and more than 20\% of its ground truth track, it's considered partially tracked (PT).
Otherwise, the target is mostly lost (ML).
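A minimal sketch of this classification, based on the fraction of a trajectory's life span during which it is matched to any predicted track, is:
\begin{verbatim}
# Illustrative MT / PT / ML classification of a ground-truth trajectory.
def classify_trajectory(tracked_fraction):
    # tracked_fraction: fraction of the trajectory's life span during which
    # it is matched to some predicted track (identity switches are allowed)
    if tracked_fraction >= 0.8:
        return "MT"   # mostly tracked
    if tracked_fraction > 0.2:
        return "PT"   # partially tracked
    return "ML"       # mostly lost
\end{verbatim}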
Reported evaluation results for the synthetic dataset are averaged over four game episodes.
\subsection{Results and discussion}
\label{sec:results}
Figure~\ref{fig:tracking_results} shows a visualization of tracking results superimposed onto detection heatmaps transformed to a bird's-eye view.
For visualization purposes, the detection heatmap from each camera is drawn with a different color.
\begin{figure}
\begin{center}
\includegraphics[width=1.0\linewidth]{images/tracking_results2.png}
\end{center}
\caption{Visualization of tracking results plot onto player detection heatmaps transferred to a bird's-eye view.
Detection heatmap values from each camera are color-coded for visualization purposes.}
\label{fig:tracking_results}
\end{figure}
\paragraph{Evaluation results.}
Table~\ref{jk:tab:results1} compares the performance of our proposed tracking method with the baseline solution.
The baseline tracker is a multiple target tracking method using particle filters~\cite{jinan2016particle}.
It's based on an aggregated bird's-eye view detection heatmap, constructed by transforming detection heatmaps from each camera to a ground plane view using a homography and summing them together.
Our method outperforms the baseline on all three test sets: a synthetic GRF set and real-world GT2 and GT3 sets.
The MOTA metric is higher by 2-4\,p.p. Most importantly, the number of identity switches (IDSW) is significantly lower, by 50\% on average.
Without incorporating appearance cues, such as players' jersey numbers, our method significantly reduces identity switches compared to the baseline approach. This proves the validity and the potential of the proposed solution.
\begin{table*}
\caption{Tracker evaluation results.
IDSW = number of identity switches, MT = mostly tracked, PT = partially tracked, ML = mostly lost.
GRF is synthetic, and GT2, GT3 are real-world evaluation sets.}
\begin{center}
\begin{tabular}{l@{\quad}|r@{\quad}r@{\quad}r@{\quad}r@{\quad}r@{\quad}|r@{\quad}r@{\quad}r@{\quad}r@{\quad}r@{\quad}|r@{\quad}r@{\quad}r@{\quad}r@{\quad}r}
& \multicolumn{5}{c}{GRF (synthetic)} & \multicolumn{5}{c}{GT2 (real)} & \multicolumn{5}{c}{GT3 (real)}
\\
& \begin{tabular}{@{}c@{}}MOTA$\uparrow$ \end{tabular}
& \begin{tabular}{@{}c@{}}IDSW$\downarrow$ \end{tabular}
& \begin{tabular}{@{}c@{}}MT$\uparrow$ \end{tabular}
& \begin{tabular}{@{}c@{}}PT$\downarrow$ \end{tabular}
& \begin{tabular}{@{}c@{}}ML$\downarrow$ \end{tabular}
& \begin{tabular}{@{}c@{}}MOTA$\uparrow$ \end{tabular}
& \begin{tabular}{@{}c@{}}IDSW$\downarrow$ \end{tabular}
& \begin{tabular}{@{}c@{}}MT$\uparrow$ \end{tabular}
& \begin{tabular}{@{}c@{}}PT$\downarrow$ \end{tabular}
& \begin{tabular}{@{}c@{}}ML$\downarrow$ \end{tabular}
& \begin{tabular}{@{}c@{}}MOTA$\uparrow$ \end{tabular}
& \begin{tabular}{@{}c@{}}IDSW$\downarrow$ \end{tabular}
& \begin{tabular}{@{}c@{}}MT$\uparrow$ \end{tabular}
& \begin{tabular}{@{}c@{}}PT$\downarrow$ \end{tabular}
& \begin{tabular}{@{}c@{}}ML$\downarrow$ \end{tabular}
\\
[2pt]
\hline
\Tstrut
Baseline tracker & 0.823 & 13 & 20 & 2 & 0 & 0.925 & 30 & 25 & 0 & 0 & 0.828 & 41 & 24 & 1 & 0\\
\textbf{Tractor (ours)} & \textbf{0.874} & \textbf{7} & 20 & 2 & 0 & \textbf{0.949} & \textbf{11} & 25 & 0 & 0 & \textbf{0.867} & \textbf{23} & 24 & 1 & 0 \\
[2pt]
\hline
\end{tabular}
\end{center}
\label{jk:tab:results1}
\end{table*}
\paragraph{Ablation study.}
In this section, we analyze the impact of individual components of the proposed method on tracking performance.
In all experiments, we used a synthetic evaluation set (GRF).
Table~\ref{jk:tab:ablation1} shows the performance of reduced versions of our architecture versus the full model performance.
Disabling players' motion (players' movement trajectory) cues when regressing new player positions almost doubles the number of identity switches (IDSW), as shown in the 'no player trajectory' row. The tracking method is much more likely to confuse tracked object identities without knowing previous players' trajectories.
Switching off the message passing mechanism in the graph ('no message passing' row), where information is exchanged between neighborhood nodes (players), has a similar effect on the tracking performance. The number of identity switches increases from 7 to 12.
We can conclude that both components, player trajectory encoding using RNN and modeling neighborhood players interaction using GNN, are crucial for the good performance of the proposed method.
\begin{table}
\caption{
Performance of the full model compared to reduced versions.
Performance evaluated on the synthetic evaluation set.
MT = mostly tracked, PT = partially tracked, ML = mostly lost, IDSW = number of identity switches.
}
\begin{center}
\begin{tabular}{l@{\quad}|r@{\quad}r@{\quad}r@{\quad}r@{\quad}r}
& \begin{tabular}{@{}c@{}}MOTA$\uparrow$ \end{tabular}
& \begin{tabular}{@{}c@{}}IDSW$\downarrow$ \end{tabular}
& \begin{tabular}{@{}c@{}}MT$\uparrow$ \end{tabular}
& \begin{tabular}{@{}c@{}}PT$\downarrow$ \end{tabular}
& \begin{tabular}{@{}c@{}}ML$\downarrow$ \end{tabular}
\\
[2pt]
\hline
\Tstrut
\textbf{Tractor (full model)} & \textbf{0.874} & \textbf{7} & 20 & 2 & 0 \\
no player trajectory & 0.772 & 13 & 20 & 2 & 0 \\
no message passing & 0.776 & 12 & 20 & 2 & 0 \\
[2pt]
\hline
\end{tabular}
\end{center}
\label{jk:tab:ablation1}
\end{table}
Table~\ref{jk:tab:ablation2} shows the tracking performance for different choices of the detection heatmap embedding subnetwork.
We evaluated the following approaches:
\begin{itemize}
\item processing unaggregated crops from detection heatmaps (each is a 4-channel, crop size by crop size tensor) using a four-layer MLP with 1024, 512, 256, and 128 neurons in each layer (denoted as MLP);
\item aggregating crops from detection heatmaps using a convolutional layer with a 1x1 kernel (producing a 1-channel tensor), followed by a four-layer MLP with 1024, 512, 256, and 128 neurons in each layer (denoted as Mixed1 CNN+MLP);
\item summing crops from detection heatmaps across the channel dimension (producing a 1-channel tensor), followed by a four-layer MLP with 1024, 512, 256, and 128 neurons in each layer (denoted as Mixed2 Sum+MLP);
\item using a 2D convolutional network with positional encoding provided by a CoordConv~\cite{liu2018intriguing} layer and a final global average pooling layer (denoted as CoordConv);
\item using a 2D convolutional network without positional encoding and with a final global average pooling layer (denoted as Conv).
\end{itemize}
As expected, using a convolutional architecture without positional encoding to compute embedding of crops from input detection heatmaps yields worse results.
Convolutions are translation invariant and have difficulty extracting the spatial positions of detection heatmap maxima. This adversely affects the network's ability to predict the next position of each player.
Better results are obtained by using a simple positional encoding with the CoordConv~\cite{liu2018intriguing} layer.
CoordConv is used as a first network layer to encode spatial $x$, $y$ coordinates of each pixel in two additional channels, making the further processing coordinate-aware.
This improves the ability to localize detection heatmap maxima and improves the final tracking results.
The best results are achieved by concatenating detection heatmaps from multiple cameras (transformed to the bird's-eye view) using a convolution with 1x1 kernel and processing the resulting heatmap with a multi-layer perceptron (Mixed1: CNN+MLP).
This gives the highest tracking accuracy (MOTA) and the lowest number of identity switches.
Using multi-layer perceptron without aggregating detection heatmaps from multiple cameras yields worse results due to higher overfitting.
\begin{table}
\caption{
Performance of different architectures of the detection embedding subnetwork on the synthetic evaluation set.
MT = mostly tracked, PT = partially tracked, ML = mostly lost, IDSW = number of identity switches.
}
\begin{center}
\begin{tabular}{l@{\quad}|r@{\quad}r@{\quad}r@{\quad}r@{\quad}r}
& \begin{tabular}{@{}c@{}}MOTA$\uparrow$ \end{tabular}
& \begin{tabular}{@{}c@{}}IDSW$\downarrow$ \end{tabular}
& \begin{tabular}{@{}c@{}}MT$\uparrow$ \end{tabular}
& \begin{tabular}{@{}c@{}}PT$\downarrow$ \end{tabular}
& \begin{tabular}{@{}c@{}}ML$\downarrow$ \end{tabular}
\\
[2pt]
\hline
\Tstrut
MLP & 0.796 & 12 & 20 & 2 & 0 \\
\textbf{Mixed1 (CNN+MLP)} & \textbf{0.874} & \textbf{7} & 20 & 2 & 0 \\
Mixed2 (Sum+MLP) & 0.823 & 11 & 20 & 2 & 0 \\
CoordConv & 0.815 & 13 & 20 & 2 & 0 \\
Conv & 0.795 & 21 & 20 & 2 & 0 \\
[2pt]
\hline
\end{tabular}
\end{center}
\label{jk:tab:ablation2}
\end{table}
\section{Conclusion}
The paper presents an efficient multi-camera tracking method intended for tracking soccer players in long shot video recordings from multiple calibrated cameras.
The method achieves better accuracy and a significantly lower number of identity switches than a baseline approach, based on a particle filter.
Due to a large distance to the camera, visual cues, such as a jersey number, cannot be used.
Our method exploits other cues, such as a player movement trajectory and interaction between neighborhood players, to improve the tracking accuracy.
A promising future research direction is the integration of sparse visual cues, such as a jersey number that is readable in relatively few frames, into the tracking pipeline. This could further increase the tracking accuracy.
\subsubsection*{Acknowledgments}
This study was prepared within the realization of the Project co-funded by the Polish National Centre for Research and Development, Ścieżka dla Mazowsza/2019.
\bibliographystyle{IEEEtran}
\label{Abstract}
Frequently in sporting competitions it is desirable to compare teams based on records of varying schedule strength. Methods have been developed for sports where the result outcomes are win, draw, or loss. In this paper those ideas are extended to account for any finite multiple outcome result set. A principle-based motivation is supplied and an implementation presented for modern rugby union, where bonus points are awarded for losing within a certain score margin and for scoring a certain number of tries. A number of variants are discussed including the constraining assumptions that are implied by each. The model is applied to assess the current rules of the Daily Mail Trophy, a national schools tournament in England and Wales.
{\bf Keywords:} Bradley-Terry, entropy, networks, pairwise comparison, ranking, sport.
\section{Introduction}
\label{Introduction} \label{sec: Intro}
There is a deep literature on ranking based on pairwise binary comparisons. Prominent amongst proposed methods is the Bradley-Terry model, which represents the probability that team $i$ beats team $j$ as
\[P(i\succ j) = \frac{\pi_i}{\pi_i+\pi_j} \quad ,\]
where \(\pi_i\) may be thought of as representing the positive-valued strength of team $i$. The model was originally proposed by \citet{zermelo1929berechnung} before being rediscovered by \citet{bradley1952rank}. It was further developed by \citet{davidson1970extending} to allow for ties (draws); by \citet{davidson1977extending} to allow for order effects, or, in this context, home advantage; and by Firth (\url{https://alt-3.uk/}) to allow for standard association football scoring rules (three points for a win, one for a draw). \citet{buhlmann1963pairwise} showed that the Bradley-Terry model is the unique model that comes from taking the number of wins as a sufficient statistic. Later, \citet{joe1988majorization} showed that it is both the maximum entropy and maximum likelihood model under the retrodictive criterion that the expected number of wins is equal to the actual number of wins, and derived maximum entropy models for home advantage and matches with draws. These characterisations of the Bradley-Terry model may be seen as natural expressions of a wider truth about exponential families, that if one starts with a sufficient statistic then the corresponding affine submodel, if it exists, will be uniquely determined and it will be the maximum entropy and maximum likelihood model subject to the `observed equals expected' constraint \citep{geyer1992constrained}. In this paper, the maximum entropy framing is used as it helps to clarify the nature of the assumptions being made in the specification of the model.
Situations of varying schedule strength occur frequently in rugby union. They are apparent in at least five particular scenarios. First, in two of the top club leagues in the world --- Pro14 and Super Rugby --- the league stage of the tournament is not a round robin, but a conference system operated with an over-representation of matches against teams from the same conference and country. Second, in professional rugby such situations occur at intermediate points of the season, whether the tournament is of a round robin nature or not. Third, in Europe, a significant proportion of teams in the top domestic leagues --- Pro14, English Premiership, Top14 --- also compete in one of the two major European rugby tournaments, namely the European Rugby Champions Cup and the European Rugby Challenge Cup. The preliminary stage of both these tournaments is also a league-based format. If the results from the European tournaments are taken along with the results of the domestic tournaments then a pan-European system of varying schedule strengths may be considered. Fourth, fixture schedules may be disrupted by unforeseen circumstances causing the cancellation of some matches in a round robin tournament. This has been experienced recently due to COVID19. Fifth, schools rugby fixtures often exist based on factors such as geographical location and historical links and so do not fit a round robin format. The Daily Mail Trophy is an annual schools tournament of some of the top teams in England and one team from Wales that ranks schools based on such fixtures.
In modern rugby union the most prevalent points system is as follows:
\begin{itemize}[noitemsep,label={}]
\item 4 points for a win
\item 2 points for a draw
\item 0 points for a loss
\item 1 bonus point for losing by a match score margin of seven or fewer
\item 1 bonus point for scoring four or more tries
\end{itemize}
This is the league points system used in the English Premiership, Pro14, European Rugby Champions Cup, European Rugby Challenge Cup, the Six Nations\footnote{In the Six Nations there are an additional three bonus points for any team that beats all other teams in order to ensure that a team with a 100\% winning record cannot lose the tournament because of bonus points}, and the pool stages of the most recent Rugby World Cup, which was held in Japan in 2020. In the southern hemisphere, the two largest tournaments, Super Rugby and the Rugby Championship, follow the same points system except that a try bonus point is awarded when a team has scored three more tries than the opposition, so at most one team will earn a try bonus. In the French Top14 league the try bonus point is also awarded on a three-try difference but with the additional stipulation that it may only be awarded to a winning team. The losing bonus point in the Top14 is also different in awarding the point at a losing margin of five or fewer instead of seven or fewer. Together these represent the largest club and international tournaments in the sport. For the rest of this paper the most prevalent system, the one set out above, will be used. The others share the same result outcomes formulation and hence a substantial element of the model. When appropriate, methodological variations will be mentioned that might better model the alternative try-bonus method, where the bonus is based on the difference in the number of tries and may only be awarded to one team. For the avoidance of confusion, for the remainder of this paper, the points awarded due to the outcomes of matches and used to determine a league ranking will be referred to as `points' and will be distinguished from the in-game accumulations on which match outcomes are based, which will henceforth be referred to as `scores'. Likewise, `ranking' will refer to the attribution of values to teams that signify their ordinal position, while `rating' will refer to the underlying measure on which a ranking is based.
It is not the intention here to make any assessment of the relative merits of the different points systems, rather to take the points system as a given and to construct a coherent retrodictive model consistent with that, whilst accounting for differences in schedule strength. In doing so, a model where points earnt represent a sufficient statistic for team strength is sought. It is important to understand therefore that this represents a `retrodictive' rather than a predictive model. This is a concept familiar in North America where the KRACH (``Ken's Rating for American College Hockey") model, devised by Ken Butler, is commonly used to rank collegiate and school teams in ice hockey and other sports \citep{wobus2007KRACH}.
The paper proceeds in Section \ref{Model} with derivation of a family of models based on maximum entropy, a discussion of alternatives, and the choice of a preferred model for further analysis. Section \ref{sec: Estimation} proposes estimation of the model through a loglinear representation. In addition, the implementation of a more intuitive measure of team strength, the use of a prior, and an appropriate identifiability constraint, are also discussed. In Section \ref{sec: DMT} the model is used to analyse the current Daily Mail Trophy ranking method, and in Section \ref{sec: Concluding Remarks} some concluding remarks are made.
\section{Model} \label{Model}
\subsection{Entropy maximisation} \label{sec: Maximum Entropy}
The work of \citet{jaynes1957information} as well as the Bradley-Terry derivations of \citet{joe1988majorization} and \citet{henery1986interpretation}
suggest that a model may be determined by seeking to maximise the entropy under the retrodictive criterion that the points earnt in the matches played are equal to the expected points earnt given the same fixtures under the model. Taking the general case, suppose there is a tournament where rather than a binary win/loss there are multiple possible match outcomes. Let $p^{ij}_{a,b}$ denote the probability of a match between $i$ and $j$ resulting in $i$ being awarded $a$ points and $j$ being awarded $b$, with $m_{ij}$ the number of matches between $i$ and $j$. Then we may define the entropy as
\begin{equation}
S(p) = -\sum_{i,j}m_{ij}\sum_{a,b}p^{ij}_{a,b}\log p^{ij}_{a,b} \quad .
\end{equation}
This may be maximised subject to the conditions that for each pair of teams the sum of the probabilities of all possible outcomes is 1,
\begin{equation}
\sum_{a,b}p^{ij}_{a,b}=1 \quad \text{for all $i,j$ such that $m_{ij}>0$},
\end{equation}
and the retrodictive criterion that for each team $i$, given the matches played, the expected number of points earnt is equal to the actual number of points earnt,
\begin{equation}
\sum_{j}m_{ij}\sum_{a,b} ap^{ij}_{a,b} =
\sum_{j}\sum_{a,b} am^{ij}_{a,b}\quad ,
\end{equation}
where $m^{ij}_{a,b}$ represents the number of matches which result in $i$ being awarded $a$ points and $j$ being awarded $b$.
The entropy, $S(p)$, is strictly concave and so the Lagrangian has a unique maximum. With $\lambda_{ij}$ being the Lagrange multiplier associated with teams $i,j$ in condition (2), and $\lambda_i$ those for the retrodictive criterion applied to team $i$ from condition (3), then the solution satisfies
\begin{equation}
\log p^{ij}_{a,b} = -\lambda_{ij} -a\lambda_i - b\lambda_j -1 \quad \text{for all $i,j$ such that $m_{ij}>0$} \quad ,
\end{equation}
which gives us that
\begin{equation}
p^{ij}_{a,b} \propto \pi_i^{a}\pi_j^{b} \quad \text{for all $i,j$ such that $m_{ij}>0$} \quad,
\end{equation}
where the $\pi_i = \exp(-\lambda_i)$ may be used to rank the teams, and $\exp(-\lambda_{ij}-1)$ is the constant of proportionality. This result holds for $i,j$ such that $m_{ij}>0$ and a reasonable modelling assumption is that it may then be applied to all pairs $(i,j)$.
This derivation presents the most general form of the maximum entropy model, but various specific models may be motivated in this way by imposing a variety of independence assumptions or additional conditions. Some of the main variants are considered next.
\subsection{Model alternatives} \label{sec: model alternatives}
\subsubsection{Independent result and try-bonus outcomes} \label{sec: independent result and try}
Conceptually one might consider that points awarded for result outcomes and the try bonus are for different and separable elements of performance within the predominant points system that is being considered here. If that were not the case then a stipulation similar to that imposed in Top14, which explicitly connects the try bonus and the result outcome, could be used. While the result outcome in rugby union is commonly presented as a standard win, draw, loss plus a losing bonus point, it may be thought of equivalently as five possible result outcomes --- wide win, narrow win, draw, narrow loss, wide loss. This leads to a representation of the five non-normalised result probabilities as
\begin{align*}
P(\text{team $i$ beats team $j$ by wide margin}) &\propto \pi_i^4\\
P(\text{team $i$ beats team $j$ by narrow margin}) &\propto \rho_n\pi_i^4\pi_j\\
P(\text{team $i$ draws with team $j$}) &\propto \rho_d\pi_i^2\pi_j^2\\
P(\text{team $j$ beats team $i$ by narrow margin}) &\propto \rho_n\pi_i\pi_j^4\\
P(\text{team $j$ beats team $i$ by wide margin}) &\propto \pi_j^4 \quad ,
\end{align*}
where $\rho_n$ and $\rho_d$ are structural parameters related to the propensity for narrow or drawn result outcomes respectively.
Taking the conventional standardisation of the abilities that the mean team strength is 1, as in \citet{ford1957solution}, then the probability of a narrow result outcome (win or loss) in a match between two teams of mean strength is $2\rho_n/(2+2\rho_n+\rho_d)$, and that for a draw outcome is $\rho_d/(2+2\rho_n+\rho_d)$.
A nice feature for this particular setting is that the try bonus point provides information on the relative strength of the teams, so that the network is more likely to be connected. It may thus supply differentiating information on team strength even where more than one team has a 100\% winning record.
There are four potential try bonus outcomes that may be modelled by the probabilities:
\begin{align*}
P(\text{team $i$ and team $j$ both awarded try bonus point}) &\propto \tau_b\pi_i\pi_j\\
P(\text{only team $i$ awarded try bonus point}) &\propto \pi_i\\
P(\text{only team $j$ awarded try bonus point}) &\propto \pi_j\\
P(\text{neither team awarded try bonus point}) &\propto \tau_z \quad ,
\end{align*}
so that in a match between two teams of mean strength the probability of both being awarded a try bonus is $\tau_b/(2+\tau_b+\tau_z)$ and that for neither team gaining a try bonus is $\tau_z/(2+\tau_b+\tau_z)$.
This model would be derived through a consideration of entropy maximisation by taking the result outcome and try bonus outcome as separable maximisations, but then enforcing that the $\pi_i$ are consistent. Each of the structural parameters may be derived by an appropriate additional condition. For example in the case of $\rho_d$, the relevant condition would be that, given the matches played, the expected number of draws is equal to the actual number of draws.
\subsubsection{Try bonus independent of opposition} \label{sec: independent try bonus}
One might choose to make an even stronger independence assumption, that the probability of gaining a try bonus is solely dependent on a team's own strength and independent of that of the opposition. This has the advantage of greater parsimony. It may be expressed as
\begin{align*}
P(\text{team $i$ gains try bonus point}) &\propto \tau \pi_i\\
P(\text{team $i$ does not gain try bonus point}) &\propto 1 \quad ,
\end{align*}
where $\tau/(1+\tau)$ is the probability that a team of mean strength gains a try bonus. This model would clearly not be appropriate to the southern hemisphere system where the try bonus was awarded for scoring three more tries than the opposition.
\subsubsection{Try bonus dependent on result outcome} \label{sec: try dependent on result}
Alternatively, the try bonus could be conditioned on the result outcome. This of course would be necessary if modelling the points system employed in the Top14 for example, where only the winner is eligible for a try bonus. The conditioning could be done in a number of ways. One could consider the five result outcomes noted already, or consider a simplifying aggregation, either into wide win, close result (an aggregation of narrow win, draw, and narrow loss), or wide loss; or win (an aggregation of narrow win and wide win), draw, or loss (narrow loss and wide loss).
\subsubsection{Independent offensive and defensive strengths} \label{sec: offensive defensive}
It seems not unreasonable to consider that a team's ability to earn a try bonus point might be modeled as being dependent on its own attacking strength and the opposition's defensive strength and independent of its own defensive strength and the opposition's attacking strength. This may be captured by considering team strength to be the product of its offensive and defensive strength \begin{equation}
\pi_i = \omega_i \delta_i \quad,
\end{equation}
where we consider the probability of a team $i$ scoring a try bonus in a match as proportional to their offensive strength parameter $\omega_i$, and the probability of them not conceding a try bonus as proportional to their defensive strength parameter $\delta_i$.
Given $\pi_i$, only one further parameter per team need be defined and so the non-normalised try bonus outcome probabilities may be expressed as
\begin{align*}
P(\text{both $i$ and $j$ gain try bonuses}) &\propto \frac{\pi_i}{\delta_i}\frac{\pi_j}{\delta_j} = \frac{\pi_i\pi_j}{\delta_i\delta_j} \\
P(\text{only $i$ gains try bonus}) &\propto \frac{\pi_i}{\delta_i}\delta_i = \pi_i \\
P(\text{only $j$ gains try bonus}) &\propto \frac{\pi_j}{\delta_j}\delta_j = \pi_j\\
P(\text{neither team gains try bonus}) &\propto \delta_i\delta_j \quad .
\end{align*}
Thus the model replaces the symmetric try bonus parameters $\tau_b$ and $\tau_z$ with team-dependent parameters. This may be derived from an entropy maximisation by considering the try bonus outcome independently from the result outcome. The familiar retrodictive criterion that for each team, the expected number of try bonus points scored is equal to the actual number of try bonus points scored, is then supplemented by a second criterion that for each team, the expected number of matches where no try bonus is conceded is equal to the actual number of matches where no try bonus is conceded. See Appendix for details.
\subsubsection{Home advantage}
\label{sec: Home advantage}
Home advantage could be parametrised in several ways. Following the example of Davidson and Beaver (1977) and others, one possibility is to use a single parameter, for example by applying a scaling parameter to the home team and its reciprocal to the away team. This may be derived via entropy maximisation with the inclusion of a condition that, given the matches played, the difference between the expected points awarded to home teams and away teams is equal to the actual difference. An alternative explored by \citet{joe1988majorization} is to consider the home advantage of each team individually so that the rating for each team could be viewed as an aggregation of their separate home team and away team ratings. See Appendix for details.
\subsection{Preferred model for the most prevalent points system}
\subsubsection{Motivating considerations}
In Section \ref{sec: model alternatives}, four different possible result and try-bonus models were presented, each representing different assumptions around the independence of these points as they related to team strengths. Additionally two possible home advantage models were discussed.
The assumption of the independence of the result outcomes and try bonus of Section \ref{sec: independent result and try} was proposed based on the conceptual separation of result outcome and try bonus inherent in the most prevalent points system. Clearly, for a scenario such as that faced in the Top14 where try bonus is explicitly dependent on result outcome, a version of the dependent models presented in Section \ref{sec: try dependent on result} would be required. However, for modelling based on the most prevalent points system, the introduction of so many additional structural parameters seems unwarranted compared to the greater interpretability of the independent model of Section \ref{sec: independent result and try}. On the other hand, the more parsimonious model from taking the try bonus of the two teams as independent events requires only one less structural parameter. In work available in the Appendix, it was found to substantially and consistently have weaker predictive ability compared to the opposition-dependent try bonus model of Section \ref{sec: independent result and try} when tested against eight seasons of English Premiership rugby results. While predictive ability is not the primary requirement of the model, it was in this case considered a suitable arbiter, and so the opposition-dependent try bonus model of Section \ref{sec: independent result and try} is preferred.
Both the offensive-defensive model and the team specific home advantage model require an additional parameter for every team. There may be scenarios where this is desirable but given the sparse nature of fixtures in the Daily Mail Trophy they do not seem to be justified here, and so the combination of a single strength parameter for each team and a single home advantage parameter is chosen for the model in this case.
\subsubsection{A summary of the model} \label{sec: Model summary}
For a match where $i$ is the home team, and $j$ the away team, the model for the result outcome may be expressed as
\begin{align*}
P(\text{team $i$ beats team $j$ by wide margin}) &\propto \kappa^4\pi_i^4\\
P(\text{team $i$ beats team $j$ by narrow margin}) &\propto \rho_n\kappa^3\pi_i^4\pi_j\\
P(\text{team $i$ draws with team $j$}) &\propto \rho_d\pi_i^2\pi_j^2\\
P(\text{team $j$ beats team $i$ by narrow margin}) &\propto \frac{\rho_n\pi_i\pi_j^4}{\kappa^3}\\
P(\text{team $j$ beats team $i$ by wide margin}) &\propto \frac{\pi_j^4}{\kappa^4} \quad ,
\end{align*}
and for the try bonus point as
\begin{align*}
P(\text{team $i$ and team $j$ both gain try bonus point}) &\propto \tau_b \pi_i \pi_j\\
P(\text{only team $i$ gains try bonus point}) &\propto \kappa \pi_i\\
P(\text{only team $j$ gains try bonus point}) &\propto \frac{\pi_j}{\kappa}\\
P(\text{neither team gains try bonus point}) &\propto \tau_z \quad ,
\end{align*}
where $\kappa$ is the home advantage parameter, and $\rho_n$ and $\rho_d$ are structural parameters related to the propensity for narrow or drawn result outcomes respectively as before.
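To make the normalisation explicit, a short Python sketch of the implied outcome probabilities is given below; function and variable names are illustrative and the parameter estimation itself is described in Section \ref{sec: Estimation}.
\begin{verbatim}
# Sketch of the normalised outcome probabilities for home team i and
# away team j under the preferred model (illustrative names only).
def result_probs(pi_i, pi_j, kappa, rho_n, rho_d):
    w = {"home wide win":   kappa**4 * pi_i**4,
         "home narrow win": rho_n * kappa**3 * pi_i**4 * pi_j,
         "draw":            rho_d * pi_i**2 * pi_j**2,
         "away narrow win": rho_n * pi_i * pi_j**4 / kappa**3,
         "away wide win":   pi_j**4 / kappa**4}
    total = sum(w.values())
    return {k: v / total for k, v in w.items()}

def try_bonus_probs(pi_i, pi_j, kappa, tau_b, tau_z):
    w = {"both bonus": tau_b * pi_i * pi_j,
         "home bonus": kappa * pi_i,
         "away bonus": pi_j / kappa,
         "zero bonus": tau_z}
    total = sum(w.values())
    return {k: v / total for k, v in w.items()}
\end{verbatim}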
In order to express the likelihood, additional notation is required. From now on, the paired $ij$ notation will indicate the ordered pair where $i$ is the home team and $j$ the away team, unless explicitly stated otherwise. Let the frequency of each result outcome be represented as follows:
\begin{description}[noitemsep]
\item {\makebox[3cm]{\(r^{ij}_{4,0}\)\hfill} home win by wide margin}
\item {\makebox[3cm]{\(r^{ij}_{4,1}\)\hfill} home win by narrow margin}
\item {\makebox[3cm]{\(r^{ij}_{2,2}\) \hfill} draw}
\item {\makebox[3cm]{\(r^{ij}_{1,4}\)\hfill} away win by narrow margin}
\item {\makebox[3cm]{\(r^{ij}_{0,4}\)\hfill} away win by wide margin}
\end{description}
and the frequency of each try outcome as
\begin{description}[noitemsep]
\item {\makebox[3cm]{\(t^{ij}_{1,1}\)\hfill} both try bonus}
\item {\makebox[3cm]{\(t^{ij}_{1,0}\)\hfill} home try bonus only}
\item {\makebox[3cm]{\(t^{ij}_{0,1}\)\hfill} away try bonus only}
\item {\makebox[3cm]{\(t^{ij}_{0,0}\)\hfill} zero try bonus}
\end{description}
Then define the number of points gained by team $i$,
\(p_i = \displaystyle\sum_{j}4(r^{ij}_{4,0}+r^{ij}_{4,1}+r^{ji}_{0,4}+r^{ji}_{1,4}) + 2(r^{ij}_{2,2} + r^{ji}_{2,2}) + (r^{ji}_{4,1} + r^{ij}_{1,4}) + (t^{ij}_{1,1}+t^{ij}_{1,0}+t^{ji}_{1,1}+t^{ji}_{0,1}),
\)
and let \(n = \displaystyle\sum_{i}\sum_{j} (r^{ij}_{4,1} + r^{ij}_{1,4})\) be the total number of narrow wins, \(d = \displaystyle\sum_{i}\sum_{j} (r^{ij}_{2,2} + r^{ji}_{2,2})\) the total number of draws, \(b = \displaystyle\sum_{i}\sum_{j} (t^{ij}_{1,1} + t^{ji}_{1,1})\) the total number of matches where both teams gain try bonus points, \(z = \displaystyle\sum_{i}\sum_{j} (t^{ij}_{0,0} + t^{ji}_{0,0})\) the total number where zero teams gain a try bonus, and $h = \displaystyle\sum_{i}\sum_{j}\big(4(r^{ij}_{4,0}-r^{ij}_{0,4})+3(r^{ij}_{4,1}-r^{ij}_{1,4}) + (t^{ij}_{1,0}-t^{ij}_{0,1})\big)$ be the difference between points scored by home teams and away teams. Then the likelihood can be expressed as
\[
L(\boldsymbol{\pi},\rho_n,\rho_d,\tau_b,\tau_z,\kappa\ \mid R, T) \propto \rho_n^n\rho_d^d\tau_b^{b}\tau_z^z\kappa^h\displaystyle\prod_{i=1}^{m}\pi_i^{p_i} \quad ,
\]
where $R,T$ are the information from the result and try outcomes respectively. It is therefore the case that the statistic \((\boldsymbol{p},n,d,b,z,h)\) is a sufficient statistic for \((\boldsymbol{\pi},\rho_n,\rho_d,\tau_b,\tau_z,\kappa)\).
This gives a log-likelihood, up to a constant term, of
\begin{align*}
\log L(\boldsymbol{\pi},\rho_n,\rho_d,\tau_b,\tau_z, \kappa|R,T) &= n\log\rho_n+d\log\rho_d+b\log\tau_b+z\log\tau_z+h\log\kappa+\displaystyle\sum_{i=1}^{m}p_i\log \pi_i.
\end{align*}
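The sufficient statistic is straightforward to accumulate from match records; a Python sketch is given below, where the record format is an illustrative assumption and the estimation itself proceeds via the log-linear representation of Section \ref{sec: Estimation}.
\begin{verbatim}
# Sketch of accumulating the sufficient statistic (p, n, d, b, z, h).
# Assumed record format: home/away team, result points awarded to the
# home and away sides (4-0, 4-1, 2-2, 1-4 or 0-4), and try-bonus flags.
from collections import defaultdict

def sufficient_statistics(matches):
    p = defaultdict(int)     # league points per team, including bonuses
    n = d = b = z = h = 0
    for m in matches:
        i, j = m["home"], m["away"]
        home_pts, away_pts = m["result"]    # e.g. (4, 1) = home narrow win
        ti, tj = m["home_try_bonus"], m["away_try_bonus"]
        p[i] += home_pts + ti
        p[j] += away_pts + tj
        n += (home_pts, away_pts) in {(4, 1), (1, 4)}   # narrow wins
        d += (home_pts, away_pts) == (2, 2)             # draws
        b += ti and tj                                  # both try bonuses
        z += (not ti) and (not tj)                      # zero try bonuses
        h += (home_pts + ti) - (away_pts + tj)          # home minus away
    return dict(p), n, d, b, z, h
\end{verbatim}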
\section{Estimation and parametrization} \label{sec: Estimation}
\subsection{Log-linear representation}
As the form of the log-likelihood suggests, and following \citet{fienberg1979log}, the estimation of the parameters may be simplified by using a log-linear model. Let \(\theta_{ijkl}\) denote the observed count for the number of matches with home team $i$, away team $j$, result outcome $k$, and try bonus outcome $l$. Furthermore let \(\mu_{ijkl}\) be the expected value corresponding to \(\theta_{ijkl}\). The log-linear version of the model can then be written as
\[
\log \mu_{ijkl} = \theta_{ij} + \theta_{ijk\cdot} + \theta_{ij\cdot l} \quad ,
\]
where \(\theta_{ij}\) is a normalisation parameter, and \(\theta_{ijk\cdot}\) and \(\theta_{ij\cdot l}\) represent those parts due to the result outcome and try outcome respectively. That is
\newcommand*{\LongestName}{\ensuremath{\theta_{ij\cdot l}}}%
\newcommand*{\LongestValue}{\ensuremath{4\delta_{i} + \delta_{j} + \beta_{n} + 3\eta}}%
\newcommand*{\LongestText}{if both home and away try bonuses}%
\newlength{\LargestNameSize}%
\newlength{\LargestValueSize}%
\newlength{\LargestTextSize}%
\settowidth{\LargestNameSize}{\LongestName}%
\settowidth{\LargestValueSize}{\LongestValue}%
\settowidth{\LargestTextSize}{\LongestText}%
\newcommand*{\MakeBoxName}[1]{{\makebox[\LargestNameSize][r]{\ensuremath{#1}}}}%
\newcommand*{\MakeBoxValue}[1]{\ensuremath{\makebox[\LargestValueSize][l]{\ensuremath{#1}}}}%
\newcommand*{\MakeBoxText}[1]{\makebox[\LargestTextSize][l]{#1}}%
\renewcommand{\labelitemi}{}
\begin{itemize}
\item
\begin{equation*}
\MakeBoxName{\theta_{ijk\cdot}} = \left\{
\begin{array}{l l}
\MakeBoxValue{4\alpha_{i}+ 4\eta} & \MakeBoxText{if home win by wide margin} \\
\MakeBoxValue{4\alpha_{i} + \alpha_{j}+ \beta_{n} + 3\eta} & \MakeBoxText{if home win by narrow margin} \\
\MakeBoxValue{2\alpha_{i} + 2\alpha_{j} + \beta_{d}} & \MakeBoxText{if draw} \\
\MakeBoxValue{\alpha_{i} + 4\alpha_{j} + \beta_{n} - 3\eta} & \MakeBoxText{if away win by narrow margin} \\
\MakeBoxValue{4\alpha_{j} - 4\eta} & \MakeBoxText{if away win by wide margin} \\
\end{array} \right.
\end{equation*}
\item
\begin{equation*}
\MakeBoxName{\theta_{ij\cdot l}} = \left\{
\begin{array}{l l}
\MakeBoxValue{\alpha_{i} + \alpha_{j} + \gamma_{bb}} & \MakeBoxText{if both home and away try bonuses} \\
\MakeBoxValue{\alpha_{i} + \eta} & \MakeBoxText{if home try bonus only} \\
\MakeBoxValue{\alpha_{j} - \eta} & \MakeBoxText{if away try bonus only} \\
\MakeBoxValue{\gamma_{zb}} & \MakeBoxText{if no try bonus for either side} \\
\end{array} \right.
\end{equation*}
\end{itemize}
where \(\pi_i=\exp(\alpha_i)\), \(\rho_n =\exp(\beta_n)\), \(\rho_d=\exp(\beta_d)\), \(\tau_b=\exp(\gamma_{bb})\), \(\tau_z=\exp(\gamma_{zb})\), \(\kappa=\exp(\eta)\).
The gnm package in R \citep{turner2007gnm} is used to give maximum likelihood estimates for \((\boldsymbol{\alpha}, \beta_n, \beta_d, \gamma_{b}, \gamma_{z}, \eta)\) and thus for our required parameter set \((\boldsymbol{\pi},\rho_n,\rho_d,\tau_b,\tau_z,\kappa)\). An advantage of gnm for this purpose is that it facilitates efficient elimination of the `nuisance' parameters $\theta_{ij}$ that are present in this log-linear representation.
If modelling the try bonus dependent on the result outcome then \(\theta_{ijkl}\) would not be separated into the independent parts \(\theta_{ijk\cdot}\) and \(\theta_{ij\cdot l}\), and \(\theta_{ijkl}\) would need to be specified for each result-try outcome combination. This would be the case for example in modelling the Top14 tournament. There would be some simplification in that case however, given that, conditional on the result outcome there are only two try bonus outcomes, namely, winning team gains try bonus, and winning team does not gain try bonus.
\subsection{A more intuitive measure}
Once the parameters have been estimated, they can be used to compute the outcome probabilities. This allows for a calculation of the projected points per match for team $i$, PPPM$_i$, by averaging the expected points per match were team $i$ to play each of the other teams in the tournament twice, once at home and once away:
\[
\text{PPPM}_i = \frac{1}{2(m-1)}\displaystyle\sum_{j\neq i}\sum_{a,b}\big(ap^{ij}_{a,b}+bp^{ji}_{a,b}\big) \quad ,
\]
where $p^{ij}_{a,b}$ now denotes specifically the probability that $i$ as the home team gains $a$ points and $j$ as the away team gains $b$ points.
It may readily be shown that the derivative of PPPM$_i$ with respect to $\pi_i$ is strictly positive so a team ranked higher based on strength \(\pi_i\) will also be ranked higher based on projected points per match PPPM\(_i\) and vice versa. Thus PPPM\(_i\) may be used as an alternative, more intuitive, rating.
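A sketch of this calculation is given below; it assumes a helper expected\_points(i, j), introduced here purely for illustration, that returns the expected league points of the home team $i$ and the away team $j$ under the fitted model.
\begin{verbatim}
# Sketch of the projected points per match (PPPM) calculation.
# expected_points(i, j) is a hypothetical helper returning the expected
# league points of home team i and away team j under the fitted model.
def pppm(i, teams, expected_points):
    total = 0.0
    for j in teams:
        if j == i:
            continue
        home_pts, _ = expected_points(i, j)   # i at home to j
        _, away_pts = expected_points(j, i)   # i away to j
        total += home_pts + away_pts
    return total / (2 * (len(teams) - 1))
\end{verbatim}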
\subsection{Adding a prior}
\label{sec: Prior}
One potential criticism of the model proposed so far is that it gives no additional credit to a team that has achieved their results against a large number of opponents as compared to a team that has played only a small number. This is an intuitive idea in line with those discussed by \citet{efron1977stein} in the context of shrinkage with respect to strength evaluation in sport.
An obvious way to address such a concern in the context of the model considered in this paper is to apply a prior distribution to the team strength parameters. According to \citet{schlobotnik2018KRACH}, this is an idea considered by Butler in the development of the KRACH model. In some scenarios, one might consider applying asymmetric priors based on, for example, previous seasons' results. This may be appropriate if one were seeking to use the model to predict outcomes, for example. Even then, given the large variation in team strength that can exist from one season to the next in, for example, a schools environment, where there is enforced turnover of players, then the use of a strong asymmetric prior may not be advisable. In the context of computing official rankings, it would seem more reasonable as a matter of fairness to instead apply a symmetric prior so that rating is based solely on the current season's results.
This may be achieved through the consideration of a dummy `team $0$', against which each team plays two notional matches with binary outcome. From one match they `win' and gain a point and from the other they `lose' and gain nothing. Recalling that \(p_i\) represents the total points gained by team $i$, this adds the same value to each team's points. The influence of this may then be controlled by weighting this prior. As the prior weight increases, the proportion of \(p_i\) due to the prior increases.
Including a prior has two main advantages in this setting. One is that it ensures that the set of teams is connected so that a ranking may be produced after even a small number of matches. The second is that it provides one method of ensuring that there is a finite mean for the team strength parameters, which in turn enables the reinterpretation of the structural parameters as the more intuitive probabilities that were originally introduced in Section \ref{sec: model alternatives}.
In the three scenarios of varying schedule strength highlighted in Section \ref{sec: Intro} that related to professional club teams, the ranking is unlikely to be sensitive to the choice of prior weight, since at any given point in the season teams are likely to have played a similar number of matches or to have played sufficiently many such that the prior will not be a large factor in discriminating between teams. Indeed when estimating rankings mid-season for a round robin tournament there may be a preference not to include a prior so that the estimation of PPPM$_i$ is in line with the actual end-of-season PPPM$_i$, without any adjustments being required. In the context of schools ranking, and the Daily Mail Trophy in particular, this is not the case, with teams playing between five and thirteen matches as part of the tournament in any given season. One could consider selecting the weight of the prior based on how accurately early-season PPPM$_i$ using different prior weights predicts end-of-season PPPM$_i$. However there are some practical challenges to this that are discussed further in Section \ref{sec: DMT Model calibration}. Perhaps more fundamentally however the determination of the weight of the prior to be used may be argued to not be a statistical one but rather one of fairness. Its effect is to favour either teams with limited but proportionately better records or teams with longer but proportionately worse records, for example should a 5-0 record (five wins and no losses) be preferred to a 9-1 record against equivalent opposition or a 6-1 record preferred to a 10-2 record? This is a matter for tournament stakeholders and will be discussed further in the context of the Daily Mail Trophy in Section \ref{sec: DMT}.
\subsection{Mean strength}
Choosing to constrain the team strength parameters by ascribing a mean strength of one is desirable as it allows for an intuitive meaning to be asserted from the structural parameters in the model. This could be done in a number of ways, two of which are discussed here.
One way would be to fit the model with no constraint and afterwards apply a scaling factor to achieve an arithmetic mean of 1. That is let \(\mu\) be the arithmetic mean of the abilities \(\pi_i\) derived from the model
\[
\mu = \frac{1}{m}\displaystyle\sum_{i}^{m} \pi_{i} \quad .
\]
Then by setting \(\pi'_i = \pi_i / \mu\) a mean team strength of 1 for the \(\pi'_i\) is ensured.
Alternatively, we might motivate a different choice of mean by considering the strength of the prior. Consider the projected points per match for a dummy `team \(0\)' that achieves one `win' and one `loss' against each other team in the tournament, as described in Section \ref{sec: Prior}. If zero points are awarded for a `loss', one point, without loss of generality, is awarded for a `win', and there are assumed to be no home advantage and no bonuses, then
\[
\text{PPPM}_0 = \frac{1}{m}\displaystyle\sum_{i=1}^{m}\frac{\pi_0}{\pi_0+\pi_i} \quad .
\]
The strength of team 0, \(\pi_0\), may be selected to take any value, since it is not a real participant in the tournament and so it may be set arbitrarily to \(\pi_0=1\). Intuitively since it has an equal winning and losing record against every team one might expect it to be the mean team and therefore have a strength of one. More formally we are setting
\begin{align*}
\frac{1}{2} &= \frac{1}{m}\displaystyle\sum_{i=1}^{m}\frac{1}{1+\pi_i}=\frac{1}{m}\displaystyle\sum_{i=1}^{m}\frac{\pi_i}{1+\pi_i} \quad ,
\end{align*}
and so rearranging gives
\[
1 = \frac{1}{m}\displaystyle\sum_{i=1}^{m}\frac{2 \pi_i}{1+\pi_i} \quad ,
\]
and by defining a generalised mean as the function on the right hand side of this equation the required mean of one for the team strength parameters is returned.
While the prior has been used here to give this generalised mean an intuitive interpretation, it may be applied even without choosing to use a prior. As such it could be particularly beneficial in the context of a tournament such as the Daily Mail Trophy, because it is quite possible that a team will have achieved full points and so the estimated team strength parameter $\pi_i$ may be infinite. If this were the case then it would not be possible to achieve a mean of 1 using, for example, an arithmetic mean. This in turn would mean that some of the structural parameters would be undefined also and so one could no longer make the intuitive interpretations around propensity for draws or narrow results based on those structural parameters. The generalised mean defined here, on the other hand, is always finite and is therefore used in the analysis below.
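As an illustration, and in the same spirit as the post-fit scaling described above, the scale factor giving the generalised mean a value of one can be found numerically with a one-dimensional root finder; the sketch below assumes finite, positive fitted strengths and leaves any corresponding adjustment of the structural parameters implicit.
\begin{verbatim}
# Sketch: rescale fitted strengths so that the generalised mean equals one.
import numpy as np
from scipy.optimize import brentq

def rescale_to_generalised_mean_one(pi):
    pi = np.asarray(pi, dtype=float)
    # the generalised mean of c*pi is increasing in c, so a root exists
    f = lambda c: np.mean(2 * c * pi / (1 + c * pi)) - 1.0
    c = brentq(f, 1e-12, 1e12)
    return c * pi
\end{verbatim}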
\section{The Daily Mail Trophy} \label{sec: DMT}
\subsection{Tournament format} \label{sec: tournament format}
The Daily Mail Trophy is a league-based schools tournament that has existed since 2013, competed for by the 1$^{\text{st}}$ XVs of participating schools. Participation is based on entering and playing at least five other participating schools. Since fixtures are scheduled on a bilateral basis, there is a large amount of variability in schedule strength. In the 2017/18 season, the tournament consisted of 102 school teams, each playing between five and twelve other participating teams, in a total of 436 matches overall. There is an existing ranking method that has evolved over time and which seeks to adjust for schedule strength by means of awarding additional points for playing stronger teams as determined by their position in the previous season's tournament. Further details of this and of the tournament can be found at the tournament website (\url{https://www.schoolsrugby.co.uk/dailymailtrophy.aspx}).
\subsection{Data summary} \label{sec: Data summary}
The data for the Daily Mail Trophy have been kindly supplied by \url{www.schoolsrugby.co.uk}, the organisation that administers the competition. The match results are entered by the schools themselves. The score is entered, and this is used to suggest a number of tries for each team which can then be amended. These inputs are not subject to any formal verification. This might suggest that data quality, especially as it relates to number of tries, may not be reliable. However the league tables are looked at keenly by players, coaches and parents, and corrections made where errors are found, and so data quality, especially at the top end of the table, is thought to be good. This analysis uses results from the three seasons 2015/16 to 2017/18. Over this period there were 24 examples of inconsistencies or incompleteness found in the results that required assumptions to be made. All assumptions were checked with SOCS. Full details of these are given in the Appendix.
The results are summarised in Figure \ref{fig:Outcome distribution}. In order to provide a comparison, they are plotted above those for the English Premiership for the same season. In comparison to the English Premiership result outcomes, there is a reduced home advantage and a reduced prevalence of narrow results, though the overall pattern of a higher proportion of wide than narrow results, and a low prevalence of draws is maintained. With respect to the try bonus outcomes, the notable difference is the higher prevalence of both teams gaining a try bonus in the Premiership.
\begin{figure}[htbp]
\centering
Daily Mail Trophy \\
\subfloat{\includegraphics[width=0.3\linewidth]{DMT18.pdf}}
\subfloat{\includegraphics[width=0.3\linewidth]{DMT17.pdf}}
\subfloat{\includegraphics[width=0.3\linewidth]{DMT16.pdf}} \\
\vspace{1cm}
English Premiership \\
\subfloat{\includegraphics[width=0.3\linewidth]{Prem18.pdf}}
\subfloat{\includegraphics[width=0.3\linewidth]{Prem17.pdf}}
\subfloat{\includegraphics[width=0.3\linewidth]{Prem16.pdf}}
\caption{Distribution of outcomes for Daily Mail Trophy and English Premiership 2015/16 - 2017/18. Result outcome labels: HW - home wide win, HN - home narrow win, DD - Draw, AN -away narrow win, AW - away wide win; Shades indicate try bonus outcome, becoming lighter along scale: both bonus, home bonus, away bonus, zero bonus}
\label{fig:Outcome distribution}
\end{figure}
\subsection{Model calibration}
\label{sec: DMT Model calibration}
In the context of this model, calibration consists of two parts: a determination of the values of the structural parameters (the model parameters not related to a particular team), and a determination of the weight of the prior. One approach to the structural parameters would be to allow them to be determined each season. However, data from proximate seasons are clearly relevant to an assessment of their value in the current season. For example, one would not expect the probability of a draw between two equally matched teams to change appreciably from season to season, so data on that should be aggregated across seasons in order to produce a more reliable estimate. As can be seen from Table \ref{tbl: Params DMT}, the range for each parameter was not large, and varying the parameters within that range did not materially affect ratings under the model. The structural parameters are therefore fixed at the mean of the three seasons' estimated values. An intuitive way to interpret these is to calculate, based on these parameters, the probability of specific outcomes for a match between two teams of mean strength. For example, it can then be determined that under the model, in such a match, the probability of a wide result is 65\%, the probability of both teams gaining a try bonus only 1\%, and, perhaps most notably, that the home team is 2.2 times as likely to win as the away team.
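These figures can be reproduced directly from the mean parameter values in Table \ref{tbl: Params DMT}; a short numerical check in Python, for illustration only, is:
\begin{verbatim}
# Check of the quoted figures using the mean structural parameters and
# two teams of mean strength (pi_i = pi_j = 1).
rho_n, rho_d, tau_b, tau_z, kappa = 0.448, 0.212, 0.042, 2.801, 1.113

res = [kappa**4, rho_n * kappa**3, rho_d, rho_n / kappa**3, 1 / kappa**4]
res_total = sum(res)
res = [v / res_total for v in res]    # HW, HN, DD, AN, AW
print(res[0] + res[4])                          # wide result: approx 0.65
print((res[0] + res[1]) / (res[3] + res[4]))    # home vs away win: approx 2.2

tb = [tau_b, kappa, 1 / kappa, tau_z]
tb_total = sum(tb)
tb = [v / tb_total for v in tb]       # both, home, away, zero try bonus
print(tb[0])                                    # both try bonus: approx 0.01
\end{verbatim}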
\begin{table}
\centering
\resizebox{5.5cm}{!}{
\begin{tabular}{c|ccc|c}
& 2017/18 & 2016/17 & 2015/16 & Mean\\
\hline
$\rho_n$ & 0.384 & 0.498 & 0.463 & 0.448\\
$\rho_d$ & 0.210 & 0.184 & 0.243 & 0.212\\
$\tau_b$ & 0.042 & 0.035 & 0.050 & 0.042\\
$\tau_z$ & 2.489 & 3.075 & 2.838 & 2.801\\
$\kappa$ & 1.049 & 1.190 & 1.100 & 1.113
\end{tabular}}
\caption{Structural parameter values for Daily Mail Trophy 2015/16 - 2017/18}
\label{tbl: Params DMT}
\end{table}
As previously mentioned, there is limited scope with the Daily Mail Trophy data to compare the prior weights based on their predictive capabilities, since it is not a round robin format. One could look at an earlier state in the tournament and compare to a later state where more information has become available, but such an approach is limited by the number of matches that teams play (many play only five in total), by only having three seasons' worth of data on which to base it, and by the fact that even in the later, more informed state the estimation of the team strength will be defined by the same model. Therefore no analysis of this kind is performed.
As discussed in section \ref{sec: Prior} the main aim of the use of a non-negligible prior is to reasonably account for the greater certainty one can have on the estimate of a team's strength with the greater number of matches played. In this context, details of the ranking produced using various prior weights are presented and compared to the relevant team's record (played, won, drawn, lost).
\begin{figure}
\centering
\subfloat{\includegraphics[width=0.4\linewidth]{PPPM_Priors.pdf}}
\subfloat{\includegraphics[width=0.6\linewidth]{Rank_Priors_DMT2018.pdf}}
\caption{Top10 PPPM and Rank variation with prior weight for Daily Mail Trophy 2017/18 }
\label{fig: DMT_Priors 2017/18}
\end{figure}
\begin{table}
\centering
\resizebox{9cm}{!}{
\begin{tabular}{|l|ccccc|}
\hline
School & P & W & D & L & LPPM\\
\hline
Kingswood & 4 & 4 & 0 & 0 & 5.00\\
Sedbergh & 11 & 11 & 0 & 0 & 4.91\\
Reed's & 10 & 10 & 0 & 0 & 4.80\\
Harrow & 8 & 8 & 0 & 0 & 4.50\\
Cranleigh & 8 & 8 & 0 & 0 & 4.63\\
Northampton & 7 & 7 & 0 & 0 & 4.71\\
St Peter's, York & 7 & 7 & 0 & 0 & 4.43\\
Wellington College & 12 & 10 & 0 & 2 & 4.08\\
Haileybury & 7 & 6 & 0 & 1 & 4.29\\
Queen Elizabeth Grammar & 7 & 6 & 0 & 1 & 4.14\\
\hline
\end{tabular}}
\caption{Playing record for Top10, for Daily Mail Trophy 2017/18. LPPM - league points per match - the total number of points gained, including bonuses, divided by number of matches; P - Played, W - Win, D - draw, L - loss}
\label{tbl: DMT18 PWDL}
\end{table}
Looking at Figure \ref{fig: DMT_Priors 2017/18} and comparing to the information in Table \ref{tbl: DMT18 PWDL}, it can be seen that as the prior weight is increased, in general, teams who have played fewer matches move lower, most notably Kingswood, and those who have played more move higher, most notably Sedbergh. This is not uniformly true: for example, St Peter's move higher despite having played relatively few matches and having a lower league points per match than either Kingswood or Northampton, whom they overtake when the prior weight is set to 8. Of course, while the general pattern is clear and expected, the question of interest is what absolute size of prior should be chosen. It seems reasonable to state that a team with a 100\% winning record from four matches should not generally be ranked higher than a team with a 100\% winning record from eight matches, assuming their schedule strengths are not notably different. It certainly seems undesirable that all six of the other teams with 100\% winning records should be ranked below Kingswood, which would imply a prior weight of at least 1, and more likely 4 or higher.
Results for the other two seasons are included in the Appendix. Considerations and comparisons in line with those above were made across the three seasons. A reasonable case could be made for prior weights between 2 and 8, and ultimately it is a decision that should be made by the stakeholders of the tournament with regard to their view on the relative merit of a shorter more perfect record as compared to a longer but less perfect record. For the purposes of further analysis here a prior weight of 4 was chosen.
\subsection{Results}
The model may then be used to assess the current ranking method used in the Daily Mail Trophy. As can be seen in Figure \ref{fig:PPPM vs DMT} there is at least broad agreement between the two measures. However this is not a particularly helpful way to look at the quality of the Daily Mail Trophy method, as this agreement can be ascribed largely to the base scoring rule of league points per match, LPPM, which both methods essentially have in common. What is of more interest is the effectiveness of the adjustment made for schedule strength. This is shown in Figure \ref{fig:Points adjustment}.
\begin{figure}
\centering
\subfloat{\includegraphics[width=0.3\linewidth]{PPPMvsDMT_MP_18.pdf}}
\subfloat{\includegraphics[width=0.3\linewidth]{PPPMvsDMT_MP_17.pdf}}
\subfloat{\includegraphics[width=0.3\linewidth]{PPPMvsDMT_MP_16.pdf}}
\caption{Scatterplot of Model PPPM, on x-axis, against Daily Mail Trophy (DMT) ranking measure on y-axis. Darker colours represent higher rank in Daily Mail Trophy. Top three teams as ranked by Daily Mail Trophy labeled.}
\label{fig:PPPM vs DMT}
\end{figure}
\begin{figure}
\centering
\subfloat{\includegraphics[width=0.3\linewidth]{PPPMvsDMT18.pdf}}
\subfloat{\includegraphics[width=0.3\linewidth]{PPPMvsDMT17.pdf}}
\subfloat{\includegraphics[width=0.3\linewidth]{PPPMvsDMT16.pdf}}
\caption{Scatterplot of points adjustment to league points per match. PPPM-LPPM, on x-axis. Adjustment due to Daily Mail Trophy (DMT) method, DMT-LPPM, on y-axis. Darker colours represent higher rank in Daily Mail Trophy ranking. Top three teams as ranked by Daily Mail Trophy labeled.}
\label{fig:Points adjustment}
\end{figure}
Here clear differences can be seen and there is a low correlation between the measures. Not surprisingly, some of the teams who perform well in the Daily Mail Trophy rankings seem to be those that are benefiting most from these differences, with Wellington College in particular, winner of the Daily Mail Trophy in two of the three seasons, being a serial outlier in this regard. While this is concerning in its own right, the requirements on the measure relate almost solely to the ranking that it produces, rather than the rating. Figure \ref{fig:PPPM vs DMT Rank} looks at that.
\begin{figure}
\centering
\subfloat{\includegraphics[width=0.3\linewidth]{PPPMvsDMT_Rank_18.pdf}}
\subfloat{\includegraphics[width=0.3\linewidth]{PPPMvsDMT_Rank_17.pdf}}
\subfloat{\includegraphics[width=0.3\linewidth]{PPPMvsDMT_Rank_16.pdf}}
\caption{Scatterplot of PPPM rank on x-axis, against Daily Mail Trophy (DMT) rank on y-axis. Darker colours represent higher rank in Daily Mail Trophy.}
\label{fig:PPPM vs DMT Rank}
\end{figure}
Here considerable differences are seen between the rankings produced by the two different methods. In order to focus more clearly on this aspect, the difference in ranking under the two methods is plotted against the Daily Mail Trophy rank in Figure \ref{fig:PPPM vs DMT Rank Change}.
\begin{figure}
\centering
\subfloat{\includegraphics[width=0.3\linewidth]{PPPMvsDMT_RankChange_18.pdf}}
\subfloat{\includegraphics[width=0.3\linewidth]{PPPMvsDMT_RankChange_17.pdf}}
\subfloat{\includegraphics[width=0.3\linewidth]{PPPMvsDMT_RankChange_16.pdf}}
\caption{Scatterplot of Daily Mail Trophy rank on x-axis, against the gain in rank from Daily Mail Trophy method vs PPPM. Darker colours represent higher rank in Daily Mail Trophy.}
\label{fig:PPPM vs DMT Rank Change}
\end{figure}
If the top (and bottom) quintile of the Daily Mail Trophy ranking is considered, then a disproportionately positive (and negative) impact from the Daily Mail Trophy method is seen. What is perhaps more notable is the size of some of these rank differences, up to 28 places in a tournament of approximately one hundred teams. Looked at across the population, the mean absolute difference in rank is approximately eight places. Looking at the typical difference in points between two teams eight places apart, this can be observed to be worth approximately 0.4 points per match.
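These summary statistics reduce to elementary operations on the two rank vectors. A minimal sketch of the calculation (in Python; this is illustrative rather than the code used for the analysis, and assumes the two rankings are held as dictionaries mapping school to rank position) is:
\begin{verbatim}
import numpy as np

def rank_comparison(dmt_rank, model_rank):
    # Teams ranked under both methods.
    teams = sorted(set(dmt_rank) & set(model_rank))
    diffs = np.array([dmt_rank[t] - model_rank[t] for t in teams])
    mean_abs_diff = np.abs(diffs).mean()
    # Gain in rank from the DMT method relative to the model
    # (positive = placed higher, i.e. a smaller rank number, under DMT).
    gain = {t: model_rank[t] - dmt_rank[t] for t in teams}
    q = max(1, len(teams) // 5)
    by_dmt = sorted(teams, key=lambda t: dmt_rank[t])
    top_gain = np.mean([gain[t] for t in by_dmt[:q]])
    bottom_gain = np.mean([gain[t] for t in by_dmt[-q:]])
    return mean_abs_diff, top_gain, bottom_gain
\end{verbatim}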
It seems reasonable therefore to say that over the general population of teams there is scope for improvement in the Daily Mail Trophy method in its approach to adjusting for schedule strength.
Given the nature of a tournament where there is a winner but no relegation, there is a natural focus on the top end of the ranking. Comparisons of the rankings for the top ten teams under the current Daily Mail Trophy method and the model presented here are shown in Tables \ref{tbl: Top10 2017/18}, \ref{tbl: Top10 2016/17}, and \ref{tbl: Top10 2015/16}.
\begin{table}
\resizebox{!}{2.4cm}{
\begin{tabular}{|l|cc|cc|}
\hline
~ & DMT & ~ & PPPM & ~\\
School & Rank & DMT & Rank & PPPM \\
\hline
Sedbergh & 1 & 7.41 & 1 & 4.65 \\
Wellington College & 2 & 7.18 & 7 & 4.18 \\
Cranleigh & 3 & 6.33 & 4 & 4.32 \\
Harrow & 4 & 6.20 & 3 & 4.33 \\
Cheltenham College & 5 & 6.16 & 8 & 4.07 \\
St Peter's, York & 6 & 5.83 & 6 & 4.19 \\
Brighton College & 7 & 5.63 & 20 & 3.59 \\
Reed's & 8 & 5.50 & 2 & 4.38 \\
Clifton College & 8 & 5.50 & 16 & 3.72 \\
Haileybury & 10 & 5.49 & 10 & 4.02 \\
\hline
\end{tabular}}
\quad
\resizebox{!}{2.4cm}{
\begin{tabular}{|l|cc|cc|}
\hline
~ & PPPM & ~ & DMT & ~\\
School & Rank & PPPM & Rank & DMT \\
\hline
Sedbergh & 1 & 4.65 & 1 & 7.41 \\
Reed's & 2 & 4.38 & 8 & 5.50 \\
Harrow & 3 & 4.33 & 4 & 6.20 \\
Cranleigh & 4 & 4.32 & 3 & 6.33 \\
Kingswood & 5 & 4.31 & NR & NR \\
St Peter's, York & 6 & 4.19 & 6 & 5.83 \\
Wellington College & 7 & 4.18 & 2 & 7.18 \\
Cheltenham College & 8 & 4.07 & 5 & 6.16 \\
Northampton & 9 & 4.05 & 11 & 5.41 \\
Haileybury & 10 & 4.02 & 10 & 5.49 \\
\hline
\end{tabular}}
\caption{2017/18: Top 10 by Daily Mail Trophy method and PPPM. NR - not ranked due to requirement to play at least five matches}
\label{tbl: Top10 2017/18}
\end{table}
\begin{table}
\resizebox{!}{2.35cm}{
\begin{tabular}{|l|cc|cc|}
\hline
~ & DMT & ~ & PPPM & ~\\
School & Rank & DMT & Rank & PPPM \\
\hline
Wellington College & 1 & 7.22 & 3 & 4.37 \\
Sedbergh & 2 & 6.50 & 2 & 4.43 \\
Harrow & 3 & 6.34 & 6 & 4.22 \\
St Peter's, York & 4 & 6.23 & 8 & 4.06 \\
Kirkham & 5 & 6.15 & 1 & 4.61 \\
Canford & 6 & 6.10 & 9 & 4.02 \\
Clifton College & 7 & 6.00 & 5 & 4.25 \\
Rugby & 8 & 5.96 & 7 & 4.06 \\
Brighton College & 9 & 5.90 & 4 & 4.29 \\
Woodhouse Grove & 10 & 5.81 & 12 & 3.93 \\
\hline
\end{tabular}}
\quad
\resizebox{!}{2.35cm}{
\begin{tabular}{|l|cc|cc|}
\hline
~ & PPPM & ~ & DMT & ~\\
School & Rank & PPPM & Rank & DMT \\
\hline
Kirkham Grammar & 1 & 4.61 & 5 & 6.15 \\
Sedbergh & 2 & 4.43 & 2 & 6.50 \\
Wellington College & 3 & 4.37 & 1 & 7.22 \\
Brighton College & 4 & 4.29 & 9 & 5.90 \\
Clifton College & 5 & 4.25 & 7 & 6.00 \\
Harrow & 6 & 4.22 & 3 & 6.34 \\
Rugby & 7 & 4.06 & 8 & 5.96 \\
St Peter's, York & 8 & 4.06 & 4 & 6.23 \\
Canford & 9 & 4.02 & 6 & 6.10 \\
St John's, Leatherhead & 10 & 4.01 & 14 & 5.39 \\
\hline
\end{tabular}}
\caption{2016/17: Top 10 by Daily Mail Trophy method and PPPM.}
\label{tbl: Top10 2016/17}
\end{table}
\begin{table}
\resizebox{!}{2.27cm}{
\begin{tabular}{|l|cc|cc|}
\hline
~ & DMT & ~ & PPPM & ~\\
School & Rank & DMT & Rank & PPPM \\
\hline
Wellington College & 1 & 6.46 & 7 & 3.73 \\
Kirkham & 2 & 6.44 & 1 & 4.41 \\
Bedford & 3 & 6.35 & 2 & 4.37 \\
Bromsgrove & 4 & 6.21 & 4 & 4.15 \\
Sedbergh & 5 & 6.10 & 5 & 3.99 \\
Woodhouse Grove & 6 & 5.65 & 19 & 3.31 \\
Millfield & 7 & 5.21 & 13 & 3.64 \\
Clifton College & 8 & 5.11 & 8 & 3.73 \\
Solihull & 9 & 5.10 & 11 & 3.67 \\
St Paul's & 9 & 5.10 & 14 & 3.58 \\
\hline
\end{tabular}}
\quad
\resizebox{!}{2.27cm}{
\begin{tabular}{|l|cc|cc|}
\hline
~ & PPPM & ~ & DMT & ~\\
School & Rank & PPPM & Rank & DMT \\
\hline
Kirkham Grammar & 1 & 4.41 & 2 & 6.44 \\
Bedford & 2 & 4.37 & 3 & 6.35 \\
Stockport Grammar & 3 & 4.22 & NR & NR \\
Bromsgrove & 4 & 4.15 & 4 & 6.21 \\
Sedbergh & 5 & 3.99 & 5 & 6.10 \\
Seaford College & 6 & 3.93 & 13 & 5.04 \\
Wellington College & 7 & 3.73 & 1 & 6.46 \\
Clifton College & 8 & 3.73 & 8 & 5.11 \\
Queen Elizabeth Grammar & 9 & 3.69 & 17 & 4.98 \\
Tonbridge & 10 & 3.69 & 18 & 4.94 \\
\hline
\end{tabular}}
\caption{2015/16: Top 10 by Daily Mail Trophy method and PPPM. NR - not ranked due to requirement to play at least five matches}
\label{tbl: Top10 2015/16}
\end{table}
Ignoring teams that are not ranked as part of the Daily Mail Trophy having played fewer than five matches against other participants, the top five always appear within the top ten of the other ranking method, and the top ten always within the top twenty. On the other hand they only agree on the first placed team in one of the three seasons, and in the two seasons where they differ there was no prior weight that would have resulted in the same winner as the Daily Mail Trophy method. In particular, in 2015/16 Wellington College, who were the winners of the tournament, are ranked seventh under the model and were a full 0.68 projected points per match behind the leader.
\section{Concluding Remarks} \label{sec: Concluding Remarks}
In its most general application the model presented here allows for a ready extension of the well-known Bradley-Terry model to a system of pairwise comparisons where each comparison may result in any finite number of scored outcomes. For example, the model could be adapted to a situation where judges are asked to assign pairwise preferences on the seven-category symmetric scale made up of `strongly prefer', `prefer', `mildly prefer', `neutral' etc. if one were prepared to assign score values to each. The maximum entropy derivation provides a principled basis for a family of models. The application of entropy maximisation to motivate these models also helps to clarify the various assumptions and considerations that are essential to each. In the more particular implementation for rugby union the family of models provided a method for assessing teams in situations where schedule strengths vary in a way that is consistent with the points norm of the sport. Within that family, different models may be suitable depending on the try bonus stipulations of the tournament, the density of matches, and the similarity of the number of fixtures played across teams.
In the investigation of the Daily Mail Trophy the model studied here proved to be a useful tool in highlighting concerns about the ranking method that is currently used. It may be tempting to advocate its use directly as a superior method for evaluating performance in that tournament. However a key element that it lacks for a wider audience is transparency, for example as represented in the ability of stakeholders to calculate their rating, to understand the impact of winning or losing in a particular match, and to evaluate what rating differences between themselves and similarly ranked teams mean in terms of how rankings would change given particular results. The strength of the model in accounting for all results in the rating of each team is, in this sense, also a weakness for wider application. But even if transparency of method is seen as a dominating requirement the model may still be useful as a means by which alternative, more transparent methods can be assessed.
\section{Appendix}
\subsection{Maximum entropy derivations}
\subsubsection{Offensive Defensive Strength}
The offensive-defensive strength model assumes independence of the result and try outcomes. The maximum entropy derivation is thus related solely to the try outcome and is linked to the result outcome by the assumption that for each team the overall strength parameter is equal to the product of the offensive and defensive parameters. The same notation may be used as in the general derivation, though in this case $a,b \in \{0,1\}$. Entropy is defined as before as
\begin{equation}
S(p) = -\sum_{i,j}m_{ij}\sum_{a,b}p^{ij}_{a,b}\log p^{ij}_{a,b} \quad ,
\end{equation}
and we have the familiar condition that for each pair of teams the sum of the probabilities of all possible outcomes is 1,
\begin{equation}
\sum_{a,b}p^{ij}_{a,b}=1 \quad \text{for all $i,j$ such that $m_{ij}>0$}.
\end{equation}
But in this case we have two additional retrodictive criteria per team. First that, given the matches played, for each team $i$, the expected number of matches in which a try bonus point was gained is equal to the actual number of matches in which a try bonus point was gained,
\begin{equation}
\sum_{j}m_{ij}\sum_{a,b} ap^{ij}_{a,b} =
\sum_{j}\sum_{a,b} am^{ij}_{a,b}\quad .
\end{equation}
Second that, given the matches played, for each team $i$, the expected number of matches where a try bonus point was not conceded is equal to the actual number of matches where a try bonus point was not conceded,
\begin{equation}
\sum_{j}m_{ij}\sum_{a,b} (1-b)p^{ij}_{a,b} =
\sum_{j}\sum_{a,b} (1-b)m^{ij}_{a,b}\quad .
\end{equation}
Then, for all $i,j$ such that $m_{ij}>0$, the solution satisfies
\begin{equation}
\log p^{ij}_{a,b} = -\lambda_{ij} -a\lambda_i - b\lambda_j -(1-b)\lambda'_i -(1-a)\lambda'_j -1 \quad ,
\end{equation}
which gives us that
\begin{equation}
p^{ij}_{a,b} \propto \omega_i^{a}\omega_j^{b}\delta_{i}^{(1-b)}\delta_{j}^{(1-a)} \quad,
\end{equation}
where $\omega_i = \exp(-\lambda_i)$, $\delta_i = \exp(-\lambda'_i)$, and we take $\pi_i=\omega_i\delta_i$.
\subsubsection{Single parameter home advantage}
In order to identify the home team, let the ordered pair ${ij}$ now denote $i$ as the home team and $j$ as the away team. Then under this amended notation, define entropy as before
\begin{equation}
S(p) = -\sum_{i,j}m_{ij}\sum_{a,b}p^{ij}_{a,b}\log p^{ij}_{a,b} \quad ,
\end{equation}
and we have the familiar condition that for each pair of teams the sum of the probabilities of all possible outcomes is 1,
\begin{equation}
\sum_{a,b}p^{ij}_{a,b}=1 \quad \text{for all $i,j$ such that $m_{ij}>0$}.
\end{equation}
The retrodictive criterion is now altered to reflect the new notation,
\begin{equation}
\sum_{j}\sum_{a,b} \left( m_{ij}\,ap^{ij}_{a,b} + m_{ji}\,bp^{ji}_{a,b}\right) =
\sum_{j}\sum_{a,b} \left(am^{ij}_{a,b} + bm^{ji}_{a,b} \right)\quad .
\end{equation}
But now we also have a condition that says that the expected difference between the number of home points and the number of away points is equal to the actual difference,
\begin{equation}
\sum_{i,j}m_{ij}\sum_{a,b} (a-b)p^{ij}_{a,b} =
\sum_{i,j}\sum_{a,b} (a-b)m^{ij}_{a,b}\quad .
\end{equation}
Then, for all $i,j$ such that $m_{ij}>0$, the solution satisfies
\begin{equation}
\log p^{ij}_{a,b} = -\lambda_{ij} -a\lambda_i - b\lambda_j -(a-b)\lambda_0 -1 \quad ,
\end{equation}
which gives us that
\begin{equation}
p^{ij}_{a,b} \propto \kappa^{(a-b)}\pi_i^a \pi_j^b \quad,
\end{equation}
where $\kappa = \exp(-\lambda_0)$, $\pi_i = \exp(-\lambda_i)$, and the constant of proportionality is $\exp(-\lambda_{ij}-1)$.
\subsubsection{Team specific home advantage}
Using the same notation, define entropy in the now familiar way
\begin{equation}
S(p) = -\sum_{i,j}m_{ij}\sum_{a,b}p^{ij}_{a,b}\log p^{ij}_{a,b} \quad ,
\end{equation}
and we have the familiar condition that for each pair of teams the sum of the probabilities of all possible outcomes is 1,
\begin{equation}
\sum_{a,b}p^{ij}_{a,b}=1 \quad \text{for all $i,j$ such that $m_{ij}>0$}.
\end{equation}
The retrodictive criteria are now split into home and away parts, so that we have that, for all teams, the expected number of home points gained is equal to the actual number of home points gained,
\begin{equation}
\sum_{j}m_{ij}\sum_{a,b} ap^{ij}_{a,b} =
\sum_{j}\sum_{a,b} am^{ij}_{a,b} \quad,
\end{equation}
and that, for all teams, the expected number of away points gained is equal to the actual number of away points gained,
\begin{equation}
\sum_{j}m_{ji}\sum_{a,b} bp^{ji}_{a,b} =
\sum_{j}\sum_{a,b} bm^{ji}_{a,b} \quad.
\end{equation}
Then, for all $i,j$ such that $m_{ij}>0$, the solution satisfies
\begin{equation}
\log p^{ij}_{a,b} = -\lambda_{ij} -a\,{}_H\lambda_i - b\,{}_A\lambda_j -1 \quad ,
\end{equation}
where ${}_H\lambda_i$ and ${}_A\lambda_j$ are the Lagrangian multipliers relating to the home and away criteria respectively. This gives
\begin{equation}
p^{ij}_{a,b} \propto {}_H\pi_i^a {}_A\pi_j^b \quad,
\end{equation}
where the strength parameters, ${}_H\pi_i = \exp(-{}_H\lambda_i)$ and ${}_A\pi_j= \exp(-{}_A\lambda_j)$ denote the home and away strengths of $i$ and $j$ respectively.
\subsection{Data cleaning}
While there were no means to validate the data independently, there were 24 occasions of identifiable self-inconsistencies or incompleteness in the data across the three seasons of interest, 15 of which impacted the result or try outcomes for at least one of the teams involved. The treatment of all of these is described below. They were checked for reasonableness with SOCS, the administrator for the tournament.
\begin{enumerate}
\item Where the score could not have produced the try outcome. Since a try is worth five points in rugby union, the score of any team may not be less than five times their number of tries. If swapping the number of tries recorded for home and away teams produced consistency then this was done. If this did not resolve the issue then the number of tries was adjusted down to the maximum number of tries possible given the score (a sketch of this check is given after the list). The number of incidences of this was: two in 2017/18, five in 2016/17, four in 2015/16. Of those, the number that meant a team's try bonus status changed was just one in 2017/18 and two in 2016/17.
\item Where Venue had been entered as ``tbc'', the Venue was set to Neutral. Five incidences in 2016/17, two in 2015/16.
\item Where matches were entered as a win for one side but score and tries were both given as 0-0. On speaking to SOCS, their speculation was that these may have related to matches where there had been some sort of `gentleman's agreement' e.g. the teams had agreed to deselect certain players (in particular those with representative honours), and the recording of the match was a means of recognising that a fixture had taken place, but not giving it full status. In our analysis, the winning team is awarded four points for a win, the losing team one for a narrow loss, and no try bonus is awarded to either side. There were two such results in 2017/18, and two in 2016/17.
\item Where the try count was blank for one of the two teams, the number of tries was taken to be the maximum number of tries possible given the score. There was one case of this in 2016/17.
\item Where the result outcome (Won, Draw, Loss) did not agree with the score but did agree with the try outcome, but became consistent if the score were reversed, then the score was reversed. One case in 2017/18. This did not impact the analysis.
\end{enumerate}
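A minimal sketch of how the checks in items 1 and 4 might be expressed (in Python; the function is hypothetical and not the code used for this analysis) is:
\begin{verbatim}
def max_tries(score):
    # A try is worth five points, so a score admits at most score // 5 tries.
    return score // 5

def clean_tries(home_score, away_score, home_tries, away_tries):
    def consistent(score, tries):
        return tries is not None and 5 * tries <= score
    if consistent(home_score, home_tries) and consistent(away_score, away_tries):
        return home_tries, away_tries
    # Item 1: swapping the recorded home and away try counts may restore consistency.
    if consistent(home_score, away_tries) and consistent(away_score, home_tries):
        return away_tries, home_tries
    # Items 1 and 4: otherwise cap (or fill) at the maximum possible given the score.
    def cap(score, tries):
        return max_tries(score) if tries is None else min(tries, max_tries(score))
    return cap(home_score, home_tries), cap(away_score, away_tries)
\end{verbatim}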
\subsection{Daily Mail Trophy methodology}
Currently the ranking is based on Merit Points, which are defined as the average number of League Points per match plus Additional Points, awarded in order to adjust for schedule strength.
League Points are awarded, in line with the standard scoring rule for rugby union leagues in the UK, as:
\begin{description}[noitemsep]
\item 4 points for a win
\item 2 points for a draw
\item 0 points for a loss
\item 1 bonus point for losing by less than seven points
\item 1 bonus point for scoring four or more tries
\end{description}
Additional Points in the Daily Mail Trophy are awarded based on the ranking of the current season's opponents in the previous season's tournament:
\begin{description}[noitemsep]
\item {\makebox[3cm]{Rank 1 to 25:\hfill} 0.3}
\item {\makebox[3cm]{Rank 26 to 50:\hfill} 0.2}
\item {\makebox[3cm]{Rank 51 to 75: \hfill} 0.1}
\item {\makebox[3cm]{Otherwise: \hfill} 0}
\end{description}
So, for example, a team with eight fixtures qualifying for the Daily Mail Trophy, with one of those against a top 25 team, three against 26-50th placed teams, and two against 51-75th placed teams, averaging 3.2 League Points per match, would get a Merit Points total of \(3.2 + 1 \times 0.3 + 3\times0.2 + 2 \times 0.1 = 4.3\).
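Since the rule is purely arithmetic it is easily expressed in code; the following short sketch (Python; the helper names are ours and not an official implementation) reproduces the worked example above:
\begin{verbatim}
def additional_points(previous_season_rank):
    # Opponents unranked last season (or ranked below 75) contribute nothing.
    if previous_season_rank is None or previous_season_rank > 75:
        return 0.0
    if previous_season_rank <= 25:
        return 0.3
    if previous_season_rank <= 50:
        return 0.2
    return 0.1

def merit_points(league_points, opponents_previous_ranks):
    # league_points: league points gained in each qualifying fixture.
    lppm = sum(league_points) / len(league_points)
    return lppm + sum(additional_points(r) for r in opponents_previous_ranks)

# Worked example from the text: eight fixtures averaging 3.2 league points per
# match, one opponent in the top 25, three in 26-50 and two in 51-75.
ranks = [10, 30, 40, 45, 60, 70, None, None]
print(round(merit_points([3.2] * 8, ranks), 2))   # 4.3
\end{verbatim}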
\subsection{Further results}
\begin{figure}[htbp!]
\subfloat{\includegraphics[width=0.4\linewidth]{PPPM_Priors_DMT2017.pdf}}
\subfloat{\includegraphics[width=0.6\linewidth]{Rank_Priors_DMT2017.pdf}}
\caption{Top10 PPPM and Rank variation with prior weight for Daily Mail Trophy 2016/17 }
\label{fig: DMT_Priors 2016/17}
\end{figure}
\begin{table}[htbp!]
\centering
\resizebox{9cm}{!}{
\begin{tabular}{|l|ccccc|}
\hline
School & P & W & D & L & LPPM \\
\hline
Kirkham Grammar & 12 & 12 & 0 & 0 & 4.75 \\
Sedbergh & 10 & 9 & 0 & 1 & 4.50 \\
The Manchester Grammar & 4 & 4 & 0 & 0 & 4.75 \\
Brighton College & 8 & 8 & 0 & 0 & 4.50 \\
Wellington College & 12 & 11 & 0 & 1 & 4.42 \\
Harrow & 9 & 8 & 0 & 1 & 4.44 \\
Clifton College & 10 & 9 & 0 & 1 & 4.50 \\
St John's School, Leatherhead & 9 & 7 & 0 & 2 & 3.89 \\
St Peter's, York & 9 & 9 & 0 & 0 & 4.33 \\
Canford & 10 & 9 & 0 & 1 & 4.20 \\
\hline
\end{tabular}}
\caption{Playing record for Top10 as ranked by Model 2, for Daily Mail Trophy 2016/17. LPPM - league points per match - the total number of points gained, including bonuses, divided by number of matches; P - Played, W - Win, D - draw, L - loss}
\label{tbl: DMT17 PWDL}
\end{table}
\begin{figure}[htbp!]
\subfloat{\includegraphics[width=0.4\linewidth]{PPPM_Priors_DMT2016.pdf}}
\subfloat{\includegraphics[width=0.6\linewidth]{Rank_Priors_DMT2016.pdf}}
\caption{Top10 PPPM and Rank variation with prior weight for Daily Mail Trophy 2015/16 }
\label{fig: DMT_Priors 2015/16}
\end{figure}
\begin{table}[htbp!]
\centering
\resizebox{9cm}{!}{
\begin{tabular}{|l|ccccc|}
\hline
School & P & W & D & L & LPPM \\
\hline
Stockport Grammar & 4 & 4 & 0 & 0 & 5.00 \\
Bedford & 8 & 8 & 0 & 0 & 4.75 \\
Kirkham Grammar & 11 & 11 & 0 & 0 & 4.64 \\
Bromsgrove & 9 & 8 & 1 & 0 & 4.11 \\
Sedbergh & 10 & 7 & 1 & 2 & 3.70 \\
Seaford College & 7 & 6 & 0 & 1 & 4.14 \\
Clifton College & 9 & 7 & 2 & 0 & 4.11 \\
Wellington College & 13 & 9 & 0 & 4 & 3.46 \\
Tonbridge & 9 & 7 & 0 & 2 & 3.44 \\
Solihull & 10 & 9 & 0 & 1 & 4.10 \\
\hline
\end{tabular}}
\caption{Playing record for Top10 as ranked by Model 2 for Daily Mail Trophy 2015/16. LPPM - league points per match - the total number of points gained, including bonuses, divided by number of matches; P - Played, W - Win, D - draw, L - loss}
\label{tbl: DMT16 PWDL}
\end{table}
Looking at Figures \ref{fig: DMT_Priors 2016/17} and \ref{fig: DMT_Priors 2015/16} and comparing them to the respective playing records in Tables \ref{tbl: DMT17 PWDL} and \ref{tbl: DMT16 PWDL}, it may be noted, as before, that a prior weight on the larger end of the scale is required before teams playing a smaller number of matches are sufficiently penalised. Looking at the 2017/18 and 2015/16 seasons, and in particular the ranking of Kingswood School and Stockport Grammar School respectively, suggests that of the tested priors, 4 or 8 would be most appropriate.
An argument against this assertion might be that under current Daily Mail Trophy rules, teams playing fewer than five matches are excluded from the league table. In the 2017/18 and 2015/16 seasons Kingswood School and Stockport Grammar School respectively therefore did not appear in the final Daily Mail Trophy league table. This rule could continue to be used to deal with cases of teams playing low numbers of matches rather than relying on the prior to do the job entirely.
On the other hand one can credibly argue that a robust ranking model should be able to deal with all result outcomes without an arbitrary inclusion cut off. It is also reasonable to assert that there is still useful information from these teams for the calibration of the model, whether they are included or not in the final table. With this in mind it seems sensible to select a prior of 4 or 8 from the values presented here. Other than the re-ranking of Kingswood School and Stockport Grammar School already noted the only other differences from selecting 8 rather than 4 are that Harrow and Cranleigh swap in 2017/18, Clifton and Brighton in 2016/17 and Solihull and Tonbridge in 2015/16. It is not possible to say that either of these alternative rankings is definitively right in any of these three cases. In all these cases the projected points per match of the two teams remain very similar, and both alternatives would pass the sensible criterion that a ranking method should be such that all other relative rankings should not be perceivable as unreasonable by a large proportion of the tournament stakeholders.
\section{Assumptions of this Model}
We construct here a simple description of Hockey Playoffs as a contest between just 3 Teams (such as: \textcolor{blue}{Toronto}, \textcolor{red}{Montreal}, and \textcolor{gold}{Ottawa}), where each Team possesses different strengths in just 3 independent competitive variables (such as \textit{Offence, Defence}, and a \textit{Goalie}), expressed in different whole numbers (such as \$ millions), summing to the same total (a 'salary cap' such as \$6 million per Team). Such 'goalie-centred', 'balanced' and 'offence-defence' spending could be represented, for example, as:
{
\begin{multicols}{2}
\begin{center}
\textbf{}\\\textbf{}\\
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{cccc}
&\textit{Offence} (\$M) &\textit{Defence} (\$M) &\textit{Goalie} (\$M) \\
\textcolor{red}{Montreal} & 1 & 1 & 4 \\
\textcolor{gold}{Ottawa} & 2 & 2 & 2 \\
\textcolor{blue}{Toronto} & 3 & 3 & 0
\end{tabular}
\end{center}
\columnbreak
\begin{center}
\textbf{}\\
\includegraphics[scale=0.3]{dice.png}
\end{center}
\end{multicols}
}
The Model is run by assuming that each pair of Teams plays each other over a long series (approaching $\infty$), and that the winner of that series is the Team who wins the most 'head-to-head match-ups' of these 9 possible combinations of competitive variables, similar to rolling differing dice against each other many times to see which die 'wins'.
Which strategy is best? \textit{i.e.} is it really better for \textcolor{blue}{Toronto} to spend so much on \textit{Offence} and \textit{Defence}, or for \textcolor{red}{Montreal} to concentrate resources in their \textit{Goalie}, or can \textcolor{gold}{Ottawa} end up victorious with balanced spending?
\newpage
\section{Results from the Model}
\label{sec:headings}
The 9 independent 'head-to-head match-ups' between each pair of Teams facing each other in a Playoff Series might be most easily visualized as rolling 3 different coloured dice, representing the 3 Teams' weighting strategies in \textit{Off, Def,} and \textit{Goal} variables (repeating the same 3 numbers on the backside of each 6-sided die):
\textbf{Playoff Series Winners} can be presented by charting the results of the 9 possible match-ups, then declaring as winner the Team who out-rolls their opponent in the majority of the 9 possible combinations, \textit{e.g.}:
\begin{multicols}{2}
\begin{center}
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{|c|ccc|}
\hline
\diagbox{\textcolor{red}{MTL}}{\textcolor{gold}{OTT}} & 2 & 2 & 2\\
\hline
1 & \textcolor{gold}{OTT} & \textcolor{gold}{OTT} & \textcolor{gold}{OTT} \\
1 & \textcolor{gold}{OTT} & \textcolor{gold}{OTT} & \textcolor{gold}{OTT} \\
4 & \textcolor{red}{MTL} & \textcolor{red}{MTL} & \textcolor{red}{MTL} \\
\hline
\end{tabular}
\end{center}
\begin{center}
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{|c|ccc|}
\hline
\diagbox{\textcolor{gold}{OTT}}{\textcolor{blue}{TOR}} & 3 & 3 & 0\\
\hline
2 & \textcolor{blue}{TOR} & \textcolor{blue}{TOR} & \textcolor{gold}{OTT}\\
2 & \textcolor{blue}{TOR} & \textcolor{blue}{TOR} & \textcolor{gold}{OTT}\\
2 & \textcolor{blue}{TOR} & \textcolor{blue}{TOR} & \textcolor{gold}{OTT}\\
\hline
\end{tabular}
\end{center}
\end{multicols}
Where (left) in a match-up with \textit{Goalie}-heavy \textcolor{red}{Montreal}, a balanced \textcolor{gold}{Ottawa} Team would be expected to prevail eventually, 'winning' 6 of the possible 9 total match-ups of Team strength. Similarly (right), \textcolor{gold}{Ottawa} then playing an \textit{Offence-Defence} oriented \textcolor{blue}{Toronto} would be expected to be defeated, again in 6 out of 9 possible match-ups.
Since \textcolor{blue}{Toronto} triumphs over \textcolor{gold}{Ottawa}, after \textcolor{gold}{Ottawa} has clearly vanquished \textcolor{red}{Montreal}, one might be tempted to assume that a \textcolor{blue}{Toronto} \textit{vs} \textcolor{red}{Montreal} final would be as predictable as: \textcolor{blue}{TOR} > \textcolor{gold}{OTT} > \textcolor{red}{MTL}, so therefore \textcolor{blue}{TOR} > \textcolor{red}{MTL}.
\textbf{A Possibly Unexpected Final Outcome} can be confirmed by the match-up chart between \textcolor{red}{Montreal} and \textcolor{blue}{Toronto} (below), where examining the 9 combinations reveals that in only 4 of 9 match-ups does \textcolor{blue}{Toronto} prevail, yet \textcolor{red}{Montreal} emerges victorious, winning 5 of 9 match-ups, and thus defeating the \textcolor{blue}{Toronto} Team. Such possibly surprising and disappointing final outcomes can be described as 'intransitive', with much written elsewhere about such potentially unfortunate relationships, using many variations of such intransitive dice.
\begin{center}
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{|c|ccc|}
\hline
\diagbox{\textcolor{red}{MTL}}{\textcolor{blue}{TOR}}&3&3&0\\
\hline
1 & \textcolor{blue}{TOR} & \textcolor{blue}{TOR} & \textcolor{red}{MTL}\\
1 & \textcolor{blue}{TOR} & \textcolor{blue}{TOR} & \textcolor{red}{MTL}\\
4 & \textcolor{red}{MTL} & \textcolor{red}{MTL} & \textcolor{red}{MTL} \\
\hline
\end{tabular}
\end{center}
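These exhaustive match-up counts are easily verified; a short sketch in Python (illustrative only) that reproduces the three Playoff Series outcomes is:
\begin{verbatim}
from itertools import product

teams = {"MTL": (1, 1, 4), "OTT": (2, 2, 2), "TOR": (3, 3, 0)}

def series_winner(name_a, name_b):
    # Count wins over the 9 head-to-head match-ups of competitive variables.
    a, b = teams[name_a], teams[name_b]
    wins_a = sum(x > y for x, y in product(a, b))
    wins_b = sum(y > x for x, y in product(a, b))
    return name_a if wins_a > wins_b else name_b

print(series_winner("MTL", "OTT"))  # OTT (wins 6 of 9)
print(series_winner("OTT", "TOR"))  # TOR (wins 6 of 9)
print(series_winner("MTL", "TOR"))  # MTL (wins 5 of 9)
\end{verbatim}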
\section{Conclusions}
It is demonstrated here by this Model that no matter what distribution of funding is adopted by any Team (for example: \textcolor{blue}{Toronto}) under a uniform salary cap, a superior distribution of the same resources can be adopted by their opponent (such as: \textcolor{red}{Montreal}) to ensure victory, and \textcolor{blue}{Toronto}'s eventual, continued, and inescapable defeat.
\section{Acknowledgements \& Competing interests}
Profs. C.B. and O.M. are grateful to NSERC Canada for research support. The Canadian Broadcasting Corporation is thanked for open broadcast of \textit{'Hockey Night in Canada'} during which this collaborative paper was written, as well as Moosehead Breweries Limited (St. John, NB). The authors declare no competing interests, aside from traditional geographical hockey allegiances.
\small{
\bibliographystyle{unsrt}
\section{\label{sec:intro}Introduction}
Teams, media, experts and fans have always analyzed football, trying to best explain what is happening on the pitch. Until recently, broadcast footage was the only true source of information, leading to qualitative analysis based on observation. As the first analysis software appeared and information technology matured, thus enabling real-time data transmission, it became possible to bookmark events of interest in the game for several purposes --coaching, highlight editing or third party applications. From this, a standardized dataset known as \textit{event data} emerged. Event data has traditionally been a file containing all manually collected events that occurred in a given football match. These datasets are nowadays used by most stakeholders in football, some of which are starting to find limitations due to the nature of the dataset.
Furthermore, in the recent years there has been a rise in demand for \textit{tracking data}, namely technologies providing center-of-mass coordinates for all players and the ball several times per second, obtained through Electronic Performance \& Tracking Systems (EPTS) \cite{fifaepts}. These ever-improving systems have motivated the appearance of new opportunities to quantify many of the observations that had previously only been qualitative --one of those being the determination of events. With a view to making technology more globally available, FIFA started a research stream to analyze whether the events could be identified using tracking data (and potentially computer vision), thus eliminating the need for manual coding. With a vision of extracting tracking data from broadcast footage in the foreseeable future, the overarching objective of this research is to be able to provide video, tracking and event data from a single camera going forward, thus contributing to the development of the game while improving the consistency and repeatability of the event data collection.
Event data for major football leagues and tournaments started to be collected by Opta Sports (now Statsperform) \cite{statsperform} at the end of the 20th century, and its widespread adoption has propelled the development of advanced football statistics for analytics, broadcast and sports-betting \cite{wang2015discerning,marchiori2020secrets,van2021leaving,montoliu2015team,szczepanski2016beyond,gyarmati2014searching,bekkers2019flow,gonccalves2017exploring,lucey2012characterizing,brooks2016using,decroos2019actions,tuyls2021game,decroos2020player}. There are now various commercial providers that manually collect event data for different leagues \cite{statsperform,stswebsite,wyscout,statsbomb}, and event data from past seasons is widely available. Nonetheless, collection of event data presents some challenges: (1) since it serves different purposes, there is a lack of consistency in both terminology and event definitions, as well as granularity and accuracy of the time annotation; (2) the nature of the task is subjective, hence there may be substantial tagging differences (10-15\%) among analysts; (3) event data needs to be post-processed for quality control, thus the final dataset for some providers may typically not be available until several hours after the match; (4) only information for the player executing the event is available, thus there is no information on the broader game context (\textit{e.g.} location of other players at time of event) --although some providers \cite{statsbomb} have recently started including this information; and (5) manual data collection is resource-intensive, and thus cannot be readily extended to the majority of football tournaments.
To address the latter, there have been many efforts in the field of computer vision to automatically detect events using broadcast video \cite{ekin2003automatic,d2010review,assfalg2003semantic,tavassolipour2013event,kapela2014real,ballan2009action}. More recently, convolutional and recurrent neural networks have been employed for this task \cite{giancola2018soccernet,baccouche2010action,jiang2016automatic,tsagkatakis2017goal,deliege2021soccernet,rongved2020real,cioppa2018bottom}, which has been enabled by the extensive availability of manually tagged datasets and the recent advances in action recognition. However, automatic event detection with video has thus far focused on a subset of football events, namely goals, shots, cards, substitutions and some set pieces, devoted mainly to highlight generation. Therefore, the automatically generated event logs are sparse and lack game context, thus limiting its applicability for advanced football analytics and granular game description.
The absence of game context on event data has been partially addressed in the recent years with the advent of tracking data. In particular, tracking data collected with optical EPTS is the most common due to its accuracy (which has benefited from the recent advances in deep learning and computer vision methodologies) and minimal invasiveness \cite{fifaepts}. There are a myriad of commercial providers that collect football tracking data using optical systems for clubs, leagues and federations \cite{metricawebsite,tracabwebsite,t160website,kogniawebsite,sswebsite,heiwebsite}. The main drawback of optical EPTS is the need for a camera installation in every stadium, although there are promising research and commercial avenues \cite{metricawebsite,sportlogiq,footovision,skillcorner} to extract tracking data from broadcast or tactical footage that mitigate the costs. Another notable drawback of optical EPTS is that the data quality depends on the stadium, namely the height at which the cameras are installed.
Tracking data provides a richer context than event data, since information on all players, their trajectories and velocities is readily available, which enables the evaluation of off-ball players and team dynamics. Consequently, storing and computing with tracking data is more resource-intensive than with event data --a football match at $\SI{25}{\Hz}$ contains roughly 3 million tracking data points, compared to 3K events. In the recent years, tracking data has been extensively used to perform football analytics, and its proliferation has given rise to several advanced metrics, for instance the quantification of team tactics, pitch control or expected possession value to name a few \cite{lucey2014quality,le2017data,bialkowski2014identifying,gudmundsson2014football,spearman2018beyond,shaw2020routine,power2017not,link2016real,cakmak2018computational,fernandez2019decomposing,dick2019learning,fernandez2018wide,stockl2022making}.
To the best of our knowledge, this article represents the first attempt to use tracking data as a means to automatically generate event data. There are many advantages to this approach: (1) it generates event data that would otherwise need to be manually annotated; (2) tracking data has been automatically extracted from video, thus containing highly-curated information on players and ball; (3) events generated from tracking data are not only synced with tracking data, but also highly specific, since information on the other players and the match context is available; and (4) combining automatic event generation with tracking data from broadcast/tactical footage will further the democratization of tracking and event data.
Here, we propose to extract possession information and football event data from 2D player and ball tracking data. To that end, we have developed a deterministic decision tree-based algorithm that evaluates the changes in distance between the ball and the players to generate in-game events, as well as the spatial location of the players during dead ball intervals to detect set pieces, hence there is \textbf{no learning involved}. The output consists of a chronological log of possession information and discrete events. This article is organized as follows. In Section \ref{sec:Methods}, we describe in detail the proposed algorithm and the different datasets used. In Section \ref{sec:results}, we benchmark the automatically generated events against manually annotated events and showcase how the auto-eventing algorithm can be used for football analytics. In Section \ref{sec:discussion}, we discuss the benchmarking results, limitations of the algorithm and the data and perspectives for future research.
\section{Materials and Methods}\label{sec:Methods}
\subsection{Data resources}\label{sec:data}
We have used tracking data from three different tracking data providers: Track160 \cite{t160website} (hereafter referred to as provider A), Tracab \cite{tracabwebsite} (hereafter referred to as provider B) and Hawk-Eye \cite{heiwebsite} (hereafter referred to as provider C) across three tournaments as follows: for Track160, six games in the FIFA Club World Cup 2019 (FCWC19) provided by FIFA and three games in the 2019-2020 Bundesliga season provided by Track160; for Tracab, seven games (three of them processed with version 5.0 and the remaining four with version 4.0) in the FIFA Club World Cup 2020 (FCWC20) provided by FIFA and twelve games (version 4.0) in the 2019-2020 Men's Bundesliga season provided by Deutsche Fussball Liga (DFL); and for Hawk-Eye, three games in the FCWC20 provided by FIFA --data for these three games was also collected with Tracab 4.0. In all cases, the tracking data consists of $(x,y)$ coordinates for all the players and the ball sampled at $\SI{25}{\Hz}$. Since the $z$-coordinate of the ball is not available for all datasets, we only use 2D ball information. In addition to the ball and player coordinates, tracking data contains information on the status of the game at every frame, either directly with a boolean that switches between in-play or dead ball (Tracab) or indirectly with missing ball data when the game is dead (Track160 and Hawk-Eye).
To benchmark the automatically detected events, we have used official event data collected by Sportec Solutions (STS) \cite{stswebsite} for all games, provided by FIFA for FCWC19-20 and by DFL for the Bundesliga games. These official events are indexed by game, half, minute, second and player or players that executed the event.
All data subjects were informed ahead of collection that ``Optical player tracking data, including limb-tracking data, will be collected, and used for officiating, performance analysis, research, and development purposes'', thus providing the basis for legitimacy of use in this research study. The authors received human research ethics approval to conduct this work from the Committee on the Use of Humans as Experimental Subjects (COUHES-MIT).
\subsection{Computational framework}\label{sec:framework}
We propose a two-step algorithm to detect events in football using 2D player and ball tracking data; see Fig. \ref{fig:framework}a for a depiction of the algorithm's flowchart, where all the relevant information generated at each step is detailed. The input is a tracking data table for a given game, formatted as one entry per player and frame (with ball data incorporated as a column).
The first step determines ball possession, which is the backbone of the computational framework, as well as the players' configuration during dead ball intervals. In the second step, we propose a deterministic decision tree based on the Laws of the Game \cite{ifabwebsite} that enables the extraction of in-game and set piece events from the possession information established in the first step.
\begin{figure}[h!]
\centering
\includegraphics[scale=1]{events_table.pdf}
\caption{a) Proposed computational framework, along with information generated at each step. b) Schematic detailing all possible labels for the attributes \textbf{ball control}, \textbf{event name}, \textbf{dead ball event} and \textbf{from set piece} on the output events table.}
\label{fig:framework}
\end{figure}
The output of the algorithm is a table that for each frame in the tracking data contains automatically generated event data. In the output events table, besides information on time and players involved, we include the attributes \textbf{ball control}, \textbf{event name}, \textbf{from set piece} and \textbf{dead ball event}, see Fig. \ref{fig:framework}b. \textbf{Ball control} takes four possible values, that is: \texttt{dead ball}, \texttt{no possession}, \texttt{possession} and \texttt{duel} --the last two if at least one player is in close proximity of the ball. Ball control thus represents a continuous action, and since events occur only when ball control is either possession or duel, we may drop the rows where \textbf{ball control} is either \texttt{dead ball} or \texttt{no possession} for convenience. \textbf{Event name} refers to the in-game actions that occur at a discrete time: \texttt{pass}, \texttt{cross}, \texttt{shot on target}, \texttt{shot off target}, \texttt{reception}, \texttt{interception} and \texttt{own goal}. The goalkeepers feature additional events, namely \texttt{save} (\texttt{deflect} or \texttt{retain}) and \texttt{claim} (\texttt{deflect} or \texttt{retain}), \texttt{unsuccessful save} (the goalkeeper touches the ball but a goal is conceded) and \texttt{reception from loose ball}. The list of in-game events that we propose to detect is by no means comprehensive, but rather we focused on events that are both descriptive of the game and can be identified from tracking data using rules and without learning. Additional data streams, such as the $z$-coordinate of the ball, player limb tracking or video, may be leveraged to expand the automatically detectable events, for instance tackles, air/ground duels or dribbles.
\textbf{Dead ball event} is an attribute of the event immediately preceding a dead ball interval, namely, \texttt{out for corner kick}, \texttt{out for goal kick}, \texttt{out for throw-in}, \texttt{foul}, \texttt{penalty awarded} and \texttt{goal}, whereas \textbf{from set piece} is an attribute of the event (a pass, shot or cross) that resumes the game event after a dead ball interval, namely \texttt{corner kick}, \texttt{goal kick}, \texttt{throw-in}, \texttt{free kick}, \texttt{penalty kick} and \texttt{kickoff}. An additional pair \texttt{foul?}-\texttt{free kick?} is introduced to account for instances where the algorithm is confused due to inaccuracies in the tracking data, see Section S1 of Online Resource 1 for further details.
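For illustration, a single row of the output events table might look as follows (a hedged sketch; column names other than the four attributes described above are presentation assumptions, not a fixed schema):
\begin{verbatim}
example_event = {
    "frame": 1234,                    # tracking-data frame index (25 Hz)
    "period": 1,
    "team": "home",
    "player_id": 7,
    "ball_control": "possession",     # dead ball / no possession / possession / duel
    "event_name": "pass",             # pass, cross, shot on/off target, reception, ...
    "from_set_piece": "corner kick",  # set piece resuming play, if any
    "dead_ball_event": None,          # event immediately preceding a dead ball, if any
}
\end{verbatim}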
\begin{figure*}[h!]
\centering
\includegraphics[scale=1]{distance_event-eps-converted-to.pdf}
\caption{Distance between ball (horizontal black line) and closest player of each team (blue and red lines) for each frame within first minute of the 2019 FIFA U20 World Cup opening game, along with annotated events as black diamonds. This illustrates how in-game events occur whenever at least one player is in close proximity of the ball.}\label{fig:distance_event}
\end{figure*}
\subsection{Possession}\label{sec:possession}
\subsubsection{Asserting possession from tracking data}
Ball possession is paramount, because in-game events in football occur whenever at least one player is close to the ball, see Fig. \ref{fig:distance_event}. To establish possession, we introduce the concept of possession zone (PZ), which for simplicity we define as a circular area of radius $R_{pz}$ around every player, such that if at any given frame the ball is within a player's PZ, then that player is deemed to be in possession. Similarly, we introduce a duel zone (DZ), defined as a circular area of radius $R_{dz} \ge R_{pz}$ around the ball, such that if at least two players from opposing teams are within the DZ, then we deem there is a duel situation.
The possession algorithm reads the tracking data and applies the PZ/DZ conditions above to every frame. If both possession and duel conditions are triggered, the duel condition prevails. A frame where either possession or duel is selected is hereafter referred to as a \textit{control frame}. In addition to possession/duel information, for each control frame $f$ we store the players' distance to the ball, the ball displacement $\Delta s$ from frame $f$ to $f+1$, and the incoming ball direction vector ${\bf d}_f^0$ and speed $v_f^0$ (magnitude of velocity vector) using data from frame $f-1$ (resp. outgoing ${\bf d}_f^1$ and $v_f^1$ using data from frame $f+1$), see Fig. \ref{fig:loss_gain}a. If the ball positional data has been smoothed, the speed may be computed using finite differences. Conversely, if the positional data is noisy we apply a Savitzky-Golay filter \cite{savitzky1964smoothing} to both $x-y$ ball coordinates, using a second-order polynomial and a window of seven frames around each datapoint (which for a \SI{25}{\Hz} feed corresponds to using the data from the neighboring \SI{0.25}{\s} to smooth the signal).
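A minimal sketch of this per-frame classification is given below (Python with NumPy/SciPy; the array layout and the radius values are illustrative assumptions, and dead ball frames are assumed to have been filtered out beforehand):
\begin{verbatim}
import numpy as np
from scipy.signal import savgol_filter

def label_control(ball_xy, players_xy, player_team, R_pz=1.0, R_dz=1.5):
    # ball_xy: (F, 2); players_xy: (F, P, 2); player_team: (P,) with values 0/1.
    player_team = np.asarray(player_team)
    # Smooth noisy ball coordinates (second-order polynomial, 7-frame window).
    ball_s = savgol_filter(ball_xy, window_length=7, polyorder=2, axis=0)
    dist = np.linalg.norm(players_xy - ball_s[:, None, :], axis=2)   # (F, P)
    in_pz = dist <= R_pz
    in_dz = dist <= R_dz
    both_teams_in_dz = (in_dz & (player_team == 0)).any(axis=1) & \
                       (in_dz & (player_team == 1)).any(axis=1)
    labels = np.full(len(ball_s), "no possession", dtype=object)
    labels[in_pz.any(axis=1)] = "possession"
    labels[both_teams_in_dz] = "duel"       # the duel condition prevails
    return labels, dist, ball_s
\end{verbatim}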
\begin{figure*}[h!]
\centering
\includegraphics[scale=.8]{losses_gains.pdf}
\caption{Schematic of ball information collection and possession losses and gains. (a) Ball information collected on each tracking data frame, including incoming/outgoing direction, speed and displacement. (b) Different potential losses, where $f$ is loss frame and $f_c > f$ is next frame where any player is in control: (b1) player moving away from static ball $\rightarrow$ no loss; (b2) player losing possession and regaining afterwards without any other player having been in control $\rightarrow$ no loss; (b3,b4) player losing possession and next player in control is either a teammate or an opponent $\rightarrow$ loss. (c) Different potential gains for a control frame interval $[f_0,f_4]$, where player is assumed static and ball position is shown in consecutive frames, moving in the direction of the arrow: (c1) ball changes trajectory and speed $\rightarrow$ gain; (c2) ball trajectory and speed remain constant $\rightarrow$ no gain.}
\label{fig:loss_gain}
\end{figure*}
Once the control frames have been established, the next step is detecting changes in ball control, i.e. gains or losses, that will later be classified as in-game events by the event detection step. Gains and losses are determined upon the control frames extracted from tracking data as follows.
\subsubsection{Possession losses}
Player A loses possession at control frame $f$ if the following conditions are both satisfied:
\begin{enumerate}
\item the ball is outside the PZ of player A at frame $f+1$ and the ball displacement $\Delta {s_f}$ is above a given threshold, specified by the hyperparameter $\epsilon_s$,
\item player A is not present on the subsequent control frame $f_c > f$ where there is either a possession or a duel.
\end{enumerate}
The first condition prevents the algorithm from recording a loss in situations where the ball remains static and the player moves away without it, see Fig. \ref{fig:loss_gain}b$_1$. The second condition enables the detection of longer ball possessions by a player, where the ball eventually leaves the player's PZ but re-enters it after a number of frames during which no other player has interacted with the ball; hence player A is still in possession and no loss is recorded, see Fig. \ref{fig:loss_gain}b$_2$. In all other circumstances, a possession loss is annotated, see Fig. \ref{fig:loss_gain}b$_3$-b$_4$.
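A coarse sketch of this loss logic (Python; assuming the control frames have been collected into an ordered list holding, per entry, the controlling player, frame index and ball displacement, with an illustrative threshold value) is:
\begin{verbatim}
def possession_losses(control, eps_s=0.5):
    losses = []
    for k in range(len(control) - 1):
        cur, nxt = control[k], control[k + 1]
        # Condition 1: the ball leaves the player's PZ on the next frame and has moved.
        still_with_player = (nxt["frame"] == cur["frame"] + 1
                             and nxt["player_id"] == cur["player_id"])
        ball_moved = cur["ball_displacement"] > eps_s
        # Condition 2: the player is absent from the next control frame.
        different_player_next = nxt["player_id"] != cur["player_id"]
        if (not still_with_player) and ball_moved and different_player_next:
            losses.append({"frame": cur["frame"], "player_id": cur["player_id"]})
    return losses
\end{verbatim}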
\subsubsection{Possession gains}
Determining gains in possession requires asserting not only whether a given player is close to the ball, but also whether they effectively make contact with it. Furthermore, since ball tracking data may lack $z$ information, additional logic is required to differentiate between actual possession gains and instances where the player(s) near the ball do not touch the ball. Following football intuition, we hypothesize that a change in both ball direction and ball speed is a strong indicator of players establishing contact with the ball, and thus gaining possession.
For a given sequence of control frames $f_0,\ldots,f_n$ where the ball is within the PZ of the same player and $f_n\ge f_0$, we ascertain if there is an actual possession gain by introducing two hyperparameters, the minimum change in ball direction $\epsilon_{\theta}$ and the minimum change in ball speed $\epsilon_v$. The ball is deemed to have changed direction within $[f_0,\,f_n]$ if the ball trajectory has changed from start to end of the control interval, namely ${\bf d}^0_{f_0}\cdot {\bf d}^1_{f_n}<\epsilon_\theta$; similarly, we consider the ball to have changed speed if $\abs{v^0_{f_i} - v^1_{f_i}}>\epsilon_v$ for at least one frame $f_i$, $i=0,\ldots,n$. All in all, we assume that if the ball has either changed direction or speed during a given control sequence $[f_0,\,f_n]$ involving the same player, then a possession gain occurs at $f_0$, see Fig. \ref{fig:loss_gain}c$_1$. Naturally, if neither the ball trajectory nor the speed is altered during a control frame interval, we consider that these control frames are false positives and do not include them in the possession step, since they correspond to instances where the ball travels near one or more players but none of them make explicit contact with the ball, see Fig. \ref{fig:loss_gain}c$_2$.
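A sketch of the gain test for a single control-frame interval (Python; the hyperparameter values are placeholders rather than the values used in our experiments) is:
\begin{verbatim}
import numpy as np

def is_gain(d_in_f0, d_out_fn, v_in, v_out, eps_theta=0.9, eps_v=1.0):
    # d_in_f0 / d_out_fn: unit incoming and outgoing ball direction vectors at f0 and fn.
    # v_in / v_out: per-frame incoming and outgoing ball speeds over the interval.
    changed_direction = float(np.dot(d_in_f0, d_out_fn)) < eps_theta
    changed_speed = np.any(np.abs(np.asarray(v_in) - np.asarray(v_out)) > eps_v)
    return changed_direction or changed_speed
\end{verbatim}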
\subsubsection{Set piece triggers}
In addition to possession information, we incorporate several set piece triggers that inspect the spatial location of all players when the game is interrupted to determine which set piece event resumes the game. The different triggers considered, along with tolerances to accommodate tracking data inaccuracies, are listed as follows (a sketch of two of these geometric checks is given after the list):
\begin{itemize}
\item dead ball trigger: the ball is dead, signaled by either a binary boolean or by the absence of ball tracking data.
\item kickoff trigger: all players are within their own halves (with a tolerance of $\epsilon_{\rm k1}$) and there is at least one player within $\epsilon_{\rm k2}$ of the center mark, according to IFAB Law 8, see Fig. \ref{fig:db_trigger}a.
\item penalty kick trigger: only one player is at their goal line between the posts (with a tolerance bounding box of $\epsilon_{\rm p1}$); only one opponent is within a square bounding box from $\epsilon_{\rm p2}/4$ in front to $3\epsilon_{\rm p2}/4$ behind the active penalty mark; and the remaining players are neither within the penalty area nor within \SI{9.15}{\m} of the penalty mark (with a tolerance of $\epsilon_{\rm p3}$), according to IFAB Law 14, see Fig. \ref{fig:db_trigger}b.
\item goal kick trigger: at least one player is within their own goal area (with a tolerance bounding box of $\epsilon_{\rm g}$), according to IFAB Law 16, see Fig. \ref{fig:db_trigger}c.
\item corner kick trigger: at least one player is within $\epsilon_{\rm c}$ of one of their active corner marks, according to IFAB Law 17, see Fig. \ref{fig:db_trigger}d.
\item throw-in trigger: at least one player is beyond the auxiliary sideline (sideline minus $\epsilon_{\rm t}$), according to IFAB Law 15, see Fig. \ref{fig:db_trigger}e.
\end{itemize}
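As an illustration of how such triggers can be evaluated on the player positions, the following minimal sketch implements the kickoff trigger; the coordinate convention (halfway line at $x=0$, center mark at the origin) is an assumption made for this example only.
\begin{verbatim}
# Minimal sketch of the kickoff trigger (IFAB Law 8), assuming the halfway line
# is x = 0, the home team defends x < 0 and the center mark is at (0, 0).
import numpy as np

def kickoff_trigger(home_xy, away_xy, eps_k1, eps_k2):
    home_xy, away_xy = np.asarray(home_xy), np.asarray(away_xy)
    own_halves = (np.all(home_xy[:, 0] <= eps_k1)
                  and np.all(away_xy[:, 0] >= -eps_k1))
    near_center = np.any(np.linalg.norm(np.vstack([home_xy, away_xy]),
                                        axis=1) <= eps_k2)
    return own_halves and near_center
\end{verbatim}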
\begin{figure*}[h!]
\centering
\includegraphics[scale=.9]{detectors_new.pdf}
\caption{Schematic of set piece triggers (player configurations within the highlighted black/red/blue shape), triggering players (filled red/blue markers) and patterns (player in control of the ball within the grey shaded zones) for different set piece events: (a) Kickoff trigger with own half tolerance $\epsilon_{\rm k1}$; kickoff pattern with center mark tolerance $\epsilon_{\rm k2}$. (b) Penalty kick trigger with goal line tolerance $\epsilon_{\rm p1}$, no-player zone with tolerance $\epsilon_{\rm p3}$, and trigger and pattern with penalty mark tolerance $\epsilon_{\rm p2}$. (c) Goal kick trigger and pattern with goal area tolerance $\epsilon_{\rm g}$. (d) Corner kick trigger and pattern with corner mark tolerance $\epsilon_{\rm c}$. (e) Throw-in trigger and pattern with sideline tolerance $\epsilon_{\rm t}$. Note that trigger and pattern zones coincide for corners, throw-ins and goal kicks.}
\label{fig:db_trigger}
\end{figure*}
The output of the possession step is the set of set piece triggers for each dead ball interval, since more than one may be triggered, as well as a table that features ball control (possession/duel) information and possession gains and losses. The former span multiple frames, whereas the latter are discretely annotated and will be mapped to football events in the event detection step described below.
\subsection{Event detection}\label{sec:events}
In this section, we discuss how the possession information and set piece triggers obtained in the previous section may be translated into both set piece and in-game football events.
\subsubsection{Set piece events}\label{sec:dbe}
The most straightforward segmentation of a football game is between in-game and dead ball intervals. A dead ball event (DBE) occurs immediately before a dead ball interval, and is followed by a set piece event (SPE) to resume the game. To that end, we establish the one-to-one correspondence between DBEs and SPEs detailed in Fig. \ref{fig:framework}b. Note that offsides are treated as fouls throughout this work, since from the tracking data perspective it is a nontrivial task to distinguish between offsides and other infractions. Furthermore, we identify DBE-SPEs by combining triggers and patterns, see Fig. \ref{fig:db_trigger}, as follows: (1) detect triggers in the spatial configuration of the players; (2) confirm the DBE-SPE by ensuring the pattern is satisfied, namely the triggering player is within the pattern zone and the ball is within that player's possession zone on the first in-play frame. If there are multiple triggering players within the pattern zone and within $R_{pz}$ of the ball, we choose the player closest to the ball as the executor of the set piece event. Lastly, since free kicks lack distinct trigger configurations, we may only define a free kick pattern as a player having the ball within their possession zone on the first in-play frame.
For an arbitrary dead ball interval indexed by frames $[d_0,d_c]$, we examine if any of the set piece triggers are activated from an arbitrary intermediate frame $d_1$ ($d_0\le d_1\le d_c$) until the last dead ball frame $d_c$; such triggers are hereafter referred to as complete triggers. The hierarchy established in Fig. \ref{fig:db_flowchart} is used to break ties when more than one complete trigger is present. Once a potential SPE has been identified using triggers, the algorithm aims to confirm it using the pattern. In the absence of tracking data inaccuracies, all set piece triggers that have been activated should be satisfied until the last dead ball frame $d_c$, and at the first in-play frame the ball should be at least within $R_{pz}$ of the set piece executor. If none of the patterns are satisfied, the algorithm will assume a free kick as a default option. The flowchart of this detection process is outlined in Fig. \ref{fig:db_flowchart}, where the set piece events are shown as grey circles and are always preceded in the auto-generated events table by the corresponding dead ball event. However, errors in tracking data impact the performance of this approach, and we refer the reader to Section S1 of Online Resource 1 for a detailed explanation of how to extend this framework if errors in tracking data are present.
\begin{figure*}[h!]
\centering
\includegraphics[scale=1.5]{dbe_no_errors-eps-converted-to.pdf}
\caption{Flowchart to detect set piece events following a dead ball interval in the absence of tracking data errors.}
\label{fig:db_flowchart}
\end{figure*}
Finally, two exceptions are accounted for regarding kickoffs. First, the one-to-one relation goal-kickoff no longer holds for last-minute goals, whereby the period ends after the goal is scored and before the ball is kicked off. Therefore, a \texttt{goal?} dead ball event is added for shot sequences that cross the goal in the 2D plane at the end of the period, to express that there is uncertainty about whether a goal has been scored. Discerning whether these sequences are actual goals using only 2D tracking data is complex. In cases where the $z$-coordinate of the ball is available, the immediate solution would be to check whether the ball is below the crossbar when it crosses the goal line.
Second, if during the game a kickoff is not properly executed, the referee will order its repetition, which from the tracking data perspective could be mistaken for a distinct kickoff and would lead to an incorrect match score. To resolve this situation, assuming there are $k=1,\ldots,K$ kickoffs throughout a period, for each kickoff $k\ge2$ we check whether the ball has reached at least one of the penalty areas in the time interval between kickoff $k-1$ and kickoff $k$. If not, we assume the kickoff $k-1$ was mandated to be repeated and update the \textbf{from set piece} field to \texttt{incorrect kickoff} instead of \texttt{kickoff}, as well as the \textbf{dead ball event} occurring immediately prior to kickoff $k$ from \texttt{goal} to \texttt{referee interruption}.
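A minimal sketch of this repetition check is given below (illustration only); \texttt{ball\_in\_penalty\_area} is an assumed per-frame boolean sequence derived from the ball tracking data.
\begin{verbatim}
# Minimal sketch: relabel kickoff k-1 as incorrect if the ball never reached a
# penalty area between kickoff k-1 and kickoff k.
def relabel_repeated_kickoffs(kickoff_frames, ball_in_penalty_area):
    labels = ["kickoff"] * len(kickoff_frames)
    for k in range(1, len(kickoff_frames)):
        f_prev, f_curr = kickoff_frames[k - 1], kickoff_frames[k]
        if not any(ball_in_penalty_area[f_prev:f_curr]):
            labels[k - 1] = "incorrect kickoff"
    return labels
\end{verbatim}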
In summary, the errors that we assume with this DBE detection process are due to the player-position tolerances we use for the triggers, as well as limitations of the tracking data: (1) throw-ins/free kicks occurring near the corner mark wrongfully classified as corner kicks; (2) free kicks occurring near the sidelines wrongfully classified as throw-ins; (3) free kicks occurring within the goal area wrongfully classified as goal kicks; (4) offsides being classified as fouls. Some of these errors could be circumvented by incorporating the $z$-coordinate of the ball for throw-ins, the pose of the referee or a video-based classifier.
\subsubsection{Shots and saves}\label{sec:shotsave}
The proposed framework of possession losses and gains extends naturally to the detection of shots and saves, arguably the most important in-game events. We define a shooting event as a possession loss by a player of the attacking team that is succeeded by a goal, a corner kick, a goal kick or a save. Furthermore, we define a saving event as a possession gain by the goalkeeper of the defending team, located inside the penalty area, which is preceded by a shooting event. Note that blocked shots are not encompassed in these definitions, since from the tracking data perspective a blocked shot is a possession loss succeeded by a possession gain of another player (either teammate or opponent) who is not the opposing goalkeeper. Blocked shots cannot therefore be associated with saving events, and hence will be identified as passes --either completed or intercepted. Another error that we accept concerns saves that occur after a ball deflection off a defender, since the algorithm will label them as a pass from the defender followed by a reception by the goalkeeper.
For shooting events, we differentiate between shot on/off target, cross and pass. For saving events, we differentiate between save, claim, reception from a loose ball and unsuccessful save (a goal is conceded despite the goalkeeper touching the ball). The variables that are examined for each shot-save sequence are: whether a dead ball interval occurs before or after the goalkeeper's possession gain; the spatial location of the shooter (crossing zone, shooting zone or other, see Fig. \ref{fig:saveshot}f); the direction of the ball after the possession loss occurs and whether it is moving towards the active goal; and the number of opponents in the penalty area. In addition, for the save/claim events we investigate if the goalkeeper loses possession within one second of the saving event, in order to distinguish between retention and deflection. Using these variables, we identify five main categories of shot-save sequences, which are summarized in Fig. \ref{fig:saveshot} along with the distinct shooting (black) and saving (red) events that are extracted. The distinction between shot on/off target is made solely based on the ball trajectory immediately after the possession loss, with a \SI{0.25}{\m} tolerance beyond the goalposts.
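As an example of the trajectory-based distinction, the sketch below extrapolates the ball direction at the possession loss to the goal line and applies the \SI{0.25}{\m} tolerance; the pitch and goal geometry used here are illustrative assumptions, not values prescribed by the framework.
\begin{verbatim}
# Minimal sketch of the on/off-target test: extrapolate the 2D ball direction at
# the possession loss to the goal line of the active goal. Assumed geometry:
# active goal line at x = 52.5 m, posts at y = +-3.66 m, pitch centered at (0,0).
def shot_on_target(ball_pos, ball_dir, goal_x=52.5, half_goal=3.66, tol=0.25):
    dx = goal_x - ball_pos[0]
    if ball_dir[0] <= 0.0 or dx <= 0.0:   # ball not moving towards the active goal
        return False
    y_at_goal_line = ball_pos[1] + ball_dir[1] / ball_dir[0] * dx
    return abs(y_at_goal_line) <= half_goal + tol
\end{verbatim}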
\begin{figure}[h!]
\centering
\includegraphics[scale=.8]{save_shot-eps-converted-to.pdf}
\caption{(a)-(e) Shot-save decision trees depending on the sequence of shooter, goalkeeper and dead ball. Shooting events are highlighted in black, whereas corresponding saving events are highlighted in red. (f) Sketch of football pitch distinguishing between cross zone and shot zone.}
\label{fig:saveshot}
\end{figure}
\subsubsection{Crosses, receptions and interceptions}
Once the shots have been established, the distinction is made between crosses and passes using the same logic as described in Fig. \ref{fig:saveshot}. That is, for a possession loss to be labeled as a cross it needs to satisfy a few conditions: origin in the cross zone, the next player in control (attacking or defending) needs to be within the active penalty area and there should be at least one attacking player in the active penalty area. The remaining possession losses are therefore labeled as passes. Similarly, besides saving events the remaining possession gains are either receptions or interceptions, depending on whether the previous loss is made by a teammate or an opponent.
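The sketch below summarizes this labeling of the remaining losses and gains (illustration only); the boolean inputs are assumed to be computed beforehand from the tracking data and the definitions above.
\begin{verbatim}
# Minimal sketch of the loss/gain labeling described above.
def label_possession_loss(origin_in_cross_zone, next_controller_in_pen_area,
                          attackers_in_pen_area, is_shot):
    if is_shot:
        return "shot"
    if (origin_in_cross_zone and next_controller_in_pen_area
            and attackers_in_pen_area >= 1):
        return "cross"
    return "pass"

def label_possession_gain(gain_team, prev_loss_team, is_save):
    if is_save:
        return "save"
    return "reception" if gain_team == prev_loss_team else "interception"
\end{verbatim}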
Furthermore, football events can be complemented with several contextual attributes that can be computed or modeled from the tracking data, in the form of additional columns to the final events table. Examples include outcome of events, player location relative to the other team or number of opponents overtaken by passes \& possessions. We refer the reader to Section S5 of the Online Resource 1 for an exhaustive explanation on how contextual attributes are incorporated.
\section{\label{sec:results}Results}
In this section, we present the results of applying the above event detection framework to the datasets introduced in Section \ref{sec:data}. The chosen possession zone radii were $\SI{50}{\cm}$ for provider A and $\SI{1}{\m}$ for both provider B and provider C, whereas the radius of the duel zone was taken to be $R_{dz} = \SI{1}{\m}$ for all providers; see Section S3-S4 of Online Resource 1 for an extensive discussion on hyperparameters and tolerances.
In terms of computational resources, the datasets are stored in Google Cloud's BigQuery and we perform the computations on a Virtual Machine in Google Cloud featuring 1 CPU and 4 GB of RAM. The code was developed in \texttt{Python 3}, and the mean computational wall time to execute the possession and event detection algorithm for a 90 minute match was three minutes.
\subsection{Benchmarking with official event data}
First, we present the results of benchmarking the automatically detected events with the manually annotated events by STS. The benchmarking criteria are outlined in Section S2 of Online Resource 1, and the results are shown with confusion matrices in Fig. \ref{fig:confusion}. We do not benchmark player possession data as this is not currently collected by event data providers. In addition, we should emphasize that manually collected event data is not without errors, hence the detection rate can never be 100\%; we have observed several instances of non-annotated events, wrong timestamps or wrong players in the annotated events during the course of this research.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.9]{confusion_matrices-eps-converted-to.pdf}
\caption{Confusion matrices comparing the events predicted by the event detection algorithm with the annotated events from STS, together with precision and recall for each category. Matrix cells are colored according to the relative number of instances per row. a) Open play passing events. b) Set piece events. c) Goals.}
\label{fig:confusion}
\end{figure}
Open play passing events (passes, shots, crosses, \ldots) constitute the majority of events under consideration, with over 30K instances across all datasets. The confusion matrix, along with precision and recall for each category, is shown in Fig. \ref{fig:confusion}a. The most salient takeaway is the above-90\% precision and recall in detecting passes. Furthermore, the shot precision was higher than the recall (78\% vs. 53\%) for a multitude of reasons: based on our definition of shot in Section \ref{sec:shotsave}, shots that were blocked by other players were labeled as passes by the algorithm (70\% of the 254 misclassifications), since the goalkeeper did not intervene; for shots that were not blocked, errors arise from situations with multiple players near the ball in which the shooter was wrongly identified. In addition, shots tended to take place in areas where players accumulate, hence it is not surprising that 15\% of shots (compared to less than 6\% of passes) were either not detected or attributed to another player. The main reason was that the tracking data (and consequently the event detection algorithm) exhibits more inaccuracies and errors when player occlusions occur. Regarding crosses, the main source of confusion were labeled crosses that the algorithm annotates as passes (569 misclassifications) based on the logic described in Fig. \ref{fig:saveshot}. Due to the absence of a gold standard definition of cross, these discrepancies are expected.
The results for dead ball/set piece events are collected in Fig. \ref{fig:confusion}b. The ability to perfectly capture kickoffs and penalty kicks is paramount, since they constitute the best high-level descriptors of a game from the events perspective. The other categories for which a pattern exists (corner, throw-in, goal kick) also exhibit above-90\% precision and recall, and the errors stem from mistakes in the inbounding player (e.g. more than one inbounding player is close, the inbounding player is not tracked) and limitations of the tracking data (e.g. throw-in close to corner marks, free kick close to sideline or corner, free kick inside/near the goal area). Incorporating the ball $z$-coordinate would help in distinguishing throw-ins from corner kicks and free kicks. Finally, the worst results were for free kicks, which hold no specific pattern and were selected if no other spatial configuration was detected, as explained in Fig. \ref{fig:db_flowchart}. The presence of inaccuracies in player/ball tracking data discussed in Section S1 of Online Resource 1 lowers the precision of free kick detection, as such events were assigned to \texttt{free kick?}, which signals the algorithm was confused due to tracking data inaccuracies and requires external input.
The detection of goals is intrinsically related to the detection of kickoffs, whereby the goal (dead ball event) triggers a kickoff (set piece) as both the start and end of a dead ball interval. Nonetheless, benchmarking for goals separately allows us to analyze the performance of the proposed algorithm specifically on situations with many players involved (e.g. goal scored after a corner kick) where the algorithm may confuse a goal for an own goal, as well as goals scored at the end of the period (for which no kickoff pattern follows). The confusion matrix with the goal results is shown in Fig. \ref{fig:confusion}c; it contains only six mistakes (goalscorer wrongly identified) and two last-minute goal events that did not correspond to a goal (the algorithm was unsure whether a goal had been scored). Upon further inspection, the six goalscoring mistakes can be broken down as follows: the ball goes missing after the assist was made (3) and the data reflects an inaccurate situation (3). The two correctly matched last-minute goals correspond to the same late penalty kick goal, where tracking data from two different providers was available. The incorrectly detected last-minute goal corresponded to a shot that went above the crossbar, which could be corrected with the ball $z$-coordinate.
\subsection{Applications}\label{sec:applications}
In this section, we illustrate how both predicted event and possession information can be leveraged to perform statistical analyses. There is a plethora of different ways to slice and aggregate the event data, hence the choice largely depends on the objective of the study or the question that is put forth by the coach or analyst. A sample of potential analyses is presented below, which is by no means exhaustive and only intends to showcase how autodetected event and possession data may be used in a football analytics context. More applications can be found in Section S6 of Online Resource 1. For simplicity, the attacking direction is assumed to be from left to right.
\subsubsection{Possession-informed player heatmap}
First, we choose one player on the first half of a game and visualize the heatmap of their locations when in possession, see Fig. \ref{fig:single_player}a, as well as the spatial distribution of passing events (distinguishing between passes, shots and crosses) and their outcome (completed, intercepted and dead ball), see Section S5 of Online Resource 1. The more traditional heatmap containing the player location at every in-play frame (where the player can be both with and without possession) is shown in Fig. \ref{fig:single_player}b for comparison.
The main takeaway is that the possession-informed heatmap exhibits differences with respect to both the passing events distribution and the complete player heatmap, which signals the importance of capturing possession to more accurately understand the contribution of each player during the match. This approach can be seamlessly extended in many directions, e.g. composing the player heatmap when one of the teammates is in possession, when a specific opponent is in possession, or in a given interval of the match, to name a few.
\begin{figure}[h!]
\centering
\includegraphics[scale=1]{single_player-eps-converted-to.pdf}
\caption{Heatmaps of spatial locations of a player during the first half of a game. a) Position of player only when the player is in possession, with passing events and outcomes overlaid. b) Position of player when ball is in-play, regardless of possession.}
\label{fig:single_player}
\end{figure}
\subsubsection{Multiple match-aggregated event information}
We can aggregate and visualize data for the same team, player or both across multiple matches. The examples below correspond to a team for which we had data on five different games (two as the home team and three as the away team). We refer the reader to Section S5 and Fig. S4 of Online Resource 1 for further details on how the attributes discussed here (location of player, opponents overtaken, angle of passes, distance travelled by ball, pass origin) were evaluated.
\begin{figure}[h!]
\centering
\includegraphics[scale=1.1]{contextual_scatter-eps-converted-to.pdf}
\caption{Scatter plot of receptions, where symbol refers to the nature of the prior passing event. a) Receptions behind the opponent's defense, colored by opponents overtaken by prior passing event. The ball trajectory from prior pass is shown in dash-dot. b) Receptions in the flanks of opponent, colored by change in $y$-span of opposing team between passing event and reception.}
\label{fig:contextual}
\end{figure}
First, we analyze all receptions by a player on the team where the recipient is behind the opponents' defense (hence in a theoretically advantageous position to score), along with depicting the pass trajectory, the nature of the event that led to each reception and the number of opponents overtaken by it, see Fig. \ref{fig:contextual}a. Second, we examine the location of all receptions by a player on the flanks while illustrating the change in the opposing team's $x/y$-span between the prior passing event and the reception, see Fig. \ref{fig:contextual}b, where all flank receptions are colored by the change in $y$-span of the opposing team.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.85]{angles-eps-converted-to.pdf}
\caption{Angular information of ball trajectories for four midfielders with the most passes. a) Polar histogram of incoming and outgoing trajectories. b) Polar scatterplot of outgoing trajectories, where the symbol shows the outcome of the pass, the radial position shows how advanced the player was on the pitch when the pass was made (origin for own endline and outer circle for opposing endline) and the color refers to the distance traveled by the ball during the pass.}
\label{fig:angles}
\end{figure}
Finally, we can investigate the trajectories of the passes by visualizing the incoming (at reception) and outgoing (at pass) trajectory angles of several players within the same team. The polar histogram of incoming and outgoing angles for the four midfielders with the most passes is shown in the top row of Fig. \ref{fig:angles}a. Furthermore, polar scatterplots allow us to visualize all the outgoing angles for a given player (in the angular direction) while including information such as the outcome of the pass, how advanced the player was on the pitch when the pass was made (shown in the radial direction, circle origin for own endline and outer circle for opposing endline), and color-coded by the distance traveled by the ball from the time of the pass until reception/interception/out of bounds, see the bottom row of Fig. \ref{fig:angles}b.
\section{Discussion}\label{sec:discussion}
In light of these results, we can conclude that the proposed framework is effective in leveraging in-stadium tracking data to detect the majority (over 90\%) of in-game and set piece events. However, as anticipated above, the performance of the algorithm can be impacted by errors in and availability of tracking data, errors in event data and modeling limitations, which are discussed below.
The main limitation of the algorithm is that ball tracking data needs to be available, since we propose to detect events by assessing the change in distance between players and the ball. The other limitation is the absence of in-play/dead ball information, which is critical for set piece detection. Moreover, tracking data errors inevitably lead to wrongly predicted events, for instance player swaps or an inaccurate in-play/dead ball boolean. Even though the available datasets feature accurate ball tracking data, namely ball-player distances at passing time are less than \SI{1}{\m} (see Section S3 of Online Resource 1), the proposed framework can be seamlessly applied to tracking data of lesser quality, for instance data collected from one tactical camera or from broadcast footage, by augmenting the possession zone radius and tuning the hyperparameters. Event data can also present several errors, for instance events not annotated, events attributed to a wrong player, or annotated event times more than \SI{10}{\s} before/after they occurred. These errors do not impact the auto-detected events, but they worsen the benchmarking results presented in Fig. \ref{fig:confusion}.
From the modeling standpoint, the errors were due to the choice of parameters and hyperparameters or to inherent limitations of the algorithm. For the former, we recommend a cross-validation strategy on a subset of matches to optimize the hyperparameter selection for each tracking data provider. For the latter, we identify several directions of improvement: (1) incorporating the $z$-coordinate of the ball; (2) using machine learning to identify events that are not rule-based, for instance blocked or deflected shots based on speed and context; (3) extending the possession zone definition to encompass a variable radius/shape based on pitch location, proximity of opponents and player velocity; (4) developing algorithms to extract pressure, team possession information as well as offensive and defensive configurations; (5) incorporating limb tracking data in addition to center-of-mass tracking data for all players and referees, with the objective of enhancing the granularity of already detected events (types of saves, body part for passes) while facilitating the detection of events that can be ambiguous from the tracking data perspective (tackles, types of duels, offsides, throw-in vs corner kick); (6) leveraging a synchronized audio feed that provides timestamped referee whistles to more accurately establish in-play/dead ball intervals; (7) complementing the current approach with a video-based events classifier, which can enable the detection of refereeing events (cards, substitutions, VAR interventions) that are not captured by tracking data, in addition to improving the detection performance on edge-case set piece events, for instance drop-ball vs. free kick, corner kick vs. throw-in vs. free kick close to the corner marks; (8) applying the algorithm to broadcast tracking, which is less accurate than in-stadium tracking and where the pitch is not always visible, thus requiring adjustments to the algorithm's hyperparameters and dead ball patterns; and (9) testing the framework on additional datasets collected from different providers and stadiums to further assess its validity.
In terms of specific applications for the auto-generated event data, the broader context of the game encoded in the tracking data can be leveraged for a more granular definition of the events. The examples introduced in Section \ref{sec:applications} and S6 of Online Resource 1 demonstrate how the generated possession and augmented event data may be used to perform advanced football analytics at the match, team and individual player level. We have introduced the notion of possession-informed heatmap to visually represent the locations of the player whilst only in possession of the ball, analyzed how our frame-to-frame ball possession information can be used to visualize possession distribution for both teams and among players, and finally showcased how the event data can be queried in search of highly specific events towards advanced analytics or video segmentation/selection, due to the auto-generated event data being in sync with the video and tracking data.
\section{Conclusions}
We have presented a decision tree-based computational framework that combines information on the spatial location of players and how the possession of the ball changes in time, both computed from 2D player and ball tracking data, with the laws of football to automatically generate possession and event data for a football match. The collection of event data is a manual, subjective and resource-intensive task, and is thus not available to most tournaments and divisions. The proposed framework is a suitable approach towards auto-eventing, due to the high accuracy (+90\%) observed, the limited computational burden and the ever-increasing availability and quality of tracking data feeds.
\section{Acknowledgements}
This research was conducted at the MIT Sports Lab and funded by FIFA through the MIT Pro Sports Consortium. The authors would like to acknowledge the founding partners of the MIT Sports Lab, Prof. Anette Hosoi and Christina Chase, for supporting this research effort. Automatic event detection with tracking data was initially explored as a class project in the context of the MIT Sports Lab class \textit{2.98/2.980 - Sports Technology: Engineering and Innovation}, by the team of students comprised of Juanita Becerra, Spencer Hylen, Steve Kidwell, Guillermo Larrucea, Kevin Lyons, \'{I}\~{n}igo de la Maza and Federico Ram\'{i}rez, under the supervision of Prof. Hosoi, Christina Chase and Ferran Vidal-Codina. In addition, the authors would like to thank Eric Schmidt and Ramzi BenSaid from Google Cloud for the resources and help in leveraging the power of Google Cloud Platform. Finally, the authors thank Track160, DFL Deutsche Fu{\ss}ball Liga and Sportec Solutions for kindly providing part of the tracking and event data necessary to carry out this work.
\section{Conflict of interest statement}
One co-author serves as a Guest Editor for the Topical Collection for Football Research in Sports Engineering and another serves on the Editorial Board of Sports Engineering. Neither of them was involved in the blind peer review process of this paper.
\section{Data availability statement}
The data used for this study was collected by FIFA at a number of its tournaments. Due to media and data rights, the datasets are not publicly available, but can be requested from \texttt{[email protected]} together with a viable research proposal.
\bibliographystyle{unsrtnat}
\section{Introduction}\label{Sec:Intro}
Rating of players/teams is, arguably, one of the most important issues in sport/competition analytics. In this work we are concerned with the rating of players/teams in sports with one-on-one games yielding ternary results of win, loss and draw; such a situation appears in almost all team sports and in many individual sports/competitions.
Rating in sports consists in assigning a numerical value to a player/team using the results of the past games. While most sports ratings use points attributed to the game's winner, the rating algorithm developed in the late fifties by Arpad Elo in the context of chess competition \citep{Elo08_Book}, and adopted later by \gls{fide}, challenged this view.
Namely, the Elo algorithm changes the players' ratings using not only the game outcome but also the ratings of the players before the game. The Elo algorithm is arguably one of the most popular non-trivial rating algorithms and has been used to analyze different sports, although mostly informally \citep[Chap.~5]{Langeville12_book}\citep{wikipedia_elo}; it is also used for rating in eSports \citep{Herbrich06}. Moreover, in 2018, the Elo algorithm was adopted, under the name ``SUM'', by \gls{fifa} for the rating of national football teams \citep{fifa_rating}. The Elo algorithm thus deserves particular attention, especially because it is often presented without the mathematical details behind its derivation, which may be quite confusing.
In this work we adopt the probabilistic modelling point of view, where the game outcomes are related to the ratings by conditional probabilities. The advantage is that, to find the rating, we can use conventional estimation strategies, such as \gls{ml}; moreover, with a well defined model, once the ratings are found, they can be used for the purpose of prediction, which we understand as defining the distribution over the results of the game to come. This well-known mathematical formalism of rating in sport has been developed in psychometrics for rating preferences in a pairwise-comparison setup \citep{Thurston27}\citep{Bradley52}, and the problem has been deeply studied and extended in different directions, \eg \citep{Cattelan12}\citep{Caron12}.
In this work we are particularly concerned with the mathematical modelling of draws (or ties). This issue has been addressed in psychometrics via two distinct approaches: in \citep{Rao67}, via thresholding of the unobserved (latent) variables, and in \citep{Davidson70}, via an axiomatic approach. These two approaches have also been applied in sport rating, \eg \citep{Herbrich06}\citep{Joe90}; the former, however, is used more often than the latter.
We note that the draws are not modelled in the Elo algorithm \citep{Elo08_Book}. In fact, and more generally, the outcomes are not explicitly modelled at all; rather, to derive the algorithm, the probabilistic model is combined with the strong intuition of the author, and no formal optimality criterion is defined. Nevertheless, it was later observed that the Elo algorithm actually finds the approximate \gls{ml} estimates of the ratings in binary-outcome (win-loss) games \citep{Kiraly17}.
As for the draws, the Elo algorithm considers them by using the concept of a fractional score (of the game). However, since the underlying model is not specified, in our view there is a logical void: on the one hand, the Elo algorithm includes draws; on the other hand, there is no model allowing us to calculate the draw probability. The objective of this work is to fill this gap.
The paper is organized as follows. We define the mathematical model of the problem in \secref{Sec:Model}. In \secref{Sec:rating.filtering} we show how the principle of \gls{ml} combined with the \gls{sg} yield the Elo algorithm in the binary-outcome games. We treat the issue of draws in \secref{Sec:Draws}; this is where the main contributions of the paper are found. Namely, we show and discuss the implicit model underlying the Elo algorithm; we also extend the model to increase its flexibility; finally we show how to define its parameters to take into account the known frequency of the draws. In \secref{Sec:Examples} we illustrate the analysis with numerical results and the final conclusions are drawn in \secref{Sec:Conclusions}.
\section{Rating: Problem definition}\label{Sec:Model}
We consider the problem of $M$ players (or teams), indexed by $m=1,\ldots,M$, challenging each other in face-to-face games. At time $n$ we observe the result/outcome $y_n$ of the game between the players defined by the pair $\boldsymbol{i}_n=\set{i_{\tnr{H},n},i_{\tnr{A},n}}$. The index $i_{\tnr{H},n}$ refers to the ``home'' player, while $i_{\tnr{A},n}$ indicates the ``away'' player. This distinction is often important in team games, where the so-called home-field advantage may play a role; in other competitions such an effect may exist as well: in chess, for example, the player who starts the game may be considered the home player. We consider three possible game results: i)~the home player wins, denoted as $\set{i_{\tnr{H},n}\gtrdot i_{\tnr{A},n}}$, in which case $\set{y_n=\mf{H}}$; ii)~the draw (or tie) $\set{y_n=\mf{D}}$, denoted also as $\set{i_{\tnr{H},n} \doteq i_{\tnr{A},n}}$; and finally, iii)~$\set{y_n=\mf{A}}$, which means that the ``away'' player wins, which we denote also as $\set{i_{\tnr{H},n} \lessdot i_{\tnr{A},n}}$.
For compactness of notation, useful in derivations, it is convenient to encode the categorical variable $y_n$ into numerical indicators defined over the set $\set{0,1}$
\begin{align}\label{omega.lambda.tau}
h_n&=\IND{y_n=\mf{H}},\quad a_n=\IND{y_n=\mf{A}},\quad d_n=\IND{y_n=\mf{D}},
\end{align}
with $\IND{\cdot}$ being the indicator function: $\IND{A}=1$ if $A$ is true and $\IND{A}=0$, otherwise. The mutual exclusivity of the win/loss/draw events guarantees $h_n+a_n+d_n=1$.
Having observed the outcomes of the games, $y_l, l=1,\ldots,n$, we want to \emph{rate} the players, \ie assign a \emph{rating level}---a real number---$\theta_m$ to each of them. The rating level should represent the player's ability to win; for this reason it is also called \emph{strength} \citep{Glickman99} or \emph{skill} \citep{Herbrich06}\citep{Caron12}. The ability should be understood in the probabilistic sense: no player has a guarantee to win, so the outcome $y_n$ is treated as a realization of a random variable $Y_n$. Thus, the levels $\theta_m, m=1,\ldots, M$ should provide a reliable estimate of the distribution of $Y_n$ over the set $\set{\mf{H},\mf{A},\mf{D}}$. In other words, the formal rating becomes an expert system explaining the past and predicting the future results.
\subsection{Win-loss model}\label{Sec:rating.model}
It is instructive to consider first the case when the outcome of the game is binary, $y_n\in\set{\mf{H}, \mf{A}}$, \ie for the moment, we ignore the possibility of draws, $\mf{D}$, and we consider them separately in \secref{Sec:Draws}. In this case we are looking to establish the probabilistic model linking the result of the game and the rating levels of the involved players. By far the most popular approach is based on the so-called linear model \citep[Ch.~1.3]{David63_Book}
\begin{align}\label{Pr.ij.PhiW.ov}
\PR{ i\gtrdot j |\theta_i,\theta_j} = \PhiH(\theta_i-\theta_j),
\end{align}
where $\PhiH(v)$ is an increasing function which satisfies
\begin{align}\label{}
\lim_{v\rightarrow-\infty}\PhiH(v)=0,\quad \lim_{v\rightarrow\infty}\PhiH(v)=1,
\end{align}
and thus we may set $\PhiH(v)=\Phi(v)$, where $\Phi(v)$ is a conveniently chosen \gls{cdf}. By symmetry, $\PR{ i\gtrdot j }=\PR{ j\lessdot i }$, we obtain
\begin{align}\label{Pr.ij.PhiW}
\PhiH(v)=\Phi(v),\quad \PhiA(v) = \Phi(-v)= 1-\Phi(v),
\end{align}
where the last relationship comes from the law of total probability, $\PR{i\gtrdot j}+\PR{i\lessdot j} =1$ (remember, we are dealing with binary-outcome games).
Indeed, \eqref{Pr.ij.PhiW.ov} corresponds to our intuition: a growing difference between the rating levels $\theta_i-\theta_j$ should translate into an increasing probability of player $i$ winning against player $j$.
To emphasize that the entire model is defined by the \gls{cdf} $\Phi(v)$, which affects both $\PhiH(v)$ and $\PhiA(v)$ via \eqref{Pr.ij.PhiW}, we keep the separate notation $\PhiH(v)$ and $\Phi(v)$ even if they are the same in the case we consider.
A popular choice for $\Phi(v)$ is the logistic \gls{cdf} \citep[]{Bradley52}
\begin{align}\label{Phi.Logistic}
\Phi(v)=\frac{1}{1+10^{-v/\sigma}}=\frac{10^{0.5v/\sigma}}{10^{0.5v/\sigma}+10^{-0.5v/\sigma}},
\end{align}
where $\sigma>0$ is a scale parameter.
We note that the rating is arbitrary regarding
\begin{itemize}
\item the origin---because any value $\theta_0$ can be added to all the levels $\theta_m$ without affecting the difference $v=\theta_i-\theta_j$ appearing as the argument of $\Phi(\cdot)$ in \eqref{Pr.ij.PhiW},
\item the scaling---because the levels $\theta_m$ obtained with the scale $\sigma$ can be transformed into levels $\theta'_m$ with a scale $\sigma'$ via multiplication: $\theta'_m=\theta_m\sigma' /\sigma$, and then the value of $\Phi(\theta_i-\theta_j)$ used with $\sigma$ is the same as the value of $\Phi(\theta'_i-\theta'_j)$ used with $\sigma'$;\footnote{The rating implemented by \gls{fifa} uses $\sigma=600$ \citep{fifa_rating}, while \gls{fide} uses $\sigma=400$.} and
\item the base of the exponent in \eqref{Phi.Logistic}; for example, $10^{-v/\sigma}=\mr{e}^{-v/\sigma'}$ with $\sigma'=\sigma \log_{10} \mr{e}$; therefore, changing from the base-$10$ to the base of the natural logarithm requires replacing $\sigma$ with $\sigma'$.
\end{itemize}
\section{Rating via maximum likelihood estimation}\label{Sec:rating.filtering}
Using the results from \secref{Sec:rating.model}, the random variables, $Y_n$, and the rating levels are related through conditional probability
\begin{align}
\label{P.Yk.W}
\PR{ Y_n = \mf{H} | \boldsymbol{x}_n, \boldsymbol{\theta}} &= \PhiH( v_n )=\Phi( v_n),\\
\label{P.Yk.L}
\PR{ Y_n = \mf{A} | \boldsymbol{x}_n, \boldsymbol{\theta}} &= \PhiA(v_n) =\Phi( - v_n ),\\
\label{linear.combine}
v_n&=\boldsymbol{x}_n \T \boldsymbol{\theta} = \theta_{i_{\tnr{H},n}}-\theta_{i_{\tnr{A},n}},
\end{align}
where $\boldsymbol{\theta}=[\theta_1,\ldots,\theta_M]\T$ is the vector which gathers all the rating levels, $(\cdot)\T$ denotes transpose, $v_n$ is thus a result of linear combiner $\boldsymbol{x}_n$ applied to $\boldsymbol{\theta}$, and $\boldsymbol{x}_n$ is the game-scheduling vector, \ie
\begin{align}\label{}
\boldsymbol{x}_n = [ 0, \ldots, 0, \underbrace{1}_{i_{\tnr{H},n}\tr{-th pos.}}, 0, \ldots, 0, \underbrace{-1}_{i_{\tnr{A},n}\tr{-th pos.}}, 0, \ldots, 0 ]\T.
\end{align}
We prefer the notation using the scheduling vector as it liberates us from somewhat cumbersome repetition of the indices $i_{\tnr{H},n}$ and $i_{\tnr{A},n}$ as in \eqref{linear.combine}.
Our goal now is to find the levels $\boldsymbol{\theta}$ at time $n$ using the game outcomes $\set{y_l}_{l=1}^n$ and the scheduling vectors $\set{\boldsymbol{x}_l}_{l=1}^n$. This is fundamentally a parameter estimation problem (model fitting) and we solve it using the \gls{ml} principle. The \gls{ml} estimate of $\boldsymbol{\theta}$ at time $n$ is obtained via the optimization
\begin{align}\label{theta.ML}
\hat{\boldsymbol{\theta}}_n=\mathop{\mr{argmin}}_{\boldsymbol{\theta}} J_n(\boldsymbol{\theta})
\end{align}
where
\begin{align}
\label{Pr.y.theta}
J_n(\boldsymbol{\theta})&=- \log \PR{\set{Y_l}_{l=1}^n=\set{y_l}_{l=1}^n|\boldsymbol{\theta},\set{\boldsymbol{x}_l}_{l=1}^n }.
\end{align}
Further, assuming that, conditioned on the levels $\boldsymbol{\theta}$, the outcomes $Y_l$ are mutually independent, \ie $ \PR{\set{Y_l}_{l=1}^n=\set{y_l}_{l=1}^n|\boldsymbol{\theta},\set{\boldsymbol{x}_l}_{l=1}^n } =\prod_{l=1}^n\PR{Y_l=y_l|\boldsymbol{\theta},\boldsymbol{x}_l } $, we obtain
\begin{align}
J_n(\boldsymbol{\theta})&= \sum_{l=1}^n L_l(\boldsymbol{\theta})\\
\label{F.k}
L_l(\boldsymbol{\theta})&=-\log \PR{ Y_l=y_l | \boldsymbol{\theta}}\\
\label{F.k.2}
&=-h_l\log\PhiH(\boldsymbol{x}_l\T\boldsymbol{\theta})-a_l\log\PhiA(\boldsymbol{x}_l\T\boldsymbol{\theta}),
\end{align}
where we applied the model \eqref{P.Yk.W}-\eqref{P.Yk.L}.
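For illustration only, a minimal numerical sketch of the objective \eqref{Pr.y.theta} with \eqref{F.k.2} is given below; it assumes the logistic \gls{cdf} \eqref{Phi.Logistic} with $\sigma=400$ (the \gls{fide} convention mentioned above) and is not part of the derivation.
\begin{verbatim}
# Minimal numerical sketch: negative log-likelihood of the win-loss model,
# with x the list of scheduling vectors x_l and h the home-win indicators h_l;
# sigma = 400 is an illustrative choice (the FIDE convention).
import numpy as np

def Phi(v, sigma=400.0):
    return 1.0 / (1.0 + 10.0 ** (-v / sigma))

def neg_log_likelihood(theta, x, h, sigma=400.0):
    J = 0.0
    for x_l, h_l in zip(x, h):
        v_l = float(np.dot(x_l, theta))
        J -= h_l * np.log(Phi(v_l, sigma)) + (1 - h_l) * np.log(Phi(-v_l, sigma))
    return J
\end{verbatim}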
\subsection{Stochastic gradient and Elo algorithm}
The minimization in \eqref{theta.ML} can be done via steepest descent which would result in the following operations
\begin{align}\label{Step.descent}
\hat{\boldsymbol{\theta}}_n \leftarrow \hat{\boldsymbol{\theta}}_n - \mu \nabla_{\boldsymbol{\theta}}J_n(\hat{\boldsymbol{\theta}}_n)
\end{align}
iterated (hence the symbol ``$\leftarrow$'') till convergence for a given $n$; the gradient is calculated as
\begin{align}\label{gradient.J.k}
\nabla_{\boldsymbol{\theta}}J_n(\boldsymbol{\theta}) = \sum_{l=1}^n \nabla_{\boldsymbol{\theta}}L_l(\boldsymbol{\theta}),
\end{align}
and the step, $\mu$, should be adequately chosen to guarantee the convergence. Moreover, since $J_n(\boldsymbol{\theta})$ is convex,\footnote{The convexity comes from the fact that $-\log\PhiH(v)$ is convex in $v$ (easy to demonstrate by hand) and thus $\log\PhiH(\boldsymbol{x}\T\boldsymbol{\theta})$, being a concatenation of a convex and linear functions is also convex \citep[Appendix~A]{Tsukida11}.} the minimum is global.\footnote{While the minimum is global, it is not unique due to the ambiguity of the origin $\theta_0$ we mentioned at the end of \secref{Sec:rating.model}.}
From \eqref{F.k.2} we obtain
\begin{align}\label{gradient}
\nabla_{\boldsymbol{\theta}}L_l(\boldsymbol{\theta})
&= - h_l \boldsymbol{x}_l \psi( \boldsymbol{x}_l\T\boldsymbol{\theta} ) +a_l \boldsymbol{x}_l \psi( -\boldsymbol{x}_l\T\boldsymbol{\theta} )\\
\label{gradient.2}
&=-\boldsymbol{x}_l e_l(v_l)
\end{align}
where, directly from \eqref{Phi.Logistic} we have
\begin{align}\label{psi.x}
\psi(v)=\frac{\dd }{\dd v}\log\Phi(v)=\frac{\Phi'(v)}{\Phi(v)}=\frac{1}{\sigma'}\Phi(-v),
\end{align}
where $\sigma'=\sigma \log_{10} \mr{e}$, and, using $\Phi(-v)=1-\Phi(v)$ we have
\begin{align}
\label{e.k}
e_l(v_l)
&=h_l\psi\big( v_l \big) +(h_l-1) \psi\big( -v_l \big)
\\
\label{e.k.2}
&=\frac{1}{\sigma'}[h_l -\Phi(v_l)].
\end{align}
The solution obtained in \eqref{Step.descent} is based on the model \eqref{P.Yk.W}-\eqref{P.Yk.L} which requires $\boldsymbol{\theta}$ to remain constant throughout the time $l=1,\ldots, n$. Since, in practice, the levels of the players may vary in time (the abilities evolve due to training, age, coaching strategies, fatigue, etc.), it is necessary to track $\boldsymbol{\theta}$.
To this end, arguably the simplest strategy relies on the \acrfull{sg} which differs from the steepest descent in the following elements: i)~at time $n$ only one iteration of the steepest descent is executed, ii)~the gradient is calculated solely for the current observation term $L_n(\hat{\boldsymbol{\theta}}_n)$, and iii)~the available estimate $\hat{\boldsymbol{\theta}}_n$ is used as the starting point for the update
\begin{align}\label{stoch.gradient}
\hat{\boldsymbol{\theta}}_{n+1} &= \hat{\boldsymbol{\theta}}_{n} -\mu\nabla_{\boldsymbol{\theta}} L_n(\boldsymbol{\theta}) =\hat{\boldsymbol{\theta}}_{n} + \mu \boldsymbol{x}_n e_n(v_n)\\
\label{stoch.gradient.2}
&=\hat{\boldsymbol{\theta}}_{n} + \mu \boldsymbol{x}_n [h_n-\Phi(v_n)]\\
\label{stoch.gradient.3}
&=\hat{\boldsymbol{\theta}}_{n} - \mu \boldsymbol{x}_n [a_n-\Phi(-v_n)],
\end{align}
where $\mu$ is the adaptation step; with abuse of notation the fraction $\frac{1}{\sigma'}$ from \eqref{e.k.2} is absorbed by $\mu$ in \eqref{stoch.gradient.2}-\eqref{stoch.gradient.3}.
In the rating context, $\boldsymbol{x}_n$ has only two non-zero terms, and therefore only the level of the players $i_{\tnr{H},n}$ and $i_{\tnr{A},n}$ will be modified. By inspection, the update \eqref{stoch.gradient.2}-\eqref{stoch.gradient.3} may be written as a single equation for any player $i\in\set{i_{\tnr{H},n},i_{\tnr{A},n}}$
\begin{align}\label{rating.SG}
\hat{\theta}_{n+1, i} &=\hat{\theta}_{n, i} + K\big[s_i-\Phi( \Delta_i )\big]
\end{align}
where $\Delta_i=\hat{\theta}_{n,i}-\hat{\theta}_{n,j}$ and $j$ is the index of the player opposing player $i$, \ie $j\neq i, j\in\set{i_{\tnr{H},n},i_{\tnr{A},n}}$; $s_i=\IND{i\gtrdot j}$ indicates whether player $i$ won the game. Since the variables $s_i$ and $\Delta_i$ are intermediary, we purposely do not index them with $n$.
We also replaced $\mu$ with $K$ so that \eqref{rating.SG} has the form of the Elo algorithm as usually presented in the literature \citep{Elo08_Book}\citep[Ch.~5]{Langeville12_book}. Thus, the Elo algorithm implements the \gls{sg} to obtain the \gls{ml} estimate of the levels $\boldsymbol{\theta}$ under the model \eqref{Pr.ij.PhiW}. This has been noted before, \eg in \citep{Kiraly17}.
We also note that, in the description of the Elo algorithm \citep{Elo08_Book}, $s_i$ is defined as a numerical ``score'' attributed to the game outcome $\mf{H}$ or $\mf{A}$. In a sense, it is a legacy of rating methods which attribute a numerical value to the game result. On the other hand, from the modelling perspective we adopted, the attribution of numerical values to the categorical variables $\mf{H}$ and $\mf{A}$ is not required.
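For illustration, a minimal sketch of the update \eqref{rating.SG} is given below; the step $K=32$ and the scale $\sigma=400$ are assumptions made for this example only (they correspond to common chess settings, not to a recommendation of this paper).
\begin{verbatim}
# Minimal sketch of the SG/Elo update (rating.SG) after one game between home
# player i and away player j; s_home = 1 for a home win, 0 for a home loss.
# K = 32 and sigma = 400 are illustrative choices only.
def elo_update(theta, i, j, s_home, K=32.0, sigma=400.0):
    expected = 1.0 / (1.0 + 10.0 ** (-(theta[i] - theta[j]) / sigma))
    theta[i] += K * (s_home - expected)      # home player
    theta[j] -= K * (s_home - expected)      # away player
    return theta
\end{verbatim}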
\section{Draws}\label{Sec:Draws}
We want to address now the issue of draws (ties) in the game outcome. We ignored it for clarity of development, but draws are important results of the game and must affect the rating, especially in sports where they occur frequently, such as international football, chess and many other sports and competitions \citep[Ch.~11]{Langeville12_book}. Some approaches in the literature go around this problem by ignoring the draws, others count them as partial wins/losses with a fractional score $s_i=\frac{1}{2}$ \citep[Ch.~11]{Langeville12_book}\citep{Glickman15}. Such heuristics, while potentially useful, do not show explicitly how to predict the results of the games from the rating levels.
Thus, the preferred approach is to model the draws explicitly; we must, therefore, augment our model to include the conditional probability of draws
\begin{align}\label{Pr.ij.PhiT}
\PR{ i\doteq j |\theta_i,\theta_j}= \PhiD(\theta_i-\theta_j),
\end{align}
where, by axiomatic requirement, $\PhiD(v)$ should be decreasing with the absolute value of its argument, and be maximized for $v=0$. The justification is that a large absolute difference in levels increases the probability of a win or a loss, while proximity of the rating levels, $\theta_i\approx \theta_j$, should increase the probability of a draw.
By the law of total probability we require now
\begin{align}\label{sum.WTL}
\PhiH(v)+\PhiA(v) +\PhiD(v)=1,
\end{align}
which implies that, when draws are considered, the functions $\PhiH(v)$ and $\PhiA(v)$ must also change with respect to those used when analyzing the binary (win/loss) game results.
\subsection{Explaining draws in the Elo algorithm}\label{Sec:draws.axiomatic}
The Elo algorithm also considers draws by setting $s_i=\frac{1}{2}$ in \eqref{rating.SG} \citep[Ch.~1.6]{Elo08_Book} \citep[Ch.~5]{Langeville12_book}. However, the function $\PhiD(v)$ is undefined, which is quite perplexing: the draws are accounted for, but the model which would allow us to calculate their probability from the parameters $\boldsymbol{\theta}$ is lacking. Moreover, the description of the algorithm \eqref{rating.SG} still indicates that $\Phi(\Delta_i)$ is the ``expected score'', which cannot be calculated without an explicit definition of the draw probability. Despite this logical gap, the algorithm is widely used and is considered reliable.
Our objective here is thus to ``reverse-engineer'' the Elo algorithm and explain which probabilistic model is compatible with the operation of the algorithm. This will bridge the gap, providing a formal basis to interpret the results.
\begin{proposition}[The Elo algorithm with draws] The Elo algorithm \eqref{rating.SG}, which assigns the score value $s_i=1$ to a win, $s_i=0$ to a loss and $s_i=\frac{1}{2}$ to a draw, implements the \gls{sg} to estimate the rating levels $\boldsymbol{\theta}$ using the \gls{ml} principle for the model defined by the following conditional probabilities
\begin{align}
\label{PhiW.WLT}
\PhiH(v)&=\Phi^2(v),\quad
\PhiA(v)=\Phi^2(-v)\\
\label{PhiT.WLT}
\PhiD(v)&=2\Phi(v)\Phi(-v).
\end{align}
\end{proposition}
\begin{proof}
We start by squaring the equation of the total probability law for the binary-outcome game, $\Phi(v)+\Phi(-v)=1$, to obtain
\begin{align}\label{total.law.WLT}
\Phi^2(v)+\Phi^2(-v)+2\Phi(v)\Phi(-v)=1
\end{align}
and thus, using the assignment \eqref{PhiW.WLT}-\eqref{PhiT.WLT}, we satisfy \eqref{sum.WTL}. This may appear arbitrary, but we have to recall that the whole model for the binary outcome is built on assumptions which reflect our idea about the loss/win probabilities; and indeed, the draw probability function $\PhiD(v)$ has the behaviour we expected: it has a maximum for $v=0$ and decreases with growing $|v|$.
Now, each function in the model, $\PhiH(v)$, $\PhiA(v)$, and $\PhiD(v)$, is a non-trivial transformation of $\Phi(v)$.
Using \eqref{PhiW.WLT}-\eqref{PhiT.WLT}, we rewrite \eqref{F.k} as
\begin{align}
\label{F.k.WTL}
L_l(\boldsymbol{\theta})&=-\log \PR{ Y_l=y_l | \boldsymbol{\theta}}\\
\label{F.k.WTL.2}
&=-h_l\log\PhiH(v_l)-a_l\log\PhiA(v_l)-d_l\log\PhiD(v_l)\\
&=-2h_l\log\Phi(v_l)-2a_l\log\Phi(-v_l)-d_l\Big[\log\Phi(v_l) +\log\Phi(-v_l)\Big]-d_l\log 2
\end{align}
where the constant term $-d_l\log 2$ does not depend on $\boldsymbol{\theta}$ and does not affect the gradient, which is thus calculated as in \eqref{gradient}
\begin{align}\label{gradient.WLT}
\nabla_{\boldsymbol{\theta}}L_l(\boldsymbol{\theta})
&= - 2 \tilde{h}_l\boldsymbol{x}_l \psi( v_l ) +2\tilde{a}_l\boldsymbol{x}_l \psi( -v_l )\\
\label{gradient.WLT.2}
&=-2\boldsymbol{x}_l \tilde{e}_l(\boldsymbol{\theta})
\end{align}
where $\tilde{h}_l=h_l+d_l/2$, $\tilde{a}_l=a_l+d_l/2=1-\tilde{h}_l$, and
\begin{align}
\label{e.k.WLT}
\tilde{e}_l(\boldsymbol{\theta}) &=\tilde{h}_l -\Phi(v_l).
\end{align}
We thus recover the same equations as in the binary-result game, splitting the draw indicator, $d_l$, equally between the indicators of the home and away wins; we can reuse them directly in \eqref{stoch.gradient.2}-\eqref{stoch.gradient.3}
\begin{align}\label{}
\label{stoch.gradient.2.HAD}
\hat{\boldsymbol{\theta}}_{n+1}&=\hat{\boldsymbol{\theta}}_{n} + \mu \boldsymbol{x}_n [\tilde{h}_n-\Phi(v_n)]\\
\label{stoch.gradient.3.HAD}
&=\hat{\boldsymbol{\theta}}_{n} - \mu \boldsymbol{x}_n [\tilde{a}_n-\Phi(-v_n)],
\end{align}
which yields the same update as the Elo algorithm \eqref{rating.SG}
\begin{align}\label{rating.SG.WLT}
\hat{\theta}_{n+1, i} &= \hat{\theta}_{n, i} + K \big[s_i-\Phi( \Delta_i )\big],
\end{align}
with the new definition of the score $s_i=\tilde{h}_n$ (for the home player) and $s_i=\tilde{a}_n$ (for the away player), and where for compatibility of equations, the update step $K$ absorbed the multiplication by $2$ (the only difference between \eqref{gradient.WLT.2} and \eqref{gradient.2}).
\end{proof}
The following observations are in order:
\begin{enumerate}
\item We unveiled the implicit model behind the Elo algorithm; thus, our findings do not affect the operation of the algorithm but rather clarify how to interpret its results. Namely, given the estimate of the levels $\hat{\boldsymbol{\theta}}_n$, the probability of the game outcomes should be estimated as
\begin{align}\label{Pr.win.hat.theta}
\PR{i\gtrdot j|\hat{\theta}_i,\hat{\theta}_j} &=\Phi^2 ( \hat{\theta}_i-\hat{\theta}_j )\\
\label{Pr.loss.hat.theta}
\PR{i\lessdot j|\hat{\theta}_i,\hat{\theta}_j}&=\Phi^2 ( \hat{\theta}_j-\hat{\theta}_i )\\
\label{Pr.draw.hat.theta}
\PR{i\doteq j|\hat{\theta}_i,\hat{\theta}_j}& =2\Phi ( \hat{\theta}_i-\hat{\theta}_j )\Phi( \hat{\theta}_j-\hat{\theta}_i ).
\end{align}
\item We emphasize that $s_i$ is the indicator of the result but using \eqref{PhiW.WLT}-\eqref{PhiT.WLT} we can again calculate its expected value for $i=i_{\tnr{H},n}$
\begin{align}\label{}
\Ex_{Y_l|\hat{\theta}_{l,i},\hat{\theta}_{l,j}}[ s_i(Y_l) ]
&=\PR{i\gtrdot j|\hat{\theta}_i,\hat{\theta}_j} +\frac{1}{2}\PR{i\doteq j|\hat{\theta}_i,\hat{\theta}_j}\\
&=\big[\Phi ( \Delta_i )\big]^2+\Phi ( \Delta_i )\Phi( -\Delta_i )\\
&=\Phi ( \Delta_i )\big[\Phi ( \Delta_i ) +\Phi ( -\Delta_i )\big]=\Phi ( \Delta_i );
\end{align}
the same can be straightforwardly done for $i=i_{\tnr{A},n}$.
Thus, indeed, the function $\Phi(\Delta_i)$ in the Elo update \eqref{rating.SG} has the meaning of the expected score. It has not been spelled out mathematically up to now---most likely---because the draws have been only implicitly considered. Nevertheless, with formidable intuition, the description of the Elo algorithm defines the terms correctly without making reference to the underlying probabilistic model.
We note, again, that the notion of expected score is not necessary in the development of the \gls{sg} algorithm and the fact that the score takes the fractional value $s_i=\frac{1}{2}$ is a result of the particular form of the conditional probability \eqref{PhiT.WLT} and our decision to make $K$ absorb the multiplication by $2$, see \eqref{gradient.WLT.2}.
\end{enumerate}
While the clarification we made regarding the meaning of the expected score is useful, the first observation above is the most important for the explicit interpretation of the results of the algorithm. Recall that, in the win-loss game, the function $\Phi ( \Delta_i )$ has the meaning of the probability of winning the game, see \eqref{Pr.ij.PhiW}. However, in the win-draw-loss model, such interpretation is incorrect because the probability of winning the game is given by \eqref{Pr.win.hat.theta} which we just derived. As we will see in the numerical examples, using the latter, however, provides poor results.
This surprising confusion persisted through time because the model we have shown in \eqref{PhiW.WLT}-\eqref{PhiT.WLT} is merely implicit in the Elo algorithm and the explicit derivation of the algorithm \citep[Chap.~8]{Elo08_Book} did not consider the draws in the formal probabilistic framework. Other works, \eg \citep{Glickman99} \citep{Lasek13}, observed this conceptual difficulty before. In particular, \citep[Sec.~2]{Glickman99} used $\PhiT(v)= \sqrt{\Phi ( v )\Phi ( -v )}$ but kept the legacy of the win-loss model, \ie $\PhiH(v)= \Phi ( v )$ and $\PhiA(v)=\Phi(-v)$, which leads to approximate solutions because \eqref{sum.WTL} is violated.
The lesson learned is that, despite the apparent simplicity of the Elo algorithm, we should resist the temptation to tweak its parameters. While using the fractional score value $s_i=\frac{1}{2}$ for the draw is now explained, we cannot guarantee that modifying $s_i$ in arbitrary manner will correspond to a particular probabilistic model. Therefore, rather than tweaking the \gls{sg}/Elo algorithm \eqref{rating.SG}, the modification should start with the probabilistic model itself.
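To make the distinction concrete, the following minimal sketch evaluates \eqref{Pr.win.hat.theta}-\eqref{Pr.draw.hat.theta} alongside the expected score $\Phi(\Delta)$; the scale $\sigma=400$ is again only an assumption made for the example.
\begin{verbatim}
# Minimal sketch: outcome probabilities implied by the Elo algorithm with draws,
# (Pr.win.hat.theta)-(Pr.draw.hat.theta), versus the expected score Phi(Delta).
def elo_outcome_probabilities(theta_i, theta_j, sigma=400.0):
    p = 1.0 / (1.0 + 10.0 ** (-(theta_i - theta_j) / sigma))   # Phi(Delta)
    return {"home win": p * p,
            "draw": 2.0 * p * (1.0 - p),
            "away win": (1.0 - p) ** 2,
            "expected score": p}
\end{verbatim}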
\subsection{Generalization of the Elo algorithm}\label{Sec:Extended.Elo}
Having unveiled the implicit modeling of draws underlying the Elo algorithm, we immediately face a new problem. Namely, considering the draws, we have three events (and thus two independent probabilities to estimate), but the Elo algorithm has no additional degree of freedom to take this reality into account. For example, using \eqref{Pr.win.hat.theta}-\eqref{Pr.draw.hat.theta}, the results of a game between players with equal rating levels $\hat{\theta}_i=\hat{\theta}_j$ will always be predicted as $\PR{i\gtrdot j|\hat{\theta}_i,\hat{\theta}_j} =0.25$ and $\PR{i\doteq j|\hat{\theta}_i,\hat{\theta}_j} =0.5$. The Elo algorithm does that implicitly, but there is no real reason to stick to such a rigid solution, which may produce an inadequate fit to the observed data, and a more general approach is necessary.
One of the workarounds, proposed by \citep{Rao67} and used later, \eg in \citep{Fahrmeir94}, \citep{Herbrich06}, \citep{Kiraly17}, modifies the model using a threshold value $v_0\geq 0$
\begin{align}\label{Phi.WLT.thresholds}
\PhiH(v)&= \Phi(v- v_0),\quad \PhiA(v)= \Phi( -v - v_0), \quad \PhiD(v) = \Phi(v+v_0) -\Phi(v- v_0).
\end{align}
While \eqref{Phi.WLT.thresholds} is definitely useful and formally solves a problem more general than the binary-outcome game, we do not treat it as a generalization of the Elo algorithm itself, because there is no value of the parameter $v_0$ which transforms \eqref{Phi.WLT.thresholds} into \eqref{PhiW.WLT}-\eqref{PhiT.WLT} (which, as we demonstrated, is the model behind the Elo algorithm).
Here we propose to use the model of \citep{Davidson70} which can be defined as
\begin{align}
\label{PhiW.Davidson}
\PhiH(v)&= \Phi_{\kappa}(v) = \frac{10^{0.5v/\sigma}}{10^{0.5v/\sigma}+10^{-0.5v/\sigma} +\kappa}\\
\label{PhiL.Davidson}
\PhiA(v)&=\Phi_{\kappa}(-v) = \frac{10^{-0.5v/\sigma}}{10^{0.5v/\sigma}+10^{-0.5v/\sigma} +\kappa}\\
\label{PhiT.Davidson}
\PhiD(v)&=\kappa\sqrt{\PhiH(v)\PhiA(v)}=\frac{\kappa}{10^{0.5v/\sigma}+10^{-0.5v/\sigma} +\kappa},
\end{align}
where $\kappa\geq 0$ is a freely set draw parameter.
We hasten to say that the model \eqref{PhiW.Davidson}-\eqref{PhiT.Davidson} is not necessarily better in the sense of fitting to the data than \eqref{Phi.WLT.thresholds}. Our motivation to adopt \eqref{PhiW.Davidson}-\eqref{PhiT.Davidson} is the fact that these equations generalize previous models. Namely, for $\kappa=0$ we obtain the win-loss model behind the Elo algorithm shown in \eqref{rating.SG}, while using $\kappa=2$ yields
\begin{align}\label{}
\label{PhiW.Davidson.n=2}
\PhiH(v)&= \frac{10^{0.5v/\sigma}}{\Big(10^{0.25v/\sigma}+10^{-0.25v/\sigma}\Big)^2}=\Phi^2(v/2)
\end{align}
which, up to the scale factor $\sigma$, corresponds to the implicit win-draw-loss model behind the Elo algorithm we have shown in \eqref{PhiW.WLT}-\eqref{PhiT.WLT}.
In other words, the implicit model for the Elo algorithm is based on the explicit modeling of draws proposed by \citep{Davidson70} if we set a particular value of the draw parameter ($\kappa=2$).
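For concreteness, a minimal Python sketch of the Davidson-type model \eqref{PhiW.Davidson}-\eqref{PhiT.Davidson} is given below; the function name and the example values are ours and purely illustrative.
\begin{verbatim}
# Sketch of the Davidson-type win/draw/loss model defined above;
# v is the rating difference, kappa the draw parameter, sigma the scale.
def davidson_probabilities(v, kappa, sigma=600.0):
    """Return (P_home_win, P_away_win, P_draw)."""
    h = 10.0 ** (0.5 * v / sigma)
    a = 10.0 ** (-0.5 * v / sigma)
    z = h + a + kappa                 # common denominator
    return h / z, a / z, kappa / z

# kappa = 0 recovers the win-loss model; kappa = 2 matches the implicit
# Elo win-draw-loss model (up to the scale factor).
p_h, p_a, p_d = davidson_probabilities(v=100.0, kappa=0.7)
assert abs(p_h + p_a + p_d - 1.0) < 1e-12
\end{verbatim}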
\subsubsection{Adaptation}\label{Sec:Adaptation.Extended.Elo}
We quickly note that the function $-\log \Phi_{\kappa}(v)$ is convex, so the gradient-based adaptation will converge under an adequate choice of the step $\mu$.
To derive the adaptation algorithm we recalculate \eqref{F.k.WTL}
\begin{align}\label{}
L_l(\boldsymbol{\theta})&=-\log \PR{ Y_l=y_l | \boldsymbol{\theta}}\\
&=-h_l\log\PhiH(v_l)-a_l\log\PhiA(v_l)-d_l\log\PhiD(v_l)\\
&=-\tilde{h}_l\log\PhiH(v_l)-\tilde{a}_l\log\PhiH(-v_l)
\end{align}
and the gradient is given by
\begin{align}\label{}
\nabla_{\theta}L_l(\boldsymbol{\theta})
&=-e_l(v_l)\boldsymbol{x}_l
\end{align}
where
\begin{align}\label{}
e_l(v_l)&=
\tilde{h}_l\psi_\kappa(v_l)+(\tilde{h}_l-1)\psi_\kappa(-v_l)\\
\psi_{\kappa}(v)&=\frac{\Phi'_{\kappa}(v)}{\Phi_{\kappa}(v)}
=\frac{1}{\sigma'}\frac{10^{-0.5v/\sigma}+\frac{1}{2}\kappa}{10^{0.5v/\sigma}+10^{-0.5v/\sigma} +\kappa}
=\frac{1}{\sigma'} F_{\kappa}(-v),
\end{align}
where, as before, $\sigma'=\sigma\log_{10}\mr{e}$, and we define
\begin{align}\label{F.kappa.definition}
F_\kappa( v ) =\frac{10^{0.5v/\sigma}+\frac{1}{2} \kappa}{10^{0.5v/\sigma}+10^{-0.5v/\sigma} +\kappa}=1-F_\kappa( -v ),
\end{align}
and thus
\begin{align}\label{}
e_l(v_l)
&=
\frac{1}{\sigma'}\Big(\tilde{h}_l F_\kappa(-v_l) + (\tilde{h}_l-1)F _\kappa(v_l)\Big)\\
\label{e.l.GElo}
&=
\frac{1}{\sigma'}\Big(\tilde{h}_l-F_\kappa(v_l)\Big).
\end{align}
Using \eqref{e.l.GElo} in \eqref{stoch.gradient} yields the same equations as in \eqref{rating.SG.WLT} except that $\Phi(v)$ must be replaced with $F_\kappa(v)$ and the division by $\sigma'$ should be absorbed by the adaptation step. This yields a new $\kappa$-Elo rating algorithm
\begin{align}\label{rating.SG.WLT.Extended}
\hat{\theta}_{n+1, i} &= \hat{\theta}_{n, i} + K\big[s_i-F_\kappa( \Delta_i )\big],
\end{align}
where, as before, i)~$\Delta_i= \hat{\theta}_i - \hat{\theta}_j$ ($j$ being the index of the player opposing player $i$), ii)~as in the Elo algorithm, $K$ is the maximum increase/decrease step, and iii)~$s_i\in\set{0,\frac{1}{2},1}$ indicates the outcome of the game, \ie the score.
The new $\kappa$-Elo algorithm is as simple as the Elo algorithm, yet it provides the flexibility to model the relationship between the draws and the wins via the draw parameter $\kappa\geq0$. We recall that, in fact, the Elo algorithm is a particular version of $\kappa$-Elo for $\kappa=2$.
We provide numerical examples in \secref{Sec:Examples} to illustrate its properties.
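For illustration, a minimal Python sketch of a single $\kappa$-Elo update \eqref{rating.SG.WLT.Extended} is shown below; the symmetric update of the opponent's level and the example values of $K$ and $\kappa$ are our illustrative assumptions, not part of the specification above.
\begin{verbatim}
# Sketch of one kappa-Elo update; F_kappa is the function defined above,
# with the scale sigma written explicitly.
def F_kappa(v, kappa, sigma=600.0):
    h = 10.0 ** (0.5 * v / sigma)
    a = 10.0 ** (-0.5 * v / sigma)
    return (h + 0.5 * kappa) / (h + a + kappa)

def kappa_elo_update(theta_i, theta_j, s_i, K, kappa, sigma=600.0):
    """s_i in {0, 0.5, 1} is the score of player i; returns updated levels."""
    step = K * (s_i - F_kappa(theta_i - theta_j, kappa, sigma))
    return theta_i + step, theta_j - step   # opponent updated symmetrically

# Example: a draw between a slightly stronger home player and the opponent.
theta_home, theta_away = kappa_elo_update(100.0, 0.0, s_i=0.5, K=75.0, kappa=0.7)
\end{verbatim}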
\subsubsection{$\kappa$ in $\kappa$-Elo algorithm: insights and pitfalls}\label{Sec:setting.kappa}
Can we say something about the draw parameter, $\kappa$, without implementing and running the $\kappa$-Elo algorithm defined by \eqref{rating.SG.WLT.Extended}? The answer is yes, if we suppose that the fit we obtain is (almost) perfect, \ie that the empirical probabilities averaged over a large time window
\begin{align}\label{}
\ov{p}_\mf{H}&=\frac{1}{N}\sum_{l=1}^N\IND{y_l=\mf{H}}, \quad \ov{p}_\mf{A}=\frac{1}{N}\sum_{l=1}^N\IND{y_l=\mf{A}},\quad \ov{p}_\mf{D}=\frac{1}{N}\sum_{l=1}^N\IND{y_l=\mf{D}},
\end{align}
can be deduced from the functions \eqref{PhiW.Davidson}-\eqref{PhiT.Davidson} using the estimated rating levels $\hat{\boldsymbol{\theta}}$.\footnote{Such statistics may be obtained from previous seasons. While they do not change drastically through seasons and may be treated as a prior, in the case of on-line rating, they may also be estimated from the recent past. However, we do not follow this idea further in this work.} If this is the case, they should satisfy the relationship prescribed by the model \eqref{PhiT.Davidson}, \ie $\ov{p}_\mf{D} \approx \kappa\sqrt{\ov{p}_\mf{H} \ov{p}_\mf{A}}$.
Denoting the difference between the frequencies of home and away wins by $\ov{\delta}=\ov{p}_\mf{H}- \ov{p}_\mf{A}$, from the law of total (empirical) probability we obtain $\ov{p}_\mf{H}=\frac{1}{2}(1-\ov{p}_\mf{D}+\ov{\delta})$ and $\ov{p}_\mf{A}=\frac{1}{2}(1-\ov{p}_\mf{D}-\ov{\delta})$, from which
\begin{align}\label{}
\ov{p}_\mf{D} \approx \frac{\kappa}{2}\sqrt{(1-\ov{p}_\mf{D})^2-\ov{\delta}^2}
\end{align}
and thus, for relatively small values of the home/away imbalance, \eg $\ov{\delta}< 0.2$, we can ignore the term $\ov{\delta}^2$, which allows us to state simply the implicit assumption about $\ov{p}_\mf{D}$ for arbitrary $\kappa$
\begin{align}\label{kappa.2.pt}
\ov{p}_\mf{D} &\approx \frac{\kappa}{2+\kappa}.
\end{align}
Thus, using $\kappa=2$ (as done implicitly in the current ratings of \gls{fide} and \gls{fifa}) suggests that $\ov{p}_\mf{D}\approx 0.5$. Since this is not the case in any of the competitions where these ratings are used, we can expect that, when implementing the new rating algorithm with a more appropriate value of $\kappa$, \gls{fide} and \gls{fifa} will improve the fit to the results in the sense of a better estimation of the probabilities of win, loss, and draw.
We can also estimate the suitable value of $\kappa$ as
\begin{align}\label{pt.2.kappa}
\ov{\kappa} &\approx \frac{2\ov{p}_\mf{D}}{1-\ov{p}_\mf{D}}.
\end{align}
For example, using $\ov{p}_\mf{D} \approx 0.25$ (which was the average frequency of draws in English Premier League football games over ten seasons, see \secref{Sec:Examples}) we would find $\ov{\kappa}\approx 0.7$.
Is this value acceptable?
Before answering this question, we have to point to a particular problem that can arise in the modelling of the draws. Namely, the current formulations known in the literature (the threshold-based \eqref{Phi.WLT.thresholds} or the one we used, \eqref{PhiW.Davidson}-\eqref{PhiT.Davidson}) do not explicitly constrain the relationship between the predicted \emph{values} of the probabilities. Of course, we always keep the relationship $\PhiA(v)=\PhiH(-v)$. Thus, considering the case $v_l=\hat{\theta}_i-\hat{\theta}_j=\epsilon$ (where $\epsilon>0$ is a small rating difference), we have $\PhiH(\epsilon)\ge\PhiA(\epsilon)$, which agrees with our intuition: it is more probable that a stronger home player wins than that he loses.
On the other hand, it is not clear what should be said about the probability of the draw in such a case. Should we expect the probability of draw to be larger than the probability of a home/away win? For example, is it acceptable to obtain the values $\PhiH(\epsilon)=0.42$, $\PhiA(\epsilon)=0.38$ and $\PhiD(\epsilon)=0.20$? Nothing prevents such results in the model we use (and, to our knowledge, in other models used before), and the interpretation is counterintuitive: the stronger home player is more likely to lose than to draw.
Therefore, we might want to remove such results from the solution space: if, for equal-rating players, we force the probability of the draw to be larger than the probability of a home/away win, we have to use $\kappa$ which satisfies
\begin{align}\label{D.gt.W}
\PhiD(0)&>\PhiW(0)\\
\label{k.gt.1}
\kappa&\geq1,
\end{align}
where the last inequality follows from \eqref{PhiW.Davidson}-\eqref{PhiT.Davidson}. This is an important restriction and forces us to model draws as occurring with (a large) frequency $\ov{p}_\mf{D}\ge 0.33$, see \eqref{kappa.2.pt}. While it seems unsound to use a mismatched model, we do not know its impact on the prediction capability; yet we have to remember that the current version of the Elo algorithm uses $\kappa=2$. We have no clear answer to this question and will seek more insight in the numerical examples.
\section{Numerical Examples}\label{Sec:Examples}
We illustrate the operation of the algorithms using the results of the English Premier League football games available at \citep{football-data}. In this context, there are $M=20$ teams playing against each other in one home and one away game. We consider one season at a time; thus $n=1,\ldots, N$ indexes the games in chronological order, and $N=M(M-1)=380$.
Football (and other) games are known to produce the so-called home-field advantage, where the home wins $\set{y_n=\mf{H}}$ are more frequent than the away wins $\set{y_n=\mf{A}}$. In the rating context, this is modelled by artificially increasing the level of the home player, which corresponds, de facto, to left-shifting of the conditional probability functions
\begin{align}\label{}
\PhiH^\tnr{hfa}(v)=\PhiH(v+\eta\sigma), \quad \PhiA^\tnr{hfa}(v)=\PhiA(v+\eta\sigma), \quad \PhiD^\tnr{hfa}(v)=\PhiD(v+\eta\sigma),
\end{align}
where the home-field advantage parameter $\eta\ge 0$ should be adequately set; its value is independent of the scale thanks to the multiplication by $\sigma$.\footnote{We note that this version of the equation is slightly different from \citep[Eq.~2.4]{Davidson77}; with our formulation, the relationship \eqref{pt.2.kappa} is not affected by the home-field advantage parameter $\eta$.}
As in the FIFA rating algorithm \citep{fifa_rating}, we set $\sigma=600$; the levels are initialized at $\theta_{0,m}=0$; as we said before, these values are arbitrary. In what follows we always use the normalization $K=\tilde{K}\sigma$, which removes the dependence on the scale: for a given $\tilde{K}$, the prediction results will be exactly the same even if we change the value of $\sigma$.
An example of the estimated ratings $\hat{\theta}_{m,n}$ for a group of teams is shown in \figref{Fig:rating} to illustrate the fact that quite a large portion of time at the beginning of the season is dedicated to the convergence of the algorithm; this is the ``learning'' period. Of course, using a larger step $\tilde{K}$ we can accelerate the learning at the cost of increased variability of the rating. These well-known issues are related to the operation of \gls{sg}, but solving them is outside the scope of this work. We mention them mostly because, to evaluate the performance of the algorithms, we decided to use the second half of the season, where we assume the algorithms have converged and the rating levels follow the performance of the teams. This is somewhat arbitrary, of course, but our goal here is to show the influence of the draw parameter and not to solve the entire problem of convergence/tracking in \gls{sg}/Elo algorithms.
\begin{figure}
\centering
\psfrag{xlabel}{\footnotesize $n$}
\psfrag{2015}{}
\psfrag{ylabel}{\footnotesize $\hat{\theta}_{m,n}$}
\includegraphics[width=0.8\linewidth]{trajectories.eps}
\caption{Evolution of the rating levels $\hat{\theta}_{m,n}$ for selected English Premier League teams in the season 2015; $N=380$, $\sigma=600$, $\tilde{K}=0.125$, $\eta=0.3$, $\kappa=0.7$. We assume that the first half of the season absorbs the learning phase, and that the tracking of the teams' levels in the second half is free of the initialization effect.}
\label{Fig:rating}
\end{figure}
For concision, the estimated probabilities of the game results $\set{\mf{H},\mf{A}, \mf{D}}$, calculated before the game at time $l$ using the rating levels $\hat{\boldsymbol{\theta}}_{l-1}$ obtained at time $l-1$, are denoted as
\begin{align}\label{}
\hat{p}_{l,\mf{H}}=\Phi_{\mf{H}}(\boldsymbol{x}\T_l\hat{\boldsymbol{\theta}}_{l-1}), \quad \hat{p}_{l,\mf{A}}=\Phi_{\mf{A}}(\boldsymbol{x}\T_l\hat{\boldsymbol{\theta}}_{l-1}), \quad \hat{p}_{l,\mf{D}}=\Phi_{\mf{D}}(\boldsymbol{x}\T_l\hat{\boldsymbol{\theta}}_{l-1}).
\end{align}
We show the (negative) logarithmic score \citep{Gelman2014} averaged over the second half of the season
\begin{align}\label{log.score}
\ov{\tnr{LS}}= \frac{2}{N}\sum_{l=N/2+1}^{N} \tnr{LS}_{l},
\end{align}
where
\begin{align}\label{}
\tnr{LS}_l &= -( h_l \log \hat{p}_{l,\mf{H}} +a_l \log \hat{p}_{l,\mf{A}} +d_l \log \hat{p}_{l,\mf{D}}).
\end{align}
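For clarity, the average logarithmic score may be computed as in the following sketch; the container types are our choice.
\begin{verbatim}
import math

# Average logarithmic score over the second half of the season;
# predictions[l] maps 'H', 'A', 'D' to the estimated probabilities,
# results[l] is the observed outcome of game l.
def average_log_score(predictions, results):
    half = len(results) // 2
    scores = [-math.log(p[y]) for p, y in zip(predictions[half:], results[half:])]
    return sum(scores) / len(scores)
\end{verbatim}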
We still have to define the prediction of the draw in the conventional Elo algorithm: we cannot set $\hat{p}_{l,\mf{D}}=\PhiD(v)\equiv 0$, of course, because it would result in an infinite logarithmic score. We thus follow the heuristics of \citep{Lasek13}, which may be summarized as follows: the conventional Elo algorithm is used to find the rating levels (\ie $\kappa=2$ is used in $\kappa$-Elo), but the prediction is based on $\PhiH(v)$, $\PhiA(v)$, and $\PhiD(v)$ with a different value of the draw parameter $\kappa=\check{\kappa}$. This may be seen as a model mismatch between estimation and prediction. We follow \citep{Lasek13} and apply $\check{\kappa}=1$; this corresponds to $\ov{p}_\mf{D}\approx 0.33$ and is also the minimum value of $\kappa$ which guarantees \eqref{D.gt.W}.
We show in \figref{Fig:LogScore} the logarithmic score $\ov{\tnr{LS}}$ for different values of the draw parameter $\kappa$ and of the home-field advantage parameter $\eta$ (with a fixed normalized step $\tilde{K}$). We compare our predictions with those based on the probabilities inferred from the odds of the betting site Bet365 available, together with the game results, at \citep{football-data}.\footnote{This is done as in \citep{Kiraly17}: the published decimal odds for the three events, $o_\mf{H}$, $o_\mf{A}$, and $o_\mf{D}$, are used to infer the probabilities, $\tilde{p}_\mf{H}\propto 1/o_\mf{H}$, $\tilde{p}_\mf{A}\propto 1/o_\mf{A}$, and $\tilde{p}_\mf{D}\propto 1/o_\mf{D}$; these are next normalized to make them sum to one (required as the betting odds are not ``fair'' and include the bookie's overhead, the so-called vigorish).} These are constant reference lines in \figref{Fig:LogScore} as they, of course, do not vary with the parameters we adjust.
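The probabilities inferred from the betting odds (see the footnote above) can be obtained, for example, as follows; the odds values shown are illustrative.
\begin{verbatim}
# Decimal odds -> outcome probabilities, normalized to remove the vigorish.
def odds_to_probabilities(o_home, o_away, o_draw):
    raw = [1.0 / o_home, 1.0 / o_away, 1.0 / o_draw]
    s = sum(raw)                  # > 1 because of the bookie's overhead
    return [r / s for r in raw]

p_home, p_away, p_draw = odds_to_probabilities(2.10, 3.60, 3.40)
\end{verbatim}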
We observe that introducing the draw parameter $\kappa$ improves the logarithmic score. On the other hand, using the $\kappa$-Elo algorithm with $\kappa=2$ yields particularly poor results if we make the model explicit (and thus use $\kappa=2$ for the prediction); a much better solution is to use a mismatched model and apply $\check{\kappa}=1$; the results obtained are, in general, very close to those obtained using the $\kappa$-Elo algorithm, especially when it is used with $\kappa=1$. In the season 2013-2014, where the frequency of draws was low, using the corresponding value $\kappa=\ov{\kappa}$ provided a notable improvement compared to the larger $\kappa\in\set{1,2}$.
\begin{figure}
\centering
\psfrag{score}{\footnotesize$\ov{\tnr{LS}}$}
\psfrag{0.40}{\tiny $\kappa=0.4$}
\psfrag{0.70}{\tiny $\kappa=0.7$}
\psfrag{1.00}{\tiny $\kappa=1$}
\psfrag{2.00}{\tiny $\kappa=2$}
\psfrag{Eloxxxxxx}{\tiny Elo+$\check{\kappa}$}
\psfrag{Bet365}{\tiny Bet365}
\psfrag{2017}[cb]{\footnotesize $2017-2018$}
\psfrag{2013}[cb]{\footnotesize $2013-2014$}
\begin{tabular}{cc}
\psfrag{xlabel}[tt]{\footnotesize $\eta$}
\includegraphics[width=0.45\linewidth]{log_score_home_2013.eps} &
\psfrag{xlabel}[tt]{\footnotesize $\eta$}
\includegraphics[width=0.45\linewidth]{log_score_home_2017.eps} \\
\small a) &
\small b)\\
\end{tabular}
\caption{Logarithmic score, \eqref{log.score}, in second half of two seasons of English Premier League with $\sigma=600$, $\tilde{K}=0.125$, different values of $\kappa$ indicated in the legend, and varying the home-advantage parameter, $\eta$; a)~season 2013-2014, where $\ov{p}_\mf{D}=0.17$ and thus using \eqref{pt.2.kappa}, we obtain $\ov{\kappa}\approx 0.40$; b)~season 2017-2018, where $\ov{p}_\mf{D}=0.26$, and thus $\ov{\kappa}\approx 0.7$. The results ``Elo+$\check{\kappa}$'' are obtained from the conventional Elo algorithm but $\check{\kappa}=1$ is used in the prediction. The results ``Bet365'' are based on the probabilities inferred from the betting odds offered by the site Bet365.}
\label{Fig:LogScore}
\end{figure}
Finally, we show the comparison across various seasons in \tabref{Tab:Score.Seasons} where, besides the score $\ov{\tnr{LS}}$, we also show the pseudo-credibility interval $(\ov{\tnr{LS}}_\tnr{low},\ov{\tnr{LS}}_\tnr{high})$; this is the minimum-length interval in which 95\% of the data was found.\footnote{We find it more informative than the derivation of credibility intervals using unknown statistics.} We observe that using $\kappa=1$ does not incur a large penalty when compared to $\kappa=0.7$, even if the latter closely matches the observed frequency of draws. The differences may be observed only for seasons where the low frequency of draws implies a very small $\ov{\kappa}$, \eg in 2013-2014 and 2018-2019.
On the other hand, the length of the credibility intervals is slightly smaller for $\kappa=1$, indicating a better prediction ``stability'' across time. Similar average results may be obtained using the Elo algorithm with $\check{\kappa}=1$, which also produces slightly larger credibility intervals.
The results obtained with this rather limited set of data stay in line with our previous theoretical discussion, indicating at the same time that no dramatic change in performance should be expected by using $\kappa$-Elo. Nevertheless, an improvement can be obtained by using the conservative value of $\kappa=1$. This recommendation is motivated by the discussion in \secref{Sec:setting.kappa} and comes at no implementation cost.
\begin{table}
\centering
\scalebox{.8}{
\begin{tabular}{c|c||c|c|c|c}
Season & $\ov{\kappa}$ & Bet365 & $\kappa$-Elo, $\kappa=0.7$ & $\kappa$-Elo, $\kappa=1$ & Elo+$\check{\kappa}$ \\
\hline
\input{scores_across_seasons.tex}
\end{tabular}
}
\caption{Logarithmic score $\ov{\tnr{LS}}$, \eqref{log.score}, in ten seasons of English Premier League; $\sigma=600$, $\tilde{K}=0.125$, $\eta=0.3$. The results ``Elo+$\check{\kappa}$'' are obtained from the conventional Elo algorithm but $\check{\kappa}=1$ is used in the prediction. The results ``Bet365'' are based on the probabilities inferred from the betting odds offered by the site Bet365.}
\label{Tab:Score.Seasons}
\end{table}
\section{Conclusions}\label{Sec:Conclusions}
In this paper we were mainly concerned with explaining the rationale and mathematical foundation behind the Elo algorithm. The whole discussion may be summarized as follows:
\begin{itemize}
\item We explained that, in the binary-outcome (win-loss) games, the Elo algorithm is an instance of the well-known stochastic gradient algorithm applied to solve the \gls{ml} estimation of the rating levels. This observation has already appeared in the literature, \eg \citep{Kiraly17}, so it was made for completeness but also to lay the ground for further discussion.
\item We have shown the implicit model behind the algorithm in the case of games with draws. Although the algorithm has been used for decades in this type of game, the model of the draws has not been shown, impeding, de facto, the formal prediction of their probability. We thus filled this logical gap.
\item We proposed a natural generalization of the Elo algorithm obtained from the well-known model proposed by \citep{Davidson70}; the resulting algorithm, which we call $\kappa$-Elo, has the same simplicity as the original Elo algorithm, yet provides an additional parameter to adjust to the frequency of draws. By extension, we revealed that the implicit model behind the Elo algorithm assumes that the frequency of draws is equal to 50\%.
\item We briefly discussed the constraints on the relationship between the values of draw and loss probabilities for the players with similar ratings; more precisely, we postulate that, in such a case, the draw probability should be larger than the probability of win/loss. While the discussion on such constraints has been absent from the literature, we feel it deserves further analysis to construct suitable models and algorithms for rating. Applying these constraints to the $\kappa$-Elo algorithm yields $\kappa\ge 1$. This is clearly a limitation which will produce a mismatch between the results and the model if the frequency of draws is less than 33\%.
\item To illustrate the main concepts we have shown numerical examples based on the results of football games in the English Premier League.
\item Finally, we conclude that, while in the past the Elo algorithm has, to a large extent, satisfied the demand for simple rating algorithms, it is still possible to provide better, more flexible, and yet simple solutions. In particular, $\kappa$-Elo is better in the sense of taking the frequency of draws into account without increasing the complexity of implementation.
\end{itemize}
\section{Introduction}
Soccer is a complex and sparse game, with a large variety of actions, outcomes, and strategies. These aspects of a soccer game make its analysis strenuous. Recent breakthroughs in computer vision methods help sports analysis companies, such as InStat\cite{instat}, Wyscout\cite{wyscout}, StatsBomb\cite{statsbombs}, STATS\cite{stats}, Opta\cite{opta}, etc., collect highly accurate tracking and event datasets from match videos.
Obviously, the existing tracking and event data in the market contain the prior decisions and the observed outcomes of players and coaches, following some nonrandom, regular, and non-optimized policies. We refer to these policies as behavioral policies throughout the rest of this paper.
Nowadays, analyzing the behavioral policies obeyed by the players and dictated by coaches has become one of the most interesting topics for researchers. Sports analysts, i.e., academic researchers, applications, scouts, and other sports professionals, are investigating the potential of using previously collected, i.e., off-line, data to make counterfactual inference of how alternative decision policies could perform in a real match.
There are several engrossing action valuation methods in the literature of sports analytics, some focusing on passes and shots (e.g., \cite{Fernandez2019, Fernandez2020, fernandez2021, gyarmati2016qpass}, etc.), and others covering all types of actions (e.g., \cite{Tom2019, Liu2018, Liu2019}, etc.). They accurately evaluate the players' actions and contribution to goal scoring. However, all those models leave the player and the coach only with the value of the performed action, without any proper proposal of alternative and optimal actions.
To fill this gap, this work goes beyond action valuation by proposing a novel policy optimization method, which can decide about the optimal action to perform in critical situations. In soccer, we consider as critical situations the moments with a high probability of losing the ball, or scoring/conceding a goal, in which the player does not have any chance of passing to teammates or dribbling. Thus, she/he needs to immediately decide among the following options: 1) shooting, 2) sending the ball out, 3) committing a foul, 4) submitting the ball to the opponent by making an error. Moreover, we define the optimal action as the action that maximizes the expected goal for the team. Thus, our method should both evaluate the behavioral policy, and suggest the optimal target policy to the players and coaches, for critical situations. It is a challenging task to design such a system in soccer due to the following reasons: first, soccer is a highly interactive and sparse rewarding game with a relatively large number of agents (i.e., players). Thus, state representation is ambiguous in such a system and requires an exact definition. Second, the spatiotemporal and sequential nature of soccer events and the dynamic players' locations on the field dramatically increase the state dimensions, which is never pleasant for machine learning tasks. Third, the game context in a soccer match severely affects the model prediction performance. Fourth, evaluating a trained optimal policy requires deployment in a real soccer match; however, this is practically impossible due to the large cost of deployment. This work offers solutions to all the above-mentioned challenges. Sports professionals can use our policy optimization method after the match to check what action the player performed, evaluate it, and propose the optimal alternative action in that critical situation. If the action performed by the player and the optimal action proposed by the optimal policy are not the same, we can relate it to the player's mistake, or to a poor strategy from the coach.
In summary, our work contains the following contributions:
\begin{itemize}
\item We propose an end-to-end framework that utilizes raw data and trains an optimal policy to maximize the expected goal of soccer teams via reinforcement learning (RL);
\item Introduce a soccer ball possession model, which we assume to be Markovian, and a new state representation to analyze the impact of actions on subsequent possessions;
\item Suggest spectral clustering with regard to the opponents' positions and velocities for measuring the pressure on the ball holder at any moment of the match;
\item Propose a new reward function for each time-step of the game, based on the output of the neural network predictor model;
\item Derive the optimal policy in critical situations of soccer matches, with the help of fully off-policy, deep reinforcement learning method.
\end{itemize}
\section{Related work}
\label{sec:sot}
The state-of-the-art models in soccer analytics focus on several aspects, such as evaluating actions, players, and strategies.
The plus/minus method is an early approach to player evaluation, proposed by Kharrat et al. \cite{Kharrat2017}. This method assigns a plus for each goal scored and a minus for each goal conceded by the players, per total time they were on the pitch. Although this is the simplest method, it ignores the ratings of other players and the opposition strength, and does not account for match situations.
A regression method on actions and shots was first proposed by Ian et al. \cite{Ian2012}. They estimate the number of shots as a function of crosses, dribbles, passes, clearances, etc. The coefficients show how important these actions are in generating shots. However, this model does not work well in some cases (e.g., when the value of a pass changes, or when we want to know where the cross occurred).
Another interesting player evaluation method is percentiles and player radars by Statsbomb \cite{Statsbomb}. This method estimates the relative rank for each player based on his actions. For example, a ranking can be assigned to a player for all his defensive actions (tackle/interception/block), his accurate passes, crosses, etc.
The application of a Markovian model in action valuation was first proposed by Rudd \cite{Rudd,GoldnerKeith2012}. The input of this model is the probability of ball location in the next five seconds. Assuming we have these probabilities, this model estimates the likely outcomes after many iterations based on the probabilities of transitioning from one state to another.
Another application of the Markovian model is the Expected Threat (xT) \cite{xT}, which uses simulations of soccer matches to assign value to the actions. However, we believe that such simulations tend to be unrealistic, because simulations started from an arbitrary point do not result in a goal within several iterations.
VAEP \cite{Tom2019} is another action valuation model, which considers all types of actions. This model uses a classifier to estimate the probability that an action leads to a goal within the next 10 actions, and the game state is represented by 3 actions. This model ignores the concept of possessions in its valuation.
Considering the possession, the Expected Possession Value (EPV) metrics in football \cite{Fernandez2019} and basketball \cite{cervone2014} were proposed. These models assume a simple world in which the actions of the players inside possessions are limited to pass, shot, and dribble. Thus, they ignore any other actions, such as fouls, ball outs, or errors, which frequently happen in critical situations.
Recently, researchers have utilized deep learning methods due to their promising performance in valuation domains.
Fernandez and Bornn \cite{fernandez2021} present a convolutional neural network architecture that is capable of estimating full probability surfaces of potential passes in soccer.
Moreover, Liu et al. took advantage of RL by assigning a value to each of the actions in ice hockey \cite{Liu2018} and soccer \cite{Liu2019} using a Q-function. They later used a linear model tree to mimic the output of the original deep learning model to solve the trade-off between accuracy and transparency \cite{Xiangyu2020}. In addition, Dick and Brefeld \cite{Dick2019} used reinforcement learning to rate player positioning in soccer.
In this paper, we go beyond the valuation of actions in critical situations, and use RL to derive the optimal policy to be performed by the teams and players.
\section{Our Markovian possession model}
\label{sec:markov}
In order to train an RL model, we first represent a soccer game as a Markov decision process. To this end, in this section we introduce an episode of the game, the start, intermediate, and final states. In the next sections, we define the state, action, and reward in each time-step of the game.
Due to the fluid nature of a soccer game, it is not straightforward to give a comprehensive description of a possession that applies to all the different types of soccer logs provided by different companies (e.g., InStat, Wyscout, StatsBomb, Opta, etc.). In the InStat dataset, and accordingly in this work, possessions for the home and away teams are clearly defined and numbered. A possession starts from the beginning of a deliberate on-the-ball action by a team, until it either ``ends'' due to some event like ball out, foul, bad ball control, offside, clearance, or goal (regardless of who possesses the ball afterward, i.e., the next possession can belong to the same team or to the opposing team), or is ``transferred'' by a defensive action of the opponent, such as a pass interception, tackle, or clearance.
The possession is transferred if and only if the team is not in possession of the ball over two consecutive events. Thus, unsuccessful touches of the opponent lasting fewer than 3 consecutive actions are not considered a possession loss. Consequently, all on-the-ball actions of players of the same team should be counted to get the possession length, not only passes, shots, and dribbles. Accordingly, we define an episode $\tau$ as the subsequent possessions of a team, until it loses the ball or ends the possession sequence with a shot.
We aspire to describe the possessions with a Markovian model.
In order to take advantage of the Markovian model of possessions and their outcomes, we converted the dataset from the action level to the possession level, and each possession is labeled by its own terminating action. This conversion also expedites the usage of supervised learning methods to predict the most probable outcomes. Our proposed model can be applied separately to any team participating in the games.
We can model this process as a Finite State Automaton, with the initial node of ``Start'' of possession, the final nodes of (``Loss'' or ``Shot''), and the intermediate node of ``Keep'' the possession. The schematic view of the state transition is illustrated in Figure~\ref{fsa}.
\begin{figure}[h]
\centering
\includegraphics[ width=6cm]{fsa4.png}
\caption
Finite State Automaton of the Markovian possession model. The state is considered as one possession. Green nodes show the conditions of the possessions, with transitions triggered by the ending actions. Red circles are actions categorized as intentional (out, foul, shot) or unintentional (errors, i.e., players' mistakes such as bad ball control, inaccurate passes, or tackles by opponents).}
\label{fsa}
\end{figure}
\section{State representation and neural network architecture}\label{sec:dataset_state_generation_cnn}
In this study, we present an end-to-end framework to learn an optimal policy for maximizing the expected goals in a soccer game. To achieve this goal, data preparation is a core task for obtaining a reliable RL model. In this section, we present the steps of building the states. Considering the definition of the episode in Section~\ref{sec:markov}, there is no necessity for the existence of a goal in an episode; thus, we need to define a well-suited reward function for each time-step. We propose a neural network model, utilizing the suggested state, and obtain the underlying data to get the reward of each time-step. The structure of the datasets used in this study is provided in Appendix~\ref{data}.
\subsection{Game context: opponent pressure}
\label{clustering-sec}
Considering a descriptive game context is one of the most important aspects of soccer analytics, when it comes to feature engineering.
Several works introduced different methods, KPIs, and features to address this problem. Among the works, Decroos et al. \cite{Tom2019} created the following game context features: number of goals scored by attacking team after action, number of goals scored by defending team after action, and goal difference after action.
Fernandez et al. \cite{Fernandez2019} considered the role of context by slicing the possession into 3 phases: build-up, progression, and finalization. They considered three dynamic formation or pressure lines, and grouped the actions based on particular relative locations: first vertical pressure line (forwards), second vertical pressure line (midfielders), third vertical pressure line (defenders).
Another interesting approach by Alguacil et al. \cite{Fernandez2020} mimicked the collective motion of animal groups, called self-propelled particles, in soccer. They claimed that in FC Barcelona, and generally, coaches can talk about three different playing zones: intervention zone (immediate points around the ball), mutual help zone (players close to the ball, but further away than first zone), and cooperation zone (players not expected to receive ball within few second).
Our approach of modeling the pressure exerted by the opposing team, and of considering the game context in our valuation framework, matches the self-propelled particle model in grouping the opponents into several zones around the ball holder. To this end, we take advantage of a clustering method, taking into consideration that the opponents inside the clusters are not distributed spherically (according to their positions and velocities). The K-means algorithm falls short here, as it assumes that the clusters are roughly spherical and operates on the Euclidean distance (Figure~\ref{k-means1}), whereas in soccer tracking data such clusters are unevenly distributed in size. Thus, we experimented with spectral clustering to provide the number of opponents inside each cluster as an indicator of the defensive pressure around the ball holder. We treated the positions $(x,y)$ and velocities $(v_x,v_y)$ of the opponents around the ball holder as graph vertices, and we constructed a k-nearest neighbors graph for each frame (5 neighbors in this work). In this graph, the nodes are the opponent players' positions and velocities (direction and magnitude), and an edge is drawn from each position to its k nearest neighbors in the original space. The graph Laplacian is defined as the difference of the degree and adjacency matrices. Then, we used K-means to perform clustering on the eigenvectors corresponding to the zero eigenvalues (connected components) of the Laplacian, setting the exact position (x,y) of the ball holder at each frame as the initial centroid of the clusters. Thus, each opponent player can be assigned to a spectral cluster (see Figure \ref{spectral1}).
Moreover, we experimentally selected the optimal number of clusters to be 3, using the elbow method with the metric set to distortion (the mean of the squared distances from each point to its assigned center) and to inertia (the sum of squared distances of the samples to their closest cluster center).
Figure~\ref{clustering} depicts a frame of a specific match in our dataset, with K-means on the top and spectral clustering on the bottom. Opponents from the away team are clustered into 3 groups. In the bottom Figure \ref{spectral1}, cluster 1 (blue) includes 4 opponents who might immediately take possession of the ball, cluster 2 (yellow) contains opponents who might intercept a pass or dribble of the ball holder, and cluster 3 (red) contains opponents who cannot reach the ball within a few seconds. We compute these numbers of opponents for all frames of the matches in our dataset, and use them as the pressure features throughout the rest of this work. The pseudo-code of the clustering algorithm for pressure measurement is provided in Algorithm~\ref{pseudo}.
\begin{figure}[ht]%
\centering
\subfloat[K-means]{{\includegraphics[ width=6cm]{kmeans2.png}\label{k-means1} }}%
\qquad
\subfloat[Spectral clustering]{{\includegraphics[width=6cm]{spectral2.png}\label{spectral1} }}%
\caption{Pressure model: number of opponent players in each zone/cluster is considered as pressure on ball holder in that zone.}%
\label{clustering}%
\end{figure}
\begin{algorithm}
\caption{Defensive pressure measurement with Spectral clustering}
\label{pseudo}
\begin{algorithmic}[1]
\State Set T: total number of frames, A: adjacency matrix, D: degree matrix, L: graph Laplacian. Initialize P: (opponent players data), Z: (connected components), C: (pressure clusters)
\For {$t\in T$}
\State $b\leftarrow (x_b, y_b, v_{xb}, v_{yb}) $ \Comment{Ball holder (b)}
\For {$o=1,\ldots,11$}
\State $p_o\leftarrow (x_o, y_o, v_{xo}, v_{yo}) $ \Comment{Each opponent player}
\State $p_{os}\leftarrow StandardScaler(p_o) $
\State $P.append(p_{os}) $
\EndFor
\State $A\leftarrow kneighbors\_graph(P)$
\State $D\leftarrow diag(A)$
\State $L\leftarrow D-A$ \Comment{Graph Laplacian}
\State $vals , vecs \leftarrow eig(L)$
\State $eigenvecs\leftarrow vecs[:,argsort(vals)]$
\State $eigenvals\leftarrow vals[argsort(vals)]$
\For {$i \in eigenvals$}
\If {$abs(i) < 1e-5 $}
\State $index \leftarrow argwhere(eigenvals == i)$
\State $z \leftarrow eigenvecs[:,index]$
\State $Z.append(z)$
\Comment{Connected components}
\EndIf
\EndFor
\State $C \leftarrow Kmeans(Z, init= b )$
\Comment{Cluster assignment}
\EndFor
\end{algorithmic}
\end{algorithm}
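A compact Python sketch of Algorithm~\ref{pseudo} for a single frame is given below; it relies on NumPy and scikit-learn, omits the feature scaling step, uses the eigenvectors of the smallest eigenvalues (standard spectral clustering practice) instead of only the zero-eigenvalue components, and replaces the ball-holder-seeded initialization of K-means with the default one, so it should be read as an approximation of the procedure above rather than an exact implementation.
\begin{verbatim}
import numpy as np
from sklearn.neighbors import kneighbors_graph
from sklearn.cluster import KMeans

# Defensive pressure for one frame: opponents is an (11, 4) array of
# (x, y, vx, vy); returns the number of opponents in each pressure zone.
def pressure_counts(opponents, n_clusters=3, n_neighbors=5):
    A = kneighbors_graph(opponents, n_neighbors).toarray()
    A = np.maximum(A, A.T)                  # symmetrize the kNN graph
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian L = D - A
    vals, vecs = np.linalg.eigh(L)
    Z = vecs[:, np.argsort(vals)[:n_clusters]]   # spectral embedding
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(Z)
    return np.bincount(labels, minlength=n_clusters)
\end{verbatim}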
\subsection{Selected features}
The InStat tracking data is a one-frame-per-second representation of the positions of all the players, both home and away.
As mentioned in Section~\ref{clustering-sec}, we have taken advantage of the tracking data to calculate the velocities and the opponents' locations around the ball holder, and to compute the defensive pressure in 3 different pressure zones in order to reduce the dimensionality of the feature set. Another option was to avoid clustering the position features, and to feed the network with the 44-dimensional ((x,y) for 22 players) normalized locations on the pitch. On the other hand, angle and distance to goal, time remaining, home/away, and body id can be directly calculated from the event stream data. Table~\ref{features} shows the final list of the analyzed features used for our machine learning tasks in the following sections. Note that we use either the location features (44-dimensional exact players' locations) or the pressure features (numbers of players in each cluster) to represent the 3 state types in Table~\ref{cnn-table}.
\begin{table}[h]
\caption{Feature set}
\centering
\label{features}
\begin{tabular}{|p{5cm}|p{3cm}|p{7cm}|}
\hline
\textbf{Feature set} & \textbf{Feature name} & \textbf{Description} \\
\hline
hand-crafted & Angle to goal & the angle between the goal posts seen from the shot location\\
\hline
hand-crafted & Distance to goal & Euclidean distance from shot location to center of the goal line \\
\hline
hand-crafted & Time remaining & time remained from action occurrence to the end of match half \\
\hline
hand-crafted & Home/Away & action is performed by home or away team? \\
\hline
hand-crafted & Action result & successful or unsuccessful \\
\hline
hand-crafted & Body ID & action is performed by head? body? foot? \\
\hline
contextual: clustered locations & Pressure in zone 1 & number of opponents in first cluster \\
\hline
contextual: clustered locations & Pressure in zone 2 & number of opponents in second cluster \\
\hline
contextual: clustered locations & Pressure in zone 3 & number of opponents in third cluster \\
\hline
contextual: exact locations & locations & 44-dimensional exact locations (x,y) of opponents \\
\hline
\hline
\end{tabular}
\end{table}
\subsection{Possession input representation}
State representation is one of the most challenging steps in soccer analytics due to the high-dimensional nature of the datasets. We describe each game state by generating the most relevant features and labels for it. To this end, we define different sets of features, i.e., hand-crafted and contextual (Table~\ref{features}), and 3 types of state representation (Table~\ref{cnn-table}).
For each of the state types (I, II, III), we represent the state as the combination of a feature vector $X$ (see Tables~\ref{features} and \ref{cnn-table}) and a one-hot representation of the action $A$, for all the actions inside each possession excluding the ending action. Thus, the (varying) possession length is the number of actions inside a possession, excluding the ending one. The state is then a 2-dimensional array, whose first dimension is the possession length (varying for each possession) and whose second dimension is the number of features. Therefore, the $m^{th}$ state/possession with length $n$ can be represented as $S_m = [[X_0,A_0], [X_1,A_1],...,[X_{n-1},A_{n-1}]]$.
Due to the complex and spatiotemporal nature of the dataset, we select the best representation of the state through an experimental process. To do this, we train the spatiotemporal models on three different state types. State type (I) ignores the players' locations and only reflects the performed actions in addition to the hand-crafted features of each action. State type (II) is a high-dimensional representation that considers the exact players' locations besides the actions and their corresponding hand-crafted features. In state type (III), we handle the curse of dimensionality of type (II) by clustering the locations as shown in Section~\ref{clustering-sec}. See Table~\ref{cnn-table} for more details on the states.
\subsection{CNN-LSTM architecture for deriving behavioral policy}
\label{cnn-lstm}
In the soccer event dataset, each possession is represented by a sequence of actions. We aim to classify these possessions (and show the results only for the home team) based on their ending actions. Thus, each possession is terminated by one of the following classes:
1) shot (goal or unsuccessful), 2) ball out, 3) foul, 4) error (possession loss due to an inaccurate pass, bad ball control, or a tackle or interception by the opponent). Note that foul and ball out actions are only those performed by the home team. Thus, if the possession is terminated by any action from the opponent, including a foul or ball out, we classify it as an error.
To this end, we utilize the classification capability of sequence prediction methods.
In order to handle the spatiotemporal nature of our dataset, we needed a sophisticated model and the best feature set, which could optimize the prediction performance. Thus, model selection was a core task of this study. We first created appropriate state dimensions suitable for each model by reshaping the state inputs, then fed the reshaped arrays with the 3 state types to the following networks: 3D-CNN, LSTM, Autoencoder-LSTM, and CNN-LSTM, to compare their classification performance (see Table~\ref{cnn-table}). A validation split of 30\% of consecutive possessions is used for evaluation during training, and the cross-entropy loss on the training and validation datasets is used to evaluate the models. As the table suggests, CNN-LSTM \cite{jeff2016} trained on state type (III) outperforms the other models in terms of accuracy and loss. Thus, the necessity of the exact locations of the players can be rejected, and the sufficiency of the pressure features is confirmed in this analysis. Although the accuracy of the Autoencoder-LSTM trained on state types (II) and (III) is quite similar to that of the CNN-LSTM, its relatively large inference time and number of trainable parameters make its implementation more strenuous and expensive.
Thus, we continued the rest of the analysis by developing a CNN-LSTM network \cite{jeff2016}, using a CNN for spatial feature extraction from the input possessions, and an LSTM layer with 100 memory units (smart neurons) to support sequence prediction and interpret features across time steps. Figure~\ref{cnn} depicts the architecture of our network. Since our input possessions (possession array) have a three-dimensional structure, i.e., first dimension: number of possessions, second dimension: dynamic possession length (maximum = 10), third dimension: number of features, the CNN is capable of picking invariant features for each class of possession. Then, these learned, consolidated spatial features are fed to the LSTM layer. Finally, dense output layers with a softmax activation function perform our multi-class classification task.
\begin{figure}[h]
\includegraphics[ width=\textwidth]{cnn2.png}
\caption{CNN-LSTM network structure for the classification of the possessions, i.e., action sequences. Input possessions contain both the state feature vector $(X)$ and the one-hot vector of actions $(A)$, excluding the ending action. There are $m$ possessions in the dataset, with varying lengths $\{n_1,n_2,\ldots,n_m\}$. The output is the predicted class (ending action) of the possession, along with the estimated probabilities of the alternative ending actions. }
\label{cnn}
\end{figure}
Note that the feature vector $X$ has a fixed length for each individual action, but the overall length varies across states (because the possession length, i.e., the number of actions, varies). This is one of the main challenges in our work, as most machine learning methods require fixed-length feature vectors.
In order to address the challenge of the dynamic length of the possession features (the second dimension of the possession input array), we use truncating and padding. We mapped each action in a possession to an 11-length real-valued vector. Also, we limit the total number of actions in a possession to 10, truncating longer possessions and padding shorter possessions with zero values. In this way, we have fixed-length sequences throughout the whole dataset for modeling.
Consequently, this network estimates the categorical probability distribution over ending actions for any given possession, parameterized by $\theta$. Throughout the rest of the paper, we denote this probability distribution as $q(x;\theta)$.
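A minimal Keras sketch of such a CNN-LSTM classifier is given below; apart from the 100 LSTM units mentioned above, the layer sizes, the padding parameters, and the optimizer are illustrative choices of ours rather than the exact configuration used in this work.
\begin{verbatim}
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.sequence import pad_sequences

MAX_LEN, N_FEATURES, N_CLASSES = 10, 20, 4  # state type (III): 20 features/action

def prepare_states(possessions):
    """Truncate/pad variable-length possessions to MAX_LEN actions."""
    return pad_sequences(possessions, maxlen=MAX_LEN, dtype="float32",
                         padding="post", truncating="post")

def build_possession_classifier():
    model = models.Sequential([
        layers.Conv1D(32, kernel_size=3, padding="same", activation="relu",
                      input_shape=(MAX_LEN, N_FEATURES)),   # spatial features
        layers.LSTM(100),                                   # 100 memory units
        layers.Dense(N_CLASSES, activation="softmax"),      # q(x; theta)
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
\end{verbatim}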
\begin{table}[h]
\centering
\caption{Classification performance of different design choices for spatiotemporal analysis. (Inference times are the average running times over 20 iterations of training using a server enriched with Tesla K80 GPU)}
\begin{adjustbox}{width=1.\textwidth,center=\textwidth}
\label{cnn-table}
\begin{tabular}{|>{\centering\arraybackslash}m{6.2cm}|>{\centering\arraybackslash}m{2.55cm}|>{\centering\arraybackslash}m{2cm}|>{\centering\arraybackslash}m{3cm}|>{\centering\arraybackslash}m{1.5cm}|>{\centering\arraybackslash}m{1.5cm}|>{\centering\arraybackslash}m{1.5cm}|>{\centering\arraybackslash}m{1.5cm}|}
\hline
\textbf{State representation} & \textbf{State type} & \textbf{Initial state dimension} & \textbf{Spatio temporal model} & \textbf{Accuracy} & \textbf{Loss}&\textbf{Inference time} & \textbf{Parameters}\\
\hline
\multirow{ 4}{*}{hand-crafted(6) + actions(11)} & \multirow{ 4}{*}{(I).non-contextual} & \multirow{ 4}{*}{[28054,10,17]} & 3D-CNN & 61\% & 0.73 & 0.15s & 52,321 \\
& && LSTM & 65\% & 0.72 & 0.09s & 42,001 \\
&& & Autoencoder-LSTM & 69\% & 0.71 & 3.7s & 89,099 \\
& && CNN-LSTM & 68\% & 0.71 & 0.25s & 32,456 \\
\hline
\multirow{ 4}{*}{hand-crafted(6) + locations(44) + actions(11)} & \multirow{2}{*}{(II).contextual,}
& \multirow{ 4}{*}{[28054,10,61]} & 3D-CNN & 72\% & 0.63 & 1.71s & 172,981 \\
&& & LSTM & 68\% & 0.69 & 0.81s & 151,034 \\
&\multirow{2}{*}{high-dimensions} & & Autoencoder-LSTM & 80\% & 0.59 & 12.32s & 350,024 \\
&& & CNN-LSTM & 75\% & 0.65 & 1.22s & 152,211\\
\hline
\multirow{ 4}{*}{hand-crafted(6) + pressures(3) + actions(11)} & \multirow{2}{*}{(III).contextual,} & \multirow{ 4}{*}{[28054,10,20]} & 3D-CNN & 73\% & 0.63 & 0.31s & 61,211 \\
&& & LSTM & 71\% & 0.63 & 0.11s & 50,804 \\
&\multirow{2}{*}{reduced-dimensions}& & Autoencoder-LSTM & 79\% & 0.59 & 5.02s & 92,022 \\
&& & \textbf{CNN-LSTM} & \textbf{81}\% & \textbf{0.56} & 0.51s & 56,036 \\
\hline
\hline
\end{tabular}
\end{adjustbox}
\end{table}
\section{Off-policy reinforcement learning}
\label{rl-sec}
Most RL methods require active data collection, where the agent actively interacts with the environment to get rewards. Obviously, this situation is impossible in our soccer analytics problem, since we are not able to modify the players' actions. Thus, our study falls right into the category of batch RL. In this case, we will not face the exploration vs. exploitation trade-off, since the actions and rewards are not sampled randomly, but they are sampled from the real world (players' actions in a match) before the learning process. Moreover, our network learns a better target policy from a fixed set of interactions.
Before the learning process, the players selected some ending actions according to some non-optimal (behavioral) policy. We aim to use those selected actions and acquired rewards to learn a better policy.
Therefore, we prepared our dataset of transitions in the form of $<$current observation, action, reward, next state$>$ for learning a new policy.
Through the end of the paper, we use the notation and definition shown in Table~\ref{notations}.
\begin{table}[h]
\caption{Notations}
\centering
\label{notations}
\begin{tabular}{|p{5cm}|p{10cm}|}
\hline
\textbf{Notation} & \textbf{Definition} \\
\hline
State: ($s$) & Sequence of actions and their features in a possession of each team, excluding the ending action \\
\hline
Action: ($a$) & Ending action of each possession, which leads to state transition \\
\hline
Episode: ($\tau$) & Sequence of possessions of the home team, until they lose the possession, or end it with a shot, denoted by $\tau = \{s(\tau , t), a(\tau , t)\}_{t=1}^T$ \\
\hline
Reward: $r(s_t,a_t)$ & Reward acquired from each ending action at the end of a possession \\
\hline
Episode reward: $r(\tau)$ & Sum of rewards (expected goals) for each episode: $r(\tau)= \sum_{t=1}^Tr(s_t,a_t)$ \\
\hline
Return: $R$ & Cumulative discounted and normalized reward \\
\hline
Target policy distribution: $p(x)$ & Learned policy (probability distribution of actions from the policy network) \\
\hline
Behavior policy distribution: $q(x)$ & Actual policy (probability distribution of actions collected off-line from a real match) \\
\hline
$n_i$ & Length (number of actions) in $i^{th}$ possession \\
\hline
$m$ & Total number of possessions \\
\hline
\hline
\end{tabular}
\end{table}
\subsection{Action reward function}
In this section, we aim to estimate the reward acquired for the ending actions. Owing to the complex and sparse environment of soccer games, it is tedious to design the perfect reward function. In general though, every team desires to be in the most valuable states, i.e., those with the maximum probability of goal scoring, as much as possible.
In the soccer dataset (for either the home or the away team), each episode starts from the moment that the team acquires the possession of the ball, and it terminates when the team either loses the possession (loss), or it ends up shooting (win).
According to the Markovian possession model in Section~\ref{sec:markov}, we have the set of ending actions (out, foul, shot, error) which lead to state transitions. We estimated the probability of a possession belonging to the shot class in Section~\ref{cnn-lstm} with the help of the CNN-LSTM network. In order to define the value of each possession, we need to define the following concepts:
\begin{itemize}
\item \textbf{$P(shot | X)$:} computed by CNN-LSTM, is the probability of possession belonging to the shot class, given the features of the possession.
\item \textbf{$P(goal | shot , X)$: }is the probability of scoring a goal, assuming that a possession belongs to the shot class, and given the shot features. This is the same concept as the state-of-the-art expected goal (xG) model that classifies shots into goal and no-goal. In this work, we have computed xG using logistic regression and show its higher performance with 5-fold cross-validation in comparison with other classifiers in Table~\ref{classifier} (details in Appendix~\ref{xg}; a code sketch is given after this list).
\end{itemize}
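A minimal sketch of such an xG model is shown below; the feature matrix layout and the hyper-parameters are illustrative assumptions.
\begin{verbatim}
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# xG model: P(goal | shot, X) via logistic regression on shot features
# (e.g., angle to goal, distance to goal, body id, pressure features),
# evaluated with 5-fold cross-validation (AUC).
def fit_xg_model(shot_features, goal_labels):
    model = LogisticRegression(max_iter=1000)
    auc = cross_val_score(model, shot_features, goal_labels,
                          cv=5, scoring="roc_auc").mean()
    return model.fit(shot_features, goal_labels), auc

# For new shots: p_goal = model.predict_proba(new_shots)[:, 1]
\end{verbatim}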
A higher $P(shot | X)$ indicates a higher chance of a shot, and, accordingly, a higher $P(goal | shot , X)$ indicates a better chance of scoring. Thus, the product of these two terms gives us the Possession Value (PV) in state $s$, as denoted in \eqref{eq1}. The Bayesian formula for this equation is provided in Appendix~\ref{bayes}.
\begin{equation}
\label{eq1}
PV(s) = P(shot | X) P(goal | shot , X)
\end{equation}
Now we define the rewards acquired by each ending action of a possession. The most valuable actions in critical situations meet 2 criteria: 1) they prevent possession loss, and 2) they keep the possession for the team and lead to a transition to a more valuable possession with a higher PV. Thus, we present our reward function as depicted in \eqref{eq2}:
\begin{equation}
\label{eq2}
\begin{split}
r(s,a) = \left\{
\begin{array}{lll}
PV(s) & \mbox{if a is a shot;}\\
PV(s^\prime)-PV(s) & \mbox{else and } s,s^\prime \in \mbox{same team;}\\
-0.1 & \mbox{else,}
\end{array}
\right.
\end{split}
\end{equation}
where $r(s,a)$ is the reward when the state changes from $s$ to $s^\prime$ by taking action $a$.
Our proposed reward function computes the immediate reward for the action that each player performed. By choosing a shot, the player receives the PV of the possession. If he performs any action other than a shot (e.g., ball out or foul), but the next possession still belongs to his team, the model computes the PV of the next possession and compares it to that of the current possession. On the other hand, if he performs any action leading to a possession loss (e.g., bad ball control, inaccurate pass, tackle or interception by the opponent), he receives a negative reward. In this work, -0.1 proved to be the best possession-loss reward for ensuring the convergence of the policy network. Moreover, the sum of $r(s,a)$ over all time-steps of an episode is an indicator of the expected goals for the team. Thus, the control objective is to maximize the expected goals of the teams.
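The possession value \eqref{eq1} and the reward \eqref{eq2} can be sketched as follows; the function names and the example values are ours.
\begin{verbatim}
def possession_value(p_shot, p_goal_given_shot):
    """PV(s) = P(shot | X) * P(goal | shot, X)."""
    return p_shot * p_goal_given_shot

def reward(action, pv_current, pv_next=None, same_team=False, loss_penalty=-0.1):
    """Immediate reward for the ending action of a possession."""
    if action == "shot":
        return pv_current
    if same_team and pv_next is not None:   # possession kept by the same team
        return pv_next - pv_current
    return loss_penalty                     # possession lost

# Example: a ball out that keeps possession and leads to a better possession.
r = reward("out", pv_current=0.03, pv_next=0.08, same_team=True)   # 0.05
\end{verbatim}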
\begin{table}[h]
\centering
\caption{Expected Goal computation performance}
\label{classifier}
\begin{tabular}{p{4.5cm}|p{1cm}|p{1cm}}
\hline
\textbf{Classifier} & \textbf{Brier} & \textbf{AUC} \\
\hline
XGBoost & 0.014 & 0.765\\
Random Forest & 0.014 & 0.759\\
SVM & 0.015 & 0.733\\
\textbf{Logistic Regression} & \textbf{0.012} & \textbf{0.798}\\
\hline
\end{tabular}
\end{table}
\subsection{Training protocol and return}
For each state, the network needs to decide on the appropriate action with the corresponding parameter gradient. The parameter gradient tells us how the network should modify its parameters if we want to encourage that decision in that possession in the future.
We modulate the loss for each action taken at the end of a possession according to its eventual outcome, since we aim to increase the log probability of successful actions (with higher rewards) and decrease it for unsuccessful actions.
We define discounted reward (return) for episode $\tau$ in \eqref{eq3}.
\begin{equation}
\label{eq3}
R(\tau) = \frac{\sum_{t=0}^{\infty} \gamma^{t}\times r(s_{\tau,t}, a_{\tau,t})}{\sum_{t=0}^{\infty} \gamma^{t}}
\end{equation}
where $\gamma$ is a discount factor (Appendix~\ref{gamma}), and $r$ is the estimated reward (expected goal) at time-step $t$, standardized to control the variance of the gradient estimator. $R$ expresses the strength with which a sampled ending action is encouraged as the weighted sum of the rewards (expected goals) that follow it. In this work, we constrain the look-ahead to the end of the episode.
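A minimal sketch of this return computation is shown below; the value of $\gamma$ is only an assumption for illustration, since the actual discount factor used in this work is discussed in Appendix~\ref{gamma}.
\begin{verbatim}
def episode_return(rewards, gamma=0.99):
    # rewards: standardized per-step rewards r(s_t, a_t) of one episode
    # gamma: discount factor (assumed value, for illustration only)
    weights = [gamma ** t for t in range(len(rewards))]
    return sum(w * r for w, r in zip(weights, rewards)) / sum(weights)
\end{verbatim}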
\subsection{Policy gradient}
Policy gradient (PG) is a type of score function gradient estimator. Using PG, we aim to train a policy network that directly learns the optimal policy by learning a function that outputs the best action to be taken in each possession.
The CNN-LSTM network in Section~\ref{cnn-lstm} estimated the behavioral probability distribution over the actions (shot, out, foul, error) for any given possession, denoted by $q(x)$. This categorical distribution reflects the nonrandom, regular, but non-optimized policy followed by the players and possibly dictated by coaches throughout the matches.
In order to find a better policy, one that optimizes the expected goal of the episodes, we need to train the network. We call this network the target policy network $p(x)$. The training is guided by the gradient vector, which encourages the network to slightly increase the likelihood of highly rewarding actions and decrease the likelihood of negatively rewarded ones.
We seek to learn how the distribution should be shifted (through its parameters $\theta$) in order to increase the reward of the taken actions.
In the general case, we have an expression of the form \[E_{x\sim p(x;\theta)}[f(x)],\] in which $f(x)$ is our return and $p(x;\theta)$ is our learned policy. In our soccer problem, this expression is an indicator of the expected goals of each episode throughout the whole match. In order to maximize the expected goals, we need the gradient of this expectation with respect to $\theta$, which can be derived as follows:
\begin{gather*}
\nabla_\theta E_x[f(x)]=\nabla_\theta \sum_x p(x)f(x) \Leftarrow \mbox{expectation of return} \\
=\sum_x \nabla_\theta p(x)f(x) \Leftarrow \mbox{swap sum and gradient} \\
=\sum_x p(x) \frac{\nabla_\theta p(x)}{p(x)}f(x) \Leftarrow \mbox{multiply and divide by } p(x) \\
=\sum_x p(x) \nabla_\theta \log p(x)f(x) \Leftarrow \mbox{because } \nabla_\theta \log (z) = \frac{1}{z}\nabla_\theta z \\
=E_x[f(x)\nabla_\theta \log p(x)] \Leftarrow \mbox{expectation }
\end{gather*}
However, PG is an on-policy method, i.e., it assumes that the training samples are collected according to the target policy. This assumption does not hold in our offline setting, where we encounter out-of-distribution actions. Thus, we need to reformulate the PG as in \eqref{eq4}, using the importance weight $\frac{p(x)}{q(x)}$ (proof in Appendix~\ref{pg-off}).
\begin{equation}
\label{eq4}
\nabla_\theta E_{x\sim p}[f(x)] = E_{x\sim q}\left[\frac{p(x)}{q(x)}f(x)\nabla_\theta \log p(x)\right]
\end{equation}
The gradient vector $\nabla_\theta \log p(x;\theta)$ gives a direction in the parameter space that increases the probability assigned to $x$. Consequently, highly rewarded actions tug on the probability density more strongly than poorly rewarded actions. Therefore, as the network is trained, the probability density shifts in the direction of highly rewarded actions, making them more likely to occur.
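The following PyTorch-style sketch illustrates how the importance-weighted gradient of \eqref{eq4} translates into a loss for the target policy network; it is an illustration rather than the exact training code, and detaching the importance weights is our own simplifying assumption.
\begin{verbatim}
import torch

def off_policy_pg_loss(target_logits, actions, returns, behavior_probs):
    # target_logits:  (N, 4) outputs of the target policy network p(x; theta)
    # actions:        (N,) indices of the ending actions logged in the data
    # returns:        (N,) standardized returns R(tau) credited to those actions
    # behavior_probs: (N,) probabilities q(x) of those actions under the CNN-LSTM
    log_p = torch.log_softmax(target_logits, dim=-1)
    log_p_a = log_p.gather(1, actions.unsqueeze(1)).squeeze(1)
    weights = (log_p_a.exp() / behavior_probs).detach()  # importance weights p/q
    # minimizing this loss ascends the importance-weighted PG objective
    return -(weights * returns * log_p_a).mean()
\end{verbatim}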
\subsection{Off-policy training}
Our soccer analysis problem in this work falls right into the category of the off-policy variant of RL methods. In this method, the agent learns (trains and evaluates) solely from historical data, without online interaction with the environment.
Figure~\ref{pg} illustrates our training workflow of the policy network, with off-line data collection, and gradient computation.
\begin{figure*}
\centering
\includegraphics[width=15cm]{pg.png}
\caption{Offline training workflow of policy network}
\label{pg}
\end{figure*}
\section{Experimental results}
\label{ope-sec}
It is a challenging task to evaluate our implemented framework, as there is no ground-truth method for action valuation or policy optimization in soccer. Therefore, we evaluate the performance of our proposed framework with an eye towards two questions: 1) How well can our trained network maximize the expected goal in comparison to the behavioral policy? We answer this question with the off-policy policy evaluation (OPE) method. 2) What is the intuition behind the actions selected by our target policy? We elaborate on this by providing three scenarios of the most critical situations in a particular match from the dataset. The structure of the datasets used in this study is provided in Appendix~\ref{data}.
\subsection{Off-policy policy evaluation with importance sampling and doubly robust methods}
Applying the off-policy method to our soccer analysis problem, we faced the following challenge: while training can be performed offline, the evaluation cannot, because we cannot deploy the learned policy in a real soccer match to test its performance. This challenge motivated us to use off-policy policy evaluation (OPE), a common technique for testing the performance of a new policy when the environment is not available or is expensive to use.
With OPE, we aim to estimate the value and performance of our newly optimized policy based on the historical match data collected under a different, behavioral policy obeyed by the players.
For this purpose, we use the importance sampling method employed in works such as Teng et al. \cite{Teng2019}, as well as the doubly robust method of \cite{DBLP:journals/corr/abs-1802-03493} and \cite{DBLP:journals/corr/JiangL15}. These methods use samples collected under the behavioral policy $q(x)$ to evaluate the performance of the target policy $p(x)$; a minimal sketch of the importance sampling estimator is given below. The workflow of the evaluation with importance sampling is sketched in Figure~\ref{sampling} of Appendix~\ref{imp}, and the details of the doubly robust method are provided in Appendix~\ref{dr}. Moreover, the format of the input dataset to the OPE is shown in Table~\ref{policy} of Appendix~\ref{data-qppendix}.
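The sketch below illustrates the basic (trajectory-wise) importance sampling estimator on logged episodes; the estimators we actually use follow the cited works and the appendices, so this is only an illustration of the idea, and the doubly robust variant additionally relies on a learned reward model.
\begin{verbatim}
import numpy as np

def is_value_estimate(episodes):
    # episodes: list of episodes; each episode is a list of
    # (p_target, p_behavior, reward) triplets for the logged ending actions
    estimates = []
    for episode in episodes:
        weight = np.prod([p / q for p, q, _ in episode])  # importance weight
        ret = sum(r for _, _, r in episode)               # logged return
        estimates.append(weight * ret)
    return float(np.mean(estimates))
\end{verbatim}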
\subsection{Experiments}
We used the 104 games and the 3 state representation types to train our policy, and we evaluate it with the OPE methods. In this section, we demonstrate the performance of the obtained policy and compare it to the behavioral policy under the different state representations. We then describe three scenarios and analyze the actions of our policy versus the actions actually performed by the players.
Figure \ref{models} shows the mean rewards over 100 epochs of the policy network trained with the different proposed state representations (see Table~\ref{cnn-table}), evaluated by the importance sampling and doubly robust methods. As the figure reveals, both OPE methods show that our proposed state representation type (III) (purple line) allows the policy network to converge after a sufficient number of epochs. In particular, under state (III) the policy network converges after about 70 epochs when evaluated by importance sampling, and around 80 epochs when evaluated by the doubly robust method. In contrast, the mean reward curves under state (I) converge quickly (due to their low-dimensional input) to relatively lower mean rewards, and the curves under state (II) fail to converge (due to their high-dimensional and complex input structure). The results therefore indicate that our proposed state representation (III) outperforms the other types.
Using importance sampling as the more reliable evaluator of the optimal policy with state (III), any model after epoch 70 is suitable for deployment by a football club for analysis. The reward (expected goal) acquired by the trained policy is around 0.45, with some variance, averaged over all 104 games. The figure also shows that the optimized policy (purple line) outperforms the mean reward of the behavioral policy (green line), which is about -0.1 for all the matches.
\begin{figure}[h]
\centering
\includegraphics[ width=.8\textwidth]{importance_sampling_reward.png}
\caption{Off-policy policy evaluation of the trained models on the 3 state representations with the importance sampling and doubly robust methods, compared to the behavioral policy. The shaded region represents the standard deviation over 104 game rollouts}
\label{models}
\end{figure}
Moreover, Figure~\ref{rewards} compares the kernel density estimates (KDE) of the mean rewards obtained by the behavioral and optimal policies over all matches, evaluated by OPE. As shown, the density of the optimized policy has moved towards the positive side and clearly improves over the behavioral policy. It also has a smaller variance than the behavioral policy.
\begin{figure}[h]
\centering
\includegraphics[ width=5cm]{rewards.png}
\caption{KDE of mean rewards for all episodes of 104 games, acquired by behavioral and optimal policy (evaluated by importance sampling)}
\label{rewards}
\end{figure}
Now, we consider some scenarios to see how the optimized policy works compared to the behavior policy.
Figure \ref{scenarios} sketches 3 different scenarios of critical situations in a particular match of our dataset, in which the ball holder has no chance to pass or dribble. Thus, the ball holder needs to decide among the 3 intentional options (shot, out, foul), or concede the ball to the opponent through an error. For each scenario, we describe the action performed by the player and the action proposed by the policy network.
\textbf{Scenario 1: home player missed goal scoring opportunity: }
Figure~\ref{scenario1} shows the $24^{th}$ episode of the match. Player A from the home team stops a long sequence of passes by committing a foul and gets a reward of -0.16. At this moment, there is high pressure from the away players (B, C, D), so A commits a foul to prevent a possession loss. Player D from the away team then gets the ball, but immediately loses it due to an inaccurate pass. As noted earlier, unsuccessful touches by the opponent lasting fewer than 3 consecutive actions are not considered a possession loss; thus, the possession is kept for the home team after the foul by A. Although the possession was kept, the policy network suggests shooting the ball instead of committing the foul. Player A could then have gained a reward (expected goal) of 0.4, meaning that the probability of scoring was 0.4, and he missed this opportunity.
\textbf{Scenario 2: goal conceding:}
Figure~\ref{scenario2} shows the $73^{rd}$ episode of the match. Player A from the home team loses the possession through an error (a tackle by D) and gets a reward of -0.1. The next possession belongs to the away team, and they score a goal (red trajectory in the figure). The policy network assigns a higher probability to a foul in this situation instead of the inaccurate pass (error), so there was a chance for A to save the possession and avoid conceding a goal for the home team.
\textbf{Scenario 3: goal conceding: }
Figure~\ref{scenario3} shows the $90^{th}$ episode of the match. Player A from the home team loses the possession due to bad ball control under high pressure from B, D, and E, and gets a reward of -0.1. The next possession belongs to the away players and they score a goal (red trajectory). The policy network, perhaps surprisingly, suggests sending the ball out in this situation, so the home players could probably have saved the possession and avoided conceding a goal.
\begin{figure}[h]
\centering
\subfloat[Scenario1: (performed action: foul), (optimal action: shot)]{{\includegraphics[height=4cm, width=5cm]{scenario11.png}\label{scenario1}}}%
\qquad
\subfloat[Scenario2: (performed action: error/tackle by opponent), (optimal action: foul)]{{\includegraphics[height=4cm, width=5cm]{scenario22.png}\label{scenario2} }}%
\qquad
\subfloat[Scenario3: (performed action: error/bad ball control), (optimal action: out)]{{\includegraphics[height=4cm, width=6cm]{scenario33.png}\label{scenario3} }}%
\caption{Three scenarios of critical situations in a match. Red dots are home team players and blue dots are away team players. The black arrow indicates the ball holder. Yellow dashed lines show the optimal trajectory of the ball if the player had followed the optimal policy. Red dashed lines show the actual, non-optimal trajectory of the ball resulting from the action the player performed in the match. The probability distribution shows the output of our trained (optimal) policy network. }%
\label{scenarios}%
\end{figure}
\section{Conclusion}
\label{conclusion}
We proposed a data-driven deep reinforcement learning framework to optimize the impact of actions in the critical situations of a soccer match. In these situations, the player can neither pass the ball to a teammate nor continue dribbling. Thus, the player can only commit a foul, send the ball out, shoot it, or, if not skilled enough, lose the ball to a defensive action of the opponent. Our framework, built on a trained policy network, helps players and coaches compare their behavioral policy with the optimal policy. More specifically, sports professionals can feed any state with the proposed possession features and state representation to find the optimal actions. We conducted experiments on 104 matches and showed that the optimal policy network increases the mean reward to 0.45, outperforming the expected goal gained by the behavioral policy, which is about -0.1. To the best of our knowledge, this work constitutes the first use of off-policy policy-gradient reinforcement learning to maximize the expected goal in soccer games. A direction for future work is to expand the framework to evaluate all on-the-ball actions of the players, including passes and dribbles.
\section*{Acknowledgment}
Project no. 128233 has been implemented with the support provided by the Ministry of Innovation and Technology of Hungary from the National Research, Development and Innovation Fund, financed under the FK\_18 funding scheme. The authors thank xfb Analytics\footnote{http://www.xfbanalytics.hu/} for supplying event stream and tracking data used in this work.
\bibliographystyle{IEEEtran}
\label{sec:intro}
What are the offensive tendencies of your upcoming opponent with regards to their shot selection?
Do these tendencies change through the course of the game?
Are they particularly ineffective with regards to specific plays so as to force them towards them?
These are just some of the questions that our proposed framework, named {{\tt tHoops}}, can answer.
While data have been an integral part of sports since the first boxscore was recorded in a baseball game during the 1870s, it is only recently that machine learning has really penetrated the sports industry and has been utilized for facilitating the operations of sports franchises.
One of the main reasons for this is our current ability to collect more fine-grained data; data that capture essentially (almost) everything that happens on the court.
\begin{figure*}[ht]
\centering
\includegraphics[scale=0.3]{figures/thoops}
\caption{The {{\tt tHoops}} framework. Tensor $\underline{\mathbf{X}}$ can capture the aggregate shot charts of each players for specific times within the game. For example, $\underline{\mathbf{X}}(i,j,k)$ is the number of shots that player $i$ took from court location $j$ during time $k$. ${\tt tHoops}$ identifies prototype patterns in the data, expressed as triplets of vectors corresponding to the three dimensions of $\underline{\mathbf{X}}$ respectively. An element in the player vector can be thought of as a {\em soft} coefficient for the membership of the corresponding player in this component/pattern.}
\label{fig:thoops}
\end{figure*}
For example, shot charts, that is, maps capturing locations of (made or missed) shots, describe the shot selection process and can be thought of as an indicator of the identity of a player/team.
Furthermore, since the 2013-14 season, the National Basketball Association (NBA) has mandated its 30 teams to install an optical tracking system that collects information 25 times every second for the location of all the players on the court, as well as the location of the ball.
These data are further annotated with other information such as the current score, the game and shot clock time etc.
Optical tracking data provide a lens to the game that is much different from traditional player and team statistics.
These spatio-temporal trajectories for all the players on the court can capture information for the offensive/defensive tendencies as well as, the schemes used by a team.
They can also allow us to quantify parts of the game that existing popular statistics cannot.
For instance, the Toronto Raptors were among the first teams to make use of this technology and were able to identify optimal positions for the defenders, given the offensive formation \cite{grandland}.
This allowed the Raptors to evaluate the defensive performance of their players, an aspect of the game that has traditionally been hard to evaluate through simple boxscore metrics such as blocks and steals.
One of the tasks a team has to undertake during its preparation for upcoming games is to study its opponents, their tendencies and how they compare with other teams.
Playing tendencies have been traditionally analyzed in a heuristic manner, mainly through film study, which is certainly time consuming (a temporal cost pronounced for NBA teams that play 3 to 4 games every week).
However, the availability of detailed spatio-temporal (optical tracking) data makes it possible to identify prototype patterns of schemes and tendencies of the opponent in much shorter time.
Recently, automated ways for similar comparisons have appeared in the sports analytics literature, focusing on shooting behavior and aiming to identify a set of prototype shooting patterns that can be used as a basis for describing the shooting tendencies of a player/team (e.g., \cite{miller14}).
These approaches offer a number of advantages over simply comparing/analyzing the raw data (e.g., the raw shot-charts).
In particular, similar to any latent space learning method, they allow for a better comparison as well as easier data retrieval, through the decomposition of the data into several prototype patterns in a reduced dimensionality.
However, existing approaches almost exclusively focus on the spatial distribution of the underlying (shooting) process, ignoring a multitude of other parameters that can affect the shot selection.
For instance, the time remaining on the shot/game clock, the score differential, etc. are some contextual factors that can impact the shot selection of a player.
Similarly, the analysis of the players' trajectories obtained from optical tracking can benefit greatly from the learning of a latent, reduced dimensionality, space that considers several contextual aspects of the game.
\begin{mdframed}[linecolor=red!60!black,backgroundcolor=gray!20,linewidth=2pt, topline=true, rightline=false, leftline=false]
To address this current gap in the literature, we design and develop {{\tt tHoops}}, a novel tensor decomposition based method for analyzing basketball data, which simultaneously incorporates multiple factors that can affect the offensive (and defensive) tendencies of a team.
\end{mdframed}
{{\tt tHoops}} allows us to separate the observed tendencies captured in the data across multiple dimensions and identify patterns strongly connected among all dimensions.
This is particularly important for sports, where strategies, play selection and schemes depend on several contextual factors including time.
What are the Boston Celtics offensive patterns in their ATO (After Time-Out) plays?
How do they differ based on the players on the court?
The benefits of {{\tt tHoops}} are not limited to describing and learning the tendencies of a team/player; as we will discuss later, it also allows the coaching staff to flexibly retrieve relevant parts of the data (e.g., for film study).
We would like to emphasize here that we do not suggest that {{\tt tHoops}} - or any other similar system - will substitute film study, but it will rather make it more efficient by allowing coaching staff to (i) obtain a report with the most prevalent patterns, which can form a starting point for the film study, and (ii) perform a flexible search through the data (e.g., identify all the possessions that had two offensive players at the corner threes and one player in the midrange slot during their last 5 seconds).
Simply put, {{\tt tHoops}} is an automated, {\em exploratory analysis} and {\em indexing}, method that can facilitate traditional basketball operations.
In addition, {{\tt tHoops}} can be used to generate synthetic data based on patterns present in real data.
This can significantly benefit the sports analytics community, since real optical tracking data are kept proprietary.
However, as we will describe later, one could identify prototype motifs on the real data using {{\tt tHoops}} and use the obtained patterns to generate synthetic data that exhibit similar patterns.
In this study we focus on and evaluate the analytical tool of {{\tt tHoops}} but we also describe how we can use {{\tt tHoops}} for other applications (i.e., indexing player tracking data to facilitate flexible retrieval and generating synthetic data).
More specifically, we are interested in analyzing the offensive tendencies of an NBA team (or the league as a whole).
For capturing these offensive tendencies we make use of two separate datasets, namely, {\em shot charts} and {\em optical tracking} data.
The reason for this is twofold.
First, the two different datasets capture different type of information related with the offensive tendencies of a team.
The shot charts are representative of the shot selection process for a team or a player, while the optical tracking data capture rich information for the shot generation process, i.e., the players' and ball movement that led to a shot (or a turnover).
Second, using two datasets that encode different type of information directly showcases the general applicability of {{\tt tHoops}}, i.e., it can be adjusted accordingly to accommodate a variety of multi-aspect (sports) data.
In brief, {{\tt tHoops}} is based on tensor decomposition (see Figure \ref{fig:thoops}).
For illustration purposes, let us consider the shot charts of individual players.
{{\tt tHoops}} first builds a 3-dimensional tensor $\underline{\mathbf{X}}$, whose element $\underline{\mathbf{X}}(i,j,k)$ is the aggregate number of shots that player $i$ took from court location $j$ at time $k$.
The granularity of location and time can be chosen based on the application.
For example, for location one could consider a grid over the court and hence, $j$ represents a specific grid cell.
Or alternatively - as is the case in Figure \ref{fig:thoops} - $j$ can represent one of the official courtzones (e.g., left/right corner three area, left/right slot three area etc.).
With respect to the temporal dimension, one could consider the shot clock as the time unit (i.e., values between 0-24), the quarter of the game (i.e., values between 1-5, where 5 groups together all possible overtimes), or even the exact game clock (e.g., at the minute granularity).
The factors of this tensor, which are obtained through solving an optimization problem, are essentially vector triplets, that can also be represented as 3 separate matrices (Figure \ref{fig:thoops}).
As we will elaborate on later, these factors provide us with a set of prototype patterns, i.e., {\em shooting bases}, that synthesize the shot selection tendencies of players.
From a technical standpoint, one of the challenges is to identify the appropriate number of factors/components for the decomposition.
While there are well-established metrics for this task, they have limitations - both in terms of computational complexity and in terms of applicability - that can appear in our setting.
Therefore, we introduce an approach that is based on a specifically-defined clustering task and the separability of the obtained clusters.
The rest of the paper is organized as follows:
Section \ref{sec:related} provides an overview of related with our work literature.
We further present {{\tt tHoops}} in Section \ref{sec:thoops}.
Sections \ref{sec:dataset} and \ref{sec:thoops-results} describe our datasets and their analysis using {{\tt tHoops}} respectively.
Finally, Section \ref{sec:discussion} discusses our work and other potential applications of {{\tt tHoops}}.
\section{Related Literature}
\label{sec:related}
The availability of optical tracking sports data has allowed researchers and practitioners in sports analytics to analyze and model aspects of the game that were not possible with traditional data.
For example, Franks \textit{et al.} \cite{franks15} developed models for capturing the defensive ability of players based on the spatial information obtained from optical tracking data.
Their approach is based on a combination of spatial point processes, matrix factorization and hierarchical regression models and can reveal several information that cannot be inferred with boxscore data.
For instance, the proposed model can identify whether a defender is effective because he reduces the quality of a shot or because he reduces the frequency of the shots all together.
Cervone \textit{et al.} \cite{cervone2016} further utilize optical tracking data and develop a model for the expected possession value (EPV) using a multi-resolution stochastic process model.
Tracking the changes in the EPV as the possession progresses can enable practitioners to quantify previously \textit{intangible} contributions such as a good screen, a good pass (not assist) etc.
Similar to this study, Yue \textit{et al.} \cite{yue2014learning} develop a model using conditional random fields and non-negative matrix factorization for predicting the near-term actions of an offense (e.g., pass, shoot, dribble etc.) given its current state.
In a tangential direction, D'Amour \textit{et al.} \cite{damour15} develop a continuous time Markov-chain to describe the discrete states a basketball possession goes through.
Using this model the authors then propose entropy-based metrics over this Markov-chain to quantify the ball movement through the unpredictability of the offense, which also correlates well with the generation of opportunities for open shots.
Optical tracking data can also quantify the usage of the different court areas.
Towards this direction Cervone \textit{et al.} \cite{cervone2016nba} divided the court based on the Voronoi diagram of the players' locations and formalized an optimization problem that allowed them to obtain court realty values for different areas of the court.
This further allowed the authors to develop new metrics for quantifying the spacing of a team and the positioning of the lineup.
Very recently a volume of research has appeared that utilizes deep learning methods to analyze spatio-temporal basketball data and learn latent representations for players and/or teams, identify and predict activities etc. (e.g., \cite{Mehrasa18,zhong18} with the list not being exhaustive).
Closer to our work, Miller \textit{et al.} \cite{miller14} use Non-Negative Matrix Factorization to reduce the dimensionality of the spatial profiles of player's shot charts.
Their main contribution is the use of a log-Gaussian Cox point process to smooth the raw shooting charts, which they show provides more intuitive and interpretable patterns.
The same authors developed a dictionary for trajectories that appear in basketball possessions using Bezier curves and Latent Dirichlet Allocation \cite{miller2017possession}.
Our work can be thought of as complementary to these studies.
In particular, {{\tt tHoops}} is able to consider additional dimensions that can affect the shot selection process and the possession development, such as time, while it is also able to analyze a large variety of multi-aspect (sports) data in general, rather than being limited to shooting charts and spatial trajectories.
Furthermore, it is generic enough to power other applications (see Section \ref{sec:discussion}).
While basketball is the sport that has been studied the most through optical tracking data - mainly due to the availability of data - there is relevant literature studying other sports as well (both in terms of methodology and application).
For example, Bialkowski \textit{et al.} \cite{bialkowski2014large} formulate an entropy minimization problem for identifying players' roles in soccer.
They propose an EM-based scalable solution, which is able to identify the players' role as well as the formation of the team.
Lucey \textit{et al.} \cite{lucey2014quality} also used optical tracking data for predicting the probability of scoring a goal by extracting features that go beyond just the location and angle of the shot \cite{fairchild17}.
More recently, Le \textit{et al.} \cite{Le2017CoordinatedMI} develop a collaboration model for multi-agents using a combination of unsupervised and imitation learning.
They further apply their model to optical tracking data from soccer to identify the optimal positioning for defenders - i.e., the one that \textit{minimizes} the probability of the offense scoring a goal - given a particular formation of the offense.
This allows teams to evaluate the defensive skills of individual players.
In a tangential effort, Power \textit{et al.} \cite{Power:2017:PCE:3097983.3098051} define and use a supervised learning approach for the risk and reward for a specific pass in soccer using detailed spatio-temporal data.
The risk/reward for a specific pass can further quantify offensive and defensive skills of players/teams.
While we introduce {{\tt tHoops}} as a framework for analyzing basketball data, it should be evident that it can really be used to analyze spatio-temporal (and in general multi-aspect) data for other sports as well.
\section{{\tt tHoops}: Tensor Representation and Decomposition}
\label{sec:thoops}
In this section we will present the general representation of spatio-temporal sports data with tensors, as well as, the core of {{\tt tHoops}}.
An $n$-mode tensor is a generalization of a matrix (a 2-mode tensor) to $n$ dimensions.
For illustration purposes in this section we will focus on players' shot charts that include information for the court location, game time and the player who took the shot.
In order to represent these shot charts, we will utilize a 3-mode tensor $\underline{\mathbf{X}}$, that captures the spatio-temporal information of the shot selection for players/teams.
In particular, the element $\underline{\mathbf{X}}(i, j, k)$ will be equal to the number of shots that player/team $i$ took from the court location $j$ during time $k$.
Figure \ref{fig:thoops} depicts this (cubic) structure.
A typical technique for identifying latent patterns in data represented by a 2-mode tensor (i.e., a matrix), is matrix factorization (e.g., Singular Value Decomposition, Non-negative Matrix Factorization etc.).
A generalization of the Singular Value Decomposition in n-modes is the Canonical Polyadic (CP) or PARAFAC decomposition \cite{harshman1970foundations}.
Without getting into the details of the decomposition, PARAFAC expresses $\underline{\mathbf{X}}$ as a sum of $F$ rank-one components:
\begin{equation}
\underline{\mathbf{X}} \approx \displaystyle {\sum_{f=1}^F \mathbf{a}_f \circ \mathbf{b}_f \circ \mathbf{c}_f },
\label{eq:tesor_dec}
\end{equation}
where $ \mathbf{a}_f \circ \mathbf{b}_f \circ \mathbf{c}_f (i,j,k) = \mathbf{a}_f(i) \mathbf{b}_f (j) \mathbf{c}_f (k) $.
In cases where we have sparse count data these components are obtained as the solution to the following optimization problem \cite{chi2012tensors}:
\begin{equation}
\min_{{\mathbf{A}},\mathbf{B},\mathbf{C}} D_{KL}(\underline{\mathbf{X}}|{\sum_{f} \mathbf{a}_f \circ \mathbf{b}_f \circ \mathbf{c}_f }),
\end{equation}
where $D_{KL}$ is the Kullback-Leibler divergence, and matrices $\mathbf{A,B,C}$ hold the $\mathbf{a}_f,\mathbf{b}_f,\mathbf{c}_f$ respectively in their $f$-th columns.
Simply put, each component of the decomposition, i.e., triplet of vectors, is a rank-one tensor (obtained as the outer product of the three vectors).
Each vector in the triplets corresponds to one of the three modes of the original tensor $\underline{\mathbf{X}}$.
In our example, $\mathbf{a}$ corresponds to the players, $\mathbf{b}$ corresponds to the court locations, and $\mathbf{c}$ corresponds to the game clock/time.
Each of these $F$ components can be considered as a cluster, and the corresponding vector elements as soft clustering coefficients, that is, if a coefficient is small, the corresponding element does not belong to this {\em cluster}.
In our application, these \textit{clusters} correspond to a set of players that tend to take shots from \textit{similar} areas on the court during \textit{similar} times within the game.
For notational simplicity, we will denote as matrix $\mathbf{A}$ (and matrices $\mathbf{B}$ and $\mathbf{C}$ accordingly) the factor matrix that contains the $\mathbf{a}_f$ ($\mathbf{b}_f$ and $\mathbf{c}_f$ respectively) vectors as columns.
The vectors ($\mathbf{b}$, $\mathbf{c}$) essentially correspond to the latent patterns for the spatio-temporal shot selection of players obtained from tensor $\underline{\mathbf{X}}$.
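As an illustration, the decomposition of \eqref{eq:tesor_dec} can be computed in a few lines with a standard tensor library. The sketch below uses TensorLy's non-negative PARAFAC, which minimizes a least-squares objective rather than the KL divergence above; a KL-based fit such as CP-APR \cite{chi2012tensors} is available in the Matlab Tensor Toolbox that we use for our experiments.
\begin{verbatim}
import tensorly as tl
from tensorly.decomposition import non_negative_parafac

def decompose_shot_tensor(X, rank):
    # X: numpy array of shape (players, court zones, game periods)
    weights, factors = non_negative_parafac(tl.tensor(X), rank=rank,
                                            n_iter_max=500)
    A, B, C = factors   # player, location and time factor matrices
    return A, B, C
\end{verbatim}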
\textbf{Intuition Behind the Use of Tensors: }
Tensor decomposition attempts to summarize the given data into a reduced rank representation.
PARAFAC tends to favor dense groups that associate all the aspects involved in the data (player, locations and time in the example in Figure \ref{fig:thoops}).
These groups need not be immediately visible via inspection of the $n$-mode tensor, since PARAFAC is not affected by permutations of the mode indices.
As an immediate consequence, we expect near-bipartite cores of players who take shots from specific locations on the court during certain periods of the game.
The benefit of tensor decomposition over matrix decomposition - that has been used until now to analyzing shooting patterns - is the ability to consider several aspects of the data simultaneously, allowing for a richer context, consequently allowing {{\tt tHoops}} to obtain a richer set of latent patterns.
One could argue that we could compare shot-charts directly after dividing them based on the game time of the shot (or other contextual factor).
This is indeed true, but as aforementioned the dimensionality of the raw data can be very high (especially with an increase in the contextual factors considered), which will make it challenging to identify high quality patterns.
\textbf{Choice of Number of Components $F$: }
Depending on the structure of the given data, the PARAFAC decomposition can range from (almost) perfectly capturing the data, to performing rather poorly.
The main question is whether the spatio-temporal data at hand are amenable to PARAFAC analysis, and to what extent.
In order to answer the question of how well does PARAFAC decomposition model our data, we turn our attention to a very elegant diagnostic tool, CORCONDIA \cite{bro03}.
CORCONDIA serves as an indicator of whether the PARAFAC model describes the data well, or whether there is some problem with the model.
The diagnostic provides a number between 0 and 100; the closer to 100 the number is, the better the modeling.
If the diagnostic gives a low score, this could be caused either because the chosen rank $F$ is not appropriate, or because the data do not have appropriate trilinear structure, regardless of the rank.
To identify the reason behind a low CORCONDIA score one can gently increase the rank and observe the behavior \cite{papalexakis2015location}.
Despite its elegant application, computing CORCONDIA is very challenging even for moderately large size data.
The main computational bottleneck of CORCONDIA is solving the following linear system:
$
\mathbf{g} = \left( \mathbf{C}\otimes \mathbf{B} \otimes \mathbf{A} \right)^\dag vec\left(\underline{\mathbf{X}}\right)
$
where $\dag$ is the Moore-Penrose pseudoinverse, $\otimes$ is the Kronecker product, and the size of $\left( \mathbf{C}\otimes \mathbf{B} \otimes \mathbf{A} \right)$ is $IJK \times F^3$.
Even computing and storing $\left( \mathbf{C}\otimes \mathbf{B} \otimes \mathbf{A} \right)$ proves very hard when the dimensions of the tensor modes are growing, let alone pseudoinverting that matrix.
In order to tackle the above inefficiency in this study we use our recent work for efficiently computing CORCONDIA when the data are large but sparse \cite{papalexakis2015fastcorcondia}.
In brief, the key behind the approach is to avoid pseudoinverting $\left( \mathbf{A \otimes B \otimes C} \right)$.
To achieve this, we reformulate the computation of CORCONDIA.
The pseudoinverse
$
\left( \mathbf{A} \otimes \mathbf{B} \otimes \mathbf{C} \right)^\dag
$
can be rewritten as:
\begin{equation}
\left( \mathbf{V_a} \otimes \mathbf{V_b} \otimes \mathbf{V_c} \right) \left( \mathbf{\Sigma_a}^{-1} \otimes \mathbf{\Sigma_b}^{-1} \otimes \mathbf{\Sigma_c}^{-1} \right) \left( \mathbf{U_a}^T \otimes \mathbf{U_b}^T \otimes \mathbf{U_c}^T \right)
\end{equation}
where $\mathbf{A} = \mathbf{U_a \Sigma_a {V_a}^T}$, $\mathbf{B} = \mathbf{U_b \Sigma_b {V_b}^T}$, and $\mathbf{C} = \mathbf{U_c \Sigma_c {V_c}^T}$ (i.e. the respective Singular Value Decompositions).
After rewriting the least squares problem as above, we can efficiently compute a series of Kronecker products times a vector, {\em without} the need to materialize the (potentially big) Kronecker product.
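The computational core of this reformulation is repeatedly applying a Kronecker product of (small) matrices to a vector without ever forming the Kronecker product; a minimal sketch of this building block is shown below.
\begin{verbatim}
import numpy as np

def kron3_times_vec(A, B, C, x):
    # Computes (A kron B kron C) @ x for A (I x P), B (J x Q), C (K x R)
    # and x of length P*Q*R, using only the small factor matrices.
    P, Q, R = A.shape[1], B.shape[1], C.shape[1]
    X = x.reshape(P, Q, R)   # row-major reshape matches np.kron's ordering
    Y = np.einsum('ip,jq,kr,pqr->ijk', A, B, C, X)
    return Y.reshape(-1)

# sanity check against the explicit Kronecker product on a tiny example:
# assert np.allclose(kron3_times_vec(A, B, C, x),
#                    np.kron(np.kron(A, B), C) @ x)
\end{verbatim}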
One of the limitations of CORCONDIA is that it cannot examine the quality of the decomposition for rank higher than the smallest dimension of the original tensor $\underline{\mathbf{X}}$.
Depending on the specific design of the tensor for {{\tt tHoops}}, the smallest dimension of $\underline{\mathbf{X}}$ can limit the practical applicability of CORCONDIA for choosing rank $F$.
Simply put, with $f$ being the smallest dimension of $\underline{\mathbf{X}}$, using CORCONDIA will provide us with a rank $F$ of at most $f$, i.e., $F \le f$.
If the CORCONDIA score has already been reduced for a decomposition of rank $r \le f$ then we do not have to further examine other ranks, since their quality is going to be low.
However, if the CORCONDIA score is still high for a decomposition of rank $f$, a higher rank decomposition can provide us with a more practical answer that captures a larger percentage of the patterns that exist in the data.
In order to overcome this problem in these cases, we introduce a heuristic approach that attempts to choose the rank $F$ of the decomposition based on the ability of the identified components to separate natural clusters in the data, as compared to the raw data.
More specifically let us consider tensor $\underline{\mathbf{X}}$ whose first mode (say mode $A$) is the dimension across which we want to separate the data (e.g., players, teams etc.).
In other words, the second and third mode ($B$ and $C$ respectively), will be the feature space that we will use for the clustering.
First we perform clustering across the first mode using the raw data, i.e., the features for element $i$ of the first mode is the matrix $\underline{\mathbf{X}}(i,:,:)$.
In order to quantify the quality of the clustering we can use the Silhouette \cite{rousseeuw1987silhouettes} of the obtained clustering $\sigma_{\underline{\mathbf{X}},k}$, where $k$ represents the number of clusters.
Given that we do not know the number of clusters a-priori we compute a set of Silhouette values for different number of clusters $\mathcal{S}_{\underline{\mathbf{X}},2:K}$.
From these values, one can choose their average, their minimum or their maximum value.
In our work, we choose to pick as the Silhouette value the $\max \mathcal{S}_{\underline{\mathbf{X}},2:K}$.
The reasoning behind this choice, is that the maximum Silhouette value will provide the best separation of the data.
Consequently for the rank $F$ decomposition of $\underline{\mathbf{X}}$ we consider as the features $\mathbf{r}$ for the element $i$ of the first mode, a concatenation of the corresponding elements in the component vectors $\mathbf{a}$ (or simply the $i^{th}$ row of the factor matrix $\mathbf{A}$):
\begin{equation}
\mathbf{r}_i = (\mathbf{a}_{i,j},~\forall j\in\{1,\dots,F\})
\label{eq:dec_f}
\end{equation}
Using these features we can again cluster the elements in the mode of interest of the tensor and obtain a set of Silhouette values $\mathcal{S}_{F,2:K}$.
We can then choose the decomposition rank $F$ as:
\begin{equation}
\min_F \{\max{\mathcal{S}}_{\underline{\mathbf{X}},2:K} \le \max{\mathcal{S}}_{F,2:K} \land |\max{\mathcal{S}}_{F+1,2:K} - \max{\mathcal{S}}_{F,2:K}| < \epsilon \}
\label{eq:heuristic}
\end{equation}
where $\epsilon > 0$ is a convergence criteria.
Simply put, we choose the decomposition rank $F$ that provides better separability compared to the raw data, while at the same time increasing the rank does not provide any (significant) additional benefit.
We can also add another constraint in Equation \ref{eq:heuristic} that sets a minimum value for $\max{\mathcal{S}}_{F,2:K}$.
According to Kaufman and Rousseeuw \cite{kaufman2009finding} a strong structure will exhibit Silhouette values greater than 0.7.
Therefore, we could set a threshold according to similar rules-of-thumb.
Nevertheless, this could be very restrictive in some cases (e.g., when there is not any inherent natural structure in the data), and therefore, we do not include it in the formal condition in Equation \ref{eq:heuristic}.
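A minimal sketch of this heuristic is given below; the convergence tolerance $\epsilon$ and the maximum number of clusters are assumptions made only for illustration.
\begin{verbatim}
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def best_silhouette(features, k_max=10):
    # maximum Silhouette value over k = 2..k_max clusterings
    scores = []
    for k in range(2, k_max + 1):
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(features)
        scores.append(silhouette_score(features, labels))
    return max(scores)

def choose_rank(X, factors_by_rank, eps=0.01, k_max=10):
    # X: the raw tensor as a numpy array (mode of interest first)
    # factors_by_rank: dict mapping rank F to its mode-1 factor matrix A
    s_raw = best_silhouette(X.reshape(X.shape[0], -1), k_max)
    ranks = sorted(factors_by_rank)
    s = {F: best_silhouette(factors_by_rank[F], k_max) for F in ranks}
    for F, F_next in zip(ranks, ranks[1:]):
        if s[F] >= s_raw and abs(s[F_next] - s[F]) < eps:
            return F
    return ranks[-1]
\end{verbatim}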
\section{Datasets}
\label{sec:dataset}
In this section we present the datasets used in our study. For tensor manipulations we use the Tensor Toolbox for Matlab \cite{TTB_Software}.
{\bf Shot Charts: }
We collected a shot dataset from the 2014-15 NBA season using NBA's shotchart API endpoint \cite{kpele-git-2018}.
This endpoint provides several information for all the shots taken during the season, including: the game that the shot was taken, the player that took the shot, the location on the floor from where the shot was taken, the game clock information, the shot type, and whether the shot was made or missed.
In total we collected information for 184,209 shots from 348 different players.
In order to represent these data with a tensor $\underline{\mathbf{X}}$, we need to quantize the court location and the temporal dimension.
For court location we could filter the points through a spatial grid and use the grid ID as the index for the location dimension in $\underline{\mathbf{X}}$.
However, we choose to use the official courtzones, depicted in the sample shotchart in Figure \ref{fig:shotchart},
since these are the natural borders for the different locations on the court.
This choice further reduces any potential noise induced by an extremely fine-grained grid.
Furthermore, for the temporal dimension, we use the game periods, where we merge all overtimes to a single 5th period.
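A minimal sketch of the construction of $\underline{\mathbf{X}}$ from the collected shot records is shown below; the field names are hypothetical and only illustrate the quantization described above.
\begin{verbatim}
import numpy as np

def build_shot_tensor(shots, n_players, n_zones, n_periods=5):
    # shots: iterable of (player_idx, zone_idx, period) records of made shots;
    # overtime periods (beyond the 4th) are merged into the 5th slot
    X = np.zeros((n_players, n_zones, n_periods))
    for player, zone, period in shots:
        X[player, zone, min(period, n_periods) - 1] += 1
    return X
\end{verbatim}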
\begin{figure}[ht]
\centering
\includegraphics[scale=0.4]{figures/shotchart}
\caption{Sample shotchart, with the court zones used for {{\tt tHoops}} spatial granularity depicted.}
\label{fig:shotchart}
\end{figure}
{\bf Player Tracking: }
Since 2013 the NBA has \textit{mandated} all the teams to equip their stadiums with optical tracking capabilities.
The information this technology provides is derived from cameras mounted in stadium rafters and consist primarily of x,y coordinates for the players on the court and ball.
This information is recorded at a frequency of 25 times per second and allows the analysis of offensive (and defensive) schemes.
The player tracking data provide additional meta-information such as game and shot clock, violations etc.
For our study, we use data from 612 games from the 2015-16 NBA regular season \cite{linouk23-git-2016}.
As we will see in the following section, these data can be analyzed by {{\tt tHoops}} to provide prototype patterns for the offensive tendencies of a team, which can further facilitate scouting tasks as discussed earlier.
\section{Evaluations}
\label{sec:thoops-results}
In this section we present the results from the application of {{\tt tHoops}} to our datasets.
We would like to emphasize here that {{\tt tHoops}} is an unsupervised learning method and given the absence of ground truth labels (i.e., the {\em true patterns}) it is hard to do a comparative/accuracy evaluation.
Therefore, similar to existing literature that deals with related unsupervised learning problems \cite{miller14,miller2017possession}, we perform more of a qualitative evaluation of the results, discussing and matching them with {\em partial} ground truth that we know for players and teams.
\textbf{Shot Chart Analysis: }
We start by presenting our results for the players' latent shooting patterns.
In particular, we build two separate tensors, one for made shots $\underline{\mathbf{X}}_{Made}$ and one for missed shots $\underline{\mathbf{X}}_{Missed}$, since one can argue that they encode different information of sorts.
Figure \ref{fig:shot-components} presents the spatial and temporal patterns for the 12 components we identified using the $\underline{\mathbf{X}}_{Made}$.
Note here that the smallest dimension of $\underline{\mathbf{X}}_{Made}$ is the temporal one and this is equal to 5.
Therefore, CORCONDIA can assess the quality of the decomposition for rank up to 5.
Our results indicate that the quality of the model obtained does not deteriorate until that rank so we also examine higher ranks.
However, PARAFAC provides us only with 12 components for $\underline{\mathbf{X}}_{Made}$, that is, higher components are degenerate, and for this reason we use all of the components provided.
Components 1 and 2 are particularly important for showing the difference between {{\tt tHoops}} and a similar approach based on matrix factorization.
The spatial element of these two components is very similar (almost identical) and represents shots made from the (deep) paint.
However, they are different in the temporal dimension.
As we can see component 1 includes shots taken mainly during quarters 1 and 3, while component 2 mainly covers quarters 2 and 4.
The fact that PARAFAC detected two components (instead of one that covers all the periods) means that there are subgroups of players who take and make these shots at different times during the game.
Of course, this difference can be purely based on personnel decisions from the coaching staff through the game, but {{\tt tHoops}} is able to pick this up and provide us with latent patterns considering all the dimensions included in the tensor simultaneously.
In contrast, component 11 corresponds to corner 3 (made) shots.
There is no other component that includes them (component 10 includes a small fraction of corner 3 shots but it heavily captures above the break 3-point shots), which means that players that are heavily represented in this component take (and make) these shots almost uniformly across the game as it can be seen by the temporal element.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.3]{figures/shot-components}
\caption{{{\tt tHoops}} components for the $\underline{\mathbf{X}}_{Made}$ tensor. The spatial and temporal elements are presented. The size of the points for the temporal element correspond to the coefficient for each quarter.}
\label{fig:shot-components}
\end{figure}
Figure \ref{fig:shot-components} does not provide any information for the player vector of the component.
The player vector of the tensor factor informs us which players have a strong representation in the component under examination.
For example, Table \ref{tab:players} presents the top players (i.e., the players with the largest coefficients in the corresponding player vectors) included in the corner 3 component and the midrange shot component.
\begin{table}[h!]
\begin{center}
\begin{tabular}{c|c}
\textbf{Corner 3} & \textbf{Midrange} \\
Component 11 & Component 4\\
\hline
Trevor Ariza & Blake Griffin\\
Matt Barnes & Avery Bradley \\
Danny Green & Monta Ellis \\
Klay Thompson & LaMarcus Aldridge \\
Luol Deng & Anthony Davis \\
Kyle Korver & Marc Gasol \\
JJ Redick & Pau Gasol \\
O.J. Mayo & Nikola Vucevic\\
Bojan Bogdanovic & Chris Paul\\
\end{tabular}
\end{center}
\caption{The components obtained from {{\tt tHoops}} can provide us with valuable information for the shooting tendencies of players.}
\label{tab:players}
\vspace{-0.1in}
\end{table}
As one might have expected, Danny Green, Klay Thompson, Kyle Korver and JJ Redick are predominantly featured in the corner 3s component, while players like LaMarcus Aldridge, Chris Paul and the Gasol brothers are featured in the midrange component.
Table \ref{tab:players} also serves as an indicator that the components obtained from {{\tt tHoops}} are sensible and pass the ``eye-test''.
Using the vectors $\mathbf{r}$ from Equation (\ref{eq:dec_f}) as the player features, we can obtain a 12-dimensional latent representation of each player that can be further used to cluster players.
These clusters will represent players with similar offensive patterns (with regards to shots made).
We use k-means clustering and the Silhouette to determine the appropriate number of clusters, which provides us with a value of k=5.
Figure \ref{fig:player_clusters} further presents the clusters on a two-dimensional projection using t-SNE \cite{maaten2008visualizing}.
As we can see the clusters are well distinguished - especially considering that t-SNE uses a (further) reduced dimensionality of the data.
The largest cluster corresponds to players whose main patterns (i.e., the ones with the highest coefficients) correspond to shots taken from the paint (specifically tensor components 1, 2, and 3).
The smallest cluster corresponds to players whose patterns heavily include the 3-point shoot components (tensor components 8, 9, 11 and 12).
This cluster includes players such as Steph Curry, James Harden, Kyle Korver, JJ Redick, Gordon Hayward, Kyrie Irving, Klay Thompson and JR Smith.
Another distinct cluster includes players whose most dominant components are 4, 6 and 7, i.e., midrange shots.
This cluster includes players like DeMar DeRozan, LaMarcus Aldridge, Al Horford, Blake Griffin, Marreese Speights and Anthony Davis.
The fourth cluster does not exhibit any specific pattern with regards to the spatial distribution of the shots.
However, it includes players who are offensively active mainly during quarters 2 and 4 (tensor components 2, 5 and 10).
Players that fall into this cluster are mainly bench and role players such as Jamal Crawford, Leandro Barbosa, Patty Mills, Andre Iguodala, J.J. Barea and Vince Carter.
This shows that using the information from the tensor components allows us to essentially group players based on different aspects of their game simultaneously.
Finally, the last cluster includes players that are a mix of the other 4 clusters, which makes it harder to profile them.
Nevertheless, considering also the location of this cluster on the t-SNE projection, i.e., surrounded by the other four clusters, it further strengthens our belief that {{\tt tHoops}} is able to capture multi-aspect patterns in the shooting data.
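For completeness, a minimal sketch of this clustering and visualization step is shown below; the choice of $k$ via the Silhouette follows the procedure described above, while the t-SNE projection is used only for plotting.
\begin{verbatim}
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE
from sklearn.metrics import silhouette_score

def cluster_players(A, k_max=10):
    # A: player factor matrix, one 12-dimensional latent vector per player
    best_k, best_s, best_labels = None, -1.0, None
    for k in range(2, k_max + 1):
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(A)
        s = silhouette_score(A, labels)
        if s > best_s:
            best_k, best_s, best_labels = k, s, labels
    coords_2d = TSNE(n_components=2).fit_transform(A)  # for plotting only
    return best_labels, best_k, coords_2d
\end{verbatim}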
\begin{figure}[ht]
\centering
\includegraphics[scale=0.3]{figures/tsne}
\caption{t-SNE visualization of the players' clusters using the components for $\underline{\mathbf{X}}_{Made}$ obtained through {{\tt tHoops}}.}
\vspace{-0.1in}
\label{fig:player_clusters}
\end{figure}
We also analyzed the teams shooting patterns by designing the appropriate tensor for {{\tt tHoops}}.
In this case tensor $\underline{\mathbf{X}}_{Made,Team}$ is obtained by using the shots from all the players of the teams.
For the 2014-15 season {{\tt tHoops}} identified 7 components, whose spatio-temporal parts are presented in Figure \ref{fig:shot-team-componenets}.
These patterns can characterize the behavior of teams as a whole - rather than individual players.
For example, the made shots of the Houston Rockets do not load on components 1, 3, 6 and 7, i.e., the corresponding coefficients are almost 0.
The Rockets' successful shot selection only follows the latent patterns described in components 2, 4 and 5.
Note that these components correspond to three-point shots and shots taken from the paint, i.e., the most efficient shots in basketball.
This is something we should have expected from an analytically savvy team like the Rockets, and hence, {{\tt tHoops}} again matches the known intuition and knowledge for the game.
Beyond the pure quality of the models identified by {{\tt tHoops}}, it is also important to make sure that the identified patterns are \textit{sensible}, and our results clearly indicate that they are.
This will provide confidence to results obtained when incorporating additional information that has been ignored before and is expected to provide new insights.
For example, the shot clock information can be an important factor for shot selection.
When the shock clock winds down, a player will simply take a shot (in most of the cases) to avoid a shot clock violation.
This shot can be of very bad quality, but the corresponding component obtained from the tensor factorization will inform us of this (i.e., that the shot was taken as the shot clock was expiring).
The applications of {{\tt tHoops}} are only limited by the amount and type of information available to us.
In the rest of this section, we use {{\tt tHoops}} to analyze optical tracking data and obtain information about the offensive schemes of teams.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.45]{figures/shot-team-components-2}
\caption{{{\tt tHoops}} identified 7 components using $\underline{\mathbf{X}}_{Made,Team}$. The teams' offensive tendencies can then be thought of as a combination of these latent patterns.}
\vspace{-0.1in}
\label{fig:shot-team-componenets}
\end{figure}
\textbf{Optical Tracking Data Analysis: }
Shooting data can provide a glimpse into the offensive (and defensive depending on the definition of $\underline{\mathbf{X}}$) tendencies of a team and in general of basketball.
For example, if one tracks the data over different seasons, he might observe that the team factor matrix $\mathbf{A}$ for (made) shot charts (Figure \ref{fig:shot-team-componenets}) has larger coefficients for components 1, 2, and 4.
This is a direct consequence of the analytics movement that has identified three-point shots and shots from the paint as the most efficient ones.
However, the offensive tendencies of the game can be better captured through the detailed optical tracking data described earlier.
{{\tt tHoops}} can be used in this setting as well to identify \textit{prototype} offensive formations.
For this case the three modes of $\underline{\mathbf{X}}$ correspond to (i) the court zones, (ii) the shot clock\footnote{We have quantized the shot clock information to bins of 1 second.}, and (iii) an identifier for the possession that this snapshot is obtained from.
The element $\underline{\mathbf{X}}(i,j,k)$ is equal to the number of offensive players in court zone $i$ when the shot clock value was $j$ during possession $k$.
Simply put, $\underline{\mathbf{X}}(i,j,k)$ can take values from 0 to 5.
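A minimal sketch of how this possession tensor can be populated from the tracking frames is shown below; the field names are hypothetical, and we assume one representative frame per quantized shot-clock value so that each entry stays between 0 and 5.
\begin{verbatim}
import numpy as np

def build_possession_tensor(frames, n_possessions, n_zones=13,
                            n_clock_bins=24):
    # frames: iterable of (possession_idx, shot_clock_sec, offensive_zone_ids),
    # one record per quantized shot-clock value of a possession
    X = np.zeros((n_zones, n_clock_bins, n_possessions))
    for poss, clock, zones in frames:
        j = min(int(clock), n_clock_bins - 1)
        for z in zones:               # one increment per offensive player
            X[z, j, poss] += 1
    return X
\end{verbatim}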
We have computed various rank decompositions for $\underline{\mathbf{X}}(i,j,k)$.
All the possible ranks that CORCONDIA can examine (i.e., up to $F = 13$, which is the minimum dimension of $\underline{\mathbf{X}}$ that corresponds to the court zones) exhibit a good quality model.
Therefore, we will also utilize the heuristic described in Section \ref{sec:thoops} to choose the decomposition rank.
In particular, we will cluster the possessions in our dataset.
Using $K=10$, Figure \ref{fig:silhouette} presents our results, where we have also presented the Silhouette value (horizontal dashed line) for the clustering using the raw data.
As we see for all the ranks examined the separability obtained from the possession factor matrix is better as compared to that from the raw data.
The latter exhibits a Silhouette value of approximately 0.1 (horizontal dashed line in Figure \ref{fig:silhouette}), which is typically interpreted as not having identified any substantial structure \cite{kaufman2009finding}.
When using the decomposition of $\underline{\mathbf{X}}$ for low ranks (e.g., less than 40), the separability, while improved over the raw data, is still fairly low, with Silhouette values still less than 0.4, which translates to a weak (potentially artificial) structure identified \cite{kaufman2009finding}.
However, for ranks between 40 and 50 the components are able to identify a reasonably strong structure ($\max{\mathcal{S}}_{F=40,2:K}=0.69$ and $\max{\mathcal{S}}_{F=50,2:K}=0.71$).
Therefore, in this case we can choose $F = 45$.
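The rank-selection heuristic described above can be sketched as follows; the snippet assumes TensorLy's {\tt parafac} for the CP decomposition and scikit-learn for the clustering, and the candidate ranks are illustrative.
\begin{verbatim}
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def pick_rank(X, candidate_ranks, n_clusters=10, seed=0):
    """Return the CP rank whose possession-mode factors cluster best."""
    scores = {}
    for F in candidate_ranks:
        cp = parafac(tl.tensor(X), rank=F)         # CP decomposition
        A_possession = tl.to_numpy(cp.factors[2])  # mode 2: possessions
        labels = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=seed).fit_predict(A_possession)
        scores[F] = silhouette_score(A_possession, labels)
    return max(scores, key=scores.get), scores

# e.g., best_F, scores = pick_rank(X, candidate_ranks=[10, 20, 30, 40, 45, 50])
\end{verbatim}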
Due to space limitations we cannot present all 45 components.
However, Figure \ref{fig:thoops-optical} presents 5 representative components identified for the offensive schemes in the league from {{\tt tHoops}}.
A first observation is that the temporal component (second column) exhibits either a single mode or a bimodal (e.g., fourth row) distribution.
This is the same for the components omitted and essentially captures the fact that specific formations/schemes are deployed in different stages of the offense.
The spatial component (first column) provides us with prototype formations for an NBA offense.
For example, component 27 (fourth row) represents a setting that has been very common in the NBA over the last few years, i.e., shooter(s) {\em parked} at the corner 3 area waiting for an assist for the catch-and-shoot attempt.
Corner 3s have been identified as one of the most efficient shots in basketball and hence, teams have tried to incorporate this into their offense.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.25]{figures/silhouette}
\vspace{-0.1in}
\caption{While all ranks for the decomposition we examined increase the separability of the possessions as compared to the raw data, a rank of $F=45$ was chosen for this application of {{\tt tHoops}} since it exhibits the maximum Silhouette value.}
\label{fig:silhouette}
\vspace{-0.15in}
\end{figure}
\begin{figure*}[ht]
\centering
\includegraphics[scale=0.6]{figures/thoops-optical}
\vspace{-0.15in}
\caption{Five representative components identified by {{\tt tHoops}} from optical data. The spatial part of the component (first column) represents the distribution of the offense's players across the court zones, while the temporal part captures the temporal distribution of this formation over a possession. }
\label{fig:thoops-optical}
\vspace{-0.15in}
\end{figure*}
\section{Discussion and Other Applications}
\label{sec:discussion}
In this paper we have presented {{\tt tHoops}}, a framework based on tensor decomposition for analyzing multi-aspect basketball data.
We have showcased its applicability through analyzing spatio-temporal shooting and player tracking data.
However, the applications of {{\tt tHoops}} are not limited to this type of data alone.
Additional information can be integrated into the analysis through higher order tensors.
For example, a fourth mode can be included that captures the score differential during the possession.
This will allow us to identify prototype offensive patterns controlling for the score differential as well.
Depending on the type of information it might be more appropriate to include them in a matrix coupled with the tensor.
For instance, personnel information per possession, such as team on offense/defense, player names, boxscore statistics of the players, (adjusted) plus-minus ratings, personal fouls etc., is better represented through a matrix $\mathbf{M}_B$ with rows representing the possessions in the dataset and columns representing different attributes.
In this case matrix $\mathbf{M}_B$ is coupled with tensor $\underline{\mathbf{X}}$ at the possession dimension and we can obtain the tensor components through a coupled matrix-tensor factorization, which essentially provides a low dimensional embedding of the data in a common contextual subspace by solving a coupled optimization problem.
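As a point of reference, a standard formulation of such a coupled matrix-tensor factorization, with the possession mode shared between the tensor and the matrix, is the following (the notation is ours and is added only for illustration):
\begin{equation}
\min_{\mathbf{A},\mathbf{B},\mathbf{C},\mathbf{D}} \;\; \Big\|\underline{\mathbf{X}} - \sum_{f=1}^{F} \mathbf{a}_f \circ \mathbf{b}_f \circ \mathbf{c}_f\Big\|_F^2 \;+\; \big\|\mathbf{M}_B - \mathbf{C}\,\mathbf{D}^T\big\|_F^2,
\end{equation}
where $\mathbf{a}_f$, $\mathbf{b}_f$, $\mathbf{c}_f$ are the columns of $\mathbf{A}$, $\mathbf{B}$, $\mathbf{C}$, the matrix $\mathbf{C}$ corresponds to the shared possession mode, and $\mathbf{D}$ embeds the contextual attributes of $\mathbf{M}_B$ in the same latent space.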
However, on-court strategy and scouting is not the only application of {{\tt tHoops}}.
As mentioned earlier, film study is crucial for game preparation but it can be very time consuming to identify the possessions the team wants to study.
Nevertheless, the components identified by {{\tt tHoops}} can drive the development of a system that allows for flexible search in a database of possessions.
This can automate and facilitate the time-consuming parts related to film study.
For example, one can imagine querying the system using as input a (probabilistic) spatial distribution for the offense, the shot clock and any other information available for the possessions used to build tensor $\underline{\mathbf{X}}$.
For instance, let us assume that we want to extract all the possessions where an offense had players positioned at the corner 3 areas during the last 5 seconds of the shot clock for full possessions, i.e., while the shot clock was between 1-5 seconds.
If one were to use the raw data directly, he would need to go over all the possessions, extract the last 5 seconds and perform spatial queries to examine whether the spatial constraints of the query are satisfied.
The time complexity for this search is $O(N)$, where $N$ is the number of possessions (i.e., size of data).
However, using the components identified from {{\tt tHoops}} as indices we can further improve the time complexity of this retrieval task to sub-linear in the size of the data.
In particular, we can first compute the similarity $\sigma(\mathbf{q},f_i)$, between the query $\mathbf{q}$ - expressed as a vector over the spatial and temporal dimension - and the different components $f_i$ identified from {{\tt tHoops}}.
If $\sigma(\mathbf{q},f_i) > \theta$, for some threshold $\theta$, the system will return the possessions that are mainly represented in component $f_i$, i.e., the possessions with ``high'' coefficient at vector $\mathbf{a}_{f_i}$.
It should be evident that this process has a time complexity $O(F)$, where $F$ is the number of components.
Given that the number of components increases slower than the size of the dataset\footnote{An upper bound for the rank of the tensor is $0.5\cdot \min$ {\tt dim}$(\underline{\mathbf{X}})$, where {\tt dim}$(\underline{\mathbf{X}})$ is the set of $\underline{\mathbf{X}}$'s dimensions size \cite{sidiropoulos2017tensor}. Given also that we are not interested in the full-rank decomposition of the tensor, the number of $\underline{\mathbf{X}}$'s components used in {{\tt tHoops}} will be (much) smaller as well.}, using {{\tt tHoops}} for indexing the possession database will significantly accelerate the retrieval of relevant video frames for film study.
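A minimal sketch of this retrieval idea is shown below; the cosine similarity, the threshold $\theta$, and the ``top-coefficient'' rule for selecting possessions are illustrative assumptions rather than a fixed design.
\begin{verbatim}
import numpy as np

def retrieve(query, spatial_factors, temporal_factors, possession_factors,
             theta=0.8, top_frac=0.05):
    """query: concatenated (spatial, temporal) profile of interest;
    *_factors: matrices whose F columns are the identified components."""
    hits = set()
    for f in range(spatial_factors.shape[1]):
        comp = np.concatenate([spatial_factors[:, f], temporal_factors[:, f]])
        sim = comp @ query / (np.linalg.norm(comp) * np.linalg.norm(query))
        if sim > theta:                      # the component matches the query
            coeff = possession_factors[:, f]
            k = max(1, int(top_frac * len(coeff)))
            hits.update(np.argsort(coeff)[-k:].tolist())
    return sorted(hits)
\end{verbatim}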
We will further develop the system as part of our future work, where we will also study the various trade-offs between the response time versus precision and recall of the retrieved possessions.
{{\tt tHoops}} can also be used to generate synthetic data that exhibit the same patterns as the original data.
In particular, we can use the components in {{\tt tHoops}} as a seed to a dataset generation process, where the coefficients of the component can be normalized to represent a probability distribution function.
By sampling these probability distributions through Monte Carlo and assigning an actual location for a player within a court zone using a uniform spatial distribution (or a distribution learned from the data), we can generate a synthetic dataset.
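A sketch of this generative use of the components is given below; the zone-boundary lookup and the choice of a uniform within-zone distribution are assumptions made only for illustration.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_snapshots(spatial_comp, temporal_comp, zone_bounds,
                     n_snapshots=1000, players_per_snapshot=5):
    """spatial_comp/temporal_comp: one component's (non-negative)
    coefficients, normalized below into probability distributions;
    zone_bounds: hypothetical lookup zone_id -> (x0, x1, y0, y1)."""
    p_zone = spatial_comp / spatial_comp.sum()
    p_clock = temporal_comp / temporal_comp.sum()
    rows = []
    for _ in range(n_snapshots):
        clock = rng.choice(len(p_clock), p=p_clock)
        for _ in range(players_per_snapshot):
            zone = rng.choice(len(p_zone), p=p_zone)
            x0, x1, y0, y1 = zone_bounds[zone]
            # uniform location within the sampled court zone
            rows.append((clock, rng.uniform(x0, x1), rng.uniform(y0, y1)))
    return rows
\end{verbatim}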
This can be crucial for the research community since many times spatio-temporal data similar to the ones obtained from NBA's optical tracking system are not publicly available.
{{\tt tHoops}} can help bridge the gap between public, open research and private data by allowing the generation of synthetic datasets that exhibit similar patterns with the original data.
Obviously the synthetic data exhibit similar patterns with the original data only with respect to the dimensions considered in the creation of $\underline{\mathbf{X}}$.
We will further explore this application as part of our future work as well.
\bibliographystyle{siamplain}
\section{Introduction}
\label{sec:intro}
Soccer is undoubtedly the {\em king of sports}, with a global following of approximately 4 billion people \cite{worldatlas}.
However, despite this huge global interest, it still lags behind in advanced quantitative analysis and metrics capturing teams' and players' performance as compared to other sports with a much smaller fan base (e.g., baseball, basketball).
Traditionally sports metrics quantify on-ball events.
However, soccer epitomizes the notion of team sports through a game of space and off-ball movement.
In soccer every player has possession of the ball for an average of only 3 minutes \cite{fernandez2018wide}, and hence, metrics that quantify on-ball events will fail to capture a player's influence on the game.
Expected goals ({\xG}) \cite{lucey2015quality,fairchildspatial} is probably the most prominent, advanced metric used in soccer today.
{\xG} takes into account the context of a shot (e.g., location, number of defenders in the vicinity etc.) and provides us with the probability of a shot leading to a goal.
{\xG} allows us to statistically evaluate players.
For example, if a player is over-performing his expected goals, it suggests that he is either lucky or an above-average finisher.
If this over-performance persists year-after-year then the latter will be a very plausible hypothesis.
Nevertheless, while expected goals represent a straightforward concept and have already been used by mainstream soccer broadcast media, their application to evaluating players is still limited to a specific aspect of the game (i.e., shot taking) and only to players that actually take shots (and also potentially goalkeepers).
A more inclusive version of {\xG} is the Expected Goal Chains ({\xGC}) \cite{xGC}.
{\xGC} considers all passing sequences that lead to a shot and credits each player involved with the expected goal value for the shot.
Of course, not all passes are created equal \cite{Power:2017:PCE:3097983.3098051} and hence, {\xGC} can over- or under-estimate the contribution of a pass to the final shot.
Over the last few years, player tracking technology has started penetrating the soccer industry.
During the last world cup in Russia, teams obtained player tracking data in real time \cite{economist-worldcup}!
The availability of fine-grained spatio-temporal data has allowed researchers to start looking into more detailed ways to evaluate soccer players through their movement in space.
For example, Hoang {\em et al.} \cite{le2017coordinated,le2017data} developed a deep imitation learning framework for identifying the {\em optimal} locations - i.e., the ones that minimize the probability of conceding a goal - of the defenders in any given situation based on the locations of the attackers (and the other defensive players).
Fernandez and Bornn \cite{fernandez2018wide} also analyzed player tracking data and developed a metric quantifying the contribution of players in space creation as well as, this space's value, while a nice overview of the current status of advanced spatio-temporal soccer analytics is provided by Bornn {\em et al.} \cite{doi:10.1111/j.1740-9713.2018.01146.x}.
Player tracking data will undoubtedly provide managers, coaches and players with information that previously was considered to be {\em intangible}, and revolutionize soccer analytics.
However, to date all of the efforts are focused on specific aspects of the game.
While in the future we anticipate that a manager will be able to holistically evaluate the contribution of a player during a game over a number of dimensions (e.g., space generation, space coverage, expected goals etc.), currently this is not the case - not to mention that player tracking technology is still slow in widespread adoption.
Therefore, it has been hard to develop soccer metrics similar to Win Shares and/or Wins Above Replacement Player that exist for other sports (e.g., baseball, basketball etc.) \cite{james2002win,vorp}.
These - all-inclusive - metrics translate on field performance to what managers, coaches, players and casual fans can understand, relate to and care about, i.e., wins.
Our study aims at filling exactly this gap in the existing literature discussed above.
The first step towards this is quantifying the positional values in soccer.
For instance, how much more important are the middle-fielders compared to the goalkeeper when it comes to winning a game?
In order to achieve this we use data from games from 11 European leagues as well as FIFA ratings for the players that played in these games.
These ratings have been shown to be able to drive real-world soccer analytics studies \cite{cotta2016using} and they are easy to obtain\footnote{Data and code will be available. Link not provided for double blind review.}.
Using these ratings we model the final goal differential of a game through a Skellam regression that allows us to estimate the impact of 1 unit of increase of the FIFA rating for a specific position on the probability of winning the game.
As we will elaborate on later, to avoid any data sparsity problems (e.g., very few teams play with a sweeper today), we group positions into the four team lines (attack, middle-field, defense and goalkeeping) and use as our model's independent variables the difference in the average rating of the corresponding lines.
Using this model we can then estimate the {\bf expected} league points added above replacement ({{\tt eLPAR}}) for every player.
The emphasis is put on the fact that these are the expected points added by a player, since the metric is based on a fairly static, usually pre-season, player rating, and hence, does not capture the exact performance of a player in the games he played, even though the FIFA ratings change a few times over the course of a season based on the player's overall performance.
However, when we describe our model in detail it should become evident that if these data (i.e., game-level player ratings) are available the exact same framework can be used to evaluate the actual league points added above replacement from every player.
The contribution of our work is twofold:
\begin{enumerate}
\item We develop a pre-game win probability model for soccer that is accurate and well-calibrated. More importantly it is based on the starting lineups of the two teams and hence, it can account for personnel changes between games.
\item We develop the expected league points added above replacement ({{\tt eLPAR}}) metric that can be used to identify positional values in soccer and facilitate quantitative (monetary) player valuation in a holistic way.
\end{enumerate}
Section \ref{sec:method} describes the data we used as well as the regression model we developed for the score differential.
Section \ref{sec:moneyball} further details the development of our expected league points added above replacement using the Skellam regression model.
In this section we also discuss the implications for the players' transfer market.
Finally, Section \ref{sec:discussion} discusses the scope and limitations of our current study, as well as, future directions for research.
\section{Data and Methods}
\label{sec:method}
In this section we present the data that we used for our analysis, existing modeling approaches for the goal differential in a soccer game, as well as the Skellam regression model we used.
\subsection{Soccer Dataset}
\label{sec:data}
In our study we make use of the Kaggle European Soccer Database \cite{kaggle-data}.
This dataset includes all the games (21,374 in total) from 11 European leagues\footnote{English Premier League, Bundesliga, Serie A, Scottish Premier League, La Liga, Swiss Super League, Jupiler League, Ligue 1, Eredivisie, Liga Zon Sagres, Ekstraklasa.} between the seasons 2008-09 and 2015-16.
For every game, information about the final result as well as the starting lineups are provided.
There is also temporal information on the corresponding players' ratings for the period covered by the data.
A player's $\player$ rating takes values between 0 and 100 and includes an overall rating $\rating_{\player}$, as well as {\em sub-ratings} for different skills (e.g., tackling, dribbling etc.).
There are 11,060 players in total and an average of 2 rating readings per season for every player.
Two pieces of information that we need for our analysis and that are not present in the original dataset are each player's position and market value.
We obtained this information through FIFA's rating website (\url{www.sofifa.com}) for all the players in our dataset.
The goals scored in a soccer game have traditionally been described through a Poisson distribution \cite{lee1997modeling,karlis2000modelling}, while a negative binomial distribution has also been proposed to account for possible over-dispersion in the data \cite{pollard198569,greenhough2002football}.
However, the over-dispersion, whenever observed is fairly small and from a practical perspective does not justify the use of the negative binomial for modeling purposes considering the trade-off between complexity of estimating the models and improvement in accuracy \cite{karlis2000modelling}.
In our data, we examined the presence of over-dispersion through the Pearson chi-squared dispersion test.
We performed the test separately for the goals scored by the home and away teams and in both cases the dispersion statistic is very close to 1 (1.01 and 1.1 respectively), which allows us to conclude that a Poisson model is appropriate for our data.
Figure \ref{fig:goals_pois} depicts the two distributions for the goals scored per game for the home and away teams in our dataset.
\begin{figure}%
\centering
\includegraphics[width=4cm]{plots/home_poisson} %
\includegraphics[width=4cm]{plots/away_poisson} %
\caption{The empirical distribution of the number of goals scored per game from the home (left) and away (right) teams. The data can be fitted by a Poisson with means $\lambda=1.56$ and $\lambda=1.18$ respectively. }%
\label{fig:goals_pois}%
\end{figure}
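The dispersion check described above amounts to a simple computation; a minimal sketch is given below, with the goals arrays standing in for the per-game home and away goal counts.
\begin{verbatim}
import numpy as np

def pearson_dispersion(goals):
    """Pearson chi-squared dispersion statistic for a Poisson fit;
    values close to 1 indicate no over-dispersion."""
    goals = np.asarray(goals, dtype=float)
    lam = goals.mean()                     # Poisson rate estimate
    chi2 = np.sum((goals - lam) ** 2 / lam)
    return chi2 / (len(goals) - 1)

# e.g., pearson_dispersion(home_goals), pearson_dispersion(away_goals)
\end{verbatim}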
Another important modeling question is the dependency between the two Poisson processes that capture the scoring for the two competing teams.
In general, the empirical data exhibit a small correlation (usually with an absolute value for the correlation coefficient less than 0.05) between the goals scored by the two competing teams and the use of Bivariate Poisson models has been proposed to deal with this correlation \cite{karlis2003analysis}.
Simply put, $(X,Y)\sim BP(\lambda_1, \lambda_2, \lambda_3)$, where:
\begin{equation}
P(X=x, Y=y) = e^{-(\lambda_1+\lambda_2+\lambda_3)}\dfrac{\lambda_1^x}{x!}\dfrac{\lambda_2^y}{y!} \sum_{k=0}^{\min (x,y)} \binom{x}{k} \binom{y}{k} k! \bigg(\dfrac{\lambda_3}{\lambda_1 \lambda_2}\bigg)^k
\label{eq:bpois}
\end{equation}
The parameter $\lambda_3$ captures the covariance between the two marginal Poisson distributions for $X$ and $Y$, i.e., $\lambda_3 = Cov(X,Y)$.
In our data, the correlation between the number of goals scored from the home and away team is also small and equal to -0.06.
While this correlation is small, Karlis and Ntzoufras \cite{karlis2003analysis} showed that it can impact the estimation of the probability of a draw.
However, a major drawback of the Bivariate Poisson model is that it can only model data with positive correlations \cite{karlis2005bivariate}.
Given that in our dataset the correlation is negative, a Bivariate Poisson model cannot be used; an alternative approach is to directly model the difference between the two Poisson processes that describe the goals scored by the two competing teams.
With $Z$, $X$ and $Y$ being the random variables describing the final score differential, the goals scored from the home team and the goals scored from the away team respectively, we clearly have $Z=X-Y$.
With $(X,Y)\sim BP(\lambda_1,\lambda_2,\lambda_3)$, $Z$ has the following probability mass function \cite{skellam1946frequency}:
\begin{equation}
P(z) = e^{-(\lambda_1 + \lambda_2)}\cdot \bigg(\dfrac{\lambda_1}{\lambda_2}\bigg)^{z/2}\cdot I_z(2~ \sqrt[]{\lambda_1\lambda_2})
\label{eq:skellam}
\end{equation}
where $I_z(x)$ is the modified Bessel function of the first kind.
Equation (\ref{eq:skellam}) describes a Skellam distribution and clearly shows that the distribution of $Z$ does not depend on the correlation between the two Poisson distributions $X$ and $Y$.
In fact, Equation (\ref{eq:skellam}) is exactly the same as the distribution of the difference of two independent Poisson variates \cite{skellam1946frequency}.
Therefore, we can directly model the goal differential without having to explicitly model the covariance.
Of course, the drawback of this approach is that the derived model is not able to provide estimates on the actual game score, but rather only on the score differential.
Nevertheless, in our study we are not interested in the actual score but rather in the win/lose/draw probability.
Hence, this does not pose any limitations for our work.
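Note that the Skellam distribution is available directly in SciPy, so win/draw/loss probabilities can be read off its pmf/cdf; a short sketch, using the empirical means reported above purely as an example:
\begin{verbatim}
from scipy.stats import skellam

lam1, lam2 = 1.56, 1.18                      # empirical means reported above
p_draw = skellam.pmf(0, lam1, lam2)          # P(Z = 0)
p_home_win = 1 - skellam.cdf(0, lam1, lam2)  # P(Z > 0)
p_home_loss = skellam.cdf(-1, lam1, lam2)    # P(Z < 0)
\end{verbatim}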
\subsection{Skellam Regression Model}
\label{sec:skellam_reg}
Our objective is to quantify the value of different positions in soccer.
This problem translates to identifying how a one-unit increase in the rating of a player's position impacts the probability of his team winning.
For instance, if we substitute our current striker who has a FIFA rating of 79, with a new striker with a FIFA rating of 80, how do our chances of winning alter?
Once we have this information we can obtain for every player an expected league points added per game over a reference, i.e., replacement, player (Section \ref{sec:elpar}).
This can then be used to obtain a more objective market value for players based on their position and rating (Section \ref{sec:mv}).
\begin{figure}[t]%
\centering
\includegraphics[width=7cm]{plots/soccer-positions} %
\caption{We grouped player positions to four distinct groups, namely, goalkeeping, attack, middlefielders and defense.}%
\label{fig:positions}%
\vspace{-0.1in}
\end{figure}
In order to achieve our goal we model the goal differential $Z$ of a game using as our independent variables the player/position ratings of the two teams that compete.
Hence, our model's dependent variable is the goal differential (home - away) of game $i$, $z_i$, while our independent variables are the positional rating differences of the two teams, $x_{i,\positions}=r_{\player(h,\positions,i)}-r_{\player(a,\positions,i)},~\forall \positions \in \positionSet$, where $r_{\player(h,\positions,i)}$ ($r_{\player(a,\positions,i)}$) is the rating of the home (away) team player that covers position $\positions$ during game $i$ and $\positionSet$ is the set of all soccer positions.
One of the challenges with this setting is the fact that different teams will use different formations and hence, it can be very often the case that while one team might have 2 center backs and 2 wing backs, the other team might have 3 center backs only in its defensive line.
This will lead to a situation where the independent variables $x_{i,\positions}$ might not be well-defined.
While this could potentially be solved by knowing the exact formation of a team (we will elaborate on this later), this is unfortunately a piece of information missing from our data.
Nevertheless, even this could create data sparsity problems (e.g., formation/player combinations that do not appear often).
Hence, we merge positions to four groups, namely, attacking line, middle-fielders, defensive line and goalkeeping.
Figure \ref{fig:positions} depicts the grouping of the positions we used to the four lines $\linepSet = \{\linep_{\defense},\linep_{\middlefield},\linep_{\attack},\linep_{\gk}\}$.
Note that this grouping in the four lines has been used in the past when analyzing soccer players as well \cite{he2015football}.
The independent variables of our model are then the differences in the average rating of the corresponding lines.
The interpretation of the model slightly changes now, since the independent variable captures the rating of the whole line as compared to a single position/player.
Under this setting we fit a Skellam regression for $Z$ through maximum likelihood estimation.
In particular:
\begin{mydefinition}{Final Goal Differential}{mod:skellam}
We model the goal differential $Z_i$ of game $i$ using the following four co-variates:
\begin{itemize}
\item The difference between the average player rating of the defensive line of the two teams $x_{\defense}$
\item The difference between the average player rating of the middle-fielders of the two teams $x_{\middlefield}$
\item The difference between the average player rating of the attacking line of the two teams $x_{\attack}$
\item The difference between the goalkeeper's rating of the two teams $x_{\gk}$
\end{itemize}
The random variable $Z$ follows a Skellam distribution, whose parameters depend on the model's covariates $\mathbf{x} = (x_{\defense},x_{\middlefield},x_{\attack},x_{\gk})$:
\begin{eqnarray}
Z \sim Skellam(\lambda_1,\lambda_2)\\
\log(\lambda_1) = \mathbf{b}_1^T \cdot \mathbf{x} \\
\log(\lambda_2) = \mathbf{b}_2^T \cdot \mathbf{x}
\end{eqnarray}
\end{mydefinition}
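A minimal sketch of how such a model can be fit by maximum likelihood is given below, using SciPy's {\tt skellam} and a generic optimizer; the intercept column (consistent with the regression table that follows) and the flat parameter layout are implementation choices of the sketch, not part of the model definition.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize
from scipy.stats import skellam

def fit_skellam_regression(X, z):
    """X: (n_games, 4) line-rating differences; z: goal differentials."""
    Xd = np.column_stack([np.ones(len(z)), X])   # prepend an intercept
    k = Xd.shape[1]

    def neg_loglik(params):
        b1, b2 = params[:k], params[k:]
        lam1 = np.exp(Xd @ b1)                   # home scoring rate
        lam2 = np.exp(Xd @ b2)                   # away scoring rate
        return -np.sum(skellam.logpmf(z, lam1, lam2))

    res = minimize(neg_loglik, x0=np.zeros(2 * k), method='BFGS')
    return res.x[:k], res.x[k:]                  # b1 and b2
\end{verbatim}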
Table \ref{tab:skellam_reg} shows the regression coefficients.
It is interesting to note that the coefficients for the two parameters are fairly symmetric.
$\lambda_1$ and $\lambda_2$ can be thought of as the means of the Poisson distributions describing the home and visiting team respectively; hence, a positive relationship between an independent variable and the scoring rate of one team corresponds to an (approximately equally strong) negative relationship between the same variable and the scoring rate of its opponent.
An additional thing to note is that an increase on the average rating of any line of a team contributes positively to the team's chances of winning (as one might have expected).
\begin{table}[ht]\centering
\begin{tabular}{c c c }
\toprule
\textbf{Variable} & \textbf{$\log(\lambda_1)$} & \textbf{$\log(\lambda_2)$} \\
\midrule
Intercept & 0.37*** & 0.07*** \\
& (0.012) & (0.015) \\
$x_{\defense}$ & 0.02*** & -0.03*** \\
& (0.01) & (0.002) \\
$x_{\middlefield}$ & 0.02*** & -0.015*** \\
& (0.01) & (0.002) \\
$x_{\attack}$ & 0.01***& -0.01*** \\
& (0.001) & (0.001) \\
$x_{\gk}$ & 0.001& -0.004** \\
& (0.001) & (0.002) \\
\midrule
N & 21,374 & 21,374 \\
\bottomrule
\addlinespace[1ex]
\multicolumn{3}{l}{\textsuperscript{***}$p<0.01$,
\textsuperscript{**}$p<0.05$,
\textsuperscript{*}$p<0.1$}
\end{tabular}
\caption{Skellam regression coefficients}
\label{tab:skellam_reg}
\end{table}
\iffalse
\begin{table}[ht]\centering
\begin{tabular}{c c c }
\toprule
\textbf{Variable} & \textbf{$\log(\lambda_1)$} & \textbf{$\log(\lambda_2)$} \\
\midrule
Intercept & 0.41*** & 0.13*** \\
& (0.006) & (0.006) \\
$x_{\defense}$ & 0.02*** & -0.02*** \\
& (0.01) & (0.002) \\
$x_{\middlefield}$ & 0.02*** & -0.02*** \\
& (0.01) & (0.001) \\
$x_{\attack}$ & 0.01***& -0.01*** \\
& (0.001) & (0.001) \\
$x_{\gk}$ & 0.001& -0.002** \\
& (0.001) & (0.001) \\
\midrule
N & 21,374 & 21,374 \\
\bottomrule
\addlinespace[1ex]
\multicolumn{3}{l}{\textsuperscript{***}$p<0.01$,
\textsuperscript{**}$p<0.05$,
\textsuperscript{*}$p<0.1$}
\end{tabular}
\caption{Skellam regression coefficients}
\label{tab:skellam_reg}
\end{table}
\fi
Before using the model for estimating the expected league points added above replacement for each player, we examine how good the model is in terms of actually predicting the score differential and the win/draw/lose probabilities.
We use an 80-20 split for training and testing of the model.
We begin our evaluation by calculating the difference between the goal differential predicted by our model and the actual goal differential of the game \cite{10.2307/2684286}.
Figure \ref{fig:model_eval} (top) presents the distribution of this difference and as we can see it is centered around 0, while the standard deviation is equal to 1.6 goals.
Furthermore, a chi-squared test cannot reject the hypothesis that the distribution is normal with mean equal to 0 and a standard deviation of 1.6.
However, apart from the score differential prediction error, more important for our purposes is the ability to obtain {\em true} win/loss/draw probabilities for the games.
As we will see in Section \ref{sec:elpar} we will use the changes in these probabilities to calculate an expected league points added for every player based on their position and rating.
Hence, we need to evaluate how accurate and well-calibrated these probabilities are.
Figure \ref{fig:model_eval} (bottom) presents the probability calibration curves.
Given that we have 3 possible results (i.e., win, loss and draw), we present three curves from the perspective of the home team, that is, a home team win, loss or draw.
The $x$-axis presents the predicted probability for each event, while the $y$-axis is the observed probability.
In particular we quantize the data in bins of 0.05 probability range, and for all the games within each bin we calculate the fraction of games that the home team won/lost/drew; this is the observed probability.
Ideally, we would like to have these two numbers being equal.
Indeed, as we can see for all 3 events the probability output of our model is very accurate, that is, all lines are practically on top of the $y=x$ line.
It is interesting to note that our model does not provide a draw probability higher than 30\% for any of the games in the test set, possibly due to the fact that the base rate for draws in the whole dataset is about 25\%.
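The calibration curves follow directly from the binning procedure described above; a sketch for one outcome (home win), assuming arrays of predicted probabilities and binary observed outcomes:
\begin{verbatim}
import numpy as np

def calibration_curve(p_pred, won, bin_width=0.05):
    """p_pred: predicted P(home win); won: 1 if the home team actually won."""
    p_pred, won = np.asarray(p_pred), np.asarray(won)
    edges = np.arange(0.0, 1.0 + bin_width, bin_width)
    idx = np.digitize(p_pred, edges) - 1
    pred_mean, obs_freq = [], []
    for b in range(len(edges) - 1):
        mask = idx == b
        if mask.any():
            pred_mean.append(p_pred[mask].mean())
            obs_freq.append(won[mask].mean())
    return np.array(pred_mean), np.array(obs_freq)  # ideally lies on y = x
\end{verbatim}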
\begin{figure}%
\centering
\includegraphics[width=6.5cm]{plots/prediction-error} %
\includegraphics[width=6.5cm]{plots/calibration} %
\caption{Our model is accurate in predicting the score differential as well as the win/loss/draw probabilities of a soccer game.}%
\label{fig:model_eval}%
\vspace{-0.1in}
\end{figure}
\section{eLPAR and Market Value}
\label{sec:moneyball}
We begin by defining the notion of a replacement player and developing {{\tt eLPAR}}.
We also show how we can use {{\tt eLPAR}} to obtain {\em objective} player and transfer fee (monetary) valuations.
\subsection{Replacement Player and Expected League Points Added}
\label{sec:elpar}
The notion of replacement player was popularized by Keith Woolner \cite{woolner2002understanding} who developed the Value Over Replacement Player (VORP) metric for baseball.
The high level idea is that player talent comes at different levels.
For instance, there are superstar players, average players and subpar player talent.
These different levels come in different proportions within the pool of players, with superstars being a scarcity, while subpar players (what Woolner termed replacement players) being a commodity.
This essentially means that a team needs to spend a lot of money if it wants to acquire a superstar, while technically a replacement player comes for free.
Since a replacement player can be thought of as a {\em free} player, a good way to evaluate (and consequently estimate a market value for) a player is to estimate the (expected) contribution in wins, points etc. that he/she offers above a replacement player.
One of the main contributions of Woolner's work is to show that average players have value \cite{vorp}!
If we were to use the average player as our reference for evaluating talent, we would fail to recognize the value of average playing time.
Nevertheless, the replacement level, even though it is important for assigning economic value to a player, is a less concrete mathematical concept.
There are several ways that have been used to estimate this.
For example, one can sort players (of a specific position) in decreasing order of their contract value and obtain as replacement level the talent at the bottom 20th percentile \cite{winston2012mathletics}.
What we use for our study is a {\em rule-of-thumb} suggested from Woolner \cite{vorp2}.
In particular, the replacement level is set at the 80\% of the positional average rating.
While the different approaches might provide slightly different values for a replacement player, they will not affect the relative importance of the various positions identified by the model.
Figure \ref{fig:ratings} presents the distribution of the player ratings for the different lines for the last season in our dataset - i.e., 2015-16.
The vertical green lines represent the replacement level for every position/line, i.e., 80\% of the average of each distribution.
As we can see all replacement levels are very close to each other and around a rating of 56.
The question now becomes how we are going to estimate the expected league points added above replacement (${\tt eLPAR}$) given the model from Section \ref{sec:skellam_reg} and the replacement levels of each line.
First let us define ${\tt eLPAR}$ more concretely:
\begin{mydefinition2}{{\tt eLPAR}}{def:elpar}
Consider a game between teams with only replacement players.
Player $\player$ substitutes a replacement player in the lineup.
${\tt eLPAR}_{\player}$ describes how many league points (win = 3 points, draw = 1 point, loss = 0 points) $\player$ is expected to add for his team.
\end{mydefinition2}
\begin{figure*}%
\centering
\includegraphics[width=4cm]{plots/defense} %
\includegraphics[width=4cm]{plots/middlefield} %
\includegraphics[width=4cm]{plots/attack} %
\includegraphics[width=4cm]{plots/gk} %
\caption{The replacement level rating (green vertical line) for each one of the positional lines in soccer is around 56.}%
\label{fig:ratings}%
\end{figure*}
Based on the above definition, ${\tt eLPAR}_{\player}$ can be calculated by estimating the change in the win/draw/loss probability after substituting a replacement player with $\player$.
However, the aforementioned win probability model does not consider individual players but rather lines.
Therefore, in order to estimate the expected points to be added by inserting player $\player$ in the lineup we have to consider the formation used by the team.
For example, a defender substituting a replacement player in a 5-3-2 formation will add a different value of expected points as compared to a formation with only 3 center-backs in the defensive line.
Therefore, in order to estimate ${\tt eLPAR}_{\player}$ we need to specify the formation we are referring to.
Had the formation been available in our dataset we could have built a multilevel model, where each combination of position and formation would have had their own coefficients\footnote{And in this case we would also be able to analyze better the impact of positions within a line (e.g., value of RB/LB compared to CB).}.
Nevertheless, since this is not available our model captures the formation-average value of each line.
In particular, ${\tt eLPAR}_{\player}$ for player $\player$ with rating $r_{\player}$ can be calculated as follows:
\begin{enumerate}
\item Calculate the increase in the average rating of the line $\linep \in \linepSet$ where $\player$ substituted the replacement player based on $r_{\player}$, formation $\formation$ and the replacement player rating for the line $r_{replacement,\formation,\linep}$
\item Calculate, using the win probability model above, the change in the win, loss and draw probability ($\delta P_w$, $\delta P_d$ and $\delta P_l$ respectively)
\item Calculate ${\tt eLPAR}_{\player}(\formation)$ as:
\begin{equation}
{\tt eLPAR}_{\player}(\formation) = 3\cdot \delta P_w + 1\cdot \delta P_d
\label{eq:elpar}
\end{equation}
\end{enumerate}
It should be evident that by definition a replacement player has ${\tt eLPAR} = 0$ - regardless of the formation - while if a player has a rating better than a replacement, his ${\tt eLPAR}$ will be positive.
However, the actual value and how it compares to players playing in different positions will depend on the formation.
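The three-step calculation above can be sketched as follows; the win/draw probabilities come from the Skellam regression (here through fitted coefficients $\mathbf{b}_1$, $\mathbf{b}_2$ with an intercept, as in the earlier fitting sketch), and the mapping from a formation to the number of players per line is a hypothetical helper introduced only for illustration.
\begin{verbatim}
import numpy as np
from scipy.stats import skellam

def win_draw_prob(x, b1, b2):
    """x: line-rating differences (defense, midfield, attack, goalkeeper)."""
    xd = np.concatenate([[1.0], x])              # intercept term
    lam1, lam2 = np.exp(xd @ b1), np.exp(xd @ b2)
    return 1 - skellam.cdf(0, lam1, lam2), skellam.pmf(0, lam1, lam2)

def elpar_per_game(rating, line, line_sizes, r_replacement, b1, b2):
    """line: 0=defense, 1=midfield, 2=attack, 3=goalkeeper;
    line_sizes: players per line for the formation, e.g. a 4-4-2 could be
    encoded as {0: 4, 1: 4, 2: 2, 3: 1} (a hypothetical encoding)."""
    x = np.zeros(4)                              # all-replacement baseline
    pw0, pd0 = win_draw_prob(x, b1, b2)
    x[line] = (rating - r_replacement) / line_sizes[line]
    pw1, pd1 = win_draw_prob(x, b1, b2)
    return 3 * (pw1 - pw0) + 1 * (pd1 - pd0)
\end{verbatim}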
In Figure \ref{fig:elpar_formations} we present the expected league points added per game for players with different ratings (ranging from 50 to 99) and for different formations.
While there are several different formations that a team can use, we chose 4 of the most often used ones.
\begin{figure}%
\centering
\includegraphics[width=4cm]{plots/4-4-2} %
\includegraphics[width=4cm]{plots/4-5-1} %
\includegraphics[width=4cm]{plots/3-5-2} %
\includegraphics[width=4cm]{plots/4-3-3} %
\includegraphics[width=8cm]{plots/allformations} %
\caption{Expected league points added above replacement for different formations, player ratings and positions.}%
\label{fig:elpar_formations}%
\end{figure}
One common pattern in all of the formations presented is the fact that for a given player rating goal keepers provide the smallest expected league points above replacement - which is in line with other studies/reports for the value of goal keepers in today's soccer \cite{economist-gk}.
It is also evident that depending on the formation the different positions offer different {\em value}.
For example, a 4-5-1 system benefits more from an attacker with a rating of 90 as compared to a defender with the same rating, while in a 3-5-2 formation the opposite is true.
It is also interesting to note that for a 4-4-2 formation, the value added above replacement for the different positions are very close to each other (the closest compared to the rest of the formations).
This most probably is due to the fact that the 4-4-2 formation is the most balanced formation in soccer, and hence, all positions {\em contribute} equally to the team.
To reiterate, this is an expected value added, i.e., it is not based on the actual performance of a player but rather on static ratings for a player.
Given that teams play different formations over different games (or even during the same game after in-game adjustments), a more detailed calculation of ${\tt eLPAR}$ would include the fraction of total playing time spent by each player on a specific formation.
With $T$ being the total number of minutes played by $\player$, and $t_{\formation}$ the total minutes he played in formation $\formation$, we have:
\begin{equation}
{\tt eLPAR}_{\player} = \dfrac{1}{T}\sum_{\formation} t_{\formation} \cdot{\tt eLPAR}_{\player}(\formation)
\label{eq:elpar_formation}
\end{equation}
The last row in Figure \ref{fig:elpar_formations} presents the average ${\tt eLPAR}$ for each position and player rating across all four formations (assuming equal playing time for all formations).
As we can see for the same player rating, a defender adds more expected league points above replacement, followed by an attacker with the same rating.
A middlefielder with the same rating adds only slightly less expected league points compared to an attacker of the same rating, while a goal keeper (with the same rating) adds the least amount of expected league points.
A team manager can use this information to identify more appropriate targets given the team style play (formations used) and the budget.
In the following section we will explore the relation between the market value of a player and his ${\tt eLPAR}$.
\begin{figure*}[h]%
\centering
\includegraphics[width=5cm]{plots/mv-positions}
\includegraphics[width=5cm]{plots/cost-point} %
\includegraphics[width=5cm]{plots/cost-rating}
\caption{Even though goalkeepers are among the lowest paid players in soccer, they still are overpaid in terms of expected league points contributions. Defenders are undervalued when it comes to contributions in winning.}%
\label{fig:mv}%
\end{figure*}
\subsection{Positional Value and Player Market Value}
\label{sec:mv}
In this section we explore how we can utilize {{\tt eLPAR}} to identify possible {\em inefficiencies} in the players' transfer market.
In particular, we are interested in examining whether the transfer market overvalues specific positions relative to the {{\tt eLPAR}} value they provide.
Splitting the players into the four lines, Figure \ref{fig:mv} (left) presents the differences between the average market value - that is, the transfer fee paid by a team to acquire a player under contract - for each position.
As we can see, on average, defenders are the lowest paid players!
However, as aforementioned (Figure \ref{fig:elpar_formations}), for a given player rating a defensive player provides the maximum {{\tt eLPAR}} value.
Nevertheless, what we are really interested in is the monetary value that a team pays for 1 expected league point above replacement per player.
Granted there is a different supply of players in different positions.
For example, only 8.5\% of the players are goal keepers, as compared to approximately 35\% of defenders\footnote{There is another approximately 35\% of middlefielders and 21\% of attackers.}, and hence, one might expect goalkeepers to be paid more than defenders.
However, there is also smaller demand for these positions and hence, we expect these two to cancel out to a fairly great extent, at least to an extent that does not over-inflate the market values.
Hence, we calculate the monetary cost that the players' market values imply teams are willing to pay for 1 expected league point.
Figure \ref{fig:mv} (middle) presents the cost (in Euros) per 1 expected league point for different positions and as a function of the ${\tt eLPAR}$ they provide.
An {\em efficient} market would have four straight horizontal lines, one on top of the other, since the value of 1 expected league point should be the same regardless of where this point is expected from.
However, what we observe is that the market significantly over-values goal keepers (even though on average they are only the 3rd highest paid line), and this is mainly a result of their low ${\tt eLPAR}$ (the best goalkeeper in our dataset provides an ${\tt eLPAR}$ of just over 0.1 per 90 minutes).
Furthermore, teams appear to be willing to pay a premium for expected league points generated by the offense as compared to points generated by the defense, and this premium increases with ${\tt eLPAR}$.
This becomes even more clear from the right plot in Figure \ref{fig:mv}, where teams are willing to pay multiples in premium for 1 expected league point coming from a goalkeeper with 88 FIFA rating as compared to 1 expected league point (i.e., the same performance) coming from a defender with 86 FIFA rating.
Player wages exhibit similar behavior (the ranking correlation between transfer/market value and a player's wage is 0.94).
Given that there is no salary cap in European soccer, teams can potentially overpay in general in order to bring in the players they want.
Hence, across teams comparisons are not appropriate.
However, within team comparison of contracts among its players is one way to explore whether teams are being rational in terms of payroll.
In particular, we can examine the distribution of their total budget among their players, and whether this is in line with their positional values.
Simply put, this analysis will provide us with some relative insight on whether teams spend their budget proportional to the positional and personal on-field value (i.e., FIFA rating) of each player.
Let us consider two specific teams, that is, FC Barcelona and Manchester United.
We will use the wages of the starting 11 players of the two teams (from the 2017-18 season) and, keeping the total budget $\budget$ constant, we will redistribute it based on the ${\tt eLPAR}$ of each player.
We do not consider substitutions, since an accurate comparison would require the expected (or actual) time of play.
Table 2 presents the starting 11 for Barcelona, their FIFA rating and their wage\footnote{\url{www.sofifa.com/team/241}}, while Table 3 presents the same information for Manchester United\footnote{\url{www.sofifa.com/team/11}}.
We have also included the formation-agnostic (i.e., average of the four most frequent formations aforementioned) ${\tt eLPAR}$ and the corresponding redistribution of salaries, as well as the same numbers for the default formation of each team (4-4-2 for Barcelona and 4-3-3 for Manchester United).
The way we calculate the re-distribution is as follows:
\begin{enumerate}
\item Calculate the fraction $f_p=\dfrac{{\tt eLPAR}_p}{{\tt eLPAR}_{total}}$ of total ${\tt eLPAR}$ that player $p$ contributes to his team (${\tt eLPAR}_{total} = \sum_{p=1}^{11} {\tt eLPAR}_p$)
\item Calculate the ${\tt eLPAR}$-based wage for player $p$ as $f_p \cdot \budget$
\end{enumerate}
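This redistribution is a simple proportional allocation; a short sketch, assuming a dictionary of per-player ${\tt eLPAR}$ values:
\begin{verbatim}
def elpar_wages(elpar_per_player, total_budget):
    """Redistribute a fixed budget proportionally to each player's eLPAR."""
    total = sum(elpar_per_player.values())
    return {p: total_budget * v / total for p, v in elpar_per_player.items()}
\end{verbatim}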
As we can see there are differences in the wages projected when using ${\tt eLPAR}$.
Both teams for example appear to overpay their goalkeepers based on their expected league points above replacement per 90 minutes.
Of course, some players are under-valued, and as we can see these players are mainly in the defensive line.
These results open up interesting questions for soccer clubs when it comes to budget decisions.
Budget is spent for two reasons; (a) to win, as well as, (b) to maximize the monetary return (after all, sports franchises are businesses).
The premium that clubs are willing to pay an attacker over a defender for the same amount of league points can be seen as an investment.
These players bring fans in the stadium, increase gate revenue (e.g., through increased ticket prices), bring sponsors, sell club merchandise, etc.
For example, even though attackers are approximately only 20\% of the players' pool, 60\% of the top-selling jerseys in England during 2018 belonged to attackers \cite{jerseys}.
Therefore, when we discuss the money spent from a team for a transfer (or a wage), winning is only one part of the equation.
While teams with large budgets (like Manchester United and Barcelona) might be able to pay premiums as an investment, other teams in the middle-of-the-pack can achieve significant savings, without compromising their chances of winning.
In fact, clubs with limited budget can maximize their winning chances, which is an investment as well (winning can bring in revenues that can then be used to acquire better/more popular players and so on).
A club with a fixed transfer budget $\budget$ can distribute it in such a way that maximizes the expected league points {\em bought} (even under positional constraints).
For instance, with $\budget = 6$ million Euros and the need for a center back and a goalkeeper, if we use the average market values for the two positions we should allocate 55\% of the budget (i.e., 3.3 million) to the goalkeeper and 45\% of the budget to the defender.
This will eventually get us about 0.028 expected league points per 90 minutes (a goalkeeper with a 74 FIFA rating and a defender with a 73 FIFA rating).
However, if we allocate 500K for the goalkeeper and 5.5 million for the defender, this will get us around 0.033 expected league points (a goalkeeper with a 68 FIFA rating and a defender with a 78 FIFA rating); simply put, the team will have bought 1 expected league point at a 15\% discount as compared to the rest of the market.
\begin{table}
\begin{tcolorbox}[tab2,tabularx={X||Y|Y|Y|Y|Y|Y},title=FC Barcelona,boxrule=0.5pt]
Players & FIFA Rating & Wage ($\euro$) & ${\tt eLPAR}$ & ${\tt eLPAR}$ Wage ($\euro$) & ${\tt eLPAR}$ (4-4-2) & ${\tt eLPAR}$ Wage (4-4-2) ($\euro$) \\\hline\hline
M. Stegen & 87 & 185K & 0.092& 79K & 0.093 & 83K \\\hline\hline
S. Roberto & 82 & 150K & 0.32 & 271.5K& 0.30 & 266K \\\hline
Pique & 87 & 240K & 0.38 & 324K & 0.36& 317K \\\hline
S. Umtiti & 84 & 175K & 0.35 & 292.5K& 0.32 & 286.5K\\\hline
Jordi Alba & 87 & 185K & 0.38 & 324K & 0.35 & 317.5K \\ \hline\hline
O. Dembele & 83 & 150K & 0.28 & 239K & 0.29& 258K \\ \hline
I. Rakitic & 86 & 275K & 0.31 & 266K & 0.32 & 287K \\\hline
S. Busquets & 87 & 250K & 0.32 & 275K & 0.33 & 298K \\\hline
Coutinho & 87 & 275K & 0.32 & 275K & 0.33& 298K \\\hline\hline
L. Messi & 94 & 565K & 0.41 & 349K & 0.35 & 317K \\\hline
L. Suarez & 92 & 510K & 0.39 & 330K & 0.33 & 300K \\\hline\hline
\end{tcolorbox}
\label{tab:barcelonafc}
\caption{FC Barcelona wages and ${\tt eLPAR}$-based projected wages. }
\vspace{-0.1in}
\end{table}
\begin{table}
\begin{tcolorbox}[tab3,tabularx={X||Y|Y|Y|Y|Y|Y},title=Manchester United,boxrule=0.5pt]
Players & FIFA Rating & Wage ($\euro$) & ${\tt eLPAR}$ & ${\tt eLPAR}$ Wage ($\euro$) & ${\tt eLPAR}$ (4-3-3) & ${\tt eLPAR}$ Wage (4-3-3) ($\euro$) \\\hline\hline
De Gea & 91 & 295K & 0.1& 65.5K & 0.11 & 69K \\\hline\hline
A. Valencia & 83 & 130K & 0.33 & 208K& 0.31 & 203K \\\hline
C. Smalling & 81 & 120K & 0.31 & 193K & 0.28& 188K \\\hline
V. Lindelof & 78 & 86K & 0.27 & 169K& 0.25 & 165K \\\hline
A. Young & 79 & 120K & 0.28 & 177K & 0.26 & 172.5K \\ \hline\hline
N. Matic & 85 & 180K & 0.3 & 189.5K & 0.41& 273K \\ \hline
A. Herrera & 83 & 145K & 0.28 & 176K & 0.38 & 254K \\\hline
P. Pogba & 88 & 250K & 0.34 & 209.5K & 0.46 & 301.5K \\\hline\hline
J. Lingard & 81 & 115K & 0.27 & 168K & 0.15 & 100K \\\hline
R. Lukaku & 86 & 210K & 0.32 & 202.5K & 0.18 & 121K \\\hline
A. Sanchez & 88 & 325K & 0.35 & 216K & 0.19 & 129K \\\hline\hline
\end{tcolorbox}
\label{tab:manunfc}
\caption{Manchester United wages and ${\tt eLPAR}$-based projected wages. }
\vspace{-0.1in}
\end{table}
\subsection{Fair Transfer Fees}
\label{sec:transfer}
In the last example above, the transfer fees mentioned (i.e., 500K and 5.5M) are based on the current transfer market and will most probably still be an overpayment for the talent acquired.
What one can basically achieve with an approach like the one described above is to optimize the team's transfers based on the current market values.
However, we can use our model and analysis to also estimate a {\em fair} (i.e., considering only a team's winning chances) transfer fee for a player.
For this we would need to know what 1M Euros is worth in terms of league points.
To do so we will need the total transfer budget of teams and the total number of league points they obtained.
For example, Figure \ref{fig:premier_league} presents the relationship between a team's transfer budget and the total points obtained for the 2017-18 Premier League.
The slope of the linear fit is 0.44 ($R^2 = 0.71$), which means that 1M Euros in transfer budget is worth 0.44 Premier League points.
Therefore, for a player $p$ with ${\tt eLPAR}_p$ who is expected to play $N$ games, a fair transfer fee is $\dfrac{N \cdot {\tt eLPAR}_p}{0.44}$ million Euros.
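A short, hedged sketch of this back-of-the-envelope computation (amounts in millions of Euros, matching the units of the fitted slope):
\begin{verbatim}
def fair_transfer_fee(elpar_per_game, n_games=38, points_per_million=0.44):
    """Fee (in millions of Euros) matching the league-wide price of points."""
    return n_games * elpar_per_game / points_per_million
\end{verbatim}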
For example, recently a transfer that was discussed a lot was that of goal keeper Danny Ward from Liverpool to Leicester.
Based on Ward's current rating (70) and his potential upside (78), the transfer fee should be between 3.3 and 5.2 million pounds, assuming he plays all 38 Premier League games next season (he is not currently expected to even start).
However, Leicester paid 10 million pounds for this transfer \cite{skysports-ward}.
Again, there might be other reasons that Leicester was willing to pay 10 million pounds for Ward, and similar transfers can only be accurately evaluated - if at all - after the player leaves his new team.
For instance, if Ward ends up playing 10 full seasons with Leicester his transfer fee can even be considered a {\em steal}.
The same will be true if Leicester sells Ward for double this price within a couple of years.
In general, estimating transfer fees is a much more complex task, but ${\tt eLPAR}$ can facilitate these estimations by considering the on-pitch expected contributions of the player.
We would like to emphasize here that the relationship between transfer budget and league points should be built separately for every league and for robustness more seasons need to be considered (appropriately adjusted for inflation).
\begin{figure}%
\centering
\includegraphics[width=8cm]{plots/premier_league} %
\caption{In Premier League 1M Euros in transfer budget is worth 0.44 league points.}%
\label{fig:premier_league}%
\end{figure}
\iffalse
Our objective is to explore whether the market value of a player closely follows his on-field expected contribution.
Given that in European soccer there is no salary cap similar to north American professional sports leagues (i.e., we cannot put a monetary value to 1 win), we will rely on relative comparisons.
For this we will begin by building a model for a player's market value.
There are various factors that can affect the market value of a player and hence, are included in our model as explanatory variables.
In particular we include the player's age, his FIFA rating, the player's potential based on the upper limit on his rating provided by FIFA, the player's position, as well as the supply of players at the same position and with the same rating (in particular +/- 1).
Table \ref{tab:mv_mod} presents our results.
\begin{table}[ht]\centering
\begin{tabular}{c c }
\toprule
\textbf{Variable} & \textbf{Player Market Value (in millions)} \\
\midrule
Intercept & -14.92*** \\
& (0.73) \\
Age & -0.15*** \\
& (0.009) \\
Rating & 0.30*** \\
& (0.01) \\
Potential & 0.01*** \\
& (0.001) \\
Supply & -0.006*** \\
& (0.0003) \\
Position(GK) & -4.56*** \\
& (1.47)\\
Position(M) & -7.19***\\
& (0.94)\\
Position(O) & -12.27***\\
& (1.14)\\
{\bf Interaction terms} & \\
Position(GK)$\cdot$ Supply & -0.025*** \\
& (0.002)\\
Position(M)$\cdot$ Supply& -0.0023*** \\
& (0.0004)\\
Position(O)$\cdot$ Supply& -0.11*** \\
& (0.0007)\\
Position(GK)$\cdot$ Rating & 0.056** \\
& (0.027)\\
Position(M)$\cdot$ Rating& 0.11*** \\
& (0.017)\\
Position(O)$\cdot$ Rating& 0.15***\\
& (0.02)\\
Position(GK)$\cdot$ Potential & 0.035 \\
& (0.030)\\
Position(M)$\cdot$ Potential& 0.025 \\
& (0.018)\\
Position(O)$\cdot$ Potential& 0.067** \\
&(0.021) \\
\midrule
N & 10,997 \\
\bottomrule
\addlinespace[1ex]
\multicolumn{2}{l}{\textsuperscript{***}$p<0.01$,
\textsuperscript{**}$p<0.05$,
\textsuperscript{*}$p<0.1$}
\end{tabular}
\caption{Market Value Regression Model Coefficients}
\label{tab:mv_mod}
\end{table}
\fi
\section{Conclusions and Discussion}
\label{sec:discussion}
In this work our objective is to understand positional values in soccer and develop a metric that can provide an estimate for the {\em expected} contribution of a player on the field.
We start by developing a win probability model for soccer games based on the ratings of the four lines of the teams (attack, middlefield, defense and goalkeeper).
We then translate these positional values to expected league points added above a replacement player ({{\tt eLPAR}}), considering a team's formations.
We further show how this framework can be useful by analyzing transfer fees and players' wages and relating them back to each player's {{\tt eLPAR}}.
Our results indicate that specific positions are over-valued when only considering their contribution to winning the game.
We believe that this study will trigger further research on the positional value in soccer.
An immediate improvement over our current model is to consider the actual formation that the teams used (a piece of information missing in our current dataset).
This will allow us to build a multilevel regression model where we will include covariates for more fine-grained positions (e.g., center back, right back, center middlefielder etc.).
We can also include information about substitutions during a game (another piece of information not available to us).
This will allow us to (a) obtain a weighted average for the average rating of a line based on the substitutions, and (b) a much more accurate estimate for a player's total playing time.
Furthermore, our current study is based on static player ratings obtained from FIFA.
This only allows us to estimate the {\bf expected} league points added over a replacement player.
While these ratings capture the overall performance of a player during past season(s), and hence are still appropriate for estimating his monetary value, actual game ratings for players will allow us to estimate the {\em {\bf actual}} league points added over replacement by a player over the course of a season.
These game ratings can, for example, be composed through appropriate analysis of player tracking data, which at the least will provide us with information about how much time a combo-player (e.g., a left midfielder who can also play left wing/forward) played at each line.
We will explore these directions as part of our future research, while we will also explore the applicability of a similar approach towards quantifying positional value for American Football (NFL).
In particular, using player ratings from NFL Madden (in a similar way to how we use player ratings from FIFA), we can evaluate the contribution of a 1 unit increase in the Madden rating of a player to the expected points added from a team's play.
This could be a significant step towards defining a metric similar to Wins Above Replacement for the NFL, and finally understanding the contribution of each position in winning.
"attr-fineweb-edu": 1.738281,
"attr-cc_en_topic": 0,
"domain": "arxiv"
} |
BkiUdaa6NNjgBvv4PBGH | \section*{Appendix}
\vspace{-2pt}
In the appendix, we provide more information on model reproduction, feature descriptions, model components, and results.
\vspace{-8pt}
\section{Dataset preprocessing}
\vspace{-2pt}
\label{app:mod}
For the dataset, we first drop all matches containing an own goal, since own goals are rare, hard to classify into any action group, and have a significant impact on match results. Next, we split the matches of each football league into train/valid/test sets according to a 0.8/0.1/0.1 ratio. However, training the models on the entire dataset requires a significant amount of time (more than 20 hours). Therefore, in order to examine more model architectures and apply grid search, we reduced the training set and validation set to 100,000 rows ($5\%$ of the training set) and 10,000 rows of records, respectively. Table \ref{tab:dataset} summarizes the number of matches in the train/valid/test sets and how the dataset is split by football league; a minimal sketch of the splitting step is given below.
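The sketch assumes a match-level table with hypothetical column names (\texttt{match\_id}, \texttt{league}, \texttt{has\_own\_goal}) rather than the actual WyScout schema, and is not our exact preprocessing code.
\begin{verbatim}
import pandas as pd

# Hypothetical match-level table; the columns are placeholders.
matches = pd.DataFrame({
    "match_id": range(20),
    "league": ["Bundesliga"] * 10 + ["Premier League"] * 10,
    "has_own_goal": [False] * 18 + [True] * 2,
})

# Drop every match that contains an own goal, then split per league 0.8/0.1/0.1.
matches = matches[~matches["has_own_goal"]]

def split_matches(df, ratios=(0.8, 0.1, 0.1), seed=0):
    train, valid, test = [], [], []
    for _, grp in df.groupby("league"):
        ids = grp["match_id"].sample(frac=1.0, random_state=seed).tolist()
        n_train = int(ratios[0] * len(ids))
        n_valid = int(ratios[1] * len(ids))
        train += ids[:n_train]
        valid += ids[n_train:n_train + n_valid]
        test += ids[n_train + n_valid:]
    return train, valid, test

train_ids, valid_ids, test_ids = split_matches(matches)
print(len(train_ids), len(valid_ids), len(test_ids))
\end{verbatim}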
\begin{table}[!htb]
\caption{Dataset splitting method.}
\label{tab:dataset}
\begin{tabular}{|l|l|l|l|}
\hline
Football league & Training (Matches) & Validation (Matches) & Testing (Matches)\\
\hline
Premier League & - & - & 37\\
La Liga & - & - & 37\\
Ligue 1 & - & - & 37\\
Serie A & - & - & 37\\
Bundesliga & 73 & 7 & 30 \\
\hline
\end{tabular}
\end{table}
Furthermore, Fig. \ref{fig:time} presents how the event record rows are sliced into a time window of input features and target features. We take the 40 events recorded at times $i-40$ to $i-1$ to forecast the event at time $i$ with the NMSTPP model. The slicing is performed only within the events of a match, never across matches, and disregards which team the possession belongs to; a minimal sketch of this slicing is given below.
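The sketch illustrates the slicing for a single match; the events are assumed to be time-ordered rows of a NumPy array, and the window length of 40 corresponds to the $seqlen$ hyperparameter.
\begin{verbatim}
import numpy as np

def slice_windows(match_events, seqlen=40):
    """Build (input, target) pairs within a single match.

    match_events: array of shape (n_events, n_features), ordered in time.
    Returns inputs of shape (n_windows, seqlen, n_features) and the
    corresponding target events of shape (n_windows, n_features).
    """
    inputs, targets = [], []
    for i in range(seqlen, len(match_events)):
        inputs.append(match_events[i - seqlen:i])  # events i-seqlen ... i-1
        targets.append(match_events[i])            # event i to forecast
    return np.array(inputs), np.array(targets)

# One synthetic match with 50 events and 8 features per event;
# applying the function match by match keeps windows from crossing matches.
match = np.random.rand(50, 8)
X, y = slice_windows(match, seqlen=40)
print(X.shape, y.shape)  # (10, 40, 8) (10, 8)
\end{verbatim}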
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.35]{./fig/time_window.png}
\caption{
The time window slicing method for input features and target features.
}
\label{fig:time}
\end{figure}
\vspace{-8pt}
\section{Hyperparameter grid search}
\label{app:hyp}
Table \ref{tab5} summarizes all hyperparameter values and options considered in the grid search; the best value (option) for the NMSTPP model is bolded.
\begin{table}[!htb]
\caption{Grid searched Hyperparameter and its value (option). The best value (option) for each Hyperparameter is bolded.}\label{tab5}
\begin{tabular}{|l|l|}
\hline
Hyperparameter & Grid searched value (option)\\
\hline
$Seqlen$ & 1,10,\textbf{40},100\\
$dim\_feedforward$ & 1,2,4,8,16,32,64,128,256,512,\textbf{1024},2048,4096,8192,16384\\
$order$ &\textbf{\{$\boldsymbol{t,z,m}$\}},\{$t,m,z$\},…,\{$z,t,m$\}\\
$num\_layer\_t$ & \textbf{1},2,4,8,16 \\
$num\_layer\_z$ & \textbf{1},2,4,8,16 \\
$num\_layer\_m$ & 1,\textbf{2},4,8,16 \\
$activation\_function$ & \textbf{None}, ReLu, Sigmoid,Tanh\\
$drop\_out$ & \textbf{0},0.1,0.2,0.5\\
\hline
\end{tabular}
\end{table}
For the hyperparameter $order$, we compared different orderings of the interevent time $t_i$, zone $z_i$, and action $m_i$ in equation \ref{eq5}. Table \ref{tab3} compares the performance of the NMSTPP model when the order is interchanged. The results show that the order interevent time $t_i$, zone $z_i$, action $m_i$, as in equation \ref{eq5}, provides the best result. Moreover, the order mainly affects the CEL of action and can create a difference of up to 0.11.
\begin{table}
\caption{Performance comparisons with different connection orders NMSTPP models on the validation set. Model total loss, RMSE on interevent time $t$, CEL on zone, and CEL on action are reported.}\label{tab3}
\begin{tabular}{l|l|l|l|l}
\hline
Order (first/second/third) & Total loss & $RMSE_{t}$& $CEL_{zone}$ & $CEL_{action}$\\
\hline
zone/$t$/action & 4.58 & 0.10 & 2.06 & 1.44 \\
action/zone/$t$ & 4.57 & 0.11 & 2.06 & 1.38 \\
zone/action/$t$ & 4.49 & 0.10 & 2.06 & 1.39 \\
$t$/action/zone & 4.46 & 0.10 & 2.06 & 1.36 \\
action/$t$/zone & 4.43 & 0.10 & 2.06 & 1.34 \\
$t$/zone/action & \textbf{4.40} & 0.10 & 2.04 & 1.33 \\
\hline
\end{tabular}
\end{table}
Furthermore, to deal with imbalanced zone and action classes, the CELs are weighted during model training and validation. The weights were calculated with the compute$\_$class$\_$weight function from the Python scikit-learn package, following the equation below:
\begin{equation}
\text{weight of class i} =\frac{\text{number of sample}}{\text{number of class} \times \text{number of sample in class i}}
\end{equation}
In addition, to better balance the forecast results, we multiplied the weight of the dribble action by 1.16. This increases the accuracy of the dribble forecast while decreasing the accuracy of the other action classes; a sketch of the weight computation is given below.
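The sketch uses scikit-learn's balanced heuristic, which implements the equation above, together with the extra 1.16 factor on the dribble class; the label counts are illustrative approximations of the action proportions in Table \ref{tab:action}.
\begin{verbatim}
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Illustrative labels, roughly matching the grouped action proportions.
actions = np.array(["pass"] * 670 + ["end"] * 196 + ["dribble"] * 85
                   + ["cross"] * 33 + ["shot"] * 16)
classes = np.array(["pass", "end", "dribble", "cross", "shot"])

weights = compute_class_weight(class_weight="balanced", classes=classes, y=actions)
weights = dict(zip(classes, weights))

# Extra up-weighting of the dribble class described above.
weights["dribble"] *= 1.16
print(weights)
\end{verbatim}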
\section{Features description and summary}
\label{app:fea}
Firstly, Figs. \ref{fig:pitch} and \ref{fig:zone} give a more detailed description of the event location, represented in (x,y) coordinates and in zones according to Juego de posici\'{o}n (position game), respectively.
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.5]{./fig/pitch.png}
\caption{
WyScout pitch (x,y) coordinate. The goal on the left side belongs to the team in possession and the goal on the right side belongs to the opponent, figure retrieved from
\url{https://apidocs.wyscout.com/\#section/Data-glossary-and-definitions/Pitch-coordinates}.
}
\label{fig:pitch}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.5]{./fig/zone.png}
\caption{Pitch zoning according to Juego de posici\'{o}n (position game). The goal on the left side belongs to the team in possession and the goal on the right side belongs to the opponent. More details of Juego de posición can be found in \url{https://spielverlagerung.com/2014/11/26/juego-de-posicion-a-short-explanation/}}
\label{fig:zone}
\end{figure}
Secondly, Table \ref{tab:action} summarizes how the action types defined by WyScout are grouped into the 5 action groups used in this study.
\begin{table}[!htb]
\caption{WyScout action type and subtype grouping \cite{simpson2022seq2event}.}\label{tab:action}
\begin{tabular}{|l|l|}
\hline
Action type (subtype) & Grouped action type (proportion)\\
\hline
Pass (Hand pass) &\\
Pass (Head pass) &\\
Pass (High pass) &\\
Pass (Launch) &\\
Pass (Simple pass) & Pass $p$ (66.99$\%$)\\
Pass (Smart pass) &\\
Others on the ball (Clearance) &\\
Free Kick (Goal kick) &\\
Free Kick (Throw in) &\\
Free Kick (Free Kick) &\\
\hline
Duel (Ground attacking duel) &\\
Others on the ball (Acceleration) & Dribble $d$ (8.48$\%$)
\\
Others on the ball (Touch) &\\
\hline
Pass (Cross) &\\
Free Kick (Corner) & Cross $x$ (3.27$\%$)\\
Free Kick (Free kick cross) &\\
\hline
Shot (Shot) &\\
Free Kick (Free kick shot) & Shot $s$ (1.68$\%$)\\
Free Kick (Penalty) &\\
\hline
After all action in a possession& Possession end $\_$ (19.58$\%$)\\
\hline
Other& N/A\\
\hline
\end{tabular}
\end{table}
Thirdly, Fig. \ref{fig:hea} shows the heatmap of each grouped action, with the pitch zoned according to Juego de posici\'{o}n (position game).
\begin{figure}
\centering
\includegraphics[width=.4\textwidth]{./fig/pass_pg.png} \quad
\includegraphics[width=.4\textwidth]{./fig/dribble_pg.png} \quad
\\
\medskip
\includegraphics[width=.4\textwidth]{./fig/cross_pg.png} \quad
\includegraphics[width=.4\textwidth]{./fig/shot_pg.png} \quad
\\
\medskip
\includegraphics[width=.4\textwidth]{./fig/end_pg.png}
\caption{Heatmap for Grouped action type. (Top left) heatmap for action pass, (Top right) heatmap for action dribble, (Middle left) heatmap for action cross, (Middle Right) heatmap for action shot, (Bottom) heatmap for possession end. The goal on the left side belongs to the team in possession and the goal on the right side belongs to the opponent.}
\label{fig:hea}
\end{figure}
Lastly, a more detailed description of the other continuous features is given below; the exact calculations are provided in the code. Using the center point of a zone to represent the location of that zone, we created the following features (a computational sketch follows the list):
\begin{itemize}
\item$zone\_s:$ distance from the previous zone to the current zone.
\item$zone\_deltay:$ change in the zone y coordinate.
\item$zone\_deltax:$ change in the zone x coordinate.
\item$zone\_sg:$ distance from the opposition goal center point to the zone.
\item$zone\_thetag:$ angles from the opposition goal center point to the zone.
\end{itemize}
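The sketch below illustrates how these features can be derived from zone center points. The zone-center coordinates and pitch dimensions are hypothetical placeholders; the actual mapping from the 20 zones to their center points follows the zoning in Fig. \ref{fig:zone} and is defined in the code.
\begin{verbatim}
import numpy as np

# Hypothetical (x, y) zone centers on a 100 x 100 pitch; placeholders only.
ZONE_CENTER = {1: (10.0, 50.0), 2: (30.0, 20.0), 3: (30.0, 80.0), 4: (85.0, 50.0)}
GOAL = (100.0, 50.0)  # center of the opponent's goal

def zone_features(prev_zone, cur_zone):
    x0, y0 = ZONE_CENTER[prev_zone]
    x1, y1 = ZONE_CENTER[cur_zone]
    zone_deltax, zone_deltay = x1 - x0, y1 - y0
    zone_s = np.hypot(zone_deltax, zone_deltay)           # distance between zones
    zone_sg = np.hypot(GOAL[0] - x1, GOAL[1] - y1)        # distance to goal center
    zone_thetag = np.arctan2(GOAL[1] - y1, GOAL[0] - x1)  # angle to goal center
    return zone_s, zone_deltax, zone_deltay, zone_sg, zone_thetag

print(zone_features(prev_zone=2, cur_zone=4))
\end{verbatim}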
\section{Transformer encoder \cite{vaswani2017attention}}
\label{app:encoder}
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.4]{./fig/encoder.png}
\caption{Transformer encoder \cite{vaswani2017attention}.}
\label{fig:encoder}
\end{figure}
The architecture of the transformer encoder is shown in Fig. \ref{fig:encoder}. The main component of the transformer encoder is the multi-head attention, which is composed of multiple self-attention heads. For a single self-attention head, as applied in this study, let $X$ be the input matrix, with each row representing the features of an event.
The matrix first passes through the positional encoding as the following equation.
\begin{equation}
X=(X+Z)
\end{equation}
Assume $X \in \mathbb{R}^{N\times K}$, meaning that $X$ consists of $N$ events, each described by $K$ features. The entries of the matrix $Z$ are determined by the following equation, where $n \in \{1,2,\dots,N\}$ indexes the events, $k \in \{0,1,\dots,\lfloor K/2\rfloor-1\}$ indexes the feature pairs, and $d$ is a scalar, set to 10,000 in \cite{vaswani2017attention}.
\begin{alignat}{1}
Z(n,2k)=\sin\left(\frac{n}{d^{2k/K}}\right) \\
Z(n,2k+1)=\cos\left(\frac{n}{d^{2k/K}}\right) \nonumber
\end{alignat}
Afterward, the following equation shows the calculation of a self-attention head, where $Q,K,V$ are the query, key, and value matrices respectively, $W^Q,W^K \in \mathbb{R}^{K\times d_k}$ and $W^V \in \mathbb{R}^{K\times d_v}$ are trainable parameters, and $d_k, d_v$ are hyperparameters.
\begin{alignat}{1}
Attention(Q,K,V)=softmax(\frac{QK^T}{\sqrt{d_k}})V \\
Q=XW^Q, K=XW^K, V=XW^V \nonumber
\end{alignat}
Lastly, the output matrix of the self-attention head passes through an add-and-norm layer \cite{ba2016layer,he2016deep}, feedforward layers, and a final add-and-norm layer, resulting in an encoded matrix.
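For illustration, the sketch below implements the sinusoidal positional encoding and a single self-attention head in NumPy; the weight matrices are randomly initialized stand-ins for the trainable parameters $W^Q$, $W^K$, $W^V$, and the dimensions are arbitrary.
\begin{verbatim}
import numpy as np

def positional_encoding(N, K, d=10_000.0):
    """Sinusoidal positional encoding Z of shape (N, K); assumes K is even."""
    Z = np.zeros((N, K))
    pos = np.arange(N)[:, None]
    k = np.arange(0, K, 2)[None, :]
    Z[:, 0::2] = np.sin(pos / d ** (k / K))
    Z[:, 1::2] = np.cos(pos / d ** (k / K))
    return Z

def self_attention(X, d_k=16, d_v=16, seed=0):
    """One self-attention head: softmax(Q K^T / sqrt(d_k)) V."""
    rng = np.random.default_rng(seed)
    K_dim = X.shape[1]
    W_q = rng.normal(size=(K_dim, d_k))
    W_k = rng.normal(size=(K_dim, d_k))
    W_v = rng.normal(size=(K_dim, d_v))
    Q, Kmat, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q @ Kmat.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V

X = np.random.rand(40, 8)              # 40 events, 8 features each
X = X + positional_encoding(40, 8)     # add positional information
print(self_attention(X).shape)         # (40, 16)
\end{verbatim}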
\section{Neural temporal point process (NTPP) framework \cite{shchur2021neural}}
\label{app:ntpp}
In general, the NTPP framework is a combination of ML and ideas from point processes, allowing for flexible model architectures. To begin with, a marked temporal point process on $[0,T]$ can be defined as $X=\{(m_i,t_i)\}^N_{i=1}$, where $N$ is the total number of events, $m_i \in \{1,2,...,K\}$ is the mark, and $0<t_1<...<t_i<...<t_N<T$ are the arrival times, following the definition in \cite{shchur2021neural}. In addition, the history at time $t$ is defined as $H_t=\{(m_i,t_i): t_i<t\}$. Thus, the conditional intensity function for type $m$ can be defined as follows:
\begin{equation}
\lambda_m(t|H_t)=\lim_{\Delta t \downarrow 0} \frac{P(\text{event of type } m \text{ in } [t,t+\Delta t) \mid H_t)}{\Delta t}
\end{equation}
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.3]{./fig/NTPP.png}
\caption{NTPP model construct method \cite{shchur2021neural}.}
\label{fig:NTPP}
\end{figure}
The construction of an NTPP model can be described in three steps, as in Fig. \ref{fig:NTPP} \cite{shchur2021neural}: first, represent each event as a feature vector; second, encode the history into a history vector; third, predict the next event.
For the first step, the exact procedure depends on the event data, but in general, embedding the categorical features and concatenating them with the continuous features yields a feature vector.
Next, in the second step, RNNs such as the GRU \cite{chung2014empirical} and LSTM \cite{graves2012long}, as well as the transformer encoder \cite{vaswani2017attention}, are often used for history encoding. Transformer encoders have been found to be more efficient in recent studies, although further verification is needed \cite{shchur2021neural}.
Lastly, in the third step, there are many ways to parameterize the distribution of the interevent time, for example via the probability density function, cumulative distribution function, survival function, hazard function, or cumulative hazard function. A compact sketch of the three steps is given below.
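The sketch uses a GRU encoder in PyTorch: step 1 embeds each event, step 2 encodes the history into a vector, and step 3 predicts the next mark and interevent time. The class name, dimensions, and architecture are illustrative and not those of the NMSTPP model itself.
\begin{verbatim}
import torch
import torch.nn as nn

class MiniNTPP(nn.Module):
    """Step 1: embed events; step 2: GRU history encoding; step 3: prediction."""

    def __init__(self, n_marks=5, mark_dim=8, cont_dim=2, hidden=32):
        super().__init__()
        self.mark_emb = nn.Embedding(n_marks, mark_dim)                       # step 1
        self.encoder = nn.GRU(mark_dim + cont_dim, hidden, batch_first=True)  # step 2
        self.mark_head = nn.Linear(hidden, n_marks)       # step 3: logits over marks
        self.time_head = nn.Linear(hidden, 1)             # step 3: interevent time

    def forward(self, marks, cont):
        # marks: (batch, seqlen) integer marks, cont: (batch, seqlen, cont_dim)
        feats = torch.cat([self.mark_emb(marks), cont], dim=-1)
        _, h = self.encoder(feats)        # h: (1, batch, hidden)
        h = h[-1]
        return self.mark_head(h), self.time_head(h).squeeze(-1)

model = MiniNTPP()
marks = torch.randint(0, 5, (4, 40))
cont = torch.rand(4, 40, 2)
logits, t_next = model(marks, cont)
print(logits.shape, t_next.shape)  # (4, 5) and (4,)
\end{verbatim}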
\section{NMSTPP model validation}
\label{app:nms}
Fig. \ref{fig:nms} shows the forecast accuracy for each zone, in other words, the values on the main diagonal of the confusion matrix in Fig. \ref{fig:zcm}.
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.4]{./fig/016_zone_acc.png}
\caption{NMSTPP model zone forecast accuracy.}
\label{fig:nms}
\end{figure}
\section{HPUS details and validation}
\label{app:eps}
In this section, more details on the calculation, application, and validation of HPUS are provided. Fig. \ref{fig:are} demonstrates how the zones of the pitch are converted into areas for the calculation of HPUS. Moreover, Fig. \ref{fig:exp} plots the exponentially decaying function. The 0.3 in the function is a hyperparameter; it was selected such that significant weight is given to 5-6 events, matching the average possession length of 5.2 events (from the training set data).
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.4]{./fig/area_plot.png}
\caption{Pitch areas for HPUS. The goal on the left side belongs to the team in possession and the goal on the right side belongs to the opponent.}
\label{fig:are}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.5]{./fig/exp.png}
\caption{Exponentially decaying function for HPUS weight assignment.}
\label{fig:exp}
\end{figure}
Moreover, Table \ref{tab:table} shows the Premier League 2017-2018 team ranking, team name, average goals scored, average xG, average HPUS, average HPUS$+$, and the HPUS ratio. The HPUS ratio is the ratio of average HPUS$+$ to average HPUS; surprisingly, every team has an HPUS ratio near 0.3. This emphasizes the importance of a high HPUS, that is, creating possessions that are likely to be converted into scoring opportunities. In addition, Fig. \ref{fig:denplus} shows the teams' HPUS$+$ density for possessions in matches over the 2017-2018 Premier League season.
\begin{table}[!htb]
\caption{Premier League 2017-2018 team ranking and match-averaged performance statistics and metrics.}\label{tab:table}
\begin{tabular}{|l|l|l|l|l|l|l|}
\hline
Ranking & Team name & Goal & xG & HPUS & HPUS+ & HPUS ratio \\
\hline
1 & Manchester City & 2.79 & 2.46 & 626.86 & 213.41 & 0.34 \\
2 & Manchester United & 1.79 & 1.63 & 537.44 & 174.03 & 0.32 \\
3 & Tottenham Hotspur & 1.95 & 1.87 & 600.71 & 192.40 & 0.32 \\
4 & Liverpool & 2.21 & 2.08 & 586.66 & 186.07 & 0.32 \\
5 & Chelsea & 1.63 & 1.55 & 557.87 & 187.25 & 0.34 \\
6 & Arsenal & 1.95 & 1.93 & 603.23 & 169.23 & 0.28 \\
7 & Burnley & 0.95 & 0.89 & 412.62 & 125.11 & 0.30 \\
8 & Everton & 1.16 & 1.18 & 435.69 & 117.64 & 0.27 \\
9 & Leicester City & 1.47 & 1.35 & 461.40 & 139.62 & 0.30 \\
10 & Newcastle United & 1.03 & 1.19 & 423.97 & 124.12 & 0.29 \\
11 & Crystal Palace & 1.18 & 1.53 & 446.03 & 136.75 & 0.31 \\
12 & Bournemouth & 1.18 & 1.06 & 470.88 & 130.71 & 0.28 \\
13 & West Ham United & 1.26 & 1.01 & 438.44 & 135.98 & 0.31 \\
14 & Watford & 1.16 & 1.23 & 467.41 & 139.15 & 0.30 \\
15 & Brighton and Hove Albion & 0.89 & 0.97 & 418.84 & 126.12 & 0.30 \\
16 & Huddersfield Town & 0.74 & 0.85 & 437.80 & 128.00 & 0.29 \\
17 & Southampton & 0.97 & 1.11 & 486.45 & 156.15 & 0.32 \\
18 & Swansea City & 0.74 & 0.80 & 417.77 & 120.04 & 0.29 \\
19 & Stoke City & 0.92 & 0.98 & 399.63 & 116.44 & 0.29 \\
20 & West Bromwich Albion & 0.82 & 0.93 & 410.14 & 119.54 & 0.29 \\
\hline
\end{tabular}
\end{table}
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.6]{./fig/PS_plus_dist_plot.png}
\caption{Team HPUS$+$ density for possession in matches over 2017-2018 premier league season.}
\label{fig:denplus}
\end{figure}
\section{Introduction}
\vspace{-9pt}
\label{sec:introduction}
Football\footnote{Also known as ''Association Football'', or ''Soccer''.} has simultaneously been an influential sport and an important industry around the globe \cite{li2020analysis}. Estimates have shown that the FIFA World Cup Qatar 2022 entertained over 5 billion viewers\footnote{https://www.fifa.com/about-fifa/president/news/gianni-infantino-says-fifa-world-cup-is-perfect-opportunity-to-promote, estimated by the FIFA President Gianni Infantino}, and football has also been established as a pillar industry for countries like Italy and Great Britain \cite{zhang2022sports}. In modern football, data analysis plays an important role for fans, players, and coaches. Data analysis can be leveraged by players to improve performance and to generate insights for the coaching process and tactical decision-making. Furthermore, it provides fans with quantified measures and deeper insights into the game \cite{berrar2019guest,simpson2022seq2event}.
For years, analysis and research have focused on players' actions when they are in possession of the ball, with each player statistically shown to have 3 minutes on average with the ball \cite{fernandez2018wide}. Therefore, it is critical for players to utilize these three minutes, and for analysts and researchers to evaluate the effectiveness and efficiency of these on-ball action events.
There have been numerous attempts in the existing sports analysis literature to understand how the sequence of events in the past affects the next event or its outcome. The majority of the studies \cite{queiroz2021estimating,sicilia2019deephoops,simpson2022seq2event,watson2021integrating,zhang2022sports} used machine learning (ML) models to
handle the long sequences of event data. The ML models encoded the long history events vectors (each vector representing a historical event, and with the event being described by features in the vector) into a vector representation by incorporating Long Short-Term Memory (LSTM) \cite{graves2012long}, Gated Recurrent Unit (GRU) \cite{chung2014empirical}, or Transformer Encoder \cite{vaswani2017attention}.
The Seq2Event model was recently proposed by Simpson et al. \cite{simpson2022seq2event}. The model combines the encoding method and dense layers to forecast the location and type of the next event. Furthermore, the poss-util metric is developed based on the predicted event probability to evaluate the effectiveness of a football team's possession. However, in order to assess the effectiveness and efficiency of the team's on-ball actions, all three factors, temporal, spatial, and action type, should be considered and modeled dependently. For example, shots are frequently taken when the players are close to the opponent's goalpost, but rarely when they are far away. Furthermore, if it takes a long time before the team gets the ball close to the opponent's goal, the opponent will have sufficient time to react, with shot-taking no longer being the best option. Such an example demonstrates the need for a dependent model capable of handling all three factors.
In this paper, we proposed modeling football event data under the Neural temporal point process (NTPP) framework \cite{shchur2021neural}. The proposed model utilizes the long sequence encoding method and models the forecast of the next event's temporal, spatial, and action types factors under the point process literature.
First, we introduced a method for modeling football match event data under the NTPP framework, explaining how to model the event factors dependently.
Second, we presented the best-fitting model, the Transformer-Based Neural Marked Spatio-Temporal Point Process (NMSTPP) model. The model is capable of modeling the temporal information of events, which has been overlooked in previous studies.
Finally, we demonstrated how the NMSTPP model can be applied for evaluating the effectiveness and efficiency of the team's possession by proposing a new performance metric: Holistic possession utilization score (HPUS).
Summarily, our main contributions are as follows:
(1) With the NTPP framework, we proposed the NMSTPP model to model the interevent time, location, and action of football event data simultaneously and dependently;
(2) To evaluate possession in football, we have proposed a more holistic metric, HPUS. The HPUS validation and application have been provided in this paper;
(3) Using open-source data, we determined the optimal architecture and validated the forecast results of the model. The ablation study of the NMSTPP model showed that the dependency could increase forecast performance and the validated HPUS showed the necessity of simultaneous modeling for holistic analysis.
\vspace{-11pt}
\section{Proposed framework}
\vspace{-4pt}
\label{sec:pro}
In this section, we describe how to model football event data under the
NTPP framework and use the model to evaluate a possession period. In Section \ref{ssec:def}, we first describe how we define football event data as a point process and how we incorporate ML to form the NMSTPP model. Afterward, in Section \ref{ssec:arc}, we introduce the model architecture of the NMSTPP model. Lastly, in Section \ref{ssec:exp}, we describe the HPUS for possession period evaluation.
\vspace{-8pt}
\subsection{Define football event data as NMSTPP}
\vspace{-2pt}
\label{ssec:def}
Although there are multiple ways to define a point process, for a temporal point process $\{(t_i)\}^N_{i=1}$, one method is by defining the conditional probability density function (PDF) $f(t_{i}|H_{i})$ of the interevent time for the next event $t_{i}$ given the history event $H_{i}=\{t_{1},t_{2},...,t_{i-1}\}$ \cite{rasmussen2018lecture}. With factorization, the joint PDF of the events' interevent time can be represented with the following formula \cite{rasmussen2018lecture}:
\begin{equation}
\label{eq1}
f(...,t_{1},t_{2},...)=\prod_{i=1}^N f(t_{i}|t_{1},t_{2},...,t_{i-1})=\prod_{i=1}^N f(t_{i}|H_{i})
\end{equation}
Furthermore, by taking the marks $m$ and spatial $z$ information of an event into consideration, the joint PDF of a marked spatio temporal point process (MSTPP) $\{(t_i,z_i,m_i)\}^N_{i=1}$ can be extended from equation \ref{eq1} and represented as follows:
\begin{equation}
\label{eq2}
f(...,(t_{1},z_{1},m_{1}),(t_{2},z_{2},m_{2}),...)=\prod_{i=1}^N f(t_{i},z_{i},m_{i}|H_{i})
\end{equation}
Prior to defining the conditional PDF for MSTPP, we first connected the football match on-ball action event data with MSTPP.
The marks $m$ correspond to the on-ball action type (e.g., shot, cross, pass, and so on), spatial $z$ corresponds to the location (zone) of the football pitch indicating where the event happened (further explained in Section \ref{ssec:dataset}), and temporal $t$ corresponds to the interevent time.
Afterward, rather than defining the conditional PDF (PMF) for MSTPP in equation \ref{eq2} directly, we applied the decomposition of multivariate density function \cite{cox1975partial} to equation \ref{eq2} \cite{narayanan2020bayesian}. This results in conditional PDF as follows:
\begin{equation}
\label{eq3}
\prod_{i=1}^N f(t_{i},z_{i},m_{i}|H_{i})=\prod_{i=1}^N f_t(t_{i}|H_{i})f_z(z_{i}|t_{i},H_{i})f_m(m_{i}|t_{i},z_{i},H_{i}).
\end{equation}
This equation is the product of the conditional PDFs of $t$, $z$, and $m$, whose order is interchangeable. Using equation \ref{eq3} allows us to define the PDFs $f_t, f_z, f_m$ differently, without assuming that $t$, $z$, and $m$ are independent, as long as the defined conditional PDFs take all given information into consideration.
Although defining the PDFs with distributions or models based on point processes (e.g., Poisson process \cite{kingman1992poisson}, Hawkes process \cite{hawkes1971spectra}, Reactive point process \cite{ertekin2015reactive}, and so on) are common, we applied ML algorithms to estimate the PDFs. This has been proven to be more effective in multiple fields \cite{du2016recurrent,xiao2017modeling,zhang2020self,zuo2020transformer}. Based on maximum negative log-likelihood estimation, the MSTPP loss function to be minimized can be presented as follows.
\begin{equation}
\label{eq4}
L(\theta)=\sum_{i=1}^N \left(10\times RMSE_{t_i}+CEL_{z_i}+CEL_{m_i}\right).
\end{equation}
This cost function is composed of the root mean square error (RMSE) for $t$ and the cross-entropy losses (CEL) for $z$ and $m$. The CELs are weighted to deal with unbalanced classes (more details in Appendix \ref{app:hyp}), and the RMSE is multiplied by 10 to keep the three cost terms in balance; a minimal sketch of this cost function is given below.
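The sketch implements equation \ref{eq4} in PyTorch; the class weights and tensor shapes are placeholders.
\begin{verbatim}
import torch
import torch.nn.functional as F

def nmstpp_loss(t_pred, t_true, zone_logits, zone_true, action_logits, action_true,
                zone_weights=None, action_weights=None):
    """10 * RMSE on interevent time + weighted CEL on zone + weighted CEL on action."""
    rmse_t = torch.sqrt(F.mse_loss(t_pred, t_true))
    cel_zone = F.cross_entropy(zone_logits, zone_true, weight=zone_weights)
    cel_action = F.cross_entropy(action_logits, action_true, weight=action_weights)
    return 10.0 * rmse_t + cel_zone + cel_action

# Illustrative call with random tensors (batch of 8, 20 zones, 5 actions).
loss = nmstpp_loss(torch.rand(8), torch.rand(8),
                   torch.randn(8, 20), torch.randint(0, 20, (8,)),
                   torch.randn(8, 5), torch.randint(0, 5, (8,)))
print(loss.item())
\end{verbatim}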
It should be noted that taking all event data directly as input for the ML model would be ineffective and inefficient. The data may contain a large amount of noise, and a large number of input features potentially increases the number of trainable parameters in the models, consequently leading to long training times. The feasible solution from the NTPP framework \cite{shchur2021neural} is to encode the history events $(\vec{y}_1,\vec{y}_2,...\vec{y}_{i-1}),\ \vec{y}_i= \left[t_i,m_i,z_i\right]$ into a fixed-size single vector $\vec{h}_i$ with an LSTM \cite{graves2012long}, GRU \cite{chung2014empirical}, or transformer encoder \cite{vaswani2017attention}. Based on a previous study \cite{simpson2022seq2event}, the transformer encoder has been found to be slightly less effective, but significantly more efficient, than the LSTM. Therefore, in this study, we applied the transformer encoder to encode the history events information.
Furthermore, MSTPP models that are based on the combination of point process literature and ML methods can be referred to as Neural MSTPP (NMSTPP) models.
\vspace{-8pt}
\subsection{NMSTPP model architecture}
\vspace{-2pt}
\label{ssec:arc}
In this subsection, the NMSTPP model architecture and related hyperparameter are explained. The NMSTPP model with the optimal hyperparameter is presented in Fig. \ref{fig1}. The grid searched hyperparameter values are presented in Appendix \ref{app:hyp}, while a more detailed description of transformer encoder \cite{vaswani2017attention} and NTPP \cite{shchur2021neural} are presented in Appendix \ref{app:encoder} and \ref{app:ntpp}, respectively.
\begin{figure}
\includegraphics[scale=0.29]{./fig/NMSTPP3.png}
\caption{NMSTPP model architecture. (Stage 1) The input of the model, interevent time, zone, action, and other continuous features of events at $j \in {[i-40:i-1]}$ (here, we set $seqlen$ to 40). (Stage 2) Apply embedding and dense layer to the input, with positional encoding and transformer encoder to obtain the history vector and pass the vector through a dense layer. (Stage 3) Apply neural network to forecast the interevent time, zone, and action of event $j=i$. (Stage 4) The outputs of the model are one value for interevent time, 20 logits for zone, and 5 logits for action. (Stage 5) The output in stage 4 and the ground truth are used
to calculate the cost function directly.}
\label{fig1}
\end{figure}
\vspace{-10pt}
\subsubsection{Stage 1: Input.} First, we summarize the feature set for the model. For each event $(t_{j},z_{j},m_{j}),\ j \in {[i-seqlen:i-1]}$, we used the following input features, which results in a matrix of size $(seqlen,1+1+1+5)$:
\begin{itemize}
\item Interevent time $t_i$: the time between the current event and the previous event.
\item Zone $z_i$: zone on the football pitch where the event takes place; the zone number was assigned randomly from 1 to 20 (more details in Section \ref{ssec:dataset}).
\item Action $m_i$: type of action in the event; feasible actions are pass $p$, possession end $\_$, dribble $d$, cross $x$, and shot $s$.
\item Other continuous features: engineered features mainly describe the change in zone
(further explanation in Appendix \ref{app:fea}).
\end{itemize}
Hyperparameter: $seqlen$, the sequence length of the historical events.
\vspace{-10pt}
\subsubsection{Stage 2: History encoding.} In this stage,
a dense layer is first applied to the interevent time $t_i$ and the other continuous features, and an embedding layer is applied to the zone $z_i$ and action $m_i$ respectively, allowing the model to better capture the information in the features \cite{simpson2022seq2event}. Afterward, with the positional encoding and transformer encoder from the Transformer model \cite{vaswani2017attention} (more details in Appendix \ref{app:encoder}), a fixed-size encoded history vector of size $(31)$ is retrieved. Lastly, the history vector passes through another dense layer to allow better information capturing \cite{simpson2022seq2event}.
Hyperparameter: $dim\_feedforward$, numbers of feedforward layers in the transformer encoder.
\subsubsection{Stage 3: Forecasting.} The purpose of this stage is to forecast the interevent time, zone, and action of the next event $(t_{i},z_{i},m_{i})$. In general, we estimate the conditional PDFs in equation \ref{eq3} with neural networks (NNs). Specifically, for zone $z$ and action $m$, the NNs estimate the conditional probability mass functions (PMFs), as these are discrete classes. On the other hand, we model the relationship between the history $H$ and the interevent time $t$ directly.
As a result, with the history vector $H$, the models for forecasting can approximately be presented in the following formulas:
\begin{alignat}{1}
\label{eq5}
f_t(t_{i}|H_{i}) & \approx NN_t(H_{i})=t_{i} \nonumber\\
f_z(z_{i}|t_{i},H_{i}) &\approx NN_z(t_{i},H_{i})=\vec{z_{i}}\\
f_m(m_{i}|t_{i},z_{i},H_{i}) &\approx NN_m(t_{i},\vec{z_{i}},H_{i})=\vec{m_{i}} \nonumber
\end{alignat}
where the outputs of the neural networks $NN_z$ and $NN_m$, $\vec{z_{i}}$ and $\vec{m_{i}}$, are vectors of predicted logits for all zones and action types, with sizes 20 and 5, respectively (a minimal sketch of these chained forecasting heads is given after the hyperparameter list below).
Hyperparameter:
\begin{itemize}
\item $order:$ order of $t,z,\text{and }m$ in Equation \ref{eq5}, which are interchangeable.
\item $num\_layer:$ numbers of hidden layers for $NN$, where $NN_t$, $NN_z$, and $NN_m$ can have different numbers of hidden layers.
\item $activation\_function:$ activation function for hidden layers in $NN_t$, $NN_z$, and $NN_m$.
\item $drop\_out:$ dropout rate for hidden layers in $NN_t$, $NN_z$, and $NN_m$.
\end{itemize}
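The PyTorch sketch below illustrates the chained heads of equation \ref{eq5}; each head consumes the history vector together with the outputs of the previous heads, which is what makes the predictions dependent. The layer sizes are illustrative (they only loosely mirror the best hyperparameters, e.g., two layers for the action head and no activation function) and are not our exact implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class ForecastHeads(nn.Module):
    """Chained heads: t from H, z from (t, H), m from (t, z, H)."""

    def __init__(self, hist_dim=31, n_zones=20, n_actions=5, hidden=64):
        super().__init__()
        self.nn_t = nn.Linear(hist_dim, 1)
        self.nn_z = nn.Linear(hist_dim + 1, n_zones)
        self.nn_m = nn.Sequential(
            nn.Linear(hist_dim + 1 + n_zones, hidden),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, h):
        t = self.nn_t(h)                                           # interevent time
        z_logits = self.nn_z(torch.cat([t, h], dim=-1))            # 20 zone logits
        m_logits = self.nn_m(torch.cat([t, z_logits, h], dim=-1))  # 5 action logits
        return t.squeeze(-1), z_logits, m_logits

heads = ForecastHeads()
t, z, m = heads(torch.rand(8, 31))
print(t.shape, z.shape, m.shape)  # (8,), (8, 20), (8, 5)
\end{verbatim}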
\vspace{-10pt}
\subsubsection{Stage 4: Output.}
The final outputs of the model are $t_{i},\vec{z_{i}},\vec{m_{i}}$. We consider the class with the maximum logit in $\vec{z_{i}}$ and $\vec{m_{i}}$ as the predicted class. When probabilities are required, we scale the logits into the range $[0,1]$.
\vspace{-10pt}
\subsubsection{Stage 5: Cost function.} Furthermore, $t_{i},\vec{z_{i}},\vec{m_{i}}$ and the ground truth are used to calculate the cost function directly. The cost function in equation \ref{eq4} still applies after the modification in stage 3. With this cost function, the NMSTPP model can be trained end to end with a gradient descent algorithm, for which the popular Adam optimizer \cite{kingma2014adam} was selected.
\vspace{-8pt}
\subsection{Holistic possession utilization score (HPUS)}
\vspace{-2pt}
\label{ssec:exp}
For a more comprehensive possession analysis in football,
we developed the holistic possession utilization score (HPUS) metric by extending the poss-util metric \cite{simpson2022seq2event}.
The poss-util is a metric for analyzing possession utilization. Firstly, the attack probability of an event is calculated by summing the predicted probabilities of cross and shot. Then, the attack probabilities of the $n$ events in a possession are summed to obtain the poss-util, as in equation \ref{eq:poss}. Furthermore, the poss-util is multiplied by $-1$ if no shot or cross event exists in the possession. Lastly, a percentile rank is applied to both positive and negative poss-util values, so that the resulting metric lies in the range $[-1,1]$.
\begin{equation}
\label{eq:poss}
\text{poss-util}=\sum_{i=1}^{n} P(\text{Cross, Shot})
\end{equation}
On the other hand, with the NMSTPP model, the expected interevent time, zone, and action type can be calculated and applied to the metrics. Given the information, we proposed the HPUS for analyzing the effectiveness and efficiency of a possession period. The calculations of HPUS are presented as follows.
Holistic action score (HAS) $\in$ [0:10] is first computed as follows:
\begin{equation}
\label{eq6}
\text{HAS}=\frac{\sqrt{E(Zone\cdot Action|H)}}{t}=\frac{\sqrt{E(Zone|H)E(Action|Zone,H)}}{t},
\end{equation}
\begin{align}
E(Zone|H) ={}& 0P(Area_0)+5P(Area_1)+10P(Area_2),\label{eq7}\\
\begin{split}\label{eq8}
E(Action|Zone,H) ={}& 0P(\text{Possession loss})+5P(\text{Dribble, Pass})\\
& +10P(\text{Cross, Shot}),
\end{split}\\
t ={}& \begin{cases}
1, & \text{if } t<1,\\
t, & \text{otherwise},
\end{cases}\label{eq9}
\end{align}
In equation \ref{eq6}, the expected values of zone and action are used to evaluate the effectiveness of each action. The multiplication of the two expected values allows for a more detailed score assignment: in HAS, a shot is assigned a high score of 10, but the distance affects how likely the shot is to lead to a goal, so the score should be lower when far from the opponent's goal and vice versa. In addition, the assignment of areas in equation \ref{eq7} is visualized in Fig. \ref{fig:are}.
Furthermore, the division by the interevent time accounts for the efficiency of the action. The more efficient the action is, the less time it takes, and the harder it is for the opponent to respond; therefore, a higher score is awarded when less time is taken. Additionally, we take the square root to scale the score into the range [0:10] and set $t=1$ if $t<1$ to keep the score from exploding.
HPUS summarizes the actions in possession, and is computed as follows:
\begin{equation}
\text{HPUS}=\sum_{i=1}^{n} \phi(n+1-i)\frac{\sqrt{E(Zone_i\cdot Action_i|H_i)}}{E(Time|H_i)}=\sum_{i=1}^{n} \phi(n+1-i)\text{HAS}_i,
\label{eq10}
\end{equation}
\begin{equation}
\phi(x)=exp(-0.3(x-1)).
\label{eq11}
\end{equation}
In equation \ref{eq10}, for each possession with $n$ actions, the HPUS is calculated as a weighted sum of the $n$ actions' HAS. The weight assignment starts from the last action, and the weights are calculated with the exponentially decaying function in equation \ref{eq11}, visualized in Fig. \ref{fig:exp}. This function lets the HPUS focus most on the last action, which is the outcome of the entire possession period, while the remaining actions receive less weight the farther they are from the last action. As a result, the HPUS reflects both the final outcome and the performance during the possession period. A computational sketch of HAS and HPUS is given below.
Furthermore, similar to poss-util metric \cite{simpson2022seq2event} we created HPUS$+$ that only considers possession that leads to an attack (cross or shots) at the end of the possession.
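The sketch below computes HAS, HPUS, and HPUS$+$ for a single possession; the per-action expected zone scores, action scores, and interevent times are assumed to be given (as implied by equations \ref{eq7}--\ref{eq9}), and the function names are ours.
\begin{verbatim}
import math

def has(expected_zone, expected_action, t):
    """Holistic action score; t is floored at 1 to keep the score from exploding."""
    return math.sqrt(expected_zone * expected_action) / max(t, 1.0)

def hpus(zone_scores, action_scores, times, decay=0.3):
    """HPUS of one possession: exponentially weighted sum of HAS,
    with the largest weight on the last action of the possession."""
    n = len(zone_scores)
    total = 0.0
    for i, (z, a, t) in enumerate(zip(zone_scores, action_scores, times), start=1):
        weight = math.exp(-decay * (n - i))   # phi(n + 1 - i)
        total += weight * has(z, a, t)
    return total

def hpus_plus(zone_scores, action_scores, times, attack_end=True):
    """HPUS+ keeps only possessions that end with an attack (cross or shot)."""
    return hpus(zone_scores, action_scores, times) if attack_end else 0.0

# A possession of three actions ending with a shot close to goal.
print(hpus([5.0, 5.0, 10.0], [5.0, 5.0, 10.0], [2.0, 1.5, 0.8]))
\end{verbatim}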
\vspace{-11pt}
\section{Experiments}
\vspace{-4pt}
\label{sec:experiment}
In this section, we validate the architecture and the performance of the NMSTPP model and the HPUS. The training, validation, and testing sets include 73, 7, and 178 matches, respectively, with more details of the dataset splitting presented in Table \ref{tab:dataset}. The code is available at
\ifarxiv
\url{https://github.com/calvinyeungck/Football-Match-Event-Forecast}.
\else
\url{https://anonymous.4open.science/r/NMSTPP_model}.
\fi
All models were trained with two AMD EPYC 7F72 24-Core Processors and one Nvidia RTX A6000.
\vspace{-8pt}
\subsection{Dataset and preprocessing}
\vspace{-2pt}
\label{ssec:dataset}
\subsubsection{Dataset.} We used football match event data from the 2017/2018 season of the top five leagues: the Premier League, La Liga, Ligue 1, Serie A, and Bundesliga. The event data were retrieved from the WyScout Open Access Dataset \cite{pappalardo2019public}, currently the largest dataset of football match event data, published to facilitate research in football data analytics. The dataset captures the actions of the player who controls the ball; there are 21 action types (passes, shots, fouls, and so on) and 78 subtypes in total. Further, the pitch position where each action happens is recorded in (x,y) coordinates, along with the time of the event and the outcome of the action, among others. In addition, the xG data used for validation were retrieved from \url{https://understat.com/}. More details of dataset preprocessing are presented in Appendix \ref{app:mod}.
\vspace{-8pt}
\subsubsection{Features engineering.} In most football match on-ball action events data, including the WyScout Open Access Dataset \cite{pappalardo2019public}, the record of location and action are usually (x,y) coordinated and detailed with classified action types.
However, to increase the explainability and reduce the complexity of the data, the (x,y) coordinates are first grouped into 20 zones (numbered randomly) according to the Juego de posici\'{o}n (position game) method. This method has been applied by famous football coach Pep Guardiola and the famous football team Bayern Munich in training.
The grouping method allows the output of our model to provide a clear indication for football coaches and players. Moreover, detailed classified action types are grouped into 5 action classes (pass, dribble, cross, shot, and possession end).
Similar methods have been applied in previous studies \cite{simpson2022seq2event,van2021would} and have proven to be effective. More details and summary of football pitch (x,y) coordinates, 20 zones, and 5 action classes have been provided in Figs. \ref{fig:pitch}, \ref{fig:zone}, and \ref{fig:hea}, and Table \ref{tab:action}.
Furthermore, from the created zone feature, we created extra features to provide the model with more information. The extra features include the distance from the previous zone to the current zone, change in the zone (x,y) coordinates, and distance and angle from the opposition goal center point to the zone. Detailed description of the extra features is presented in Appendix \ref{app:fea}.
\vspace{-8pt}
\subsection{Comparison with baseline models}
\vspace{-2pt}
\label{ssec:verify-psm}
To show the effectiveness and efficiency of the NMSTPP model, we compared it with baseline models: a statistical model and modified Seq2Event models \cite{simpson2022seq2event}. The statistical model is a combination of a second-order autoregression AR(2) model for the interevent time forecast and transition probabilities for estimating the PMFs of zones and actions. The modified Seq2Event models were obtained by adding one extra output to the last dense layer, serving as the interevent time forecast, and training with the cost function in equation \ref{eq4}; for history encoding, the transformer encoder (Transformer) and a unidirectional LSTM (Uni-LSTM) were applied. Additionally, for a fair comparison, we fine-tuned the modified Seq2Event (Transformer) model by increasing the feedforward network dimension of its transformer encoder layer from 8 to 2048.
\begin{table}
\caption{Quantitative comparisons with baseline models. Model total loss, RMSE on interevent time $t$, CEL on zone, CEL on action, training time (in minutes), and the number of trainable parameters (in thousand) are reported.}\label{tab1}
\scalebox{0.85}{
\begin{tabular}{l|l|l|l|l|l|l}
\hline
Model & Total loss & $RMSE_{t}$& $CEL_{zone}$ & $CEL_{action}$ & $T_{training}$ (min) & Params (K)\\
\hline
AR(2)-Trans-prob & 6.98 & 0.12 & 2.34 & 3.40 & N/A & N/A \\
Modified Seq2Event (Transformer) & 4.57 & 0.11 & 2.11 & 1.39 & 47 & 13 \\
Modified Seq2Event (Uni-LSTM) & 4.51 & 0.10 & 2.11 & 1.37 & 129 & 4 \\
Fine-tuned Seq2Event (Transformer) & 4.48 & 0.10 & 2.09 & 1.36 & 79 & 137 \\
NMSTPP & \textbf{4.40} & 0.10 & 2.04 & 1.33 & 49 & 79 \\
\hline
\end{tabular}
}
\end{table}
Table \ref{tab1} compares the performance on the validation set, the training time, and the number of trainable parameters of each model. In terms of effectiveness, the NMSTPP model had the best performance in forecasting the validation-set match events: compared with the baseline models, it outperformed them in total loss, zone CEL, and action CEL, and shared the best interevent time $t$ RMSE. In terms of efficiency, the modified Seq2Event model (Transformer) \cite{simpson2022seq2event} had the fastest training time, followed by the NMSTPP model ($+2$ min). However, the NMSTPP model had 66 thousand more trainable parameters than the modified Seq2Event model and performed better ($-0.17$) in total loss. Overall, the NMSTPP model was the most effective and relatively efficient model, showing that our methods can better model football event data.
\vspace{-8pt}
\subsection{Ablation Studies}
\vspace{-2pt}
Upon validating the effectiveness and efficiency of the NMSTPP model, we validated its architecture. First, we focused on stage 3 of the model (Section \ref{ssec:arc}), comparing the performance when the forecasting models for interevent time, zones, and actions are dependent versus independent (i.e., in the independent case, $NN_t,NN_z,NN_m$ in equation \ref{eq5} are functions of $H_{i}$ only).
\begin{table}
\caption{Performance comparisons with disconnected NMSTPP models on the validation set. Model total loss, RMSE on interevent time $t$, CEL on zone, and CEL on action are reported.}\label{tab2}
\begin{tabular}{l|l|l|l|l}
\hline
Dependence & Total loss & $RMSE_{t}$& $CEL_{zone}$ & $CEL_{action}$\\
\hline
Independent NMSTPP & 4.44 & 0.10 & 2.04 & 1.37 \\
Dependent NMSTPP & \textbf{4.40} & 0.10 & 2.04 & 1.33 \\
\hline
\end{tabular}
\end{table}
Table \ref{tab2} compares the performance of the independent NMSTPP and the dependent NMSTPP. The results imply that the dependent NMSTPP performed better than the independent NMSTPP by 0.04 total loss, with the difference coming from the CEL of action. Therefore, it is necessary to model the forecasting models for interevent time, zones, and actions dependently, as in equation \ref{eq5}.
In addition, we compared the use of (x,y) coordinate features \cite{simpson2022seq2event} with the zone features used in this study. Table \ref{tab4} compares the NMSTPP model's RMSE of interevent time $t$ and CEL of action when using the two feature sets. The results indicate that there is no significant difference in performance. Therefore, the use of zones did not decrease the performance of the NMSTPP model, but it could increase the explainability of the model's output for football players and coaches.
\begin{table}
\caption{Performance comparisons with (x,y) coordinates features on the validation set. Model RMSE on interevent time $t$ and CEL on action are reported.}\label{tab4}
\begin{tabular}{l|l|l}
\hline
Features set & $RMSE_{t}$& $CEL_{action}$\\
\hline
zone & 0.10 & 1.33 \\
(x,y) & 0.10 & 1.33 \\
\hline
\end{tabular}
\end{table}
\vspace{-8pt}
\subsection{Model verification}
\vspace{-2pt}
In this subsection, we further analyze the prediction result of the NMSTPP model. The following results are based on the forecast of the NMSTPP model on the testing set. The model was trained with a slightly adjusted CEL weight of the action dribble for higher accuracy in the dribble class (more details in Appendix \ref{app:hyp}).
First, we analyzed the use of long sequences (40) of historical events. Fig. \ref{fig:cdf} (left) shows the self-attention score heatmap for the last row of the self-attention matrix. The score identifies the contribution of each historical event to the history vector. In the heatmap, the weights of the events are between 0.01 and 0.06, and there are no trends or indications implying that the historical event sequence length of 40 is either too long or too short.
Second, we analyzed the forecast of the interevent time by comparing the CDF of the predicted interevent time with that of the true interevent time. Fig. \ref{fig:cdf} (right) shows that the CDFs of the predicted and true interevent times generally match. Therefore, even without specifying a distribution for the interevent time, the NMSTPP model can match the sample distribution.
\begin{figure}[!htb]
\centering
\includegraphics[width=.43\textwidth]{./fig/attention_score_query_40_1.5.png}
\includegraphics[width=.53\textwidth]{./fig/016_time.png}
\caption{Self-attention heatmap (left) and CDF of the predicted and true interevent time (right).}
\label{fig:cdf}
\end{figure}
Lastly, we analyzed the forecasts of zone and action with the mean probability confusion matrix (CM). Fig. \ref{fig:zcm} shows the CM heatmaps for zone and action, respectively, and the detailed zone accuracy is presented in Fig. \ref{fig:nms}. In both CMs, on average, the correctly assigned class had the highest probability and can be identified from the figures. This result suggests that the NMSTPP model is able to infer the zone and action of the next event.
\begin{figure}[!htb]
\centering
\includegraphics[width=.48\textwidth]{./fig/016_zone.png}
\includegraphics[width=.48\textwidth]{./fig/016_action.png}
\caption{Zone (left) and action (right) confusion matrix heatmap (mean probability).}
\label{fig:zcm}
\end{figure}
\vspace{-8pt}
\subsection{HPUS verification and application to premier league}
\vspace{-2pt}
Upon verifying the NMSTPP model, we verified the HPUS and demonstrated its application to the 2017-2018 Premier League season. To validate the HPUS, we first calculated the average HPUS and HPUS$+$ for each team in the Premier League. Afterward, we calculated the correlations between the average HPUS, HPUS$+$, xG, goals, and the final ranking. Table \ref{tab:table} shows the values of the metrics, and Fig. \ref{fig:2017} (left) shows the correlation matrix heatmap. From the correlation matrix, the average goals (-0.84), xG (-0.81), HPUS (-0.78), and HPUS$+$ (-0.74) have significant negative correlations with the final ranking of the team, implying that the four metrics reflect the final outcome of a season and can be applied to compare different teams' performances. Nevertheless, the correlations of HPUS and HPUS$+$ are slightly weaker ($\leq 0.07$ difference) than those of goals and xG. However, neither the NMSTPP model nor HPUS or HPUS$+$ ever uses the goal data (which are directly related to the match outcome), so the slightly weaker correlation is reasonable. In addition, the HPUS (0.92, 0.92) and HPUS$+$ (0.91, 0.90) have significant correlations with goals and xG, implying that the proposed metrics reflect the attacking performance of the teams. In summary, the HPUS metrics are capable of evaluating all types of major events in football and are able to reflect a team's final ranking and attacking performance.
Subsequently, the applications of the HPUS metrics are described. As an initial step, we analyzed teams' possessions by plotting the HPUS densities. In Fig. \ref{fig:2017} (right), three teams (with their final ranking) are considered: Manchester City (1), Chelsea (5), and Newcastle United (10). As Fig. \ref{fig:2017} (right) shows, the higher-ranked teams were better able to utilize their possessions, generating more high-HPUS possessions and fewer low-HPUS possessions.
\begin{figure}[!htb]
\centering
\vspace{-10pt}
\includegraphics[width=.43\textwidth]{./fig/0.3_cm.png}
\includegraphics[width=.53\textwidth]{./fig/PS_dist_plot.png}
\vspace{-10pt}
\caption{2017-2018 season premier league team statistics correlation matrix heatmap (left) and teams' HPUS density for possession in matches over 2017-2018 premier league season (right).}
\label{fig:2017}
\vspace{-10pt}
\end{figure}
Lastly, we analyzed the change in HPUS and HPUS$+$ during a match. In Fig. \ref{fig:match}, two matches are selected, Manchester City vs Newcastle United (Time: 2018, Jan 21, Result: 3:1) and
Chelsea vs Newcastle United (Time: 2017, Dec 2, Result: 3:1). The changes in HPUS (left) and HPUS$+$ (right) provide different information: the former quantifies the potential attack opportunities a team has created, while the latter quantifies how many of those opportunities were converted into attacks. In the Manchester City vs Newcastle United (top) match, Newcastle United was able to create opportunities but was unable to convert them into attacks.
Secondly, both the changes in HPUS and HPUS$+$ provide a good indication of a team's performance. Although both matches ended 3:1 against Newcastle United, the match against Chelsea (bottom) shows that Newcastle United created more opportunities and converted more of them into attacks. Therefore, we conclude that Newcastle United performed better against Chelsea than against Manchester City. In conclusion, HPUS and HPUS$+$ provide in-depth information on teams' performance. Furthermore, the analysis based on HPUS remains feasible even if important events like goals and shots are absent.
\begin{figure}[!htb]
\centering
\includegraphics[width=.48\textwidth]{./fig/Manchester_City_Newcastle_United_ps.png}
\includegraphics[width=.48\textwidth]{./fig/Manchester_City_Newcastle_United_ps_plus.png}
\medskip
\includegraphics[width=.48\textwidth]{./fig/Chelsea_Newcastle_United_ps.png} \quad
\includegraphics[width=.48\textwidth]{./fig/Chelsea_Newcastle_United_ps_plus.png} \quad
\vspace{-10pt}
\caption{Matches cumulative HPUS (left) and HPUS$+$ (right) values change per 5 minutes in regular time. (Top) Manchester City vs Newcastle United (Time: 2018, Jan 21, Result: 3:1); (Bottom) Chelsea vs Newcastle United (Time: 2017, Dec 2, Result: 3:1). The first half is in 0-45 minutes and the second half is in 60-105 minutes; dotted line implies a goal scored in the 5 minutes period. }
\vspace{-10pt}
\label{fig:match}
\end{figure}
\vspace{-10pt}
\section{Related work}
\vspace{-4pt}
\label{sec:related}
There are many types of sequential data in sports such as football, basketball, and rugby union: match results, event data of the ball and the players, and so on. To model sequential event data in sports, ML techniques and point process techniques are the most commonly applied by researchers.
In the proposed ML models, recurrent neural networks (RNN) and self-attention are the most popular key components. For RNNs, the GRU \cite{chung2014empirical} and LSTM \cite{graves2012long} have been applied to model the possession termination action in rugby union \cite{sicilia2019deephoops}, the next event location and action type in football \cite{simpson2022seq2event}, as well as the outcome of a sequence of play in basketball \cite{watson2021integrating}. However, for long sequence data, the gradient calculation for models with RNN components is usually complex, leading to long training times. Meanwhile, the self-attention mechanism from natural language processing has recently been found to model long sequential data more efficiently and has been applied to replace the RNN component \cite{zhang2020self,zuo2020transformer}. For self-attention, the transformer encoder \cite{vaswani2017attention}, which includes the self-attention mechanism, has been applied to model the next event location and action type in football \cite{simpson2022seq2event}. In addition, the combination of self-attention and LSTM has been applied to model match results in football \cite{zhang2022sports}.
Among the proposed point process models, a player's shooting locations in basketball can be modeled as a log-Gaussian Cox process \cite{moller1998log,miller2014factorized}. Moreover, football event data can be defined as a marked spatio-temporal point process, as in equation \ref{eq3}. In \cite{narayanan2020bayesian}, the interevent time, zone, and action types are modeled with a gamma distribution, transition probabilities, and a Hawkes process \cite{hawkes1971spectra} based model, respectively. The Hawkes process based model for action types is conditioned on the history and the predicted interevent time, demonstrating how the important components of an event can be modeled dependently.
In summary, when modeling event data, most sequential ML models in sports only consider some components (location, action, or outcome type) of the next event and model the forecasts independently. While point processes are able to model all components of event data, the combination of ML and point processes (e.g., NTPP models \cite{du2016recurrent,xiao2017modeling,zhang2020self,zuo2020transformer}) has been found to be more effective than pure point process models.
Therefore, to provide a more comprehensive analysis of football event data, we proposed modeling the football event data based on the NTPP framework.
Subsequent to modeling the football event data, performance metrics based on the model output can provide a clear indication of performance and summarize the data. The most famous performance metric, expected goals (xG), was first proposed in hockey \cite{macdonald2012expected} and later applied to football \cite{eggels2016expected}. In \cite{eggels2016expected}, xG is equivalent to the probability that a goal-scoring opportunity is converted into a goal; it is modeled directly from spatial, player, and tactical features with a random forest model. Despite the popularity of xG, it is inapplicable without the existence of a goal-scoring opportunity, and as Table \ref{tab:action} shows, goal-scoring opportunities (shots) are rare events in football matches. Since then, multiple metrics have been proposed to resolve this limitation. For instance, the probability that an off-ball player will score in the next action, known as the off-ball scoring opportunity (OBSO) \cite{spearman2018beyond} (with the variant \cite{teranishi2022evaluation}); the probability that a pass is converted into an assist, known as expected assists (xA)
(\url{https://www.statsperform.com/opta-analytics/}); and the scoring opportunities a player can create via passing or shooting, known as expected threat (xT) (\url{https://karun.in/blog/expected-threat.html}).
Nevertheless, most metrics focus solely on inferring the following event or outcome from only one previous event. Meanwhile, the metric valuing actions by estimating probabilities (VAEP) \cite{decroos2019actions} showcases success in using three previous events to model the probability of scoring and conceding (with the variants \cite{toda2022evaluation,umemoto2022location}). Moreover, the possession utilization score (poss-util) \cite{simpson2022seq2event}, which uses a sequence of historical events to forecast the attacking probability of the next event, has also found success in possession performance analysis. Yet, as mentioned previously, a football event is composed of three important components: time, location, and action type. Hence, based on poss-util, we have proposed the more holistic possession performance metric HPUS on top of the proposed NMSTPP model.
\vspace{-11pt}
\section{Conclusion}
\vspace{-4pt}
\label{sec:conclusion}
In this study, we have proposed the NMSTPP model to model the time, location, and action types of football match events more effectively, and the HPUS metric, a more comprehensive performance metric for team possessions analysis.
Our result suggested that the NMSTPP model is more effective than the baseline models, and that the model architecture is optimized under the proposed framework. Moreover, the HPUS was able to reflect the team's final ranking, average goal scored, and average xG, in a season.
In the future,
since we reduced the training and validation sets to consist only of matches from the Bundesliga for computational efficiency, further improvement in the model's performance can be expected when training the model with more data. Last but not least, the HPUS metric is only one of the many metrics that could be derived from the NMSTPP model. In conclusion, the NMSTPP model could be applied to develop more performance metrics, and hence, other sports with sequential events consisting of multiple important components can also benefit from this model.
\ifarxiv
\vspace{-11pt}
\section*{Acknowledgments}
\vspace{-4pt}
The authors would like to thank Mr. Ian Simpson for the fruitful discussions about football event data modeling. This work was financially supported by JST SPRING, Grant Number JPMJSP2125. The author Calvin C. K. Yeung would like to take this opportunity to thank the "Interdisciplinary Frontier Next-Generation Researcher Program of the Tokai Higher Education and Research System."
\fi
\vspace{-11pt}
\bibliographystyle{splncs04}
\section{Introduction}
Millions of people watch sports events in stadiums, arenas, sports bars, and at home. In the esports world, where the sport takes place in a virtual space, public viewing places are replaced by streaming platforms. On these platforms, thousands and even millions of viewers watch an event and share their opinions and experiences via text chat.
In this work, we study communication between viewers of The International (TI), one of the most significant esports events in the world, and the biggest annual tournament in Dota 2, a team-based multiplayer online battle arena.
We used chat data from the main broadcast channel on Twitch.tv during TI7 (2017) to study what is called ``crowdspeak''\cite{ford_chat_2017} -- a meaningful and coherent communication of thousands of viewers in the chat.
First, we investigate the thematic structure of viewers' communications and disentangle the contextual meanings of emotes and text shortcuts using Structural Topic Modelling (STM)\cite{roberts_stm:_2018}.
Second, we explore the connection between game events and topics occurring in the chat using cross-correlation analysis. In-game events to some extent define the topical structure of the chat -- they provoke emotional responses or discussion of players and teams, while the lack of action on screen leads to viewers' frustration, which is expressed with boredom-related memes and emotes.
Last, we unveil the nature of the inequality between topics in the chat. In larger chats, participants tend to focus on fewer topics while in smaller chats a variety of discussion points can be found. As the tournament progresses, viewers become more emotionally engaged and focused on cheering, omitting other topics.
Based on our findings, we propose design ideas aimed to enhance viewers' experience.
\section{Related Works}
Groups of ``like-minded fans'' watch sports events together in public spaces, such as pubs or bars, and ritually cheer for their favourite teams and athletes \cite{weed_pub_2007}. As fans share their experience with each other in various forms, spectatorship becomes an inherently social activity \cite{rubenking_sweet_2016}.
Viewers provide each other with informational cues on how to behave in a chat, forming a ``normative behaviour'' \cite{cialdini_social_2004,vraga_filmed_2014} which is quite different from the one in small chats \cite{ford_chat_2017}. Messages flow rapidly, forming a ``waterfall of text'' \cite{hamilton_streaming_2014}, which makes it almost impossible to read messages one-by-one and have a meaningful conversation.
When an interesting event happens in a game (e.g., a death of a hero in our case), viewers react ``loudly'': they post a burst of messages, creating a ``local meaning of the [in-]game event'' \cite{recktenwald_toward_2017}. Viewers copy and paste, or type emotes, abbreviations, and memes, launching cascades of messages which disrupt the usual flow of communication \cite{seering_shaping_2017} causing a ``communication breakdown'' \cite{recktenwald_toward_2017}.
Communication in massive chats is far from meaningless. Users engage in various practices which ensure chat coherence. They post and re-post abbreviations, acronyms, and emotes in chat, creating a ``crowdspeak'' in which messages from many viewers unite into ``voices'' \cite{ford_chat_2017, trausan-matu_polyphonic_2010} -- particular positions or discussion threads adopted and expressed by many participants. The most straightforward approach to operationalising a voice is to consider a repetition of a word or a phrase to be a voice and the number of repetitions to be its strength\cite{trausan-matu_polyphonic_2010}. Ford et al. \cite{ford_chat_2017} hypothesised that massive chats would have fewer unique voices in comparison to smaller chats; however, this hypothesis was neither confirmed nor rejected.
\section{Background}
Dota 2 is an online multiplayer game in which two teams of five players compete for domination over the game field (map). Confrontation involves elimination of opponent team's characters (heroes) which later respawn to continue the fight. A typical game lasts for approximately 15 to 45 minutes\footnote{https://dota.rgp.io/\#chart-avg-duration}. During esports events, teams confront each other in matches, which consist of 1 to 5 games each.
TI7, like most sports tournaments, has several stages: groups, playoff, and finals (see Table \ref{tab:stages} for details). During group stages, the initial position of the team for the playoff is decided. In the playoff, losing teams are eliminated from the tournament. In the end, two teams compete in the finals.
\section{Data and Methods}
We employed the Chatty\footnote{chatty.github.io/} application to record chat messages of the \textbf{dota2ti} channel on Twitch.tv, which broadcast most of the matches of TI7. In total, we collected more than 3 million chat messages from approximately 180 thousand unique viewers. We complemented these data with information on in-game events, which we obtained from the Open Dota 2 API and the Dotabuff.com database.
\begin{table}[h]
\small
\caption{TI7 stages}
\begin{tabular}{lrrr}
\textbf{Stage} & \textbf{Groups} & \textbf{Playoff} & \textbf{Finals} \\
\toprule
\textbf{Days} & 4 & 5 & 1 \\
\textbf{Games} & 100 & 43 & 7 \\
\textbf{Messages} & 819857 & 1831529 & 381690 \\
\textbf{Mean msg. length (SD)} & 30 (47) & 26 (44) & 32 (47) \\
\textbf{Documents} & 29378 & 36165 & 5672 \\
\textbf{Mean doc. length (SD)} & 867 (726) & 1405 (962) & 2274 (1387) \\
\textbf{Viewers} & 78106 & 128278 & 59070 \\
\textbf{Mean per viewer} & 10 (38) & 14 (56) & 6 (16) \\
\textbf{Share of emotes} & 32\% & 36\% & 28\% \\
\textbf{Share of mentions} & $\sim$1\% & $\sim$1\% & \textless 1\% \\
\textbf{Mean viewers per game (SD)} & 2573 (3444) & 8084 (15225) & 14490 (14919) \\
\textbf{Mean msgs. per game} & 8199 & 42594 & 54527 \\
\bottomrule
\end{tabular}
\label{tab:stages}
\end{table}
\subsection{Structural Topic Modelling}
We applied Structural Topic Modelling (STM) \cite{roberts_stm:_2018,roberts_structural_2013} to analyze the contents of the chat. STM shares the same basic approach with other probabilistic topic models (e.g. Latent Dirichlet Allocation \cite{blei_probabilistic_2012}): it takes a corpus of text documents as an input and produces a given number of topics -- groups of words which occur in text often together.
STM, however, can take into account connection of topics probability with document-level metadata, which in our case was the stage of the tournament the game belongs to: Groups, Playoff, and Finals.
Chat messages on Twitch.tv are very short, often consisting of one or several words, emotes or shortcuts, which is not suitable for probabilistic topic modelling. We concatenated messages into documents, each covering a 7 second time window (mean = 7.99 messages per second, SD = 7, max = 115).
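A minimal sketch of this preprocessing step is shown below; the column names, timestamps, and messages are illustrative stand-ins rather than our actual data.
\begin{verbatim}
# Minimal sketch: concatenating chat messages into 7-second "documents".
import pandas as pd

# Assumed layout: one row per chat message with a timestamp and its text.
chat = pd.DataFrame({
    "timestamp": pd.to_datetime(
        ["2017-08-07 12:00:01", "2017-08-07 12:00:03",
         "2017-08-07 12:00:09", "2017-08-07 12:00:11"]),
    "message": ["LUL", "gg ez", "LETS GO LIQUID", "PogChamp"],
})

# Group messages into non-overlapping 7-second windows and join their text.
documents = (
    chat.set_index("timestamp")
        .resample("7s")["message"]
        .apply(" ".join)
        .reset_index(name="document")
)
print(documents)
\end{verbatim}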
\subsection{Analysis of Event-driven Nature of Communication}
We applied cross-correlation analysis \cite{brockwell_time_2013} to investigate the connection between in-game events and topics prevailing in the chat. In this work, for a given time window, we consider a topic to prevail in case it is the most frequently occurring according to the STM results. For each time window, we also calculated the number of happened in-game events: usually, 1 or 0. We tested the resulting time-series with the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test to ensure they are stationary \cite{kwiatkowski_testing_1992}.
Cross-correlation analysis tests if there is a correlation between two time series with a lag in some range. It produces a vector of correlation coefficients, showing whether there is a tendency for events in one time series to precede, follow, or occur concurrently with the events in another.
As a result, for each of the 100 topics, we computed a vector of correlation coefficients between two time series -- in-game events and topic prevalence -- with lags measured in seven-second time frames. These temporal patterns show whether topics in the chat precede, follow, or prevail in the chat during in-game events.
Having 100 vectors of correlation coefficients, we united them into groups of similar patterns. We used Spearman's correlation as a measure of similarity between vectors and applied hierarchical clustering to produce the groups.
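For concreteness, the following sketch illustrates these steps on synthetic data (stationarity check, one cross-correlation vector per topic, and hierarchical clustering of those vectors); the lag range and the number of clusters are arbitrary choices for the example, not our actual settings.
\begin{verbatim}
# Minimal sketch on synthetic data: KPSS stationarity check, cross-correlation
# between an event series and each topic-prevalence series, and hierarchical
# clustering of the resulting correlation vectors (Spearman similarity).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import spearmanr
from statsmodels.tsa.stattools import ccf, kpss

rng = np.random.default_rng(0)
events = rng.integers(0, 2, 500).astype(float)   # events per 7-second window
topics = rng.random((100, 500))                  # prevalence of 100 topics

stat, pvalue, _, _ = kpss(events, regression="c", nlags="auto")

# One cross-correlation vector (lags 0..20 windows) per topic.
vectors = np.array([ccf(topics[k], events, adjusted=False)[:21]
                    for k in range(100)])

corr, _ = spearmanr(vectors.T)                   # 100 x 100 similarity matrix
dist = 1.0 - corr                                # convert to a distance
condensed = dist[np.triu_indices(100, k=1)]
labels = fcluster(linkage(condensed, method="average"), t=4,
                  criterion="maxclust")          # four groups of topics
print(labels[:10])
\end{verbatim}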
\subsection{Analysis of Voices and Topical Inequality}
To analyse the connection between the context (tournament stage), the number of participants and unique voices\cite{trausan-matu_polyphonic_2010} in chats, we propose an alternative to Ford et al. operationalisation \cite{ford_chat_2017}, treating STM topics as proxies to unique voices and looking at the topical inequality, measured by Gini coefficient\cite{zeileis_ineq:_2014}.
For every game, we calculated the Gini coefficient over the distribution of topics in its chat. The Gini coefficient ranges from 0 to 1, where 0 means absolute equality among topics (all topics are represented equally in the chat) and 1 means that only one topic is present while others are missing. We treat the Gini coefficient as an estimate of the extent to which some voices prevail (are ``stronger''\cite{trausan-matu_polyphonic_2010}) in the chat over others.
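For concreteness, this computation can be sketched as follows (equivalent in spirit to the ineq package\cite{zeileis_ineq:_2014}; the topic shares are invented):
\begin{verbatim}
# Minimal sketch: Gini coefficient over one game's topic distribution.
# The topic shares below are invented for illustration.
import numpy as np

def gini(x):
    """Gini coefficient of a non-negative 1-D array (0 = equal shares)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

topic_shares = np.array([0.58, 0.10, 0.08] + [0.24 / 97] * 97)  # 100 topics
print(f"Gini = {gini(topic_shares):.2f}")
\end{verbatim}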
Our analysis of this inequality is two-fold: (1) expanding Ford et al. approach to the whole text corpora, we look at Gini coefficient distribution in connection with the number of the particular game spectators, (2) we test for significant changes in median Gini index between different tournament stages.
Ford et al. suggested that massive chats would contain fewer unique voices and be less polyphonic. In our case, the number of voices (i.e., the number of topics) is predefined and constant. However, we can estimate and test the relation between the number of viewers in chat and topical inequality.
We applied Spearman's correlation test to test the hypothesis that topics in larger chats would be less equally distributed.
We also assessed differences in Gini coefficient between stages using Kruskal-Wallis test and Pairwise Mann-Whitney U-test to investigate if the topical inequality depends on the context of the chat.
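A sketch of these tests on synthetic Gini values is given below; the numbers are placeholders, and correction for multiple comparisons is omitted.
\begin{verbatim}
# Minimal sketch on synthetic data: the statistical tests used for the
# topical-inequality analysis (values are placeholders, not our results).
import numpy as np
from scipy.stats import kruskal, mannwhitneyu, spearmanr

rng = np.random.default_rng(0)
gini_groups = rng.normal(0.55, 0.05, 100)   # Gini per game, Group stage
gini_playoff = rng.normal(0.60, 0.05, 43)   # Playoff
gini_finals = rng.normal(0.70, 0.05, 7)     # Finals

# Do the stages differ in median Gini coefficient?
print(kruskal(gini_groups, gini_playoff, gini_finals))

# Pairwise follow-up for one pair of stages.
print(mannwhitneyu(gini_groups, gini_playoff, alternative="two-sided"))

# Is a larger chat associated with higher topical inequality?
viewers = rng.integers(100, 10000, 100)
print(spearmanr(viewers, gini_groups))
\end{verbatim}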
\section{Analysis and Results}
\subsection{Event-driven Nature of Communication}
After clustering topics according to their temporal patterns, we obtained four groups of topics, which we labelled based on their contents: \textbf{Copypastas and Complains}, \textbf{Emotional Response}, \textbf{Game and Stream Content}, and \textbf{Professional Scene}.
\begin{figure}
\centering
\includegraphics[width=0.46\textwidth]{clusters}
\caption{Temporal patterns of groups of topics}
\label{fig:clusters}
\end{figure}
Association between these clusters and the topic model captures the differences in a context for similar topics and particular tokens (words, emotes), which is an especially fortunate feature for an emote-based contextually rich communication of streaming chats.
\paragraph{Boredom and Frustration.}
When nothing happens in the game, viewers are bored and convey this boredom and frustration in various ways. They send specific emotes (e.g. \textit{ResidentSleeper}) or even start ``copypasta'' cascades by copy-pasting certain emotes and messages.
When no in-game events are happening, topics of this cluster prevail in the chat. As tension builds and viewers anticipate interesting events, they stop sending boredom-related messages. During and after the event, the level of these topics in the chat remains low.
\paragraph{Emotional Response.}
This cluster of topics represents spectators' responses to in-game events: the death of a character, the destruction of a building, or a cameraman missing an in-game event, for example. Messages often contain only one word or emote. Viewers write ``gg'' (abbreviation of ``good game'') or ``ez'' (shorthand for ``easy'') at the end of a match, ``322'' (a Dota 2 meme\footnote{http://knowyourmeme.com/memes/322}) to mock a player or team for their poor performance, and the emotes ``\textit{pogchamp}'' (glory) and ``\textit{kreygazm}'' (excitement) to convey their feelings about what is happening in the game.
These messages appear and fill the chat soon before an in-game event and disappear shortly after, as viewers anticipate something happening and then react loudly.
\paragraph{Game and Stream Content.}
Topics in this cluster reflect viewers' reactions to whatever is happening on screen at the moment. They discuss and cheer teams and players. Even though players cannot perceive the audience, viewers still address their messages to them: ``BudStar BudStar BudStar LETS GO LIQUID BudStar BudStar BudStar''.
Topics of this cluster appear in the chat mostly after in-game events. In general, spectators comment and discuss stream content during the whole stream except for moments when in-game situations grab their attention.
\paragraph{Professional Scene.}
Topics of this cluster do not significantly relate to in-game events. Viewers cheer professional players and teams, discuss their in-game behaviour and even past incidents involving players almost all the time, and we did not notice any temporal pattern associated with these topics.
Besides those listed, there are, of course, other, less loud topics in the chat. They are included in the aforementioned four groups. Thus, our interpretation of each group is based on the prevailing content of its topics and does not necessarily take into account all of the variety of themes and discussion objects that can be found in the chat.
\subsection{Voices and Topical Inequality}
The finals of any sports tournament attract lots of people who come to cheer their favourite team, and TI7 was not an exception. Kruskal-Wallis test showed that there is a significant difference ($H = 32.2, p < 0.001$) in Gini coefficient among stages. Using Pairwise Mann-Whitney U-test, we found that the stages differ significantly (See Table \ref{tab:topic53}) and Gini coefficient increases with the tournament's progress (See Fig. \ref{fig:topic53}).
Further exploration showed that 58\% of messages during the Finals belong to a single topic (topic 53), which is dedicated to cheering for one of the finals participants (See Fig.\ref{fig:letsgo_liquid}). Closer to the final game of the tournament, cheering increases, displacing all other voices and discussion threads.
\begin{figure}
\centering
\includegraphics[width=0.46\textwidth]{topic53}
\caption{Gini coefficient during different stages of TI7}
\label{fig:topic53}
\end{figure}
When we removed topic 53 from calculations, we could no longer claim significant differences between median Gini coefficients for Finals and other stages, while the difference between Groups and Playoff remains significant (see Fig. \ref{fig:topic53}).
\begin{table}[h]
\caption{Pairwise comparison of Gini coefficient of games on different stages (Mann-Whitney U-test)}
\small
\begin{tabular}{lcc}
& \textbf{(a) Topic 53 Included} & \textbf{(b) Topic 53 Excluded} \\
\textbf{Groups - Playoff} & $p < 0.001$ & $p < 0.001$ \\
\textbf{Groups - Finals} & $p < 0.001$ & $p = 0.55$ \\
\textbf{Playoff - Finals} & $p < 0.001$ & $p = 0.16$ \\
\end{tabular}
\label{tab:topic53}
\end{table}
Thus, during the Finals, a single topic was dominating the chat, reducing unique voices to variations of chanting for the favourite team.
\begin{figure}
\centering
\includegraphics[width=0.46\textwidth]{letsgo_liquid}
\caption{Cheering for Team Liquid during Finals}
\label{fig:letsgo_liquid}
\end{figure}
While the limited number of games in the later stages of TI7 does not allow us to explore the relationship between topical inequality and chat size there, we explored this relationship for the Group stage, performing Spearman's correlation test between the number of unique participants in each game and the Gini coefficient for topics in that game. The test showed a significant positive ($\rho = 0.47, p < 0.001$) correlation between the size of the chat and the Gini coefficient (see Fig. \ref{fig:groups_corr}). Thus, in smaller chats topics are distributed more equally than in larger chats.
\begin{figure}
\centering
\includegraphics[width=0.46\textwidth]{groups_corr}
\caption{Gini coefficient and number of participants during group stage}
\label{fig:groups_corr}
\end{figure}
\subsection{Prevalent Topics Over Stages}
To compare stages of TI7 with each other, we compiled a list of prevalent topics which are associated (based on STM model) with the given stage.
The Group stage has a set of 34 prevalent topics (see Fig. \ref{fig:topic_shares}). Viewers mostly discuss broadcast-related issues (yellow) and famous players (green). They copy-paste texts unrelated to the stream content (e.g., ``No job \textit{4Head} No girlfriend \textit{4Head} No friends \textit{4Head} No talents \textit{4Head} Wasting time on Twitch \textit{4Head} Must be me'') (cyan). The expression of emotions is present (brown) but does not stand out from other topics.
During the Playoff (31 prevalent topics), in which teams are eliminated, chat communication becomes more focused on the game. Viewers discuss game elements such as balance or the position of the camera (violet), and actively express emotions (brown).
In the Finals (19 prevalent topics), more than half of all messages expressed support for Team Liquid -- a western team opposing Newbee from China. The emotional response to events is also present (brown), while forms of copypasta other than chanting almost disappear.
\begin{figure}
\centering
\includegraphics[width=0.46\textwidth]{topic_shares.png}
\caption{Topics prevalent in TI7 stages. Size represents the share of messages on the topic}
\label{fig:topic_shares}
\end{figure}
\section{Discussion}
In-game events, the context of the game, and the number of participants in the chat -- all of these factors contribute to the topical structure and contents of the chat during the stream.
Using topic modelling and statistical analysis methods, we reveal some important factors behind viewers' behaviour in the chat. We show that the chat is overall event-driven: in-game events, or the lack of them, define the contents of the strong voices heard in the chat. The context of the game affects the strength of the heard voices: the closer to the Finals, the stronger the cheering gets while other voices fade. The size of the chat contributes to the inequality between voices: larger chats have fewer strong voices while smaller chats are more polyphonic. The crowd adapts its participation practices to the characteristics of the communication flow and context, which is driven by game events. For example, copypasta in the earlier tournament stages can convey boredom and frustration, but when the tournament tension goes up, it is used mostly to cheer the favourite team. Thus, the chat becomes a tribune, an arena, or a sports bar, in which visitors watch the game and engage in discussions or cheer loudly for their favourite team.
We suggest that the experience of spectators is mediated by the same intra-audience effect \cite{cummins_mediated_2017} that emerges during live spectating and is consistent with a sports arena metaphor \cite{hamilton_streaming_2014,ford_chat_2017}. However, we observe that a metaphor of a sports bar is more accurate since players do not receive any feedback from chat participants during the game. While shouts and chants do not reach the addressee, spectators still find these responses important \cite{ford_chat_2017}, and thus the metaphor can lead to new design ideas.
\section{Design Implications}
The event-driven and sentiment-sharing nature of massive tournament chats suggests rethinking their design goals.
During analysis, we found that chat communication is driven by events represented on the stream. Moreover, there is a relationship between the stage (and the number of participants) and voice-taking practices, which implies that in larger chats the audience becomes more focused on particular topics rather than speaking about everything.
Based on our results, we would like to propose two design implications to enhance chat participants' experience.
\subsection{Highlights}
Due to its event-driven nature, the chat can be a valuable and reliable source of information regarding in-game events: which are most notable, intense, or funny. A record of the game can be automatically cut (see \cite{wang_automatic_2005} for examples) into a short, meaningful clip which would convey all vital information about the game for those viewers who want to re-experience it.
By annotating the record with chat topic and event patterns metadata, we can ensure the preservation of the critical moments of the game which evoke the most vivid reaction in the audience.
\subsection{Visualization of Prevailing Voice}
As tension grows during the tournament and the topical inequality rises in the chat, certain topics become dominant and occupy significant screen space, leaving almost no place for other topics/voices. During the Finals of TI7, this topic was dedicated to cheering for the Team Liquid (see Fig. \ref{fig:letsgo_liquid}).
We suggest providing additional instruments of sentiment-sharing in the form of graphical elements or counters which would indicate the current sentiment of the chat. These indicators will inform users of the loudest voices in the chat, provoking to join one of them and participate in the coherent practice.
\begin{acks}
Acknowledgements are hidden for the review.
\end{acks}
\section{Conclusion}
\label{sec:conclusion}
The proliferation of optical and device tracking systems in the stadia of teams in professional leagues in recent years has produced a large volume of player and ball trajectory data, and this has subsequently enabled a proliferation of research efforts across a variety of research communities.
To date, a diversity of techniques has been brought to bear on a number of problems; however, there is little consensus on the key research questions or on the techniques to use to address them.
Thus, we believe that this survey of the current research questions and techniques is a timely contribution to the field.
This paper surveys the recent research into team sports analysis that is primarily based on spatio-temporal data, and describes a framework in which the various research problems can be categorised.
We believe that the structured approach used in this survey reflects a useful classification for the research questions in this area.
Moreover, the survey should be useful as a concise introduction for researchers who are new to the field.
\section{Data Mining}
\label{sec:data-mining}
The representations and structures described in Sections~\ref{sec:representation}--\ref{sec:networks} are informative in isolation, but may also be the input for more complex algorithmic and probabilistic analysis of team sports.
In this section, we present a task-oriented survey of the techniques that have been applied, and outline the motivations for these tasks.
\subsection{Applying Labels to Events}
Sports analysts are able to make judgments about events and situations that occur in a match, and apply qualitative or quantitative attributes to that event, for example, to rate the riskiness of an attempted shot on goal, or the quality of a pass.
Event labels such as these can be used to measure player and team performance, and are currently obtained manually by video analysis.
Algorithmic approaches to automatically produce such labels may improve the efficiency of the process.
\citeN{horton-2015} presented a classifier that determines the quality of passes made in football matches by applying a label of \emph{good}, \emph{OK} or \emph{bad} to each pass made, and were able to obtain an accuracy rate of \SI{85.8}{\percent}.
The classifier uses features that are derived from the spatial state of the match when the pass occurs, including features derived from the dominant region described in Section~\ref{sec:dominant-region}, which were found to be important features to the classifier.
In research by \citeN{beetz-2009}, the approach was to cluster passes, and to then induce a decision tree on each cluster where the passes were labelled as belonging to the cluster or not.
The feature predicates, learned as splitting rules, in the tree could then be combined to provide a description of the important attributes of a given pass.
\citeN{bialkowski-2014a} used the formation descriptors computed with the algorithm presented in \cite{bialkowski-2014b} (see Section~\ref{sub:identifying-formations}) to examine whether formations could accurately predict the identity of a team.
In the model, a linear discriminant analysis classifier was trained on features describing the team formation, and the learned model was able to obtain an accuracy of \SI{67.23}{\percent} when predicting a team from a league of \num{20} teams.
In \citeN{maheswaran-2012} the authors perform an analysis of various aspects of the rebound in basketball to produce a rebound model.
The rebound is decomposed into three components: the location of the shot attempt; the location where the rebound is taken; and the height of the ball when the rebound is taken.
Using features derived from this model, a binary classifier was trained to predict whether a missed shot would be successfully rebounded by the offensive team.
The model was evaluated and obtained an accuracy rate of \SI{75}{\percent} in experiments on held-out test data.
\subsection{Predicting Future Event Types and Locations}
The ability to predict how play will unfold given the current game-state has been researched extensively, particularly in the computer vision community.
This has an application in automated camera control, where the camera filming a match must automatically control its pitch, tilt and zoom.
The framing of the scene should ideally contain not just the current action, but also the movement of players who can be expected to be involved in future action, and the location where such future action is likely to occur.
\citeN{kim-2010} considered the problem of modelling the evolution of football play from the trajectories of the players, such that the location of the ball at a point in the near future could be predicted.
Player trajectories were used to compute a dense motion field over the entire playing area, and points of convergence within the motion field identified.
The authors suggest that these points of convergence indicate areas where the ball can be expected to move to with high probability, and the experiments described in the paper demonstrate this with several examples.
\citeN{yue-2014} construct a model to predict whether a basketball player will shoot, pass to one of four teammates, or retain possession.
The action a player takes is modelled using a multi-class conditional random field.
The input features to the classifier include latent factors representing player locations which are computed using non-negative matrix factorization, and the experimental results show that these features improve the predictive performance of the classifier.
\citeN{wei-2014} constructed a model to make short-term predictions of which football player will be in possession of the ball after a given interval. They propose a model -- augmented-Hidden Conditional Random Fields (aHCRF) -- that combines local observation features with the hidden layer states to make the final prediction of the player who possesses the ball.
The experimental results show that they are able to design a model that can predict which player will be in possession of the ball after \SI{2}{\second} with \SI{99.25}{\percent} accuracy.
\subsection{Identifying Formations}
\label{sub:identifying-formations}
Sports teams use pre-devised spatial formations as a tactic to achieve a particular objective.
The ability to automatically detect such formations is of interest to sports analysts and coaches.
For example, a coach would be interested in understanding the proportion of time that a team maintains an agreed formation, and also when the team is compelled by the circumstances of the match to alter its formation.
Moreover, when preparing for a future opponent, an understanding of the formation used, and periods where the formation changes would be of interest.
A formation is a positioning of players, relative to the location of other objects, such as the pitch boundaries or goal/basket, the players' team-mates, or the opposition players.
Formations may be spatially anchored, for example a zone defence in basketball where players position themselves in a particular location on the playing area, see Fig.~\ref{fig:bball-zone-defence}.
On the other hand, a formation may vary spatially, but maintain a stable relative orientation between the players in the formation.
For example, the defensive players in a football team will position themselves in a straight line across the pitch, and this line will move as a group around the pitch, depending on the phase of play, Fig.~\ref{fig:football-back-four-defence}.
Finally, a different type of formation is a \emph{man marking} defence, where defending players will align themselves relative to the attacking players that they are marking, Fig.~\ref{fig:bball-man-marking}.
In this scenario, the locations of defenders may vary considerably, relative to their teammates or to the boundaries of the playing area.
\begin{figure}
\subfloat[Zone Defence]{
\includegraphics[width=0.355\linewidth]{figures/bball_zone_defence}
\label{fig:bball-zone-defence}}
\subfloat[Man-marking Defence]{
\includegraphics[width=0.355\linewidth]{figures/bball_man_marking}
\label{fig:bball-man-marking}}
\subfloat[Back-four Defence]{
\includegraphics[width=0.285\linewidth]{figures/football_back_four_defence}
\label{fig:football-back-four-defence}}
\caption{Examples of typical formations used in basketball and football. \protect\subref{fig:bball-zone-defence} The zone defence is spatially anchored to the dimensions of the court and the players positioning is invariant to the phase of play. \protect\subref{fig:bball-man-marking} Defenders who are man-marking will align themselves relative to their opposing player, typically between the attacker and the basket. \protect\subref{fig:football-back-four-defence} The back-four formation in football maintains the alignment of players in the formation, but will move forward and laterally, depending on the phase of play.}
\label{fig:defense-types}
\end{figure}
Moreover, the players that fulfil particular roles within a formation may switch, either explicitly through substitutions or dynamically where players may swap roles for tactical reasons.
The following approaches have been used to determine formations from the low-level trajectory signal.
\citeN{lucey-2013} investigated the assignment of players to roles in field hockey, where teams use a formation of three lines of players arrayed across the field.
At any time $t$ there is a one-to-one assignment of players to roles; however, this assignment may vary from time-step to time-step.
This problem is mathematically equivalent to permuting the player ordering $\vec{p}_t^{\tau}$ using a permutation matrix $\matr{x}_t^{\tau}$ which assigns the players to roles $r_t^{\tau} = \matr{x}_t^{\tau} \vec{p}_t^{\tau}$.
The optimal permutation matrix $\matr{x}_t^{\tau}$ should minimise the total Euclidean distance between the reference location of each role and the location of the player assigned to the role, and can be computed in closed form using the Hungarian algorithm~\cite{kuhn-1955}.
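A minimal sketch of this assignment step for a single time-step is given below; the coordinates are illustrative, and SciPy's linear sum assignment routine is used to compute the optimal permutation.
\begin{verbatim}
# Minimal sketch: optimal player-to-role assignment at one time-step by
# minimising total Euclidean distance.  Coordinates are illustrative only.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

players = np.array([[10.0, 30.0], [25.0, 50.0], [40.0, 20.0]])  # x, y (m)
roles = np.array([[12.0, 28.0], [38.0, 22.0], [24.0, 52.0]])    # references

cost = cdist(players, roles)              # pairwise distances
row, col = linear_sum_assignment(cost)    # optimal permutation
for p, r in zip(row, col):
    print(f"player {p} -> role {r} ({cost[p, r]:.1f} m)")
\end{verbatim}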
\citeN{wei-2013} used this approach as a preprocessing step on trajectory data from football matches, and the computed role locations were subsequently used to temporally segment the matches into game phases.
\citeN{lucey-2014} applied role assignment to basketball players in sequences leading up to three-point shots.
They analysed close to \num{20000} such shots and found that role-swaps involving particular pairs of players in the moments preceding a three-point shot have a significant impact on the probability of the shooter being \emph{open} -- at least six feet away from the nearest marker -- when the shot is made.
Furthermore, \citeN{bialkowski-2014b} observed that the role assignment algorithm presented by \citeN{lucey-2013} required a predefined prototype formation to which the players are assigned.
They consider the problem of simultaneously detecting the reference location of each role in the formation, and assigning players to the formation, using an expectation maximization approach~\cite{dempster-1977}.
The initial role reference locations are determined as the mean position of each player.
The algorithm then uses the Hungarian algorithm to update the role assignment for each player at each time-step, and then the role reference locations are recomputed according to the role assignment.
The new locations are used as input for the next iteration, and process is repeated until convergence.
The learned formations for each team and match were then clustered into six formations, and the authors claim that the clustered formations were consistent with expert knowledge of formations used by football teams.
This was validated experimentally by comparing the computed formation with a formation label assigned by an expert, and an accuracy of \SI{75.33}{\percent} was obtained.
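For illustration, a compact sketch of this alternating procedure on random stand-in trajectories is given below; it follows the description above rather than the authors' implementation, and uses a fixed iteration budget in place of a formal convergence test.
\begin{verbatim}
# Minimal sketch: alternate between assigning players to roles (Hungarian
# step) and re-estimating each role's reference location as the mean of its
# assigned positions.  Positions are random stand-ins for real trajectories.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
T, P = 200, 10                                # time-steps, outfield players
positions = rng.uniform(0, 100, (T, P, 2))    # stand-in (t, player, xy) data

roles = positions.mean(axis=0)                # init: each player's mean spot
for _ in range(20):                           # fixed iteration budget
    assign = np.empty((T, P), dtype=int)
    for t in range(T):                        # assignment step per frame
        _, assign[t] = linear_sum_assignment(cdist(positions[t], roles))
    new_roles = np.array([positions[assign == r].mean(axis=0)
                          for r in range(P)]) # update role reference points
    if np.allclose(new_roles, roles, atol=1e-6):
        break
    roles = new_roles
print(np.round(roles, 1))                     # learned formation (role means)
\end{verbatim}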
In a subsequent paper, \citeN{bialkowski-2014} investigated differences in team strategies when playing home or away, by using formations learned with the role assignment algorithm.
By comparing the mean positions of teams when playing at home with those when playing away, they observed that teams defend more deeply when away from home; in other words, they set their formation closer to the goal they are defending.
A qualitatively different formation is for players to align themselves with the positions of the opposition players, such as \emph{man-marking} defense used in basketball, see Fig.~\ref{fig:bball-man-marking}.
\citeN{franks-2015} defined a model to determine which defender is marking each attacker.
For a given offensive player at a given time, the mean location of the defender is modelled as a convex combination of three locations: the position of the attacker, the location of the ball and the location of the hoop.
The location of a defender, given the observed location of the marked attacker, is modelled as a Gaussian distribution about a mean location.
The matching between defenders and the attacker that they are marking over a sequence of time-steps is modelled using a Hidden Markov Model, ensuring that the marking assignments are temporally smoothed.
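The per-frame part of this model can be sketched as follows; the mixture weights, covariance, and positions are invented, and the hard matching at the end stands in for the HMM smoothing of the original model.
\begin{verbatim}
# Minimal sketch: expected marking position of a defender as a convex
# combination of attacker, ball, and hoop locations, with an isotropic
# Gaussian around it.  All numbers below are assumed for illustration.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.stats import multivariate_normal

hoop = np.array([0.0, 7.5])
ball = np.array([8.0, 10.0])
attackers = np.array([[10.0, 12.0], [6.0, 3.0], [14.0, 7.0]])
defenders = np.array([[8.5, 10.5], [5.0, 4.5], [11.0, 7.5]])
w = np.array([0.6, 0.1, 0.3])                # assumed convex weights

# Expected marking position for each attacker (attacker / ball / hoop mix).
means = w[0] * attackers + w[1] * ball + w[2] * hoop

# Gaussian log-likelihood of each defender-attacker pairing.
loglik = np.array([[multivariate_normal.logpdf(d, mean=m, cov=4.0)
                    for m in means] for d in defenders])

# Hard per-frame matching; the original model smooths this with an HMM.
rows, cols = linear_sum_assignment(-loglik)
for d, a in zip(rows, cols):
    print(f"defender {d} marks attacker {a}")
\end{verbatim}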
\subsection{Identifying Plays and Tactical Group Movement}
Predefined \emph{plays} are used in many team sports to achieve some specific objective.
American football uses highly structured plays where the entire team has a role and their movement is highly choreographed.
On the other hand, plays may also be employed in less structured sports such as football and basketball when the opportunity arises, such as the \emph{pick and roll} in basketball or the \emph{one-two} or \emph{wall pass} in football.
Furthermore, teammates who are familiar with each other's playing style may develop ad-hoc productive interactions that are used repeatedly, a simple example of which is a sequence of passes between a small group of players.
Identification of plays is a time-consuming task that is typically carried out by a video analyst, and thus a system to perform the task automatically would be useful.
An early effort in this direction attempted to recognise predefined plays in American football~\cite{intille-1999}.
They model a play as a choreographed sequence of movements by attacking players, each trying to achieve a local goal, and in combination achieve a group goal for the play.
The approach taken was to encode predefined tactical plays using a temporal structure description language that described a local goal in terms of a sequence of actions carried out by an individual player.
These local goals were identified in the input trajectories using a Bayesian belief network.
A second belief network then identified whether a global goal had been achieved based on the detected local goals -- signifying that the play has occurred.
Two papers by Li~\emph{et al.}\xspace investigated the problem of identifying group motion, in particular the type of offensive plays in American football. \citeN{li-2009} presented the Discriminative Temporal Interaction Network (DTIM) framework to characterise group motion patterns. The DTIM is a temporal interaction matrix that quantifies the interaction between objects at two given points in time.
For each predefined group motion pattern -- a play -- a multi-modal density was learned using a properly defined Riemannian metric, and a MAP classifier was then used to identify the most likely play for a given input set of trajectories.
The experiments demonstrated that the model was able to accurately classify sets of trajectories into five predefined plays, and outperformed several other common classifiers for the task.
This model has the advantage of not requiring an \emph{a priori} definition of each player's movement in the play, as required in \citeN{intille-1999}.
\citeN{li-2010} considered group motion segmentation, where a set of unlabelled input trajectories is segmented into the subset that participated in the group motion and the subset that did not.
The problem was motivated by the example of segmenting a set of trajectories into the set belonging to the offensive team (who participated in the play) and the defensive team (who did not).
The group motion is modelled as a dynamic process driven by a \emph{spatio-temporal driving force} -- a densely distributed motion field over the playing area. The driving force is modelled as a $3 \times 3$ matrix $F(t_0, t_f, x, y)$ such that $X(t_f) = F(t_0, t_f, x, y)X(t_0)$.
Thus, an object located at $X(t_0)$ at time $t_0$ will be driven to $X(t_f)$ at time $t_f$.
Using Lie group theory \cite{rossmann-2002}, a Lie algebraic representation $f$ of $F$ is determined with the property that the space of all $f$s is linear, and thus tractable statistical tools can be developed upon $f$.
A Gaussian mixture model was used to learn a fixed number of driving forces at each time-step, which was then used to segment the trajectories.
There have been a number of diverse efforts to identify commonly occurring sequences of passes in football matches.
In \citeN{borrie-2002}, the pitch is subdivided into zones and sequences of passes are identified by the zones that they start and terminate in.
A possession can thus be represented by a string of codes representing each pass by source and target zone, and with an elapsed time between them.
They introduce \emph{T-pattern} analysis, which is used to compute possessions where the same sequence of passes is made with consistent time intervals between each pass, and frequently occurring patterns could thus be identified.
\citeN{camerino-2012} also used T-pattern analysis on pass strings, however the location of passes was computed relative to the formation of the team in possession, e.g.
between the defense and midfield, or in front of the attacking line.
An algorithm to detect frequently occurring sequences of passes was presented in \citeN{gudmundsson-2014}.
A suffix tree~\cite{weiner-1973} was used as a data structure $D$ to store sequences of passes between individual players.
A query $(\tau, o)$ can then be made against $D$ that returns all permutations of $\tau$ players such that the ball is passed from a player $p_1$ to $p_{\tau}$, via players $p_2, \ldots, p_{\tau - 1}$ at least $o$ times, and thus determine commonly used passing combinations between players.
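For illustration, the same $(\tau, o)$-style query can be answered on short sequences with a simple sliding-window counter in place of a suffix tree, as sketched below with made-up player labels.
\begin{verbatim}
# Minimal sketch: chains of tau consecutive ball carriers occurring at least
# o times.  A suffix tree is more efficient; this counter is a simple
# stand-in for short sequences, with made-up player labels.
from collections import Counter

passes = ["A", "B", "C", "A", "B", "C", "D", "A", "B", "C"]

def frequent_chains(sequence, tau, o):
    """Chains of tau consecutive players occurring at least o times."""
    windows = [tuple(sequence[i:i + tau])
               for i in range(len(sequence) - tau + 1)]
    return {chain: n for chain, n in Counter(windows).items() if n >= o}

print(frequent_chains(passes, tau=3, o=2))   # {('A', 'B', 'C'): 3}
\end{verbatim}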
\citeN{vanhaaren-2015}
considered the problem of finding patterns in offensive football strategies.
The approach taken was to use inductive logic programming to learn a series of clauses describing the pass interactions between players during a possession sequence that concludes with a shot on goal.
The passes were characterised by their location on the pitch, and a hierarchical model was defined to aggregate zones of the pitch into larger regions.
The result is a set of rules, expressed in first-order predicate logic, describing the frequently-occurring interaction sequences.
Research by \citeN{wang-2015} also aimed to detect frequent sequences of passing.
They claim that the task of identifying tactics from pass sequences is analogous to identifying topics from a document corpus, and present the Team Tactic Topic Model (T$^3$M) based on Latent Dirichlet Allocation \cite{blei-2003}.
Passes are represented as a tuple containing an order-pair of the passer and receiver, and a pair of coordinates representing the location where the pass was received.
The T$^3$M is an unsupervised approach for learning common tactics, and the learned tactics are coherent with respect to the location where they occur, and the players involved.
\subsection{Temporally Segmenting the Game}
Segmenting a match into phases based on a particular set of criteria is a common task in sports analysis, as it facilitates the retrieval of important phases for further analysis.
The following paragraphs describe approaches that have been applied this problem for various types of criteria.
Hervieu~\emph{et al.}\xspace~\shortcite{hervieu-2009,hervieu-2011} present a framework for labelling phases within a handball match from a set of predefined labels representing common attacking and defensive tactics.
The model is based on a hierarchical parallel semi-Markov model (HPaSMM) and is intended to model the temporal causalities implicit in player trajectories.
In other words, modelling the fact that one player's movement may cause another player to subsequently alter their movement.
The upper level of the hierarchical model is a semi-Markov model with a state for each of the defined phase labels, and within each state the lower level is a parallel hidden Markov model for each trajectory. The duration of time spent in each upper level state is modelled using a Gaussian mixture model.
In the experiments, the model was applied to a small dataset of handball match trajectories from the $2006$ Olympic Games final, and resulted in an accuracy of \SI{92}{\percent} at each time-step, compared to the ground truth provided by an expert analyst.
The model exactly predicted the sequence of states, and the misclassifications were all the result of time-lags when transitioning from one state to the subsequent state.
\citeN{Perse2009a} investigated segmentation of basketball matches.
A framework with two components was used, the first segmented the match duration into sequences of offensive, defensive or time-out phases.
The second component identified basic activities in the sequence by matching to a library of predefined activities, and the sequences of activities were then matched with predefined templates that encoded known basketball plays.
\citeN{wei-2013} considered the problem of automatically segmenting football matches into distinct game phases that were classified according to a two-level hierarchy, using a decision forest classifier.
At the top level, phases were classed as being \emph{in-play} or a \emph{stoppage}. \emph{In-play} phases were separated into highlights or non-highlights; and \emph{stoppages} were classified by the reason for the stoppage: \emph{out for corner}, \emph{out for throw-in}, \emph{foul} or \emph{goal}.
The classified sequences were subsequently clustered to find a team's most probable method of scoring and of conceding goals.
In a pair of papers by Bourbousson~\emph{et al.}\xspace~\shortcite{bourbousson-2010a,bourbousson-2010b}, the spatial dynamics in basketball was examined using relative-phase analysis. In \citeN{bourbousson-2010a}, the spatial relation between dyads of an attacking player and their marker were analysed.
In \citeN{bourbousson-2010b}, the pairwise relation between the centroids of the two teams was used, along with a \emph{stretch index} that measured the aggregate distance between players and their team's centroid.
A Hilbert transformation was used to compute the relative phase in the $x$ and $y$ direction of the pairs of metrics.
Experimental results showed a strong in-phase relation between the various pairs of metrics in the matches analysed, suggesting individual players and also teams move synchronously.
The authors suggest that the spatial relations between the pairs are consistent with their prior knowledge of basketball tactics.
\citeN{frencken-2011} performed a similar analysis of four-a-side football matches. They used the centroid and the convex hull induced by the positions of the players in a team to compute metrics, for example the distance in the $x$ and $y$ direction of the centroid, and the surface area of the convex hull.
The synchronized measurements for the two teams were modelled as coupled oscillators, using the HKB-model~\cite{haken-1985}.
Their hypothesis was that the measurements would exhibit in-phase and anti-phase coupling sequences, and that the anti-phase sequences would denote game-phases of interest.
In particular, the authors claim that there is a strong linear relationship between the $x$-direction of the centroid of the two teams, and that phases where the centroid's $x$-directions cross are indicative of unstable situations that are conducive to scoring opportunities.
They note that such a crossing occurs in the build up to goals in about half the examples.
\begin{open}
Coaches and analysts are often interested in how the \emph{intensity} of a match varies over time, as periods of high intensity tend to present more opportunities and threats.
It is an interesting open problem to determine if it is possible to compute a measure of intensity from spatio-temporal data, and thus be able to determine high-intensity periods.
\end{open}
\subsection{Movement Models and Dominant Regions}
\label{sec:dominant-region}
A team's ability to control space is considered a key factor in the team's performance, and was one of the first research areas in which computational tools were developed. Intuitively a player dominates an area if he can reach every point in that area before anyone else (see Definition~\ref{def:DR}). An early algorithmic attempt to develop a computational tool for this type of analysis was presented by \citeN{taki-1996}, which defined the \emph{Minimum Moving Time Pattern} -- subsequently renamed the \emph{Motion Model} -- and the \emph{Dominant Region}.
\subsubsection{Motion Model} \label{sssec:MM}
The motion model presented by \citeN{taki-1996} is simple and intuitive: it is a linear interpolation of the acceleration model.
It assumes that potential acceleration is the same in all directions when the player is standing still or moving very slowly. As speed increases it becomes more difficult to accelerate in the direction of the movement.
However, their model did not account for deceleration and hence is only accurate over short distances.
\citeN{Fujimura2005} presented a more realistic motion model; in particular, they incorporated a resistive force that decreases the acceleration.
The maximum speed of a player is bounded, and based on this assumption, \citeN{Fujimura2005} formulated the following equation of motion:
\begin{equation} \label{eq:MM}
m \frac{d}{dt} v = F-kv,
\end{equation}
where $m$ is the mass, $F$ is the maximum driving force, $k$ is the resistive coefficient, and $v$ is the velocity. The solution of the equation is:
$$v = \frac {F}{k} - (\frac {F}{k}-v_0) \cdot \exp (-\frac {k}{m} t),$$
where $v_0$ is the velocity at time $t=0$. If the maximum speed $v_{\max}=F/k$ and the magnitude of the resistance $\alpha=k/m$ are known, then the motion model is fixed. To obtain $\alpha$ and $v_{\max}$, \citeN{Fujimura2005} studied players' movement on video and empirically estimated $\alpha$ to be $1.3$ and $v_{\max}$ as $7.8$m/s.
This is then generalised to two dimensions as follows:
$$m \frac{d}{dt} \vec{v} = \vec{F} -k \vec{v}.$$
Solving the equation, we find that the set of points that a player starting at position $x_0$ with velocity $\vec{v_0}$ can reach within time $t$ forms the circular region centred at
$$x_0+\frac{1-e^{-\alpha t}}{\alpha} \cdot \vec{v_0} \quad \text{with radius}
\quad v_{\max} \cdot \frac{1-e^{-\alpha t}}{\alpha}.$$
They compared this model empirically and observed that the model yields a good approximation of actual human movement, but they stated that a detailed analysis is a topic for future research.
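A direct transcription of these closed-form expressions is sketched below, using the parameter values estimated by \citeN{Fujimura2005}; it is intended only as an illustration of the model.
\begin{verbatim}
# Minimal sketch of the reachable region under this motion model: after time
# t, a player starting at x0 with velocity v0 can reach a disc whose centre
# and radius follow the closed-form expressions above.
import numpy as np

ALPHA = 1.3   # resistance magnitude k/m (1/s), as estimated in the paper
V_MAX = 7.8   # maximum speed F/k (m/s), as estimated in the paper

def reachable_disc(x0, v0, t, alpha=ALPHA, v_max=V_MAX):
    """Centre and radius of the region reachable within t seconds."""
    x0, v0 = np.asarray(x0, float), np.asarray(v0, float)
    decay = (1.0 - np.exp(-alpha * t)) / alpha
    return x0 + decay * v0, v_max * decay

centre, radius = reachable_disc(x0=[50.0, 30.0], v0=[4.0, 0.0], t=1.0)
print(centre, round(radius, 2))
\end{verbatim}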
A different model was used in a recent paper by \citeN{cervone-2014a} with the aim to predict player movement in basketball. They present what they call a micro-transition model. The micro-transition model describes the player movement during a single possession of the ball.
Separate models are then used for defense and attack. Let the location of an attacking player $\ell$ at time $t$ be $(x^{\ell}(t),y^{\ell}(t))$. Next they model the movement in the $x$ and $y$ coordinates at time $(t+\eps)$ using
\begin{equation} \label{eqn:motion}
x^{\ell}(t+\eps)=x^{\ell}(t)+\alpha^{\ell}_x[x^{\ell}(t)-x^{\ell}(t-\eps)]+ \eta_x^{\ell}(t),
\end{equation}
and analogously for $y^{\ell}(t+\eps)$.
This expression derives from a Taylor series expansion of the function for determining the ball-carrier's position such that $\alpha^{\ell}_x[x^{\ell}(t)-x^{\ell}(t-\eps)] \approx \eps x^{\ell}(t)$, and $\eta^{\ell}_x(t)$ represents the contribution of higher order derivatives modelling accelerations and jerks.
When a player receives the ball outside the three-point line, the most common movement is to accelerate towards the basket.
On the other hand, a player will decelerate when closer to the basket.
Players will also accelerate away from the boundary of the court as they approach it. To capture this behaviour the authors suggest mapping a player's location to the additive term $\eta_x^{\ell}(t)$ in~\eqref{eqn:motion}.
The position of the five defenders are easier to model, conditioned on the evolution of the attack's positions, see \citeN{cervone-2014a} for details.
Next we consider how the motion models have been used to develop other tools.
\subsubsection{Dominant Regions} \label{sssec:DR}
The original paper by~\citeN{taki-1996} defined the dominant region as:
\begin{definition} \label{def:DR}
The \emph{dominant region} of a player $p$ is the region of the playing area where $p$ can arrive before any other player.
\end{definition}
Consequently the subdivision induced by the dominant regions for all players will partition the playing area into cells.
In a very simple model where acceleration is not considered, the dominant region is equivalent to the Voronoi region and the subdivision can be efficiently computed~\cite{fortune-1987}. However, for more elaborate motion models, such as the ones described in Section~\ref{sssec:MM}, the distance function is more complex.
For some motion models the dominant region may not be a connected area~\cite{taki-2000};
an example is shown in Fig.~\ref{fig:MM}a. A standard approach used to compute the subdivision for a complex distance function is to compute the intersection of surfaces in three dimensions, as shown in Fig.~\ref{fig:MM}b. However, this is a complex task and time-consuming for non-trivial motion models. Instead approximation algorithms have been considered in the literature.
\begin{figure}
\begin{center}
\includegraphics[width=.8\textwidth]{figures/DR-examples}
\end{center}
\caption{(a) Showing the dominant region for two players. The left player is moving to the right with high speed and the right player is standing still. Using the motion models discussed in Section~\ref{sssec:MM} the resulting dominant region for a single player might not be connected. (b) A standard approach used in computational geometry to subdivide the plane is to compute the projection of the intersection of surfaces in three dimensions onto the plane.}
\label{fig:MM}
\end{figure}
Taki and Hasegawa~\shortcite{taki1999,taki-2000} implemented algorithms to compute dominant regions, albeit using a simple motion model. Instead of computing the exact subdivision they considered the $640\times480$ pixels that at that time formed a computer screen and for each pixel they computed the player that could reach that pixel first, hence, visualizing the dominant regions. The same algorithm for computing the dominant region was used by~\citeN{Fujimura2005}, although they used a more realistic motion model, see Section~\ref{sssec:MM}.
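A minimal sketch of this grid-based evaluation is given below; for brevity, arrival time is approximated by straight-line distance divided by a common maximum speed, although any of the motion models above could be substituted.
\begin{verbatim}
# Minimal sketch of the pixel-grid approach: for every grid cell, find the
# player with the smallest arrival time.  Arrival time here is approximated
# by distance / maximum speed; positions and pitch size are illustrative.
import numpy as np

V_MAX = 7.8                                   # assumed common maximum speed
players = np.array([[20.0, 30.0], [60.0, 40.0], [80.0, 10.0]])  # x, y (m)

# 640 x 480 grid over a 105 m x 68 m pitch.
xs, ys = np.meshgrid(np.linspace(0, 105, 640), np.linspace(0, 68, 480))
grid = np.stack([xs, ys], axis=-1)            # shape (480, 640, 2)

# Arrival time of each player at each cell, shape (n_players, 480, 640).
times = np.linalg.norm(grid[None] - players[:, None, None], axis=-1) / V_MAX
dominant = times.argmin(axis=0)               # index of the fastest player
print(np.bincount(dominant.ravel()))          # number of cells per player
\end{verbatim}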
However, the above algorithms were shown to be slow in practice; for example, preliminary experiments by \citeN{nakanishi-2009} showed that the computation requires \SIrange{10}{40}{\second} for a $610\times420$ grid.
required for every player to get to every point,
\citeN{nakanishi-2009} used a so-called \emph{reachable polygonal region} (RPR). The RPR of a player $p$ given time $t$ is the region that $p$ can reach within time $t$. An advantage of using the RPR for computing dominant regions is that more complex motion models can be used by simply drawing the RPR for different values of $t$. They presented the following high-level algorithm. Given a sequence of time-steps $t_i$, $1\leq i \leq k$, compute the RPRs for each player and each time-step.
The algorithm then iterates through the sequence of time-steps and for each pair of players, the \emph{partial dominant regions} are constructed from the RPRs.
The partial dominant regions are then combined with the dominant regions computed in the previous time-step to form new dominant regions.
Assuming that the RPR is a convex area for any $p$ and any $t$, Nakanishi~\emph{et al.}\xspace claim a factor of $1000$ improvement in computation time at the cost of roughly a $10\%$ drop in accuracy.
\begin{comment}
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{PartialRPR.eps}
\caption{(a) Two RPRs intersecting. (b) The symmetric difference between two RPRs. (c) The partial RPRs used in the algorithm by~\protect\cite{nakanishi-2009}.}
\label{fig:partial-RPR}
\end{center}
\end{figure}
\end{comment}
\citeN{gudmundsson-2014} used RPRs induced from real trajectory data. They also presented an algorithm for constructing an approximate dominant region subdivision, which is superficially similar to the algorithm by \cite{nakanishi-2009}. However, instead of computing partial dominant regions for each pair of players at each time-step, an approximate bisector is constructed for every pair of players. An example of an approximate bisector between two players is shown in Fig.~\ref{fig:DiscBis}a, and in Fig.~\ref{fig:DiscBis}b the final subdivision generated by the algorithm in \citeN{gudmundsson-2014} is depicted.
\begin{comment}
In brief, the algorithm has four steps. First, compute the RPRs for each player and each time-step. Second, for every pairwise combination of players, the intersection points are determined between the RPRs for each time-step. In the third step, the intersection points are used to produce an approximate bisector between each pair of players. This is done using a modified version of Kruskal's minimum spanning tree algorithm \citeN{Kruskal1956}, constrained so that the degree of every vertex is at most two, and thus the output is a set of one or more disconnected paths that span the intersection points, as shown in Fig.~\ref{fig:DiscBis}a. The last step constructs the smallest enclosing polygon around each player from the approximate bisectors, and this is the player's dominant region. An example of the output is shown in Fig.~\ref{fig:DiscBis}b.
\end{comment}
\begin{figure}
\subfloat[~]{
\includegraphics[width=0.40\linewidth]{figures/ellipse_boundary_moving}
\label{fig:DiscBis:bisector}}
\subfloat[~]{
\includegraphics[width=0.59\linewidth]{figures/football_dominant_region}
\label{fig:DiscBis:dominant-region}}
\caption{\protect\subref{fig:DiscBis:bisector} An approximate bisector between two players using the intersection points of the RPRs. \protect\subref{fig:DiscBis:dominant-region} An example of the approximate dominant region subdivision by \protect\citeN{gudmundsson-2014}.}
\label{fig:DiscBis}
\end{figure}
A closer study of a player's dominant region was performed by~\citeN{fonseca-2012} in an attempt to
describe the spatial interaction between players.
They considered two variables denoting the
smallest distance between two teammates and the size of the dominant region.
They observed that the individual dominant regions
seem to be larger for the attacking team.
They also found that for the defending team the two measures were more irregular, which indicates that their movement was more unpredictable than the movement of the attacking team.
According to the authors, the player and team dominant regions
detect certain match events
such as ``when the ball is received by an attacker inside the defensive structure, revealing behavioural patterns that may be used to explain the performance outcome."
\citeN{ueda2014} compared the team-area and the dominant region (within the team-area) during offensive and defensive phases.
The \emph{team area} is defined as the smallest enclosing orthogonal box containing all the field players of the defending team.
The results seem to show that there exists a correlation between the ratio of the dominant region to team area, and the performance of the team's offence and defence.
Dominant regions of successful attacks were thinner than those for unsuccessful attacks that broke down with a turnover event located near the centre of the playing area.
The conclusion was that the dominant region is closely connected to the offensive performance, hence, perhaps it is possible to evaluate the performance of a group of players using the dominant region.
\begin{open}
The function modelling player motion used in dominant region computations has often been simple for reasons of tractability or convenience.
Factors such as the physiological constraints of the players and \emph{a priori} momentum have been ignored.
A motion function that faithfully models player movement and is tractable for computation is an open problem.
\end{open}
\subsubsection{Further Applications}
The dominant region is a fundamental structure that has been shown to support several other interesting measures, which are discussed next.
\begin{enumerate}
\item {\bf (Weighted) Area of team dominant region.} \citeN{taki-1996} defined a \emph{team dominant region} as the union of dominant regions of all the players in the team.
Variations in the size of the team dominant region were initially regarded by \cite{taki-1996} as a strong indication of the performance of the team.
However, \citeN{Fujimura2005} argued that the size of a dominant region does not capture the contribution of a player. Instead they proposed using a \emph{weighted} dominant region, by either weighting with respect to the distance to the goal, or with respect to the distance to the ball.
They argued that both these approaches better
model the contribution of a player compared to simply using the size of the dominant region.
However, no further analysis was performed.
\citeN{Fujimura2005} also suggested that the weighted area of dominant regions can be used to evaluate attacking teamwork: tracking the weighted dominant region (``defensive power") over time for the defender marking each attacker will indicate the attacker's contribution to the team.
\item {\bf Passing evaluation.}
A player's \emph{passable area} is the region of the playing area where the player can potentially receive a pass. The size and the shape of the passable area depends on the motion model, and the positions of the ball and the other players. Clearly this is also closely related to the notion of dominant region.
\begin{definition} \cite{gudmundsson-2014}
A player $p$ is open for a pass if there is some direction and (reasonable) speed that the ball can be passed, such that $p$ can intercept the ball before all other players.
\end{definition}
\citeN{taki-2000} further classified a pass as ``successful" if the first player that can receive the pass is a player from the same team.
This model was extended and implemented by \citeN{Fujimura2005}, as follows. They empirically developed a motion model for the ball, following formula~(\ref{eq:MM}) in Section~\ref{sssec:MM}.
They then defined the \emph{receivable pass variation} (RPV) for each player to be the number of passes the player can receive among a set of sampled passes.
They experimentally sampled \num{54000} passes by discretizing $[0,2\pi)$ into \num{360} unit directions and speeds between \SI{1}{\km\per\hour} and \SI{150}{\km\per\hour} into \num{150} units.
\citeN{gudmundsson-2014} also used a discretization approach, but viewed the problem slightly differently.
Given the positions, speeds and direction of motion of the players, they approximated who is open for a pass for each discretized ball speed.
For each fixed passing speed they built RPRs for each player and the ball over a set of discrete time-steps. Then
an approximate bisector is computed between the ball and each player. Combining the approximate bisectors for all the players,
a piecewise linear function $f$ is generated over the domain $[0,2\pi)$. The segments of the bisectors that lie on the lower envelope of $f$ map to intervals on the domain where the player associated with the bisector is open for a pass. An example of the output is shown in Fig.~\protect\ref{fig:football-passability} for a fixed ball speed.
\begin{comment}
Then for each fixed passing speed they build RPRs for each player and the ball over a set of discrete time-steps $t$.
To determine the points in the plane where a player $p$ can first intercept the ball if passed at a given speed, the intersection points of the passable region boundary and the ball's potential locations are computed for each discrete time $t$.
They then compute an approximate bisector $f_p$ between the ball and the player $p$, that is, if the ball is kicked in a direction with the fixed speed this boundary describes the first possible time of contact between the ball and the player given the direction of the ball.
Combining the bisectors for all players other than the passer, a piecewise linear function $f$ is generated over the domain $[0,2\pi)$.
The segments of the bisectors that lie on the lower envelope of $f$ map to intervals on the domain where the player associated with the bisector is open for a pass.
\end{comment}
\begin{figure}
\begin{center}
\includegraphics[width=0.8\linewidth]{figures/football_passability}
\end{center}
\caption{Available receivers of a pass by player Red~$2$ where the velocity of the ball is \protect\SI{20}{\meter\per\second}. Each sector represents an interval on $[0,2\pi)$ that indicates which player may receive the pass. A player may be able to receive a pass in more than one interval, for example Blue~$7$.}
\label{fig:football-passability}
\end{figure}
\begin{open}
The existing tools for determining whether a player is open to receive a pass only consider passes made along the shortest path between passer and receiver and where the ball is moving at constant velocity.
The development of a more realistic model that allows for aerial passes, effects of ball-spin, and variable velocities is an interesting research question.
\end{open}
\item {\bf Spatial Pressure.} An important tactical measure is the amount of spatial pressure the team exerts on the opposition.
Typically when a team believes that the opponent is weak at retaining possession of the ball, then a high pressure tactic is used.
\citeN{taki-1996} defined spatial pressure for a player $p$ as:
$$m\cdot (1-P) + (1-m)\cdot (1-d/D),$$
where, for a fixed radius $r$, $P$ denotes the fraction of the disk of radius $r$ with center at $p$ that lies within the dominant region of opposing players, $d$ is the distance between $p$ and the ball, $D$ is the maximal distance between $p$ and any point on the playing area, and $m$ is a preset weight.
This definition was also used by \citeN{horton-2015}. See Fig.~\ref{fig:Applications} for two examples of spatial pressure.
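A minimal code sketch of this pressure measure is given after this list.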
\begin{open}
The definition of spatial pressure in \citeN{taki-1996} is simple and does not model effects such as the direction the player is facing or the direction of pressuring opponents, both of which would intuitively be factors that ought to be considered.
Can a model that incorporates these factors be devised and experimentally tested?
\end{open}
\item {\bf Rebounding.}
Traditionally a player's rebounding performance has been measured as the average number of rebounds per game. \citeN{maheswaran-2014} presented a model to quantify the potential to rebound unsuccessful shots in basketball in more detail. In simplified terms, the model considers three phases. The first phase is the \emph{position} of the players when the shot is made. From the time that the ball is released until it hits the rim, the players will try
to move into a better position
-- the \emph{crash} phase.
After the crash phase the players have the chance to make the rebound.
The proficiency of a player in rebounding is then measured by the \emph{conversion} -- the third phase.
Both the positioning phase and the crash phase make use of the dominant region (Voronoi diagram) to value the position of the player, i.e., they
compute a ``real estate" value of the dominant region of each player both when the shot is made, and when the shot hits the rim.
These values, together with the conversion, are combined into a \emph{rebounding} value.
\begin{comment}
Two of these phases use the dominant region: the positioning and the crash phase. For the positioning they first address how to value the initial position of the players when the shot is taken. They give a value to the ``real estate" that each player owns at the time of the shot. This breaks down into two questions: (i) What is the real estate for each player? (ii) What is it worth? For the first question, they use the dominant region. To address the second question, they condition on where the shot was taken and calculate a spatial probability distribution of where all rebounds for similar shots were obtained. To assign each player a value for initial positioning, integrate the spatial distribution over the dominant region for that player. This yields the likelihood of that player getting the rebound if no one moved when the shot was taken and they controlled their cell.
For the crash phase they look at the trajectory of the ball and calculate the time when it is closest to the center of the rim. At this point, they reapply the dominant region analysis above and calculate the rebound percentages of each player, i.e., the value of the real estate that each player has at the time the ball hits the rim. They then look at the relationship of the difference between the probability at the rim compared to the probability at the shot. Finally, they subtract the value of the regression at the player's initial positioning value from the raw crash delta to form the players Crash value. Intuitively, the value indicates how much more probability is added by this player beyond what a player with similar initial positioning would add. To measure hustle, the difference in rebound success probability between the time the shot was taken, and the time that the ball was available for rebounding was computed, and a regression model fitted. Finally, conversion is the rate at which a player who is able to make the rebound, does actually make it.
\end{comment}
\end{enumerate}
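As a concrete illustration, the following minimal sketch implements the spatial pressure formula stated in the third item above exactly as written; the radius $r$, the weight $m$, the Monte-Carlo estimate of $P$ and the toy dominant-region indicator are all choices made for the example rather than details taken from \citeN{taki-1996}.
\begin{verbatim}
import numpy as np

def spatial_pressure(p, ball, opponent_region, playing_area,
                     r=5.0, m=0.5, samples=2000):
    """Pressure on player p: m * (1 - P) + (1 - m) * (1 - d / D), where P is
    the fraction of the disc of radius r around p lying in the opponents'
    dominant region, d is the player-ball distance and D the maximal distance
    from p to any point of the playing area."""
    rng = np.random.default_rng(0)
    # Monte-Carlo estimate of P from points sampled uniformly in the disc
    angles = rng.uniform(0.0, 2.0 * np.pi, samples)
    radii = r * np.sqrt(rng.uniform(0.0, 1.0, samples))
    pts = np.c_[p[0] + radii * np.cos(angles), p[1] + radii * np.sin(angles)]
    P = np.mean([opponent_region(x, y) for x, y in pts])  # indicator in {0,1}
    d = np.hypot(p[0] - ball[0], p[1] - ball[1])
    corners = [(0.0, 0.0), (playing_area[0], 0.0),
               (0.0, playing_area[1]), playing_area]
    D = max(np.hypot(p[0] - cx, p[1] - cy) for cx, cy in corners)
    return m * (1.0 - P) + (1.0 - m) * (1.0 - d / D)

# toy usage: opponents dominate the right half of a 105 x 68 pitch
print(round(spatial_pressure(p=(60.0, 30.0), ball=(70.0, 30.0),
                             opponent_region=lambda x, y: float(x > 52.5),
                             playing_area=(105.0, 68.0)), 3))
\end{verbatim}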
\begin{figure}
\begin{center}
\includegraphics[width=0.7\textwidth]{figures/Applications}
\caption{Comparing the pressure that the encircled player is under in the two pictures shows that the encircled player in the right figure is under much more pressure.}
\label{fig:Applications}
\end{center}
\end{figure}
\section{Introduction}
\label{sec:introduction}
Team sports are a significant recreational activity in many societies, and attract participants to compete in, watch, and also to capitalise from the sport.
There are several sporting codes that can be classed together as \emph{invasion sports} in that they share a common structure: two teams are competing for possession of a ball (or puck) in a constrained playing area, for a given period of time, and each team has simultaneous objectives of scoring by putting the ball into the opposition's goal, and also defending their goal against attacks by the opposition.
The team that has scored the greatest number of goals at the end of the allotted time is the winner.
Football (soccer), basketball, ice hockey, field hockey, rugby, Australian Rules football, American football, handball, and Lacrosse are all examples of invasion sports.
Teams looking to improve their chances of winning will naturally seek to understand their performance, and also that of their opposition.
Systematic analysis of sports play has been occurring since the $1950$s using manual notation methods~\cite{reep-1968}.
However, human observation can be unreliable -- experimental results in \citeN{franks-1986}
showed that the expert observers' recollection of significant match events
is as low as \SI{42}{\percent} -- and in recent years, automated systems to capture and analyse sports play have proliferated.
Today, there are a number of systems in use that capture spatio-temporal data from team sports during matches.
The adoption of this technology and the availability to researchers of the resulting data varies amongst the different sporting codes and is driven by various factors, particularly commercial and technical.
There is a cost associated with installing and maintaining such systems, and while some leagues mandate that all stadium have systems fitted, in others the individual teams will bear the cost, and thus view the data as commercially sensitive.
Furthermore, the nature of some sports present technical challenges to automated systems, for example, sports such as rugby and American football have frequent collisions that can confound optical systems that rely on edge-detection.
To date, the majority of datasets available for research are sourced from football and basketball, and the research we surveyed reflects this, see Fig.~\ref{fig:references-histogram}.
The National Hockey League intends to install a player tracking system for the $2015\mbox{/}16$ season, and this may precipitate research in ice hockey in coming years~\cite{sportvision}.
\begin{figure}
\begin{center}
\includegraphics[scale=1.0]{figures/references_histogram}
\end{center}
\caption{Spatial sports research papers cited in this survey, by year, $1995$-$2015$ (to date), divided by sporting code. There has been a significant increase in papers published in this area as data has become available for researchers, particularly in football and basketball.}
\label{fig:references-histogram}
\end{figure}
Sports performance is actively researched in a variety of disciplines.
To be explicit, the research that we consider in this survey fulfils three key criteria:
\begin{enumerate}
\item We consider \textbf{team-based invasion sports}.
\item The model used in the research has \textbf{spatio-temporal data} as its primary input.
\item The model performs some \textbf{non-trivial computation} on the spatio-temporal data.
\end{enumerate}
The research covered has come from many research communities, including machine learning, network science, GIS, computational geometry, computer vision, complex systems science, statistics and sports science.
There has been a consequent diversity of methods and models used in the research, and our intention in writing this survey was to provide an overview and framework on the research efforts to date.
Furthermore, the spatio-temporal data extracted from sports has several useful properties that make it convenient for fundamental research. For instance, player trajectories have a small spatial and temporal range and dense sampling rates, and involve a small number of agents (i.e. players) whose interactions are highly cooperative and adversarial and exhibit a latent structure.
As such, we believe that this survey is a timely contribution to this emerging area of research.
This survey contains the following sections. In Section~\ref{sec:representation} we describe the primary types of spatio-temporal data captured from team sports.
We describe the properties of these data and outline the sports from which it is currently available.
Section~\ref{sec:subdivision} describes approaches that have been used to subdivide the playing area into regions that have a particular property.
The playing area may be discretized into a fixed subdivision and the occurrences of some phenomena counted, for instance a player occupying a particular region or a shot at goal occurring from that region, producing an \emph{intensity map} of the playing area.
On the other hand, a subdivision of the playing area based on areas that are dominated by particular players has also been used in several papers.
In Section~\ref{sec:networks}, we survey approaches that represent temporal sequences of events as \emph{networks} and apply network-theoretic measures to them. For example, sequences of passes between players can be represented as a network with players as the vertices and weighted edges for the frequency of passes between pairs of players, and network measures can be computed to quantify the passing performance.
Section~\ref{sec:data-mining} provides a task-oriented survey of the approaches to uncover information inherent in the spatio-temporal data using \emph{data mining} techniques.
Furthermore, several papers define metrics to
measure
the performance of players and teams, and these are discussed in Section~\ref{sec:performance-metrics}.
Finally, in Section~\ref{sec:visualisation}, we detail the research into \emph{visualisation} techniques to succinctly present metrics of sports performance.
\iffalse
The contribution of this survey is to present an overview of the diverse efforts across a number of research communities into the computational analysis of spatio-temporal data captured from team sports.
The structure of this paper serves as a framework that categorises the surveyed papers, and their relationships.
The paper is a useful starting point for researchers who are new to the field, and also provides an overview for all researchers of the questions and directions that have been explored.
We also identify a number of open research questions that may be of interest to researchers in the relevant fields.
\fi
\section{Performance Metrics}
\label{sec:performance-metrics}
Determining the contribution of the offensive and defensive components of team play has been extensively researched, particularly in the case of basketball which has several useful properties in this regard.
For example, a basketball match can be easily segmented into a sequence of \emph{possessions} -- teams average around \num{92} possessions per game~\cite{kubatko-2007} -- most of which end in a shot at goal, which may or may not be successful.
This segmentation naturally supports a variety of offensive and defensive metrics \cite{kubatko-2007}, however the metrics are not spatially informed, and intuitively, spatial factors are significant when quantifying both offensive and defensive performance.
In this section we survey a number of research papers that use spatio-temporal data from basketball matches to produce enhanced performance metrics.
\subsection{Offensive Performance}
Shooting effectiveness is the likelihood that a shot made will be successful, and \emph{effective field goal percentage} (EFG) is a de-facto metric for offensive play in basketball~\cite{kubatko-2007}.
However, as \citeN{chang-2014} observe, this metric confounds the efficiency of the shooter with the difficulty of the shot.
Intuitively, spatial factors such as the location where a shot was attempted from, and the proximity of defenders to the shooter, would have an impact on the difficulty of the shot.
This insight has been the basis of several efforts to produce metrics that provide a more nuanced picture of a player or team's shooting efficiency.
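For reference, EFG is conventionally computed from box-score totals as
$$\mathrm{EFG} = \frac{\mathrm{FGM} + 0.5\cdot\mathrm{3PM}}{\mathrm{FGA}},$$
where $\mathrm{FGM}$, $\mathrm{3PM}$ and $\mathrm{FGA}$ denote field goals made, three-point field goals made and field goals attempted, respectively; the factor $0.5$ credits the additional point value of a successful three-point shot. This is the standard definition, stated here for completeness; the spatially-informed metrics discussed below refine it.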
Early work in this area by \citeN{reich-2006} used shot chart data (a list of shots attempted, detailing the location, time, shooter and outcome of each shot).
The paper contained an in-depth analysis of the shooting performance of a single player -- Sam Cassell of the Minnesota Timberwolves -- over the entire $2003\mbox{/}2004$ season.
A vector of boolean-valued predictor variables was computed for each shot, and linear models fitted for shot frequency, shot location and shot efficiency.
By fitting models on subsets of the predictor variables, the authors analysed the factors that were important in predicting shot frequency, location and efficiency.
\citeN{miller-2014} investigated shooting efficiency by using vectors computed with non-negative matrix factorization to represent spatially distinct shot-types, see Section~\ref{sub:matrix-factorization}.
The shooting factors were used to estimate spatial shooting efficiency surfaces for individual players.
The efficiency surfaces could then be used to compute the probability of a player making a shot conditioned on the location of the shot attempt, resulting in a spatially-varying shooting efficiency model for each individual player.
\citeN{cervone-2014a} present \emph{expected possession value} (EPV), a continuous measure of the expected points that will result from the current possession. EPV is thus analogous to a ``stock ticker'' that provides a valuation of the possession at any point in time during the possession.
The overall framework consists of a macro-transition model that deals with game-state events such as passes, shots and turnovers, and micro-transition model that describes player movement within a phase when a single player is in possession of the ball. Probability distributions, conditioned on the spatial layout of all players and the ball, are learned for the micro- and macro-transition models.
The spatial effects are modelled using non-negative matrix factorization to provide a compact representation that the authors claim has the attributes of being computationally tractable with good predictive performance.
The micro- and macro-transition models are combined in a Markov chain, and from this the expected value of the end-state -- scoring $0$, $2$ or $3$ points -- can be determined at any time during the possession.
Experimental results in the paper show how the EPV metric can support a number of analyses, such as \emph{EPV-Added} which compares an individual player's offensive value with that of a league-average player receiving the ball in the same situation; or \emph{Shot Satisfaction} which quantifies the satisfaction (or regret) of a player's decision to shoot, rather than taking an alternative option such as passing to a teammate.
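The final step of the EPV computation described above, reading the expected end-state value off a Markov chain, can be illustrated with a small sketch. The chain below is a toy example with two coarse transient states and three absorbing outcomes; the transition probabilities are invented for illustration and bear no relation to the learned micro- and macro-transition models of \citeN{cervone-2014a}.
\begin{verbatim}
import numpy as np

# Transient states: 0 = ball at the perimeter, 1 = ball in the paint.
# Absorbing outcomes and their values: turnover (0), 2-pt make (2), 3-pt make (3).
Q = np.array([[0.60, 0.15],        # perimeter -> {perimeter, paint}
              [0.20, 0.40]])       # paint     -> {perimeter, paint}
R = np.array([[0.10, 0.05, 0.10],  # perimeter -> {turnover, 2-pt, 3-pt}
              [0.10, 0.28, 0.02]]) # paint     -> {turnover, 2-pt, 3-pt}
values = np.array([0.0, 2.0, 3.0])

# EPV(s) = sum_s' Q[s, s'] EPV(s') + sum_o R[s, o] value(o),
# which rearranges to the linear system (I - Q) EPV = R @ values.
epv = np.linalg.solve(np.eye(2) - Q, R @ values)
print(dict(perimeter=round(float(epv[0]), 3), paint=round(float(epv[1]), 3)))
\end{verbatim}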
\citeN{chang-2014} introduces another spatially-informed measure of shooting quality in basketball: \emph{Effective Shot Quality} (ESQ).
This metric measures the value of a shot, were it to be taken by the league-average player.
ESQ is computed using a learned least-squares regression function whose input includes spatial factors such as the location of the shot attempt, and the proximity of defenders to the shooter.
Furthermore, the authors introduce EFG+, which is calculated by subtracting ESQ from EFG.
EFG+ is thus an estimate of how well a player shoots relative to expectation, given the spatial conditions under which the shot was taken.
A further metric, \emph{Spatial Shooting Effectiveness}, was presented by \citeN{shortridge-2014}.
Using a subdivision of the court, an empirical Bayesian scoring rate estimator was fitted using the neighbourhood of regions to the shot location.
The spatial shooting effectiveness was computed for each player in each region of the subdivision, and is the difference between the points-per-shot achieved by the player in the region and the expected points-per-shot from the estimator.
In other words, it is the difference between a player's expected and actual shooting efficiency, and thus measures how effective a player is at shooting, relative to the league-average player.
\citeN{lucey-2014a} considered shooting efficiency in football.
They make a similar observation that the location where a shot is taken significantly impacts the likelihood of successfully scoring a goal.
The proposed model uses logistic regression to estimate the probability of a shot succeeding -- the \emph{Expected Goal Value} (EGV).
The input features are based on the proximity of defenders to the shooter and to the path the ball would take to reach the goal; the location of the shooter relative to the lines of players in the defending team's formation; and the location where the shot was taken from.
The model is empirically analysed in several ways.
The number of attempted and successful shots for an entire season is computed for each team in a professional league, and compared to the expected number of goals that the model predicts, given the chances.
The results are generally consistent, and the authors are able to explain away the main outliers.
Furthermore, matches where the winning team has fewer shots at goal are considered individually, and the expected goals under the model are computed.
This is shown to be a better predictor of the actual outcome, suggesting that the winning team was able to produce fewer -- but better -- quality chances.
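As an illustration of this style of model, the sketch below fits a logistic regression to synthetic shot data with hypothetical features (distance to goal, shot angle and nearby defenders); the features, coefficients and data are invented and do not reproduce the feature set or results of \citeN{lucey-2014a}.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
# hypothetical per-shot features: distance (m), angle (rad), defenders within 3 m
X = np.c_[rng.uniform(5, 35, n), rng.uniform(0.1, 1.4, n), rng.integers(0, 4, n)]
# synthetic labels: closer, wider-angle, less-pressured shots score more often
logit = -0.15 * X[:, 0] + 1.5 * X[:, 1] - 0.6 * X[:, 2] + 0.5
y = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, y)
# expected goal value of a single chance = predicted probability of scoring
shot = np.array([[12.0, 0.9, 1]])
print(f"EGV of this chance: {model.predict_proba(shot)[0, 1]:.2f}")
# expected goals over many chances = sum of the per-shot probabilities
print(f"Expected goals over the sample: {model.predict_proba(X)[:, 1].sum():.1f}")
\end{verbatim}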
\subsection{Defensive Performance}
Measures of defensive performance have traditionally been based on summary statistics of \emph{interventions} such as blocks and rebounds in basketball~\cite{kubatko-2007} and tackles and clearances in football.
However, \citeN{goldsberry-2013} observed that, in basketball, a measure of defensive effectiveness ought to consider factors such as the spatial dominance by the defence of areas with high rates of shooting success; the ability of the defence to prevent a shot from even being attempted; and secondary effects in the case of an unsuccessful shot, such as being able to win possession or being well-positioned to defend the subsequent phase.
In order to provide a finer-grained insight into defensive performance, \citeN{goldsberry-2013} presented \emph{spatial splits} that decompose shooting frequency and efficiency into a triple consisting of close-range, mid-range and 3-point-range values.
The offensive half-court was subdivided into three regions, and the shot frequency and efficiency were computed separately for shots originating in each region.
These offensive metrics were then used to produce defensive metrics for the opposing team by comparing the relative changes in the splits for shots that an individual player was defending to the splits for the league-average defender.
An alternate approach to assessing the impact of defenders on shooting frequency and efficiency was taken by Franks~\emph{et al.}\xspace~\shortcite{franks-2015,franks-2015a}.
They proposed a model that quantifies the effectiveness of man-to-man defense in different regions of the court.
The proposed framework includes a model that determines \emph{who is marking whom} by assigning each defender to an attacker.
For each attacker, the canonical position for the defender is computed, based on the relative spatial location of the attacker, the ball and the basket.
A hidden Markov model is used to compute the likelihood of an assignment of defenders to attackers over the course of a possession, trained using the expectation maximization algorithm~\cite{dempster-1977}.
A second component of the model learned spatially coherent shooting type bases using non-negative matrix factorization on a shooting intensity surface fitted using a log-Gaussian Cox process.
By combining the assignment of markers and the shot type bases, the authors were able to investigate the extent to which defenders inhibit (or encourage) shot attempts in different regions of the court, and the degree to which the efficiency of the shooter is affected by the identity of the marker.
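The geometric core of the marking model can be sketched as follows: the canonical defending position is taken to be a convex combination of the attacker, ball and basket locations (the weights below are illustrative placeholders, not the values learned in the cited papers), and a static one-to-one assignment by the Hungarian algorithm stands in for the hidden Markov model over a whole possession.
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def canonical_position(attacker, ball, basket, w=(0.6, 0.25, 0.15)):
    """Expected defender location: convex combination of attacker, ball and
    basket positions (the weights are placeholders, not fitted values)."""
    return np.asarray(w) @ np.array([attacker, ball, basket], dtype=float)

def assign_markers(defenders, attackers, ball, basket):
    """One-to-one marking assignment minimising total distance to the
    canonical positions (a static stand-in for the HMM in the papers)."""
    targets = [canonical_position(a, ball, basket) for a in attackers]
    cost = np.array([[np.linalg.norm(np.array(d) - t) for t in targets]
                     for d in defenders])
    rows, cols = linear_sum_assignment(cost)        # Hungarian algorithm
    return dict(zip(rows.tolist(), cols.tolist()))  # defender -> attacker

print(assign_markers(defenders=[(20, 10), (25, 30), (8, 22)],
                     attackers=[(22, 8), (28, 32), (6, 25)],
                     ball=(22, 8), basket=(0, 25)))
\end{verbatim}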
Another aspect of defensive performance concerns the actions when a shot is unsuccessful, and both the defence and attack will attempt to \emph{rebound} the shot to gain possession.
This was investigated by \citeN{maheswaran-2014} where they deconstructed the rebound into three components: \emph{positioning}; \emph{hustle} and \emph{conversion}, described in Section~\ref{sssec:DR}.
Linear regression was used to compute metrics for player's \emph{hustle} and \emph{conversion}, and experimental results showed that the top-ranked players on these metrics were consistent with expert consensus of top-performing players.
On the other hand, \citeN{wiens-2013} performs a statistical evaluation of the options that players in the offensive team have when a shot is made in basketball.
Players near the basket can either \emph{crash the boards} -- move closer to the basket in anticipation of making a rebound -- or retreat in order to maximise the time to position themselves defensively for the opposition's subsequent attack.
The model used as factors the players' distance to the basket, and proximity of defenders to each attacking player.
The experimental results suggested that teams tended to retreat more than they should, and thus a more aggressive strategy could improve a team's chances of success.
The analysis of defence in football would appear to be a qualitatively different proposition, in particular because scoring chances are much less frequent. To our knowledge, similar types of analysis to those presented above in relation to basketball have not been attempted for football.
\begin{open}
There has been significant research into producing spatially-informed metrics for player and team performance in basketball, however there has been little research in other sports, particularly football.
It is an open research question whether similar spatially-informed sports metrics could be developed for football.
\end{open}
\section{Network tools for team performance analysis}
\label{sec:networks}
Understanding the interaction between players is one of the more important and complex problems in sports science.
Player interaction can give insight into a team's playing style, or be used to assess the importance of individual players to the team.
Capturing the interactions between individuals is a central goal of social network analysis~\cite{Wasserman1997} and techniques developed in this discipline have been applied to the problem of modelling player interactions.
An early attempt to use networks for sports analysis was an entertaining study by \citeN{Gould1979} in which they explored all passes made in the 1977 FA Cup Final between Liverpool and Manchester United.
They studied the simplicial complexes of the passing network and made several interesting observations, including that the Liverpool team had two ``quite disconnected" subsystems and that Kevin Keegan was ``the linchpin of Liverpool".
However, their analysis, while innovative,
did not attract much attention.
\begin{comment}
One of the first modern papers introducing network analysis tools to football was by \citeN{onody-2004} who constructed a graph that modelled all Brazilian football players as vertices and linked two players by an edge if they ever played in the same team. They showed that several metrics on the resulting network follow power laws or exponential distributions. In this section we will specifically survey the analytical tools on individual teams and players.
\end{comment}
In the last decade numerous papers have appeared that apply social network analysis to team sports. Two types of networks have dominated the research literature to date: \emph{passing networks} and \emph{transition networks}.
Passing networks have been the most frequently studied type in the research literature.
To the best of our knowledge, they were first introduced by \citeN{Passos2011}.
A passing network is a graph $G=(V,E)$ where each player is modelled as a vertex and two vertices $v_1$ and $v_2$ in $V$ have a directed edge $e=(v_1,v_2)$ from $v_1$ to $v_2$ with integer weight $w(e)$ such that the player represented by vertex $v_1$ has made $w(e)$ successful passes to the player represented by vertex $v_2$.
A small example of a passing graph is shown in Fig.~\ref{fig:PassNetwork}a.
Passing networks can be constructed directly from \emph{event logs}, defined in Section~\ref{sec:representation}. A temporal sequence of passes made in a match is encoded as a path within the passing network.
A passing network that is extended with outcomes, as illustrated in Fig.~\ref{fig:PassNetwork}b, is then referred to as a \emph{transition network}.
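Constructing a passing network from an event log is straightforward; the following minimal sketch (assuming a hypothetical list of successful passes) accumulates the weighted adjacency structure together with each player's weighted out-degree (passes made) and in-degree (passes received).
\begin{verbatim}
from collections import defaultdict

# hypothetical event log: (passer, receiver) pairs for successful passes only
passes = [("A", "B"), ("B", "C"), ("C", "B"), ("B", "C"), ("C", "D"), ("D", "A")]

# weighted adjacency: weight[(u, v)] = number of successful passes from u to v
weight = defaultdict(int)
for passer, receiver in passes:
    weight[(passer, receiver)] += 1

# weighted out-degree (passes made) and in-degree (passes received) per player
out_deg, in_deg = defaultdict(int), defaultdict(int)
for (u, v), w in weight.items():
    out_deg[u] += w
    in_deg[v] += w

print(dict(weight))                  # e.g. ('B', 'C') -> 2
print(dict(out_deg), dict(in_deg))
\end{verbatim}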
\begin{figure}
\begin{center}
\includegraphics[width=0.85\textwidth]{figures/PassingNetwork}
\end{center}
\caption{(a) A passing network modelling four players $\{A,B,C,D\}$ and the passes between the players. (b) A transition network is a passing network extended with outcomes. For example, player $C$ made a shot on goal twice and lost possession once.}
\label{fig:PassNetwork}
\end{figure}
Many properties of passing networks have been studied, among them \emph{density}, \emph{heterogeneity}, \emph{entropy}, and \emph{Nash equilibria}. However, the most studied measurement is \emph{centrality}.
We begin by considering centrality and its variants, and then we briefly consider some of the other measures discussed in the literature.
\subsection{Centrality}
Centrality measures were introduced in an attempt to determine the key nodes in a network, for example, to identify the most popular persons in a social network or super-spreaders of a disease~\cite{Newman2010}.
In team sports the aim of using centrality measurements is generally to identify key players, or to estimate the interactivity between team members. For an excellent survey on network centrality see~\citeN{Borgatti2005}.
\begin{comment}
It should be noted that in the graph theory literature it has often been highlighted that centrality indices have two important limitations. The first limitation is that a centrality which is optimal for one application is often sub-optimal for a different application. The second limitation is that vertex centrality does not indicate the relative importance of vertices. A ranking only orders vertices by importance, it does not quantify the difference in importance between different levels of the ranking.
Many of the papers mentioned below use data gathered from a time period when Spain dominated world football, and some papers show that for their proposed measurements the Spanish team stands out. Of course this is the desirable outcome, however, one should keep in mind that many of the proposed measurements reward passes which also reward Spain and the players of Spain since they are known to use a tactic with a very high number of passes, the so-called \emph{tiki-taka} style.
\end{comment}
\subsubsection{Degree centrality}
The simplest centrality measure is the \emph{degree centrality}, which is the number of edges incident to a vertex.
For directed networks one usually distinguishes between the in-degree and the out-degree centrality.
In sports analysis the out-degree centrality is simply referred to as \emph{centrality} while the in-degree centrality is usually called the \emph{prestige} of a player.
Some papers do consider both centrality and prestige, see for example~\citeN{Clemente2015c}, but most of the literature has focused on centrality.
\citeN{fewell-2012} considered a transition graph on basketball games where the vertices represented the five traditional player positions (point guard, shooting guard, small forward, power forward, and center), possession origins and possession outcomes. The centrality was computed on the transition graph, split into two outcomes: ``shots" and ``others". The measure was computed on \num{32} basketball games and prior knowledge about the importance of players to the teams involved was compared to the centrality values of the players. They used degree centrality to compare teams that heavily rely on key players
with teams that distribute the ball more evenly among their members.
Unfortunately, the data was not definitive, since the overall centrality rankings did not show a strong relation to the teams' performance.
\citeN{grund-2012} used degree centrality together with Freeman centralization~\cite{Freeman1978}.
The idea by Freeman was
to consider the relative centrality of the most important node in the network. That is, how central is the most central node compared to the centrality of the other nodes in the network. The Freeman centrality is measured as the sum of the differences in degree centrality between the most central node and all other nodes, divided by a value depending only on the size of the network~\cite{Freeman1978}.
They used an extensive set of \num{283259} passes from \num{760} English Premier League games for their experiments. From a team performance perspective, \citeN{grund-2012} set out to test two hypotheses: (i) increased interaction between players leads to increased team performance; and (ii) increased interaction centralization leads to decreased team performance.
The latter is strongly connected to centrality and \citeN{grund-2012} went on to show that a high level of centralization decreases team performance.
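For unweighted degree centrality, the Freeman centralization index described above can be written in its standard normalised form as
$$C = \frac{\sum_{i=1}^{n}\bigl[C_D(v^{*}) - C_D(v_i)\bigr]}{(n-1)(n-2)},$$
where $C_D(v_i)$ denotes the degree centrality of node $v_i$, $v^{*}$ is the most central node, $n$ is the number of nodes, and $(n-1)(n-2)$ is the maximum value that the numerator can attain, reached by a star network.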
In a series of recent papers, Clemente~\emph{et al.}\xspace~\shortcite{clemente-2015b,Clemente2014,Clemente2015c,Clemente2015} argue that centrality may
reveal how players collaborate, and also the nature and strength of their collaboration.
For example, central midfielders and central defenders usually show higher degree centrality than other players. Some exceptions were shown in~\citeN{Clemente2014a}, where the left and right defenders also obtained very high degree centrality. In general, goal-keepers and forwards have the lowest centrality scores.
\subsubsection{Betweenness Centrality}
The betweenness centrality of a node is the number of times it lies on the shortest path between two other nodes in the network. It was originally introduced by~\citeN{Freeman1977} in an attempt to estimate ``a human's potential control of communication in a social network''.
\citeN{Pena2012} claimed that the betweenness centrality measures how the ball-flow between players depends on a particular player and as such provides a measure of the impact of the ``removal" of that player from the game, either by being sent off or by being isolated by the opponents. They also argued that, from a tactical point of view, a team should aim to have a balanced betweenness score for all players.
A centrality measure closely related to the betweenness centrality is flow centrality, which is measured as the proportion of the entire flow between two vertices that occurs on paths of which a given vertex is a part.
\citeN{Duch2010} considered flow centrality for transition networks where the weight of an edge from a player $v_1$ to a player $v_2$ is equal to the fraction of passes initiated by $v_1$ to reach $v_2$. Similarly, the shooting accuracy for a player (the weight of the edge from the player to the vertex ``shots on goal") is the fraction of shots made by the player that end up on goal. They then studied the flow centrality over all paths that results in a shot. They also take the defensive performance into account by
having each player initiate a number of flow paths which is comparable to the number of times the player wins possession of the ball.
The \emph{match performance} of the player is then the normalised value of the logarithm of this combined value.
They argue that this
gives an estimate of the contribution of a single player and also of the whole team.
The team's match performance value is the mean of the individual player values. Using these values, both for teams and individual players, \citeN{Duch2010} analysed \num{20} games from the football 2008 UEFA European Cup. They claim that their measurements provide ``sensible results that are in agreement with the subjective views of analysts and spectators'', in other words, the better paid players tend to contribute more to the team's performance.
\begin{comment}
\citeN{fewell-2012} considered both degree centrality (see above) and two versions of flow centrality. They used flow centrality to evaluate the importance of a player position within the transition graph. They also calculated a more restrictive flow centrality that only included player appearances if they were part of the last three nodes before an outcome. The aim of the latter model is to focus more on the set-up phase of the play.
\end{comment}
\subsubsection{Closeness Centrality}
The standard distance metric used in a network is the length (weight or cost) of the shortest path between pairs of nodes. The \emph{closeness centrality} of a node is defined as the inverse of the \emph{farness} of the node, which is the sum of its distances to all other nodes in the network~\cite{Bavelas1950}.
\citeN{Pena2012} argued that the closeness score is an estimate of how easy it is to get the ball to a specific player, i.e., a high closeness score indicates a well-connected player within the team. They made a detailed study using the 2010 FIFA World Cup passing data.
The overall conclusion they reached was that there is a high correlation between high scores in closeness centrality, \emph{PageRank} and clustering (see below), which supports the general perception of the players' performance reported in the media at the time of the tournament.
\subsubsection{Eigenvector Centrality and \emph{PageRank}}
The general idea of Eigenvector centrality and PageRank is that the importance of a node depends, not only on its degree, but also on the importance of its neighbours.
\citeN{Cotta2013} used the eigenvector centrality calculated with the power iteration model by \citeN{mises1929}. The measure aims to identify which player has the highest probability to be in possession of the ball after a sequence of passes. They also motivated their measure by a thorough analysis of three games from the 2010 FIFA World Cup, where they argued for a correlation between the eigenvector centrality score and the team's performance.
A variant of the eigenvector centrality measure is \emph{PageRank}, which was one of the algorithms used by Google Search to rank web-pages~\cite{Brin1998}.
The passing graph is represented as an adjacency matrix $A$ where each entry $A_{ji}$ is the number of passes from player $j$ to player $i$.
In football terms, the \emph{PageRank} centrality index for player $i$ is defined as: $$x_i = p \sum_{j\neq i} \frac{A_{ji}}{L^{out}_j} \cdot x_j + q,$$ where $L^{out}_j=\sum_k A_{jk}$ is the total number of passes made by player $j$,
$p$ is the parameter representing the probability that a player will decide to give the ball away rather than keep it and shoot, and $q$ is a `free' popularity assigned to each player.
Note that the \emph{PageRank} score of a player is dependent on the scores of the player's teammates.
\citeN{Pena2012} argue that the \emph{PageRank} measure gives each player a value that is approximately the likelihood that the player will be in possession of the ball after a fixed number of passes.
Using data from the 2010 FIFA World Cup, they computed the \emph{PageRank} for the players in the top \num{16} teams, but focused their discussion on the players in the top four teams: Spain, Germany, Uruguay and the Netherlands.
They showed that the \emph{PageRank} scores of players in the Dutch and Uruguayan teams were more evenly distributed than those of players from Spain and Germany.
This indicates that no player in those teams had a predominant role in the passing scheme, while Xavi Hernandez (Spain) and Bastian Schweinsteiger (Germany) were particularly central to their teams.
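The player \emph{PageRank} defined above can be computed by a simple fixed-point iteration; the sketch below implements the formula as stated, with an invented pass matrix and arbitrary choices of $p$ and $q$ used purely for illustration.
\begin{verbatim}
import numpy as np

def player_pagerank(A, p=0.85, q=1.0, iters=200):
    """Iterate x_i = p * sum_{j != i} (A_ji / L_j^out) x_j + q, following the
    formula in the text; A[j, i] = number of passes from player j to player i."""
    n = A.shape[0]
    L_out = A.sum(axis=1)              # total passes made by each player
    x = np.ones(n)
    for _ in range(iters):
        x = np.array([p * sum(A[j, i] / L_out[j] * x[j]
                              for j in range(n) if j != i and L_out[j] > 0) + q
                      for i in range(n)])
    return x / x.sum()                 # normalise for comparability

# illustrative 4-player matrix: entry [j, i] = passes from player j to player i
A = np.array([[0, 6, 2, 1],
              [4, 0, 7, 2],
              [1, 5, 0, 3],
              [2, 1, 4, 0]], dtype=float)
print(np.round(player_pagerank(A), 3))
\end{verbatim}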
\subsection{Clustering Coefficients}
A clustering coefficient is a measure of the degree to which nodes in a network are inclined to cluster together.
In the sports science literature both the \emph{global} and the \emph{local} clustering coefficients have been applied.
The idea of studying the global clustering coefficient of the players in a team is that it reflects the cooperation between players, that is, the higher the coefficient for a player, the higher his cooperation with the other members of the team~\cite{Clemente2014,fewell-2012,Pena2012}. \citeN{fewell-2012} also argued that a high global clustering coefficient indicates that attacking decisions are taken by several players, and thus increases the number of
possible attacking paths that have to be assessed by the defence.
\citeN{Pena2012} showed, using the 2010 FIFA World Cup passing data, that Spain, Germany and the Netherlands consistently had very high clustering scores when compared to Uruguay, suggesting that they were extremely well connected teams, in the sense that almost all players contribute.
\citeN{Cotta2013} considered three games involving Spain from the 2010 FIFA World Cup and used the local clustering coefficient as a player coefficient. They studied how the coefficient changed during the games, and argued for a correlation between the number of passes made by Spain and the local clustering coefficient. They claimed that Spain's clustering coefficient remains high over time, ``indicating the elaborate style of the Spanish team".
It should be noted that it is not completely clear that there is a strong connection between the clustering coefficient and the team performance. For example, \citeN{Pena2012} stated that in their study they did not get any reasonable results and ``will postpone the study of this problem for future work."
\begin{open}
Various centrality and clustering measures have been proposed to accurately represent some aspect of player or team performance.
A systematic study reviewing all such measures against predefined criteria, and on a large dataset would be a useful contribution to the field.
\end{open}
\subsection{Density and Heterogeneity}
In general it is believed that stronger collaboration (i.e. more passes) will make the team stronger. This is known as the \emph{density-performance hypothesis}~\cite{balkundi2006}. Therefore a widely-assessed measure of networks is density, which is traditionally calculated as the number of edges divided by the total number of possible edges. This is the density measure used by Clemente~\emph{et al.}\xspace in a series of recent papers~\shortcite{Clemente2014b,Clemente2015c,Clemente2015,clemente-2015b}.
For weighted graphs the measurement becomes slightly more complex. \citeN{grund-2012} defined the \emph{intensity} of a team as the sum of the weighted degrees over all players divided by the total time the team has possession of the ball, i.e., possession-weighted passes per minute.
Related to the density is \emph{passing heterogeneity}, which \citeN{Cintia2015} defined as the standard deviation of the vertex degree for each player in the network.
High heterogeneity of a passing network means that the team tends to coalesce into sub-communities, and that there is a low level of cooperation between players~\cite{Clemente2015}. One interesting observation made by \citeN{Clemente2015} was that the density usually went down in the 2nd half while the heterogeneity went up.
\begin{open}
The density-performance hypothesis suggests an interesting metric of team performance.
Can this hypothesis be tested scientifically?
\end{open}
\subsection{Entropy, Topological Depth, Price-of-Anarchy and Power Law Distributions}
As described above, \citeN{fewell-2012} considered an extended transition graph for basketball games, where they also calculated \emph{player entropy}. Shannon entropy~\cite{shannon-2001} was used to
estimate the uncertainty of a ball transition.
The \emph{team entropy} is the aggregated player entropies, which can be computed in many different ways. \citeN{fewell-2012} argue that from the perspective of the opposing team the real uncertainty is the number of options, and computed the team entropy from the transition matrix describing ball movement probabilities across the five standard player positions and the two outcomes.
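The underlying entropy computation is simple; the sketch below computes the Shannon entropy of one illustrative transition row (the probabilities are invented, and the aggregation into a team value is not reproduced here).
\begin{verbatim}
import numpy as np

def shannon_entropy(probs):
    """Shannon entropy (in bits) of a discrete distribution; zero-probability
    outcomes contribute nothing."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# illustrative transition row for a point guard: probabilities that the ball
# moves to each of the other four positions, a shot, or a turnover
point_guard = [0.30, 0.25, 0.20, 0.10, 0.10, 0.05]
print(f"player entropy: {shannon_entropy(point_guard):.2f} bits")

# a player who always passes to the same teammate is perfectly predictable
print(f"degenerate case: {shannon_entropy([1, 0, 0, 0, 0, 0]):.2f} bits")
\end{verbatim}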
\citeN{Skinner2010a} showed that passing networks have two interesting properties.
They identified a correspondence between a basketball transition network and a traffic network, and used insights from the latter to make suppositions about the former. They posited that there may be a difference between the Nash equilibrium of a transition network and the community optimum -- the \emph{Price of Anarchy}. In other words,
for the best outcome one should not always select the highest-percentage shot.
A similar observation was made by \citeN{fewell-2012}, who noted that the low flow centrality of the most utilised position (point guard) seems to indicate that the contribution of key players can be negatively affected by controlling the ball more often than other players.
Related to the same concept, \citeN{Skinner2010a} suggested that removing a key player from a match -- and hence from the transition network -- may actually \emph{improve} the team performance, a phenomenon known as \emph{Braess' paradox} in network analysis~\cite{braess-2005}.
\begin{comment}
\citeN{yamamoto-2011} showed that passing networks extracted from the 2006 FIFA World Cup had a \emph{power law} distribution. The consequences of the property is far from clear. The paper suggested several possibilities.
\end{comment}
\section{Representing Sports Play using Spatio-Temporal Data}
\label{sec:representation}
The research surveyed in this paper is based on \emph{spatio-temporal data}, the defining characteristic of which is that it is a sequence of samples containing the time-stamp and location of some phenomena.
In the team sports domain, two types of spatio-temporal data are commonly captured: \emph{object trajectories} that capture the movement of players or the ball; and \emph{event logs} that record the location and time of match events, such as passes, shots at goal or fouls.
These datasets, described in detail below, individually facilitate the spatio-temporal analysis of play; however, they are complementary in that they describe different aspects of play, and can provide a richer explanation of the game when used in combination.
For example, the spatial formation in which a team arranges itself will be apparent in the set of player trajectories.
However, the particular formation used may depend on whether the team is in possession of the ball, which can be determined from the event log.
On the other hand, a \emph{shot at goal} event contains the location from where the shot was made, but this may not be sufficient to make a qualitative rating of the shot.
Such a rating ought to consider whether the shooter was closely marked by the defence, and the proximity of defenders -- properties that can be interpolated from the player trajectories.
\subsection{Object Trajectories}
The movement of players or the ball around the playing area is sampled as a time-stamped sequence of location points in the plane, see Fig.~\ref{fig:input-data-schematic}.
The trajectories are captured using optical- or device-tracking and processing systems.
\emph{Optical tracking systems} use fixed cameras to capture the player movement, and the images are then processed to compute the trajectories~\cite{bradley-2007}.
There are several commercial vendors who supply tracking services to professional sports teams and leagues \cite{tracab,impire,prozone,sportvu}.
On the other hand, \emph{device tracking systems} rely on devices that infer their location, and are attached to the players' clothing or embedded in the ball or puck.
These systems can be based on GPS~\cite{catapult} or RFID~\cite{sportvision} technology.
\begin{figure}
\begin{center}
\includegraphics[scale=0.7]{figures/input_data_schematic}
\end{center}
\caption{Example of the trajectory and event input data and an illustration of their geometric representations. Each trajectory is a sequence of location points, and these can be used to extrapolate the basic geometry of a player at a given time-step. Similarly, the geometry of events such as the \emph{pass} shown, can be computed from the trajectories of the involved players.}
\label{fig:input-data-schematic}
\end{figure}
The trajectories produced by these systems are dense, in the sense that the location samples are uniform and frequent -- in the range of \SIrange[range-units=single]{10}{30}{\hertz}.
The availability of spatio-temporal data for research varies.
Some leagues capture data from all matches, such as the NBA~\cite{nba-stats} and the German Football Leagues~\cite{impire}; in other cases, teams capture data at their stadia only.
League-wide datasets are not simply larger, but also allow for experiments that control for external factors such as weather, injuries to players, and playing at home and on the road.
\subsection{Event Logs}
Event logs are a sequence of significant events that occur during a match.
Events can be broadly categorised as \emph{player events} such as passes and shots; and \emph{technical events}, for example fouls, time-outs, and start/end of period, see Fig.~\ref{fig:input-data-schematic}.
Event logs may be derived in part from the trajectories of the players involved, however they may also be captured directly from video analysis, for example \citeN{opta} uses this approach.
This is often the case in sports where there are practical difficulties in capturing player trajectories, such as rugby and American football.
Event logs are qualitatively different from the player trajectories in that they are not dense -- samples are only captured when an event occurs -- however they can be semantically richer as they include details like the type of event and the players involved.
The models and techniques described in the following sections all use object trajectories and/or event logs as their primary input.
\section{Playing Area Subdivision}
\label{sec:subdivision}
Player trajectories and event logs are both low-level representations, and can be challenging to work with.
One way
to deal with this issue is to discretize the playing area into regions and assign the location points contained in the trajectory or event log to a discretized region.
The frequency -- or intensity -- of events occurring in each region is a spatial summary of the underlying process. Alternatively, the playing area may be subdivided into regions such that each region is dominated in some sense by a single player, for example by the player being able to reach all points in the region before any other player.
There are a variety of techniques for producing playing area subdivisions that have been used in the research surveyed here, and are summarised in this section.
\subsection{Intensity Matrices and Maps}
\label{sub:intensity-maps}
Spatial data from team sports have the useful property that they are constrained to a relatively small and symmetric playing area -- the pitch, field or court.
The playing area may be subdivided into regions, and the events occurring in each region can be counted to produce an intensity matrix, which can be visualised as an intensity map, see Fig.~\ref{fig:intensity-map}.
This is a common preprocessing step for many of the techniques described in subsequent sections.
\begin{figure}
\subfloat[Left-back]{
\includegraphics[width=0.5\linewidth]{figures/football_player_heatmap_108}
\label{fig:football-player-heatmap-108}}
\subfloat[Striker]{
\includegraphics[width=0.5\linewidth]{figures/football_player_heatmap_2}
\label{fig:football-player-heatmap-2}}
\caption{Example intensity maps showing areas of the football pitch that the players occupy. The player trajectories have been oriented such that the play is from left to right. \protect\subref{fig:football-player-heatmap-108} The left-back is positioned on the left of the field, but is responsible for taking attacking corner-kicks from the right.
\protect\subref{fig:football-player-heatmap-2} The striker predominantly stays forward of the half-way line, however will retreat to help defend corner-kicks.}
\label{fig:intensity-map}
\end{figure}
When designing a spatial discretization, the number and shape of the induced regions can vary.
A common approach is to subdivide the playing area into rectangles of equal size~\cite{bialkowski-2014a,borrie-2002,cervone-2014a,franks-2015,lucey-2012,Narizuka2014a,shortridge-2014}, for example see Fig.~\ref{fig:cartesian-subdiv}.
However, the behaviour of players may not vary smoothly in some areas.
For example: around the three-point line on the basketball court, a player's propensity to shoot varies abruptly; or the willingness of a football defender to attempt a tackle will change depending on whether they are inside the penalty box.
The playing area may be subdivided to respect such predefined assumptions of the player's behaviour.
\citeN{camerino-2012} subdivides the football pitch into areas that are aligned with the penalty box, see Fig.~\ref{fig:camerino-subdiv}, and counts the interactions occurring in each region.
Similarly, \citeN{maheswaran-2014} and \citeN{goldsberry-2013} define subdivisions of the basketball half-court that conform with the three-point line and are informed by intuition of shooting behaviour, see Fig.~\ref{fig:maheswaran-subdiv}.
Transforming the playing area into polar space and inducing the subdivision in that space is an approach used in several papers.
This approach reflects the fact that player behaviour may be similar for locations that are equidistant from the goal or basket.
Using the basket as the origin, polar-space subdivisions were used by \citeN{reich-2006} and by \citeN{maheswaran-2012}.
\citeN{yue-2014} used a polar-space subdivision to discretize the position of the players marking an attacking player.
Under this scheme, the location of the attacking player was used as the origin, and the polar space aligned such that the direction of the basket is at \SI{0}{\degree}, see Fig.~\ref{fig:yue-polar-subdiv}.
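A polar discretization of this kind can be sketched as follows; the origin (goal, basket or marked player), the radial band edges and the number of angular sectors are illustrative assumptions rather than the exact schemes of the cited papers.
\begin{lstlisting}[language=Python, caption={Sketch: assigning a location to a polar cell (illustrative parameters).}]
import numpy as np

def polar_bin(px, py, origin_x, origin_y,
              radial_edges=(0.0, 2.0, 5.0, 9.0, np.inf), n_sectors=12):
    """Map a location to a (radial band, angular sector) cell relative to an origin."""
    dx, dy = px - origin_x, py - origin_y
    r = np.hypot(dx, dy)
    theta = np.arctan2(dy, dx) % (2 * np.pi)          # angle in [0, 2*pi)
    r_bin = int(np.searchsorted(radial_edges, r, side="right")) - 1
    theta_bin = int(theta // (2 * np.pi / n_sectors))
    return r_bin, theta_bin

# Example: a defender roughly 3 m from the basket, basket at the origin.
cell = polar_bin(2.0, 2.2, 0.0, 0.0)
\end{lstlisting}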
\begin{figure}
\centering
\subfloat[Hand-designed]{
\centering
\includegraphics[width=0.5\linewidth]{figures/camerino_subdiv}
\label{fig:camerino-subdiv}}
\\
\subfloat[Hand-designed]{
\includegraphics[width=0.32\linewidth]{figures/maheswaran_subdiv}
\label{fig:maheswaran-subdiv}}
\subfloat[Cartesian grid]{
\includegraphics[width=0.32\linewidth]{figures/cartesian_subdiv}
\label{fig:cartesian-subdiv}}
\subfloat[Polar grid]{
\includegraphics[width=0.32\linewidth]{figures/yue_polar_subdiv}
\label{fig:yue-polar-subdiv}}
\caption{Examples of subdivisions used to discretize locations:
\protect\subref{fig:camerino-subdiv}, \protect\subref{fig:maheswaran-subdiv} hand-designed subdivision reflecting expert knowledge of game-play in basketball~\protect\cite{maheswaran-2014} and football~\protect\cite{camerino-2012};
\protect\subref{fig:cartesian-subdiv} subdivision of court into unit-squares~\protect\cite{cervone-2014a};
\protect\subref{fig:yue-polar-subdiv} polar subdivision where origin is centred on ball-carrier and grid is aligned with the basket~\protect\cite{yue-2014}.}
\label{fig:basketball-subdivisions}
\end{figure}
Given a subdivision of the playing area, counting the number of events by each player in each region induces a discrete spatial distribution of players' locations during the match.
This can be represented as an intensity matrix $X \in \mathbb{R}_{+}^{N \times R}$ containing the counts for the $N$ players in each of the $R$ regions of the subdivision.
The counted event may be a visit by a player to a region, e.g. \citeN{maheswaran-2014} used the location points from player trajectories to determine whether a cell was visited.
\citeN{bialkowski-2014} used event data such as passes and touches made by football players to determine the regions a player had visited.
The number of passes or shots at goal that occur in each region may also be counted.
For example, many papers counted shots made in each region of a subdivision of a basketball court \cite{franks-2015,goldsberry-2013,maheswaran-2012,reich-2006,shortridge-2014}.
Similarly, \citeN{borrie-2002}, \citeN{camerino-2012}, \citeN{Narizuka2014a}, and \citeN{cervone-2014a} counted the number of passes made in each region of a subdivision of the playing area.
\subsection{Low-rank Factor Matrices}
\label{sub:matrix-factorization}
Matrix factorization can be applied to intensity matrices described in Section~\ref{sub:intensity-maps}, to produce a compact, low-rank representation.
This approach has been used in several papers to model shooting behaviour in basketball~\cite{cervone-2014a,franks-2015,yue-2014}.
The insight that motivates this technique is that similar types of players tend to shoot from similar locations, and so each player's shooting style can be
modelled as a combination of a few
distinct \emph{types}, where each \emph{type} maps to a coherent area of the court that the players are likely to shoot from.
The input is an intensity matrix $X \in \mathbb{R}^{N \times V}$. Two new matrices $W \in \mathbb{R}_{+}^{N \times K}$ and $B \in \mathbb{R}_{+}^{K \times V}$ are computed such that $WB \approx X$ and $K \ll N, V$.
The $K$ spatial bases in $B$ represent areas of similar shooting intensity, and the $N$ players' shooting habits are modelled as a linear combination of the spatial bases.
The factorization is computed from $X$ by minimizing some distance measure between $X$ and $WB$, under the constraint that $W$ and $B$ are non-negative.
The non-negativity constraint, along with the choice of distance function encourages sparsity in the learned matrices.
This leads to intuitive results: each spatial basis corresponds to a small number of regions of the half-court; and
the shooting style of each player is modelled as a mixture
of a small number of bases, see Fig.~\ref{fig:miller-heatmap} for examples of learned spatial bases.
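A minimal sketch of this factorization is given below, using scikit-learn's NMF with a KL-divergence objective as one possible choice of distance measure; the rank $K$, the loss and the synthetic input are illustrative assumptions rather than the exact settings of the cited papers.
\begin{lstlisting}[language=Python, caption={Sketch: non-negative matrix factorization of an intensity matrix.}]
import numpy as np
from sklearn.decomposition import NMF

# X: non-negative intensity matrix, one row per player, one column per court region.
rng = np.random.default_rng(0)
X = rng.poisson(lam=2.0, size=(300, 500)).astype(float)   # synthetic stand-in

model = NMF(
    n_components=10,              # K latent shooting "types"
    init="random",
    beta_loss="kullback-leibler", # one possible distance measure
    solver="mu",                  # multiplicative updates, required for KL loss
    max_iter=500,
    random_state=0,
)
W = model.fit_transform(X)        # (players x K) mixture weights
B = model.components_             # (K x regions) spatial bases
approx = W @ B                    # low-rank approximation of X
\end{lstlisting}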
\begin{figure}
\centering
\subfloat[Corner three-point] {
\includegraphics[width=0.32\linewidth]{figures/miller_heatmap_0}
\label{fig:miller-heatmap-0}}
\subfloat[Top-of-key three-point] {
\includegraphics[width=0.32\linewidth]{figures/miller_heatmap_1}
\label{fig:miller-heatmap-1}}
\subfloat[Right low-post] {
\includegraphics[width=0.32\linewidth]{figures/miller_heatmap_2}
\label{fig:miller-heatmap-2}}
\caption{Examples of spatial bases induced by using non-negative matrix factorization. Each basis represents an intensity map of where a subset of players tend to shoot from.
Shown are three spatial basis intensity maps that represent defined shooting locations.
}
\label{fig:miller-heatmap}
\end{figure}
\citeN{miller-2014} used non-negative matrix factorization to represent shooting locations in basketball. They observe that the shooting intensity should vary smoothly over the court space, and thus fit a Log-Gaussian Cox Process to infer a smooth intensity surface
over the intensity matrix, which is then factorized.
\citeN{yue-2014} used non-negative matrix factorization to model several event types: shooting; passing and receiving.
They include a spatial regularization term in the distance function used when computing the matrix factorization, and claim that spatial regularization can be seen as a frequentist analog of the Bayesian Log-Gaussian Cox process used by \citeN{miller-2014}.
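While the cited works enforce smoothness either through a Log-Gaussian Cox process or through an explicit spatial regularizer, a much cruder illustration of the same idea -- shown here only as a stand-in, not as either paper's method -- is to blur the raw count matrix before factorizing it:
\begin{lstlisting}[language=Python, caption={Sketch: crude smoothing of a count matrix before factorization (illustrative stand-in).}]
import numpy as np
from scipy.ndimage import gaussian_filter

# Per-cell shot counts for one player on a discretized half-court (synthetic).
counts = np.random.default_rng(0).poisson(1.0, size=(47, 50)).astype(float)

smoothed = gaussian_filter(counts, sigma=1.5)   # simple smooth intensity surface
x_row = smoothed.reshape(1, -1)                 # one row of the matrix handed to NMF
\end{lstlisting}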
\citeN{cervone-2014a} also used non-negative matrix factorization to find a basis representing player roles, based on their occupancy in areas of the court.
Players who are similar to a given player were identified as those who are closest in this basis, and this was used to compute a similarity matrix between players.
\section{Visualisation}
\label{sec:visualisation}
To communicate the information extracted from the spatio-temporal data, visualization tools are required.
For real-time data the most common approach is so-called \emph{live covers}. This is usually a website comprising a text panel that lists high-level updates of the key events in the game in near real time, and several graphics showing basic information about the teams and the game.
Live covers are provided by leagues (e.g. NHL, NBA and Bundesliga), media (e.g. ESPN) and even football clubs (e.g. Liverpool and Paris Saint-Germain). For visualizing aggregated information, the most common approach is to use heat maps. Heat maps are simple to generate, are intuitive, and can be used to visualize various types of data. Typical examples in the literature are visualizing the spread and range of a shooter in basketball, in an attempt to discover the best shooters in the NBA~\cite{goldsberry-2012}, and visualizing shot distance in ice hockey using radial heat maps~\cite{Pileggi2012a}.
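As an illustration, rendering a heat map from an intensity matrix such as those of Section~\ref{sub:intensity-maps} takes only a few lines; the colour map and grid size below are arbitrary choices.
\begin{lstlisting}[language=Python, caption={Sketch: rendering an intensity matrix as a heat map.}]
import matplotlib.pyplot as plt
import numpy as np

# Synthetic intensity matrix: rows = pitch-width bins, columns = pitch-length bins.
heat = np.random.default_rng(0).poisson(2.0, size=(14, 21))

fig, ax = plt.subplots(figsize=(6, 4))
im = ax.imshow(heat, origin="lower", cmap="hot",
               interpolation="bilinear", aspect="auto")
ax.set_xlabel("pitch length bins")
ax.set_ylabel("pitch width bins")
fig.colorbar(im, ax=ax, label="event count")
plt.show()
\end{lstlisting}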
Two recent attempts to provide more extensive visual analytics systems have been made by~\citeN{perin-2013} and Janetzko~\emph{et al.}\xspace~\shortcite{Stein2015}.
\begin{comment}
A visual search system is presented by \citeN{legg 2013} for scenes in rugby. The approach is based on the configuration of players and their movement. They adopt a glyph-based visual design to visualize actions and events ``at a glance". Glyph-based visualization is a common form of visual design where a data set is depicted by a collection of visual objects, which are called glyphs.
\end{comment}
\begin{comment}
Rusu~\emph{et al.}\xspace~\shortcite{Rusu2010a,Rusu2011a} presented a player football interface called \emph{Soccer Scoop} whose primary aim was to visualize statistics for individual players, and to compare the performance of two players.
\end{comment}
\citeN{perin-2013} developed a system for visual exploration of phases in football. The main interface comprises a timeline and \emph{small multiples} providing an overview of the game. A \emph{small multiple} is a group of similar graphs or charts designed to simplify comparisons between them. The interface also allows the user to select and further examine the \emph{phases} of the game. A phase is a sequence of actions by one team, bounded by the actions in which they first win and then finally lose possession.
A selected phase can be displayed and the information regarding a phase is aggregated into a sequence of views, where each view only focuses on a specific action (e.g. a long ball or a corner). The views are then connected to show a whole phase, using various visualization tools such as a passing network, a timeline and sidebars for various detailed information.
In two papers, Janetzko~\emph{et al.}\xspace~\shortcite{janetzko-2014,Stein2015} present a visual analysis system for interactive recognition of football patterns and situations. Their system tightly couples the visualization system with data mining techniques. The system includes a range of visualization tools (e.g., parallel coordinates and scalable bar charts) to show the ranking of features over time and to plot the change of game-play situations, attempting to support the analyst in interpreting complex game situations. Using classifiers, they automatically detect the most common situations and introduce semantically-meaningful features for these. The exploration system also allows the user to specify features for specific situations and then perform a similarity search for similar situations.
\begin{open} \label{probel:visualization}
The area of visual interfaces to support team sports analytics is a developing area of research. Two crucial gaps are the lack of large user studies that (1) explore the analytical questions for which experts need support, and (2) establish which types of visual analytics tools can be understood by experts.
\end{open}
\section{Introduction}
\label{sec:Intro}
Sports is a profitable entertainment sector, capping \$91 billion of annual market revenue over the last decade~\cite{GlobalSportsMarket}. \$15.6 billion alone came from the Big Five European Soccer Leagues (EPL, La Liga, Ligue 1, Bundesliga and Serie A) \cite{EuropeanFootballMarket,BigFiveMarket1,BigFiveMarket2}, with broadcasting and commercial activities being the main source of revenue for clubs~\cite{BroadcastingRevenue}.
TV broadcasters seek to attract the attention and indulge the curiosity of an audience, as they understand the game and edit the broadcasts accordingly. In particular, they select the best camera shots focusing on actions or players, allowing for semantic game analysis, talent scouting and advertisement placement. With almost 10,000 games a year for the Big Five alone, and an estimated audience of 500M+ people at each World Cup~\cite{SoccerAudience}, automating the video editing process would have a broad impact on the other millions of games played in lower leagues across the world. Yet, it requires an understanding of the game and the broadcast production.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/GraphicalAbstract-SoccerNet-V2.pdf}
\caption{\textbf{SoccerNet-v2} constitutes the most inclusive dataset for soccer video understanding and production, with \texttildelow 300k annotations, 3 computer vision tasks and multiple benchmark results.}
\label{fig:Pooling}
\end{figure}
Recent computer vision works on soccer broadcasts focused on low-level
video understanding~\cite{moeslund2014computer}, \eg
localizing a field and its lines~\cite{Cioppa2018ABottom,farin2003robust,homayounfar2017sports},
detecting players~\cite{Cioppa_2019_CVPR_Workshops,yang2017robust},
their motion~\cite{felsen2017will,manafifard2017survey},
their pose~\cite{Bridgeman_2019_CVPR_Workshops, Zecha_2019_CVPR_Workshops},
their team~\cite{Istasse_2019_CVPR_Workshops},
the ball~\cite{Sarkar_2019_CVPR_Workshops,Theagarajan_2018_CVPR_Workshops},
or pass feasibility~\cite{Sangesa2020UsingPB}.
Understanding frame-wise information is useful to enhance the visual experience of sports viewers~\cite{Rematas_2018_CVPR} and to gather player statistics~\cite{thomas2017computer}, but it falls short of higher-level game understanding needed for automatic editing purposes (\eg camera shot selection, replay selection, and advertisement placement).
In this work, we propose a large-scale collection of manual annotations
for holistic soccer video understanding and several benchmarks addressing automatic broadcast production tasks. In particular, we extend the previous SoccerNet~\cite{Giancola_2018_CVPR_Workshops}\xspace dataset with further tasks and annotations, and propose open challenges with public leaderboards. Specifically, we propose three tasks represented in Figure~\ref{fig:Pooling}:
\textbf{(i)} \textit{Action Spotting}, an extension from 3 to 17 action classes of SoccerNet's main task,
\textbf{(ii)} \textit{Camera Shot Understanding}, a temporal segmentation task for camera shots and a camera shot boundary detection task,
and \textbf{(iii)} \textit{Replay Grounding}, a task of retrieving the replayed actions in the game. These tasks tackle three major aspects of broadcast soccer videos: action spotting addresses the understanding of the content of the game, camera shot segmentation and boundary detection deal with the video editing process, and replay grounding bridges those tasks by emphasizing salient actions, allowing for prominent moments retrieval.
\mysection{Contributions.} We summarize our contributions as follows.
\textbf{(i) Dataset.} We publicly release SoccerNet-v2, the largest corpus of manual annotations for broadcast soccer video understanding and production, comprising \texttildelow 300k annotations temporally anchored within SoccerNet's 764 hours of video.
\textbf{(ii) Tasks.} We define the novel task of replay grounding
and further expand the tasks of action spotting, camera shot segmentation and boundary detection, for a holistic understanding of content, editing, and production of broadcast soccer videos.
\textbf{(iii) Benchmarks.} We release reproducible benchmark results along with our code and public leaderboards to drive further research in the field.
\section{Supplementary Material}
\subsection{Annotation Guidelines}
We provided our annotators with the following annotation guidelines to annotate the actions and the camera shots.
\mysection{Actions.} Following the original SoccerNet~\cite{Giancola_2018_CVPR_Workshops}\xspace, we annotate each action with a single timestamp. These actions are illustrated in Figure~\ref{fig:action-types}, and their timestamps are defined as:
\begin{figure*}
\centering
\includegraphics[width=0.98\linewidth]{figures/Action_types.png}
\caption{\textbf{Actions.} An example of each action identified in SoccerNet-v2.}
\label{fig:action-types}
\end{figure*}
\begin{itemize}
\item Ball out of play: Moment when the ball crosses one of the outer field lines.
\item Throw-in: Moment when the player throws the ball.
\item Foul: Moment when the foul is committed.
\item Indirect free-kick: Moment when the player shoots, to resume the game after a foul, with no intention to score.
\item Clearance (goal-kick): Moment when the goalkeeper shoots.
\item Shots on target: Moment when the player shoots, with the intention to score, and the ball goes in the direction of the goal frame.
\item Shots off target: Moment when the player shoots, with the intention to score, but the ball does not go in the direction of the goal frame.
\item Corner: Moment when the player shoots the corner.
\item Substitution: Moment when the replaced player crosses one of the outer field lines.
\item Kick-off: Moment when, at the beginning of a half or after a goal, the two players in the central circle make the first pass.
\item Yellow card: Moment when the referee shows the player the yellow card.
\item Offside: Moment when the side referee raises his flag.
\item Direct free-kick: Moment when the player shoots, to resume the game after a foul, with the intention to score or if the other team forms a wall.
\item Goal: Moment when the ball crosses the line.
\item Penalty: Moment when the player shoots the penalty.
\item Yellow then red card: Moment when the referee shows the player the red card following a second yellow card.
\item Red card: Moment when the referee shows the player the red card.
\end{itemize}
\mysection{Camera shots.} We define the following 13 types of cameras, illustrated in Figure~\ref{fig:camera-types}:
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{figures/Camera-types.png}
\caption{\textbf{Cameras.} An example of each camera shot identified in SoccerNet-v2.}
\label{fig:camera-types}
\end{figure*}
\begin{itemize}
\item Main camera center: Camera shown most of the time. It is placed high in the stadium and is centered on the middle field line. It films the players with a wide field of view, allowing an easy understanding of the game.
\item Main camera left: Camera placed high in the stadium on the left side of the field. It is mostly used to allow for an easy overview of what is happening close to the left goal. It can also be used sometimes to show the right side of the field from a further perspective, mostly for an artistic effect. It is also sometimes called the 16-meter left camera.
\item Main camera right: Counterpart of the main camera left but on the right side of the field.
\item Main behind the goal: Camera placed behind the goal, either on a moving crane or in the stadium. It allows for a perpendicular field of view compared to the other main cameras.
\item Goal line technology camera: Camera often placed next to the main camera left or right, but aligned with the goal line. It is used to check if the ball entirely crosses the line in contentious goal cases.
\item Spider camera: Camera placed above the field and able to move freely in 3 dimensions thanks to long cables. It is often used in replays for a dynamic immersion in the action.
\item Close-up player or field referee: Camera placed on ground-level, either fixed or at the shoulder of an operator, filming the players or the referees on the field with a narrower field of view.
\item Close-up side staff: Located similarly to close-up player cameras, films the reaction of the coaches and the staff outside the field. This also includes players on the bench or warming-up.
\item Close-up corner: Camera often on the shoulder of an operator filming the player that shoots the corner.
\item Close-up behind the goal: Camera either on the shoulder of an operator or fixed on the ground and filming the goalkeeper or the players from behind the goal.
\item Inside the goal: Camera placed inside the goal that is sometimes shown during replays for an artistic effect.
\item Public: Camera possibly located at different places in the stadium with the objective of filming the reaction of the public.
\item Other: All other types of cameras that may not fit in the above definitions and that are most often used for artistic effects (\eg the helicopter camera or a split screen showing two different games simultaneously).
\end{itemize}
\subsection{Annotation Process}
We developed two tools for the annotations: the first for annotating the actions, shown in Figure~\ref{fig:annot-actions}, the second for the camera changes and replay grounding, shown in Figure~\ref{fig:annot-cameras}. For each video, a .json annotation file is created, which constitutes our annotations. The structure of the .json file is illustrated hereafter.
\lstset{basicstyle=\scriptsize\ttfamily}
\begin{lstlisting}[caption=Example of an action annotation in json.]
"UrlLocal": "path/to/game",
"annotations": [
{
"gameTime": "1 - 06:35",
"label": "Offside",
"position": "395728",
"team": "away",
"visibility": "visible"
},
\end{lstlisting}
\begin{lstlisting}[caption=Example of a camera change annotation in json.]
"UrlLocal": "path/to/game",
"annotations": [
{
"change_type": "logo",
"gameTime": "1 - 06:57",
"label": "Main behind the goal",
"link": {
"half": "1",
"label": "Offside",
"position": "395728",
"team": "away",
"time": "06:35",
"visibility": "visible"
},
"position": "417414",
"replay": "replay"
},
\end{lstlisting}
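As an illustration, the per-game annotation files can be loaded and filtered with a few lines of Python. The field names follow the excerpts above; the file path and file name are hypothetical placeholders.
\begin{lstlisting}[language=Python, caption={Sketch: loading and filtering a per-game annotation file (hypothetical path).}]
import json

def load_actions(label_file, wanted_label="Goal"):
    """Return (half, position in milliseconds) of annotations with the given label."""
    with open(label_file, "r", encoding="utf-8") as f:
        data = json.load(f)
    spots = []
    for ann in data["annotations"]:
        if ann["label"] == wanted_label:
            half, _clock = ann["gameTime"].split(" - ")
            spots.append((int(half), int(ann["position"])))
    return spots

# Hypothetical file name; the real annotation files sit next to each game's videos.
goals = load_actions("path/to/game/labels.json")
\end{lstlisting}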
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{figures/annotation-actions.png}
\caption{\textbf{Actions annotation tool.} When an action occurs, the annotator pauses the video to open the annotation menu (bottom left) and selects the action, the team that performs it, and whether it is shown or unshown in the video. The right column provides all the actions already annotated for that game, sorted chronologically.}
\label{fig:annot-actions}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{figures/annotation-cameras.png}
\caption{\textbf{Cameras annotation tool.} When a camera transition occurs (in this case, just before for a better visualization), the annotator pauses the video to open the annotation menu (bottom left) and selects the type of camera, the upcoming transition, and the real-time or replay characteristic of the current shot. In the case of a replay, as shown here, the annotator selects the action replayed in the last column, with the possibility to visualize a short clip around the action selected to ensure the correctness of the annotation. The large column on the right provides all the camera shots already annotated for that game, sorted chronologically.}
\label{fig:annot-cameras}
\end{figure*}
These tools were given to our 33 annotators, who are engineering students and soccer fans. Each annotator was assigned a given annotation task with detailed instructions and a set of matches to annotate. In case of doubt, they could always contact us so that we could check their work in ambiguous situations.
The total annotation time amounts to \texttildelow 1600 hours. Annotating all the actions of a single game takes \texttildelow 105 minutes; annotating all the camera changes requires \texttildelow 140 minutes per game, while only associating each replay shot of a game with its action takes \texttildelow 70 minutes.
\subsection{Human Level Performances}
Manually labeling events with timestamps raises the question of the sharpness of the annotations. In Charades~\cite{sigurdsson2016hollywood}, the average tIoU of human annotators on temporal boundaries is only of 72.5\%\footnote{Gunnar A. Sigurdsson, Olga Russakovsky, and Abhinav Gupta. What actions are needed for understanding human actions in videos? In \emph{IEEE International Conference on Computer Vision (ICCV)}, pages 2156-2165, October 2017.}, and 58.7\% on MultiTHUMOS~\cite{yeung2018every}. Alwassel~\etal\footnote{Humam Alwassel, Fabian Caba Heilbron, Victor Escorcia, and Bernard Ghanem. Diagnosing error in temporal action detectors. In \emph{European Conference on Computer Vision (ECCV)}, pages 264-280, September 2018.} also observe some variability on ActivityNet~\cite{caba2015activitynet}, but note that a reasonable level of label noise still allows performance improvements and keeps the challenge relevant.
Although all the annotations of our SoccerNet-v2 dataset are based on a set of well-defined rules, some uncertainty still resides in the timestamps.
To quantify it, we determine an average human level performance on a common match shared across all the annotators as follows.
We assess the performance of an annotator against another by considering one as the predictor, the other as the ground truth. Then, we average the performances of an annotator against all the others to obtain his individual performance. Finally, we average the individual performances to obtain the human level performance. This yields an Average-mAP of 80.5\% for action spotting, a mIoU of 67.4\% for camera segmentation, and a mAP of 88.2\% for camera shot boundary detection. These metrics indicate that label noise is present but that current algorithms are still far from solving our tasks with a human-level cognition of soccer, as seen in Tables~\ref{tab:ActionSpotting-long},~\ref{tab:camera-shots-results},~\ref{tab:Replay Grounding} of the main paper.
\begin{comment}
\subsection{More Action Spotting Results}
Detailed per-class results on shown and unshown instances are provided in Table~\ref{tab:more-action-spotting}, where we ignore classes with no unshown instance in the test set.
\input{Submission/table/spotting-supplementary}
\subsection{Camera Shot Boundary Evaluation}
As discussed in Section \ref{subsec-Camera shot segmentation and boundary detection}, we use the mAP metric to evaluate the performance of camera shot boundary detection task. However, \emph{Content}~\cite{PySceneDetect} has a binary 0/1 output for boundary detection so we cannot compute the mAP as usual. To address this issue we used different hyperparameters of these methods to build precision-recall curves. Then, using these curves, we calculate the mAP for each method.
\end{comment}
\section{Benchmark Results}
\label{sec:Exp}
\mysection{General comments.} SoccerNet~\cite{Giancola_2018_CVPR_Workshops}\xspace provides high and low quality videos of the 500 games. For easier experimentation, it also provides ResNet features~\cite{He_2016_CVPR} computed at 2 fps, further reduced with PCA to 512 dimensions. Following~\cite{cioppa2020context,Giancola_2018_CVPR_Workshops}, in our experiments, we use these 512-dimensional frame features acting as compressed video representations. We adapt the most relevant existing methods to provide benchmark results on the SoccerNet~\cite{Giancola_2018_CVPR_Workshops}\xspace test set. We release our codes to reproduce them,
and we will host leaderboards on dedicated servers.
\subsection{Action Spotting}
\mysection{Methods.}
We adapt or re-implement efficiently all the methods that released public code on SoccerNet~\cite{Giancola_2018_CVPR_Workshops}\xspace.
1. \emph{MaxPool} and \emph{NetVLAD}~\cite{Giancola_2018_CVPR_Workshops}.
Those models pool temporally the ResNet features before passing them through a classification layer. In particular, non-overlapping segments of 20 seconds are classified as to whether they contain any action class, using a multi-label cross-entropy loss. In testing, sliding windows of 20 seconds with a stride of 1 frame are used to infer an actionness score in time, reduced to an action spot using NMS. We consider the basic yet lightweight max pooling and the learnable NetVLAD pooling with 64 clusters. We re-implement the method
based on the original code for a better scaling to 17 classes.
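The reduction from a dense actionness curve to discrete spots relies on a one-dimensional non-maximum suppression; a minimal sketch of one common greedy variant is given below, where the suppression window and threshold are illustrative assumptions.
\begin{lstlisting}[language=Python, caption={Sketch: greedy 1D non-maximum suppression over actionness scores.}]
import numpy as np

def nms_1d(scores, window=40, threshold=0.0):
    """Keep local score peaks; zero out the +/- `window` frames around each kept peak.
    Assumes non-negative per-frame confidence scores."""
    scores = np.asarray(scores, dtype=float).copy()
    keep = []
    while scores.max() > threshold:
        i = int(np.argmax(scores))
        keep.append((i, float(scores[i])))             # (frame index, confidence)
        lo, hi = max(0, i - window), min(len(scores), i + window + 1)
        scores[lo:hi] = 0.0
    return keep
\end{lstlisting}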
2. \emph{AudioVid}~\cite{Vanderplaetse2020Improved}. The network uses NetVLAD to pool temporally 20-second chunks of ResNet features, as well as VGGish~\cite{Hershey2017CNN} synchronized audio features, subsampled at 2 fps. Then,
the two sets of features are temporally pooled, concatenated and fed to a classification module, as in~\cite{Giancola_2018_CVPR_Workshops}. The spotting prediction is, by default, at the center of the video chunk. The classification module has been adapted to 17 classes for our experiment.
3. \emph{CALF}~\cite{cioppa2020context}.
SoccerNet's state-of-the art network handles 2-minute chunks of ResNet features and is composed of a spatio-temporal features extractor, kept as is, a temporal segmentation module, which we adapt for 17 classes, and an action spotting module, adapted to output at most 15 spotting predictions per chunk, classified in 17 classes.
The segmentation module is trained with a context-aware loss function, which contains four context slicing hyperparameters per class.
Following~\cite{cioppa2020context}, we determine optimal values for them with a Bayesian optimization~\cite{BayesianOpt}.
We re-implement the method and optimize the training strategy based on the existing code to achieve a decent training time.
\mysection{Results.}
The leaderboard providing our benchmark results for action spotting is given in Table~\ref{tab:ActionSpotting-long}.
We further compute the performances on shown/unshown actions as the Average-mAP for predicted spots whose closest ground truth timestamp is a shown/unshown action.
Qualitative results obtained with CALF adapted are shown in Figure~\ref{fig:result-action-spotting}.
The lightweight yet simplistic MaxPool method hardly reaches the performances of the other methods. The restricted learning capacity of the fully connected layer and the hard pruning from the max pooling probably impede proper learning of the spotting task.
The three other methods yield tied performances. While CALF performs slightly better globally, AudioVid prevails on shown actions and NetVLAD dominates on unshown actions. Each method outperforms the two others in comparable numbers of classes, with CALF achieving the best performance in 4 of the 5 most represented classes. As the distribution of actions in the training set is almost identical, we could argue that this method better leverages its learning capabilities for the classes with abundant instances. Besides, the actions for which AudioVid ranks first are always preceded or rapidly followed by the whistle of the referee. This emphasizes the usefulness of the audio features used in the network.
\input{table/spotting}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/qualitative_spotting-4.png}
\caption{\textbf{Action spotting result} obtained from CALF adapted: \textcolor{newanthoorangespotting}{\textbf{temporal segmentation}}, \textcolor{newanthobluespotting}{\textbf{ground truth}}, and \textcolor{newanthogreenspotting}{\textbf{spotting predictions}}. The network performs well on corners with only one false positive, and moderately on fouls with a few false negatives.}
\label{fig:result-action-spotting}
\end{figure}
\subsection{Camera Shot Segmentation and \\ Boundary Detection}
\mysection{Methods.}
1. \emph{Basic model}. As a first baseline for the segmentation part, we train a basic model composed of three 1D convolutional layers with kernels of 21 frames, hence aggregating information in time, applied on top of the ResNet features and trained with an MSE loss.
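A PyTorch sketch of such a baseline is given below; the hidden width and exact layer arrangement are assumptions, only the three Conv1d layers with 21-frame kernels over 512-dimensional ResNet features follow the description above.
\begin{lstlisting}[language=Python, caption={Sketch: basic 1D-CNN camera shot segmenter (illustrative widths).}]
import torch
import torch.nn as nn

class BasicCameraSegmenter(nn.Module):
    """Frame-wise camera-class scores from per-frame ResNet features."""
    def __init__(self, in_dim=512, hidden=128, n_classes=13, kernel=21):
        super().__init__()
        pad = kernel // 2                  # keep the temporal length unchanged
        self.net = nn.Sequential(
            nn.Conv1d(in_dim, hidden, kernel, padding=pad), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel, padding=pad), nn.ReLU(),
            nn.Conv1d(hidden, n_classes, kernel, padding=pad),
        )

    def forward(self, x):                  # x: (batch, frames, 512)
        return self.net(x.transpose(1, 2)).transpose(1, 2)   # (batch, frames, 13)

model = BasicCameraSegmenter()
scores = model(torch.randn(2, 240, 512))   # two clips of 240 frames at 2 fps
target = torch.zeros_like(scores)          # placeholder one-hot segmentation target
loss = nn.MSELoss()(scores, target)
\end{lstlisting}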
2. \emph{CALF (seg.)}~\cite{cioppa2020context}. We adapt CALF as it provides a segmentation module on top of a spatio-temporal features extractor. We replace its loss with the cross-entropy for easier experimentation and we focus on the segmentation by removing the spotting module. The number of parameters is reduced by a factor of 5 compared with the original model.
3. \emph{Content}~\cite{PySceneDetect}. For the boundary detection task, we test the popular scene detection library PySceneDetect. We use the Content option, that triggers a camera change when the difference between two consecutive frames exceeds a particular threshold value. This method is tested directly on the broadcast videos provided in SoccerNet~\cite{Giancola_2018_CVPR_Workshops}\xspace.
4. \emph{Histogram, Intensity}~\cite{scikitvideo}. We test two scene detection methods of the Scikit-Video library. The Histogram method reports a camera change when the intensity histogram difference between subsequent frames exceeds a given threshold~\cite{Otsuji93projection}. The Intensity method reports a camera change when variations in color and intensity between frames exceed a given threshold. Those methods are tested directly on the broadcast videos provided in SoccerNet~\cite{Giancola_2018_CVPR_Workshops}\xspace.
5. \emph{CALF (det.)}~\cite{cioppa2020context}. Since we can see the camera shot boundary detection as a spotting task, we recondition the best spotting method CALF by removing the segmentation module to focus on detection. Following a grid search optimization, we use 24-second input chunks of ResNet features and allow at most 9 detections per chunk.
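As an illustration of the histogram-based detectors, the sketch below flags a boundary whenever the colour-histogram difference between consecutive frames exceeds a threshold; it uses OpenCV, and the bin count and threshold are arbitrary assumptions rather than the settings of the libraries above.
\begin{lstlisting}[language=Python, caption={Sketch: histogram-difference shot boundary detection (illustrative threshold).}]
import cv2
import numpy as np

def histogram_boundaries(video_path, threshold=0.5, bins=32):
    """Return frame indices where the L1 colour-histogram difference exceeds `threshold`."""
    cap = cv2.VideoCapture(video_path)
    boundaries, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = cv2.calcHist([frame], [0, 1, 2], None, [bins] * 3, [0, 256] * 3)
        hist = cv2.normalize(hist, None, alpha=1.0, beta=0.0,
                             norm_type=cv2.NORM_L1).flatten()
        if prev_hist is not None:
            diff = 0.5 * np.abs(hist - prev_hist).sum()   # in [0, 1]
            if diff > threshold:
                boundaries.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return boundaries
\end{lstlisting}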
\mysection{Results.} The leaderboard providing our benchmark results for these tasks is given in Table~\ref{tab:camera-shots-results}. We further compute the performances per transition type as the mAP for predicted spots grouped by the transition of their closest ground truth.
Regarding the segmentation, even with 5x more parameters, the basic model performs poorly compared with the adaptation of CALF. This indicates that simplistic architectures may not suffice for this task, and that more sophisticated designs can rapidly boost performances. For the boundary detection task, Histogram outperforms the other methods, yet it ranks only third on fading transitions where the deep learning-based CALF prevails. The learning capacity of CALF may explain the consistency of its performances across the transition types.
Intensity, Content, and Histogram are intrinsically more tailored for abrupt transitions. Intensity and Content are particularly bad on logos, while Histogram still reaches a high performance.
\input{table/camera_shots_table}
\subsection{Replay Grounding}
\mysection{Methods.}
Given the novelty of this task, there is no off-the-shelf method available.
We choose to adapt our optimized implementations of NetVLAD~\cite{Giancola_2018_CVPR_Workshops} and CALF~\cite{cioppa2020context} within a Siamese neural networks approach~\cite{Bromley1993Signature,Chicco2021Siamese,Koch2015SiameseNN}.
As input for the networks, we provide the ResNet features representations of a fixed-size video chunk and a replay shot. We either repeat or shorten the latter at both sides so that it has the same duration as the former. Ideally, for a chunk containing the action replayed (positive sample), the networks should output a high confidence score along with a localization prediction for spotting the action within the chunk. Otherwise (negative sample), they should only provide a low confidence score, and spotting predictions will be ignored. Negative samples are sampled either among chunks containing an action of the same class as the action replayed (hard negative), or among chunks randomly located within the whole video (random negative).
The hard negatives ensure that the network learns to spot the correct actions without simply identifying their class, while the random negatives bring some diversity in the negative mining.
We test two sampling strategies. At each epoch, for each replay shot, we select: \emph{(S1)} only 1 sample: a positive with probability 0.5, or a hard or random negative each with probability 0.25; \emph{(S2)} 5 samples: 1 positive, 2 hard and 2 random negatives. For both S1 and S2, the positive is a chunk randomly shifted around the action timestamp.
The adaptations specific to each method are the following.
1. \emph{NetVLAD}~\cite{Giancola_2018_CVPR_Workshops}. We use NetVLAD to pool temporally the replay shot and the video chunk separately, but with shared weights. We compare the features obtained for the shot with those of the chunk through a cosine similarity loss, zeroed out when smaller than 0.4 to help the networks focus on improving their worst scores. In parallel, we feed the video features to a 2-layer MLP acting as spotting module to regress the spotting prediction within the chunk.
2. \emph{CALF}~\cite{cioppa2020context}. We feed the replay shot and a video chunk to the shared frame feature extractor. Then, we concatenate the feature vectors along the temporal dimension, and give the resulting tensor to the remaining modules of the network. We set the number of classes to 1 in the segmentation module to provide per-frame insight. The spotting module outputs the confidence score on the presence of the replayed action in the chunk. We further set its number of detections to 1 as one action is replayed and might be spotted in the chunk. This architecture is represented in Figure~\ref{fig:replay-grounding}.
For these methods, at test time, we slice the video associated with the replay in chunks. We obtain at most one grounding prediction per chunk, all of which are kept when computing the Average-AP metric.
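A minimal sketch of the chunk scoring performed at test time is shown below; average pooling stands in for the learned (NetVLAD-style) pooling, and the chunk size, stride and feature dimension are illustrative assumptions.
\begin{lstlisting}[language=Python, caption={Sketch: scoring video chunks against a replay shot (average pooling as a stand-in).}]
import torch
import torch.nn.functional as F

def grounding_scores(replay_feats, video_feats, chunk_len=120, stride=120):
    """replay_feats: (T_r, 512) frame features of the replay shot.
    video_feats:  (T_v, 512) frame features of the full half.
    Returns (chunk start frame, cosine similarity) for each chunk."""
    replay_vec = F.normalize(replay_feats.mean(dim=0), dim=0)
    scores = []
    for start in range(0, video_feats.shape[0] - chunk_len + 1, stride):
        chunk_vec = F.normalize(video_feats[start:start + chunk_len].mean(dim=0), dim=0)
        scores.append((start, float(torch.dot(replay_vec, chunk_vec))))
    return scores

scores = grounding_scores(torch.randn(14, 512), torch.randn(5400, 512))
\end{lstlisting}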
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/Figure_Grounding-3.png}
\caption{\textbf{Replay grounding pipeline} of our adaptation of CALF. In a Siamese network approach, the replay shot and the video chunk share a \textcolor{newanthoorangespotting}{\textbf{frame features extractor}}. Their features are concatenated and fed to the \textcolor{newanthobluespotting}{\textbf{segmentation module}}. The \textcolor{newanthogreenspotting}{\textbf{grounding module}} outputs a confidence score on the presence or absence of the action in the replay shot, and an action spotting prediction.}
\label{fig:replay-grounding}
\end{figure}
\mysection{Results.}
The leaderboard providing our benchmark results for replay grounding is given in Table~\ref{tab:Replay Grounding} for video chunks of different sizes. NetVLAD with S1 performs poorly, so no result is reported.
Our adaptation of CALF achieves the best performance, with a chunk size of 60 seconds and with S2 as sampling strategy. Its demonstrated ability to aggregate the temporal context may explain this success. All the methods yield their best results with chunk sizes around 60 seconds, which presumably provides the most appropriate compromise between not enough and too much temporal context for an efficient replay grounding. An example of result from CALF is given in Figure~\ref{fig:result-replay-grounding}, showing that it can correctly learn to link a replay with its action without necessarily spotting all the actions of the same class. This underlines both the feasibility and the difficulty of our novel task. For a more relevant visualization experience, we invite the reader to consult our \textbf{video in supplementary material}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/qualitative_replay-7.png}
\caption{\textbf{Replay grounding result} of CALF adapted. We display the \textcolor{newanthoredreplay}{\textbf{replay shot}} of a goal, its \textcolor{newanthobluespotting}{\textbf{ground truth}} spot, the \textcolor{newanthopinkreplay}{\textbf{other goals}}, the temporal \textcolor{newanthoorangespotting}{\textbf{segmentation output}}, and the \textcolor{newanthogreenspotting}{\textbf{grounding predictions}}. The replayed goal is correctly spotted, two goals are rightly avoided, but two false positive predictions are also spotted, incidentally when other goals occurred. An insightful visualization can be appreciated in our \textbf{video in supplementary material}.}
\label{fig:result-replay-grounding}
\end{figure}
\begin{table}[t]
\small
\caption{\textbf{Leaderboard for replay grounding} (Average-AP \%), along with sampling strategy during training.}
\centering
\setlength{\tabcolsep}{5pt}
\resizebox{\linewidth}{!}{
\begin{tabular}{l|c|c|c|c|c|c|c}
& \multicolumn{7}{c}{Video chunk size (seconds)} \\
Method & 30 & 40 & 50 & 60 & 120 & 180 & 240 \\ \midrule
NetV.~\cite{Giancola_2018_CVPR_Workshops}+S2 & 23.9 & 22.9 &24.3 & 22.4 & 7.5 & -- & -- \\
CALF\cite{cioppa2020context}+S1 & 16.7 & 19.6 & 28.0 &32.3 & 32.0 & 26.9 & 22.0 \\
CALF\cite{cioppa2020context}+S2 & 8.2 & 14.7 & 28.9 &\B41.8 & 40.3 & 27.2 & 14.4
\end{tabular}}
\label{tab:Replay Grounding}
\end{table}
\section{Broadcast Video Understanding Tasks}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{figures/main-structured-3.png}
\caption{\textbf{Tasks overview.}
We define a 17-class \textcolor{newanthogreen}{\textbf{action spotting}} task, a 13-class \textcolor{newanthoblue}{\textbf{camera shot segmentation}} and \textcolor{newanthoblue}{\textbf{boundary detection}} tasks, and a novel \textcolor{newanthored}{\textbf{replay grounding}} task, with their associated performance metrics. They respectively focus on \textcolor{newanthogreen}{\textbf{understanding the content}} of broadcast soccer games, addressing broadcast \textcolor{newanthoblue}{\textbf{video editing tasks}}, and \textcolor{newanthored}{\textbf{retrieving salient moments}} of the game.}
\label{fig:tasks}
\end{figure*}
We propose a comprehensive set of tasks to move computer vision towards a better understanding of broadcast soccer videos and alleviate the editing burden of video producers. More importantly, these tasks have broader implications as they can easily be transposed to other domains. This makes SoccerNet-v2 an ideal playground for developing novel ideas and implementing innovative solutions in the general field of video understanding.
In this work, we define three main tasks on SoccerNet-v2: action spotting, camera shot segmentation with boundary detection, and replay grounding, which are illustrated in Figure~\ref{fig:tasks}. They are further motivated and detailed hereafter.
\mysection{Action spotting.}
In order to understand the salient actions of a broadcast soccer game, SoccerNet~\cite{Giancola_2018_CVPR_Workshops}\xspace introduces the task of action spotting, which consists in finding all the actions occurring in the videos. Beyond soccer understanding, this task addresses the more general problem of retrieving moments with a specific semantic meaning in long untrimmed videos. As such, we foresee moment spotting applications in \eg video surveillance or video indexing.
In this task, the actions are anchored with a single timestamp, contrary to the task of activity localization~\cite{caba2015activitynet}, where activities are delimited with start and stop timestamps.
We assess the action spotting performance of an algorithm with the Average-mAP metric, defined as follows. A predicted action spot is positive if it falls within a given tolerance $\delta$ of a ground-truth timestamp from the same class. The Average Precision (AP) based on PR curves is computed and then averaged over the classes (mAP), after which the Average-mAP is obtained as the AUC of the mAP computed at different tolerances $\delta$
ranging from 5 to 60 seconds.
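For a single class and a single tolerance $\delta$, the matching underlying this metric can be sketched as follows; the greedy one-to-one matching by decreasing confidence is one common convention, and the official evaluation code should be preferred for reported numbers.
\begin{lstlisting}[language=Python, caption={Sketch: precision-recall points for one class at one tolerance.}]
import numpy as np

def spotting_pr(pred_times, pred_scores, gt_times, delta=30.0):
    """Greedily match predicted spots to ground-truth timestamps within +/- delta seconds."""
    pred_times = np.asarray(pred_times, dtype=float)
    gt_times = np.asarray(gt_times, dtype=float)
    order = np.argsort(pred_scores)[::-1]          # most confident predictions first
    matched, tp = set(), []
    for i in order:
        dists = np.abs(gt_times - pred_times[i])
        cands = [j for j in np.argsort(dists) if dists[j] <= delta and j not in matched]
        if cands:
            matched.add(cands[0])
            tp.append(1)
        else:
            tp.append(0)
    tp = np.cumsum(tp)
    precision = tp / (np.arange(len(tp)) + 1)
    recall = tp / max(len(gt_times), 1)
    return precision, recall
\end{lstlisting}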
\mysection{Camera shot segmentation and boundary detection.}
\label{subsec-Camera shot segmentation and boundary detection}
Selecting the proper camera at the right moment is the crucial task of the broadcast producer to trigger the strongest emotions in the viewer during a live game. Hence, identifying camera shots not only provides a better understanding of the editing process but is also a major step towards automating the broadcast production. This task naturally generalizes to any sports broadcast but could also prove interesting for \eg cultural events or movie summarization.
The task of camera shot temporal segmentation consists in classifying each frame of the videos among our 13 camera types. We use the
mIoU metric to evaluate this temporal segmentation. Concurrently, we define a task of camera shot boundary detection, where the objective is to find the timestamps of the transitions between the camera shots. For the evaluation, we use the spotting mAP metric with a single tolerance $\delta$ of 1 second as transitions
are precisely localized and happen within short durations.
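The frame-wise mIoU used for the segmentation task can be computed as sketched below; averaging only over the classes present in either sequence is one common convention, assumed here.
\begin{lstlisting}[language=Python, caption={Sketch: frame-wise mean IoU over camera classes.}]
import numpy as np

def mean_iou(pred, gt, n_classes=13):
    """pred, gt: integer label arrays of equal length (one camera class per frame)."""
    pred, gt = np.asarray(pred), np.asarray(gt)
    ious = []
    for c in range(n_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union > 0:                        # skip classes absent from both sequences
            ious.append(inter / union)
    return float(np.mean(ious)) if ious else 0.0
\end{lstlisting}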
\mysection{Replay grounding.}
Our novel replay grounding task consists in retrieving the timestamp of the action shown in a given replay shot within the whole game.
Grounding a replay with its action confers it an estimation of importance, which is otherwise difficult to assess. Derived applications may be further built on top of this task, \eg automatic highlight production, as the most replayed actions are usually the most relevant.
Linking broadcast editing to meaningful content within the video not only bridges our previous tasks, but it can also be applied to any domain focusing on salient moments retrieval.
We use the Average-AP metric to assess the performances on our replay grounding task, computed as described above for the spotting task but without the need of averaging over the classes.
We choose this metric as replay grounding can be seen as class-independent action spotting conditioned by the replay sequence.
\section{SoccerNet-v2 Dataset}
\label{sec:Dataset}
\input{table/dataset}
\mysection{Overview.} We compare SoccerNet-v2 with the relevant video understanding datasets that propose localization tasks in Table~\ref{tab:DatasetsComparison}.
SoccerNet-v2 stands out as one of the largest overall, and the largest for soccer videos by far.
In particular, we manually annotated \texttildelow 300k timestamps, temporally anchored in the 764 hours of video of the 500 complete games of SoccerNet~\cite{Giancola_2018_CVPR_Workshops}\xspace.
The vocabulary of our proposed classes is centered on the soccer game and soccer broadcast domains, hence it is well-defined and consistent across games. Such regularity makes SoccerNet-v2 the largest dataset in terms of event instances per class, thus enabling deep supervised learning at scale.
As shown in Figure~\ref{fig:datasets}, SoccerNet-v2 provides the most dense annotations w.r.t. its soccer counterparts, and rivals the largest fine-grained generic datasets in density and size.
We hired 33 annotators for the annotation process, all frequent observers of soccer, for a total of \texttildelow 1600 hours of annotations. The quality of the annotations was validated by observing a large consensus between our annotators on identical games at the start and at the end of their annotation process. More details are provided in supplementary material.
The annotations are divided in 3 categories: actions, camera shots, and replays, discussed hereafter.
\mysection{Actions.} We identify 17 types of actions from the most important in soccer, listed in Figure~\ref{fig:all-histos1}.
Following~\cite{Giancola_2018_CVPR_Workshops}, we annotate each action of the 500 games of SoccerNet with a single timestamp, defined by well-established soccer rules. For instance, for a corner, we annotate the last frame of the shot, \ie showing the last contact between the player's foot and the ball. We provide the annotation guidelines in supplementary material.
In total, we annotate 110,458 actions, on average 221 actions per game, or 1 action every 25 seconds.
SoccerNet-v2 is a significant extension of the actions of SoccerNet~\cite{Giancola_2018_CVPR_Workshops}\xspace, with 16x more timestamps and 14 extra classes.
We represent the distribution of the actions in Figure~\ref{fig:all-histos1}.
The natural imbalance of the data corresponds to the distribution of real-life broadcasts, making SoccerNet-v2 valuable for generalization and industrial deployment.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/ComparisonDataset-v2}
\caption{\textbf{Datasets comparison.} The areas of the tiles represent the number of annotations per dataset. SoccerNet-v2 (SN-v2) extends the initial SoccerNet~\cite{Giancola_2018_CVPR_Workshops}\xspace (SN-v1) with more annotations and tasks, and it focuses on untrimmed broadcast soccer videos.}
\label{fig:datasets}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{figures/histogram_actions_sns_log_2.pdf}
\caption{\textbf{SoccerNet-v2 actions.} Log-scale distribution of our \textcolor{newjacobblue}{\textbf{shown}} and \textcolor{newjacoborange}{\textbf{unshown}} actions among the 17 classes, and \textcolor{newanthogray}{\textbf{proportion}} that each class represents. The dataset is unbalanced, with some of the most important actions in the less abundant classes.
}
\label{fig:all-histos1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{figures/histogram_cameras_sns_log_3.pdf}
\caption{
\textbf{Camera shots.} Log-scale distribution of our camera shot timestamps among the classes
in terms of instances (top) and video duration (bottom), separated in \textcolor{newjacoborange}{\textbf{replays}} and \textcolor{newjacobblue}{\textbf{live or other}} sequences, and \textcolor{newanthogray}{\textbf{percentage}} of timestamps that each bar represents.
}
\label{fig:all-histos2}
\end{figure}
Additionally, we enrich each timestamp with a novel binary visibility tag that states whether the associated action is \emph{shown} in the broadcast video or \emph{unshown}, in which case the action must be inferred by the viewer. For example, this happens when the producer shows a replay of a shot off target that lasts past the clearance shot of the goalkeeper: the viewer knows that the clearance has been made even though it was not shown on the TV broadcast. Spotting unshown actions is challenging because it requires a fine understanding of the game, beyond frame-based analysis, as it forces algorithms to consider the temporal context around the actions.
We annotate the timestamps of unshown actions
with the best possible temporal interpolation.
They represent 18\% of the actions (see Figure~\ref{fig:all-histos1}), hence form a large set of actions whose spotting requires a sharp understanding of soccer.
Finally, to remain consistent with SoccerNet~\cite{Giancola_2018_CVPR_Workshops}\xspace, we annotate the team that performs each action as either \emph{home} or \emph{away}, but leave further analysis on that regard for future work.
\mysection{Cameras.} We annotate a total of 158,493 camera change timestamps, 116,687 of which are comprehensive for a subset of 200 games, the others delimiting replay shots in the remaining games (see hereafter). For the fully annotated games, this represents an average of 583 camera transitions per game, or 1 transition every 9 seconds. Those timestamps contain the type of camera shot that has been shown, among the 13 most common possibilities listed in Figure~\ref{fig:all-histos2}.
We display their distribution in terms of number of occurrences and total duration. The class imbalance underlines a difficulty of this dataset, yet it represents a distribution consistent with broadcasts used in practical applications.
Besides, different types of transitions occur from one camera shot to the next, which we append to each timestamp. These can be abrupt changes between two cameras (71.4\%), fading transitions between the frames (14.2\%), or logo transitions (14.2\%).
Logos constitute an unusual type of transition compared with abrupt or fading, which are common in videos in the wild or in movies, yet they are widely used in sports broadcasts. They pose an interesting camera shot detection challenge, as each logo is different and algorithms must adapt to a wide variety thereof. For logo and fading camera changes, we locate the timestamps as precisely as possible at the middle of the transition, while we annotate the last frame before an abrupt change.
Eventually, we indicate whether the camera shot happens live (86.7\%) with respect to the game, or shows a replay of an action (10.9\%), or another type of replay (2.4\%). The distribution in Figure~\ref{fig:all-histos2} provides per-class proportions of replay camera shots and groups other replays and live shots.
\mysection{Replays.} For the 500 games of SoccerNet~\cite{Giancola_2018_CVPR_Workshops}\xspace, we bound each video shot showing a replay of an action with two timestamps, annotated in the same way as for the camera shot changes. For each replay shot, we reference the timestamp of the replayed action. When several replays of the same action are shown consecutively with different views, we annotate all the replay shots separately. This gives one replay shot per type of view, all of which are linked to the same action. In total, 32,932 replay shots are associated with their corresponding action, which represents an average of 66 replay shots per game, for an average replay shot duration of 6.8 seconds.
Retrieving a replayed action is challenging because typically, 1 to 3 replays of the action are shown from different viewpoints hardly ever found in the original live broadcast video. This encourages a more general video understanding rather than an exact frame comparison.
\section{Conclusion}
\label{sec:Conclusion}
We release SoccerNet-v2, the largest soccer-related set of annotations, anchored on top of the original SoccerNet~\cite{Giancola_2018_CVPR_Workshops}\xspace's 500 untrimmed broadcast games.
With our \texttildelow 300k annotations, we further extend the tasks of action spotting, camera shot segmentation and boundary detection, and we define the novel task of replay grounding.
We propose and discuss several benchmark results for all of them. In addition, we provide codes to reproduce our experiments, and we will host public leaderboards to drive research in this field.
With SoccerNet-v2, we aim at pushing computer vision closer to automatic solutions for holistic broadcast soccer video understanding, and believe that it is the ideal dataset to explore new tasks and methods for more generic video understanding and production tasks.
\section{Related Work}
\label{sec:SOTA}
\mysection{Video understanding datasets.}
Many video datasets have been made available in the last decade, proposing challenging tasks around
human action understanding~\cite{gorelick2007actions,schuldt2004recognizing}, with applications in
movies~\cite{Kuehne11,marszalek2009actions,sigurdsson2016hollywood},
sports~\cite{KarpathyCVPR14,niebles2010modeling,rodriguez2008action},
cooking~\cite{damen2020epic,kuehne2014language,rohrbach2012database}, and large-scale generic video classification in the wild~\cite{abu2016youtube,kay2017kinetics,soomro2012ucf101}.
While early efforts focused on trimmed video classification, more recent datasets provide fine-grained annotations of longer videos at a
temporal~\cite{caba2015activitynet,THUMOS14,sigurdsson2016hollywood,yeung2018every,zhao2019hacs} or spatio-temporal level~\cite{gu2018ava,mettes2016spot,rodriguez2008action,weinzaepfel2016human}.
In particular, THUMOS14~\cite{THUMOS14} is the first benchmark for temporal activity localization, introducing 413 untrimmed videos, for a total of 24 hours and 6,363 temporally anchored activities, originally split into 20 classes then extended to 65 classes in MultiTHUMOS~\cite{yeung2018every}.
ActivityNet~\cite{caba2015activitynet} gathers the first large-scale dataset for activity understanding, with 849 hours of untrimmed videos, temporally annotated with 30,791 anchored activities split into 200 classes. Every year, an ActivityNet competition is hosted highlighting a variety of tasks with hundreds of submissions~\cite{ghanem2018activitynet,ghanem2017activitynet}.
More recent datasets consider videos at an atomic level, with fine-grained temporal annotations from short snippets of longer videos~\cite{gu2018ava,monfort2019moments,zhao2019hacs}.
In particular, Multi-Moments in Time~\cite{monfort2019multi} provides 2M action labels for 1M short clips of 3s, classified into 313 annotated action classes.
Something-Something~\cite{goyal2017something} collects more than 100k videos annotated with 147 classes of daily human-object interactions.
Breakfast~\cite{kuehne2014language} and MPII-Cooking~2 \cite{rohrbach2016recognizing} provide annotations for individual steps of various cooking activities.
EPIC-KITCHENS \cite{damen2020epic} scales up those approaches by providing 55 hours of cooking footage, annotated with around 40k action clips of 147 different classes.
\mysection{Soccer-related datasets.}
SoccerNet~\cite{Giancola_2018_CVPR_Workshops}\xspace is the first large-scale soccer video dataset, with 500 games from major European leagues and 6k annotations.
It provides complete games rather than trimmed segments, with a distribution faithful to
official TV broadcasts.
However, SoccerNet only focuses on 3 action classes (goals, cards, substitutions), making the task too simplistic and of moderate interest.
SoccerDB~\cite{Jiang2020SoccerDB} extends SoccerNet by adding 7 classes and proposes bounding box annotations for the players.
SoccerDB is composed of about half of SoccerNet's videos, in addition to $76$ games from the Chinese Super League and the 2018 World Cup. However, SoccerDB lacks a complete set of possible soccer actions and editing annotations, which are needed for a full understanding of the production of TV broadcasts.
Pappalardo \etal~\cite{Pappalardo2019Apublic} released a large-scale dataset of soccer events, localized in time and space with tags and instance information. However, they focus on player and team statistics rather than video understanding, as they do not release any broadcast video.
We address the limitations of these datasets by annotating all the interesting actions that occur during the 500 SoccerNet games. Also, we provide valuable annotations for video editing, and we connect camera shots with actions to allow for salient moments retrieval
through our novel replay grounding task.
\mysection{Action spotting.} Giancola \etal~\cite{Giancola_2018_CVPR_Workshops} define the task of action spotting in SoccerNet as finding the anchors of soccer events in a video and provide baselines based on temporal pooling. Rongved \etal~\cite{rongved-ism2020} focus on applying a 3D ResNet directly to the video frames in a 5-second sliding window fashion. Vanderplaetse \etal~\cite{Vanderplaetse2020Improved} combine visual and audio features in a multimodal approach. Cioppa \etal~\cite{cioppa2020context} introduce a context-aware loss function to model the temporal context surrounding the actions. Similarly, Vats \etal~\cite{vats2020event} use a multi-tower CNN to process information at various temporal scales to account for the uncertainty of the action locations.
We build upon those works to provide benchmark results on our extended action spotting task.
\mysection{Camera shot segmentation and boundary detection.}
Camera shot boundaries are typically detected by differences between consecutive frame features, using either pixels~\cite{Boreczky1996Comparison}, histograms~\cite{otsuji1994projection}, motion~\cite{zabih1995feature} or deep features~\cite{Abdulhussain2018MethodsAC}.
In soccer, Hu \etal~\cite{Hu2007Enhanced} combine motion vectors and a filtration scheme to improve color-based methods.
Lefèvre \etal~\cite{Lefvre2007EfficientAR} consider adaptive thresholds and features from a hue-saturation color space.
Jackman~\cite{Jackman2019FootballSD} uses popular 2D and 3D CNNs but detects many false positives, as it appears difficult to efficiently handle the processing of the temporal domain. However, these works are fine-tuned for only a few games. Regarding the classification of the camera type, Tong \etal~\cite{Tong2008Shot} first detect logos to select non-replay camera shots, further classified as long, medium, close-up or out-of-field views based on color and texture features. Conversely, Wang \etal~\cite{Wang2005SoccerRD} classify camera shots for the task of replay detection. Sarkar \etal~\cite{Sarkar2020Shot} classify each frame in the classes of~\cite{Tong2008Shot} based on field features and player dimensions. Kolekar \etal~\cite{Kolekar2015Bayesian} use audio features to detect exciting moments, further classified in camera shot classes for highlight generation.
In this paper, we offer a unified and enlarged corpus of annotations that allows for a thorough understanding of the video editing process.
\mysection{Replay grounding.}
In soccer, multiple works focus on detecting replays~\cite{Sarkar2020Shot, Wang2005SoccerRD,xu2011robust,yang2008statistical,zhao2006highlight},
using either logo transitions or slow-motion detection, but grounding the replays with their action in the broadcast has been mostly overlooked.
Babaguchi \etal~\cite{babaguchi2000linking} tackle replay linking in American football but use a heuristic approach that can hardly generalize to other sports.
Ouyang \etal~\cite{ouyang2005replay} introduce a video abstraction task to find similarities between multiple cameras in various sports videos, yet their method requires camera parameters and is tested on a restricted dataset.
Replay grounding can be likened to action similarity retrieval, as in~\cite{hashemi2016view,junejo2008cross} for action recognition. Jain \etal~\cite{jain2020action} use a Siamese structure to compare the features of two actions, and Roy \etal~\cite{roy2018action} also quantify their similarity.
We propose a task of replay grounding to connect replay shots with salient moments of broadcast videos, which could find further uses in action retrieval and highlight production.
\section{Introduction}\label{sec-int}
In this paper, we generalize the results of~\cite{MZh} to a much
larger class of billiards. They are similar to Bunimovich stadium
billiards (see~\cite{B1}), but the semicircles are
replaced by almost arbitrary
curves. That is, those curves are not completely arbitrary, but the
assumptions on them are very mild. An example of such curves is shown
in Figure~\ref{fig0}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=140truemm]{fig0.eps}
\caption{Generalized Bunimovich stadium.}\label{fig0}
\end{center}
\end{figure}
We consider billiard maps (not the flows) for two-dimensional billiard
tables. Thus, the phase space of a billiard is the product of the
boundary of the billiard table and the interval $[-\pi/2,\pi/2]$ of
angles of reflection. This phase space will be denoted as $\mathcal{M}$. We
will use the variables $(r,\phi)$, where $r$ parametrizes the table
boundary by the arc length, and $\phi$ is the angle of reflection.
Those billiards have the natural measure; it is
$c\cos\phi\;dr\;d\phi$, where $c$ is the normalizing constant. This
measure is invariant for the billiard map.
However, we will not be using this measure, but rather investigate our
system as a topological one. The first problem one encounters with
this approach is that the map can be discontinuous, or even not
defined at certain points. In particular, if we want to define
topological entropy of the system, we may use one of several methods,
but we cannot be sure that all of them will give the same result.
To go around this problem, similarly as in~\cite{MZh}, we consider a
compact subset of the phase space, invariant for the billiard map, on
which the map is continuous. Thus, the topological entropy of the
billiard map, no matter how defined, is larger than or equal to the
topological entropy of the map restricted to this subset.
Positive topological entropy is recognized as one of the forms of
chaos. In fact, topological entropy even measures how large this chaos
is. Hence, whenever we prove that the topological entropy is positive,
we can claim that the system is chaotic from the topological point of
view.
We will be using similar methods as in~\cite{MZh}. However, the class
of billiards to which our results can be applied, is much larger. The
class of Bunimovich stadium billiards, up to similarities, depends on
one positive parameter only. Our class is enormously larger, although
we keep the assumption that two parts of the billiard boundary are
parallel segments of straight lines. Nevertheless, some of our proofs
are simpler than those in~\cite{MZh}.
\section{Assumptions}\label{sec-ass}
We will think about the billiard table positioned as in
Figure~\ref{fig0}. Thus, we will use the terms \emph{horizontal,
vertical, lower, upper, left, right}. While we are working with the
billiard map, we will also look at the billiard flow. Namely, we will
consider \emph{trajectory lines}, that is, line segments between two
consecutive reflections from the table boundary. For such a trajectory
line (we consider it really as a line, not a vector) we define its
\emph{argument} (as an argument of a complex number), which is the
angle between the trajectory line and a horizontal line. For
definiteness, we take the angle from $(-\pi/2,\pi/2]$. We will be also
speaking about the arguments of lines in the plane. Moreover, for
$x\in\mathcal{M}$, we define the argument of $x$ as the argument of the
trajectory line joining $x$ with its image.
We will assume that the boundary of billiard table is the union of
four curves, $\Gamma_1$, $\Gamma_2$, $\Gamma_3'$ and $\Gamma_4'$. The
curves $\Gamma_1$ and $\Gamma_2$ are horizontal segments of straight
lines, and $\Gamma_2$ is obtained from $\Gamma_1$ by a vertical
translation. The curve $\Gamma_3'$ joins the left endpoints of
$\Gamma_1$ and $\Gamma_2$, while $\Gamma_4'$ joins the right endpoints
of $\Gamma_1$ and $\Gamma_2$ (see Figure~\ref{fig0}). We will consider
all four curves with endpoints, so they are compact.
\begin{definition} For $\varepsilon\ge 0$, we will call a point $p\in\Gamma_i'$ ($i\in\{3,4\}$)
\emph{$\varepsilon$-free} if any forward trajectory of the flow (here we mean
the full forward trajectory, not just the trajectory line), beginning
at $p$ with a trajectory line with argument whose absolute value is
less than or equal to $\varepsilon$, does not collide with $\Gamma_i'$ before
it collides with $\Gamma_{7-i}'$.
Furthermore, we will call a subarc $\Gamma_i\subset\Gamma_i'$
\emph{$\varepsilon$-free} (see Figure~\ref{fig2}) if:
\begin{enumerate}[label=(\alph*)]
\item $\Gamma_i$ is of class $C^1$;
\item Every point of $\Gamma_i$ is $\varepsilon$-free;
\item There are points $p_{i+},p_{i-}\in\Gamma_i$ such that the
argument of the line normal to $\Gamma_i$ is larger than or equal to
$\varepsilon$ at $p_{i+}$ and less than or equal to $-\varepsilon$ at $p_{i-}$
(see Figure~\ref{fig2});
\item $\Gamma_i$ is disjoint from $\Gamma_1\cup\Gamma_2$.
\end{enumerate}
Clearly, if $\Gamma_i$ is $\varepsilon$-free then it is also $\delta$-free
for all $\delta\in(0,\varepsilon)$.
\end{definition}
Our last assumption is that there is $\varepsilon>0$ and $\varepsilon$-free subarcs
$\Gamma_i\subset\Gamma_i'$ for $i=3,4$, such that
$\Gamma_3\cup\Gamma_4$ is disjoint from $\Gamma_1\cup\Gamma_2$. We
will denote the class of billiard tables satisfying all those
assumptions by $\mathcal{H}(\varepsilon)$.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=140truemm]{fig2.eps}
\caption{Curves $\Gamma_i$, $i=1,2,3,4$.}\label{fig2}
\end{center}
\end{figure}
Observe that there are two simple situations when we know that there
is $\varepsilon>0$ such that $\Gamma_i'$ has an $\varepsilon$-free subarc. One is
when there is a $0$-free point $p_i\in\Gamma_i'$ such that there is a
neighborhood of $p_i$ where $\Gamma_i$ is of class $C^1$ and the
curvature of $\Gamma_i$ at $p_i$ exists and is non-zero (see
Figure~\ref{fig1}). The other one is when $\Gamma_i'$ is the graph of
a non-constant function $x=f(y)$ of class $C^1$ (then we take a
neighborhood of a point where $f$ attains its extremum; this
neighborhood may be large if the extremum is attained on an interval),
like $\Gamma_3'$ (but not $\Gamma_4'$) in Figure~\ref{fig0}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=140truemm]{fig1b.eps}
\caption{Points $p_3$ and $p_4$.}\label{fig1}
\end{center}
\end{figure}
We forget about the other parts of the curves $\Gamma_i'$ and look
only at $\Gamma_i$, $i=1,2,3,4$ (see Figure~\ref{fig2}).
Let us mention that since we will be using only those four pieces of
the boundary of the billiard table, it does not matter whether the
rest of the boundary is smooth or not. If it is not smooth, we can
include it (times $[-\pi/2,\pi/2]$) into the set of singular points,
where the billiard map is not defined.
\section{Coding}\label{sec-cod}
We consider a billiard table from the class $\mathcal{H}(\varepsilon)$. Since
transforming the table by homothety does not change the entropy, we
may assume that the distance between $\Gamma_1$ and $\Gamma_2$ is 1.
Now we can introduce a new characteristic of our billiard table. We
will say that a billiard table from the class $\mathcal{H}(\varepsilon)$ is in the
class $\mathcal{H}(\varepsilon,\ell)$ if the horizontal distance between $\Gamma_3$
and $\Gamma_4$ is at least $\ell$. We can think about $\ell$ as a big
number (it will go to infinity).
We start with a trivial geometrical fact, that follows immediately
from the rule of reflection. We include the assumption
that the absolute values of the arguments are smaller than $\pi/6$
in order to be sure that the absolute value of the argument of $T_2$
is smaller than $\pi/2$.
\begin{lemma}\label{l-trivial}
If $T_1$ and $T_2$ are incoming and outgoing parts of a trajectory
reflecting at $q$ and the argument of the line normal to the boundary
of the billiard at $q$ is $\alpha$, and $|\alpha|,|\arg(T_1)|<\pi/6$,
then $\arg(T_2)=2\alpha-\arg(T_1)$.
\end{lemma}
We consider only trajectories that reflect from the curves $\Gamma_i$,
$i=1,2,3,4$. In order to have control over this subsystem, we fix an
integer $N>1$ and denote by ${\mathcal K}_{\ell,N}$ the space of points whose
(discrete) trajectories go only through $\Gamma_i$, $i=1,2,3,4$ and
have no $N+1$ consecutive collisions with the straight segments.
We can unfold the billiard table by using reflections from the
straight segments (see Figure~\ref{fig3}). The liftings of
trajectories (of the flow) consist of segments between points of
liftings of $\Gamma_3$ and $\Gamma_4$. In ${\mathcal K}_{\ell,N}$ they go at most $N$
levels up or down.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=140truemm]{fig3.eps}
\caption{Five levels of the unfolding. Only $\Gamma_3$ and $\Gamma_4$
are shown instead of $\Gamma_3'$ and $\Gamma_4'$.}\label{fig3}
\end{center}
\end{figure}
Now for a moment we start working on the lifted billiard. That is, we
consider only $\Gamma_3$ and $\Gamma_4$, but at all levels, as pieces
of the boundary from which the trajectories of the flow can reflect.
We denote those pieces by $\Gamma_{i,k}$, where $i\in\{3,4\}$ and
$k\in{\mathbb Z}$. Clearly, flow trajectories from some points $(r,\phi)$ will
not have more collisions, so the lifted billiard map $F$ will be not
defined at such points. We denote by $\widetilde{\mathcal{M}}$ the product of the union
of all sets $\Gamma_{i,k}$ and the interval $[-\pi/2,\pi/2]$.
Now we specify how large $\ell$ should be for given $N,\varepsilon$ in order
to get nice properties of the billiard map restricted to ${\mathcal K}_{\ell,N}$.
Assume that our billiard table belongs to $\mathcal{H}(\varepsilon,\ell)$ and fix
$i\in\{3,4\}$, $k\in{\mathbb Z}$. Call a continuous map $\gamma:[a,b]\to\widetilde{\mathcal{M}}$,
given by
\[
\gamma(t)=(\gamma_r(t),\gamma_\phi(t)),
\]
an \emph{$(i,k,\varepsilon)$-curve} if $\gamma_r([a,b])=\Gamma_{i,k}$ and for
every $t\in[a,b]$ the absolute value of the argument of
the trajectory line \emph{incoming to} $\gamma(t)$ is
at most $\varepsilon$. We can think of $\gamma$ as a bundle of trajectories
of a flow incoming to $\Gamma_{i,k}$. In order to be able to use
Lemma~\ref{l-trivial}, we will always assume that $\varepsilon<\pi/6$.
\begin{lemma}\label{angles}
Assume that the billiard table belongs to $\mathcal{H}(\varepsilon,\ell)$ and fix
$N\ge 0$, $i\in\{3,4\}$, $k\in{\mathbb Z}$, and $j\in\{-N,-N+1,\dots,N-1,N\}$.
Assume that
\begin{equation}\label{e-1}
\ell\ge\frac{N+1}{\tan\varepsilon}
\end{equation}
Then every $(i,k,\varepsilon)$-curve $\gamma$ has a subcurve whose image
under $F$ (that is, $F\circ\gamma|_{[a',b']}$ for some subinterval
$[a',b']\subset[a,b]$) is a $(7-i,k+j,\varepsilon)$-curve.
\end{lemma}
\begin{proof}
There are points $c_-,c_+\in[a,b]$ such that $\gamma_r(c_-)$ is a
lifting of $p_{i-}$ and $\gamma_r(c_+)$ is a lifting of $p_{i+}$.
Then, by Lemma~\ref{l-trivial}, the lifted trajectory line outgoing
from $\gamma(c_-)$ (respectively, $\gamma(c_+)$) has argument smaller
than $-\varepsilon$ (respectively, larger than $\varepsilon$).
Since the direction of the line normal to $\Gamma_{i,k}$
at the point $\gamma_r(t)$ varies continuously with $t$, the
argument of the lifted trajectory line outgoing from $\gamma(t)$
also varies continuously with $t$. Therefore, there is a subinterval
$[a'',b'']\subset [a,b]$ such that at one of the points $a'',b''$
this argument is $-\varepsilon$, at the other one is $\varepsilon$, and in between
is in $[-\varepsilon,\varepsilon]$. When the bundle of lifted trajectory lines
starting at $\gamma([a'',b''])$ reaches liftings of $\Gamma_{7-i}$,
it collides with all points of $\Gamma_{7-i,k+j}$ whenever
$|j|+1\le\ell\tan\varepsilon$. By~\eqref{e-1}, this includes all $j$ with
$|j|\le N$. Therefore, there is a subinterval
$[a',b']\subset[a'',b'']$ such that
$(F\circ\gamma)_r([a',b'])=\Gamma_{7-i,k+j}$. The arguments of the
lifted trajectory lines incoming to $(F\circ\gamma)([a',b'])$ are in
$[-\varepsilon,\varepsilon]$, so we get a $(7-i,k+j,\varepsilon)$-curve.
\end{proof}
Using this lemma inductively we get immediately the next lemma.
\begin{lemma}\label{l-iti}
Assume that the billiard table belongs to $\mathcal{H}(\varepsilon,\ell)$ and fix
$N\ge 0$ such that~\eqref{e-1} is satisfied. Then for every finite
sequence
\[
(k_{-j},\dots,k_{-1},k_0,k_1,\dots,k_j)
\]
of integers with absolute values at most $N$ there is a trajectory
piece in the lifted billiard going between lifting of $\Gamma_3$ and
$\Gamma_4$ with the differences of levels
$k_{-j},\dots,k_{-1},k_0,k_1,\dots,k_j$.
\end{lemma}
Note that in the above lemma we are talking about trajectory pieces of
length $2j+1$, without requiring that those pieces can be extended
backward or forward to a full trajectory.
\begin{proposition}\label{p-iti}
Under the assumption of Lemma~\ref{l-iti}, for every two-sided
sequence
\[
(\dots,k_{-2},k_{-1},k_0,k_1,k_2,\dots)
\]
of integers with absolute values at most $N$ there is a trajectory in
the lifted billiard going between liftings of $\Gamma_3$ and
$\Gamma_4$ with the differences of levels
$\dots,k_{-2},k_{-1},k_0,k_1,k_2,\dots$.
\end{proposition}
\begin{proof}
For every finite sequence $(k_{-j},\dots,k_{-1},k_0,k_1,\dots,k_j)$
the set of points of $\Gamma_3\times[-\pi/2,\pi/2]$ or
$\Gamma_4\times[-\pi/2,\pi/2]$ whose trajectories from time $-j$ to
$j$ exist and satisfy Lemma~\ref{l-iti} is compact and nonempty. As
$j$ goes to infinity, we get a nested sequence of compact sets. Its
intersection is the set of points whose trajectories behave in the way
we demand, and it is nonempty.
\end{proof}
Consider the following subshift of finite type
$(\Sigma_{\ell,N},\sigma)$. The states are
\[
-N,-N+1,\dots,-1,0,1,\dots,N-1,N,
\]
and the transitions are: from 0 to 0, 1 and $-1$, from $i$ to $i+1$
and 0 if $1\le i\le N-1$, from $N$ to 0, from $-i$ to $-i-1$ and 0 if
$1\le i\le N-1$, and from $-N$ to 0. Each trajectory of a point from
${\mathcal K}_{\ell,N}$ can be coded by assigning the symbol 0 to
$\Gamma_3\cup\Gamma_4$ and for the parts between two zeros either
$1,2,\dots,j$ if the the first point is in $\Gamma_1$, or
$-1,-2,\dots,-j$ if the first point is in $\Gamma_2$. This defines a
map from ${\mathcal K}_{\ell,N}$ to $\Sigma_{\ell,N}$. This map is continuous, because
the preimage of every cylinder is open (this follows immediately from
the fact that the straight pieces of our trajectories of the billiard
flow intersect the arcs $\Gamma_i$, $i=1,2,3,4$, only at the endpoints
of those pieces, and that the arcs are disjoint). It is a surjection
by Proposition~\ref{p-iti}. Therefore it is a semiconjugacy, and
therefore, the topological entropy of the billiard map restricted to
${\mathcal K}_{\ell,N}$ is larger than or equal to the topological entropy of
$(\Sigma_{\ell,N},\sigma)$.
\section{Computation of topological entropy}\label{sec-cote}
In the preceding section we obtained a subshift of finite type. Now
we have to compute its topological entropy. If the alphabet
of a subshift of finite type is $\{1,2,\dots,n\}$, then we
can write the \emph{adjacency matrix} $M=(m_{ij})_{i,j=1}^n$, where
$m_{ij}=1$ if there is a transition from $i$ to $j$ and $m_{ij}=0$
otherwise. Then the topological entropy of our subshift is the
logarithm of the spectral radius of $M$ (see~\cite{K, ALM}).
In the case of large, but not too complicated, matrices, in order to
compute the spectral radius one can use the \emph{rome method}
(see~\cite{BGMY, ALM}). For the adjacency matrices of
$(\Sigma_{\ell,N},\sigma)$ this method is especially simple. Namely,
if we look at the paths given by transitions, we see that 0 is a rome:
all paths lead to it. Then we only have to identify the lengths of all
paths from 0 to 0 that do not go through 0 except at the beginning and
the end. The spectral radius of the adjacency matrix is then the
largest zero of the function $\sum x^{-p_i}-1$, where the sum is over
all such paths and $p_i$ is the length of the $i$-th path.
\begin{lemma}\label{l-ent}
Topological entropy of the system $(\Sigma_{\ell,N},\sigma)$ is the
logarithm of the largest root of the equation
\begin{equation}\label{eq0}
x^2-2x-1=-2x^{-N}.
\end{equation}
\end{lemma}
\begin{proof}
The paths that we mentioned before the lemma, are: one path of length
1 (from 0 directly to itself), and two paths of length $2,3,\dots,N+1$
each. Therefore, our entropy is the logarithm of the largest zero of
the function $2(x^{-(N+1)}+\dots+x^{-3}+x^{-2})+x^{-1}-1$. We have
\[
x(1-x)\big(2(x^{-(N+1)}+\dots +x^{-3}+x^{-2})+x^{-1}-1\big)=
(x^2-2x-1)+2x^{-N},
\]
so our entropy is the logarithm of the largest root of
equation~\eqref{eq0}.
\end{proof}
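For instance, for $N=2$ the paths from $0$ to itself have lengths $1$, $2$, $2$, $3$ and $3$, so the topological entropy of $(\Sigma_{\ell,2},\sigma)$ is the logarithm of the largest zero of $2(x^{-3}+x^{-2})+x^{-1}-1$, that is, of the largest root of the equation $x^3-x^2-2x-2=0$, which is approximately $2.27$.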
\begin{corollary}\label{c-ent}
Assume that the billiard table belongs to $\mathcal{H}(\varepsilon,\ell)$ and fix
$N\ge 0$ such that~\eqref{e-1} is satisfied. Then the topological
entropy of the billiard map restricted to ${\mathcal K}_{\ell,N}$ is larger than or
equal to the logarithm of the largest root of equation~\eqref{eq0}.
\end{corollary}
A particular case of this corollary gives us a sufficient condition
for positive topological entropy. Namely, notice that the largest root
of the equation $x^2-2x-1=-2x^{-1}$ is $2$.
\begin{corollary}\label{c-ent1}
Assume that the billiard table belongs to $\mathcal{H}(\varepsilon,\ell)$ and
$\ell\tan\varepsilon\ge 2$. Then the topological entropy of the billiard map
is at least $\log2$, so the map is chaotic in the topological sense.
\end{corollary}
{
It is interesting how this estimate works for the classical Bunimovich
stadium billiard. In fact, we will improve the estimate a little compared to
the above Corollary.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=50truemm]{fig10.eps}
\caption{Computations for the stadium billiard.}\label{fig10}
\end{center}
\end{figure}
\begin{proposition}\label{st}
If the rectangular part of a stadium has the length/width ratio larger
than $\sqrt3\approx 1.732$ (see Figure~\ref{fig4}), the billiard map
has topological entropy at least $\log 2$.
\end{proposition}
\begin{proof}
We can take $\varepsilon$ as close to $\pi/6$ as we want (see
Figure~\ref{fig10}), so we get the assumption in the corollary
$\ell>2\sqrt3$. However, the factor 2 (in general, $N+1$
in~\eqref{e-1}) was taken to get an estimate that works for all
possible choices of $\Gamma_i$, $i=3,4$. For our concrete choice it is
possible to replace it by the vertical size of
$\Gamma_{i,0}\cup\Gamma_{i,1}$ (or $\Gamma_{i,0}\cup\Gamma_{i,-1}$,
but it is the same in our case). This number is not 2, but $\frac32$.
Thus, we really get $\ell>\frac32\sqrt3$. If $\ell'$ is the length of
the rectangular part of the stadium, then
$\ell=\ell'+2\cdot\frac{\sqrt3}4=\ell'+\frac12\sqrt3$.
This gives us $\ell'>\sqrt3$.
\end{proof}
}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=140truemm]{fig4c.eps}
\caption{Stadium billiard with topological entropy at least
$\log2$.}\label{fig4}
\end{center}
\end{figure}
Now we can prove the main result of this paper.
\begin{theorem}\label{main}
For the billiard tables from the class $\mathcal{H}$
with the shapes of $\Gamma_3$ and $\Gamma_4$ fixed, the lower limit of
the topological entropy of the generalized Bunimovich stadium
billiard, as its length $\ell$ goes to infinity, is at least
$\log(1+\sqrt2)$.
\end{theorem}
\begin{proof}
In view of Corollary~\ref{c-ent} and the fact that the largest root
of the equation $x^2-2x-1=0$ is $1+\sqrt2$, we only have to prove that
the largest root of the equation~\eqref{eq0} converges to the largest
root of the equation $x^2-2x-1=0$ as $N\to\infty$. However, this
follows from the fact that in the neighborhood of $1+\sqrt2$ the
right-hand side of~\eqref{eq0} goes uniformly to 0 as $N\to\infty$.
\end{proof}
\section{Generalized semistadium billiards}
In a similar way we can investigate generalized semistadium billiards.
They are like generalized stadium billiards, but one of the caps
$\Gamma_3',\Gamma_4'$ is a vertical straight line segment. The other
one contains an $\varepsilon$-free subarc. This class contains, in
particular, Bunimovich's Mushroom billiards (see~\cite{B2}), see
Figure~\ref{fig6}. We will be talking about the classes $\mathcal{H}_{1/2}$,
$\mathcal{H}_{1/2}(\varepsilon)$ and $\mathcal{H}_{1/2}(\varepsilon,\ell)$ of billiard tables.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=140truemm]{fig6.eps}
\caption{A mushroom.}\label{fig6}
\end{center}
\end{figure}
When we construct a lifting, we add the reflection from the flat
vertical cap. In such a way we obtain the same picture as in
Section~\ref{sec-cod}, except that there is an additional vertical
line through the middle of the picture, and we have to count the flow
trajectory crossing it as an additional reflection (see
Figure~\ref{fig7}). Note that since we will be working with the lifted
billiard, in the computations we can take $2\ell$ instead of $\ell$.
In particular, inequality~\eqref{e-1} will be now replaced by
\begin{equation}\label{e-2}
\ell\ge\frac{N+1}{2\tan\varepsilon}
\end{equation}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=140truemm]{fig7.eps}
\caption{Unfolding.}\label{fig7}
\end{center}
\end{figure}
Computation of the topological entropy is this time a little more
complicated. We cannot claim that after coding we are obtaining a
subshift of finite type. This is due to the fact that if $\Gamma_i'$
is a vertical segment, we would have to take $\Gamma_i=\Gamma_i'$, and
$\Gamma_i$ would not be disjoint from $\Gamma_1$ and $\Gamma_2$. The
second reason is that the moment when the reflection from the vertical
segment occurs depends on the argument of the trajectory line.
The formula for the topological entropy of the subshift of finite type
comes from counting the number of cylinders of length $n$ and then
taking the exponential growth rate of this number as $n$ goes to
infinity. Here we can try to do exactly the same, but the problem occurs
with the growth rate, since we have additional reflections from the
vertical segment. This means that the cylinders of length $n$ from
Section~\ref{sec-cod} correspond not to time $n$, but to some
larger time. How much larger, depends on the cylinder. However, there
cannot be two consecutive reflections from the vertical segment, so
this time is not larger than $2n$, and by extending the trajectory we
may assume that it is equal to $2n$ (maybe there will be more
cylinders, but we need only a lower estimate). Thus, if the number of
cylinders (which we count in Section~\ref{sec-cod}) of length $n$ is
$a_n$, instead of taking the limit of $\frac1n\log a_n$ we take the
limit of $\frac1{2n}\log a_n$, that is, the half of the limit from
Section~\ref{sec-cod}. In such a way we get the following results.
\begin{proposition}\label{p-ent}
Assume that the billiard table belongs to $\mathcal{H}_{1/2}(\varepsilon,\ell)$ and
fix $N\ge 0$ such that~\eqref{e-2} is satisfied. Then the topological
entropy of the billiard map restricted to ${\mathcal K}_{\ell,N}$ is larger than or
equal to one half of the logarithm of the largest root of
equation~\eqref{eq0}.
\end{proposition}
\begin{proposition}\label{p-ent1}
Assume that the billiard table belongs to $\mathcal{H}_{1/2}(\varepsilon,\ell)$ and
$\ell\tan\varepsilon\ge 1$. Then the topological entropy of the billiard map
is at least $\frac12\log2$, so the map is chaotic in the topological
sense.
\end{proposition}
\begin{theorem}\label{mainn}
For the billiard tables from the class $\mathcal{H}_{1/2}$ with the shape of
$\Gamma_3$ or $\Gamma_4$ (the one that is not the vertical segment)
fixed, the lower limit of the topological entropy of the generalized
Bunimovich stadium billiard, as its length $\ell$ goes to infinity, is
at least $\frac12\log(1+\sqrt2)$.
\end{theorem}
{
We can apply Proposition~\ref{p-ent1} to the Bunimovich mushroom
billiard in order to get entropy at least $\frac12\log2$. As for the
stadium, we need to make some computations, and again, we will make a
slight improvement in the estimates. The interior of the
mushroom billiard consists of a rectangle (the stalk) and a half-disk
(the cap). According to our notation, the stalk is of vertical size 1;
denote its horizontal size by $\ell'$. Moreover, denote the radius of
the cap by $t$.
\begin{proposition}\label{mu}
If $\ell'>\frac12\sqrt{16t^2-1}$ then the topological entropy of the
mushroom billiard is at least $\frac12\log2$.
\end{proposition}
\begin{proof}
Look at Figure~\ref{fig9}, where the largest possible $\varepsilon$ is used.
We have $t\sin\varepsilon=1/4$.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=60truemm]{fig9.eps}
\caption{Computations for a mushroom.}\label{fig9}
\end{center}
\end{figure}
Therefore, $\tan\varepsilon=1/\sqrt{16t^2-1}$. Similarly as for the stadium,
when we use~\eqref{e-2} with $N=1$, we may replace $N+1$ by $\frac32$.
Taking into account that we need a strict inequality, we get
$\ell>\frac34\sqrt{16t^2-1}$. However,
$\ell=\ell'+t\cos\varepsilon=\ell'+\frac14\sqrt{16t^2-1}$, so our condition
is $\ell'>\frac12\sqrt{16t^2-1}$.
\end{proof}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=140truemm]{fig8c.eps}
\caption{A mushroom with topological entropy at least
$\frac12\log2$.}\label{fig8}
\end{center}
\end{figure}
Observe that the assumption of Proposition~\ref{mu} is satisfied if
the length of the stalk is equal to or larger than the diameter of the
cap (see Figure~\ref{fig8}).
}
\subsection{Processing the \acrshort{vista}\xspace Trip Table}\label{sec:1-setup}
The first step in building the activity-based virtual population for Greater Melbourne is to process the raw anonymized \acrfull{vista} 2012-18 data\footnote{\url{https://transport.vic.gov.au/about/data-and-research/vista}} made openly available by the Department of Transport. These data are provided in several \acrfull{csv} files.
For this study we use what is known in the \acrshort{vista}\xspace dataset as the Trip Table (provided in \kk{T\_VISTA1218\_V1.csv}).
The Trip Table contains anonymized data for 174270 trips, representing 49453 persons from 21941 households.
\subsubsection{Understanding \acrshort{vista}\xspace Trip Table data}
Table~\ref{tab:vista-trips-sample} shows a sample of the Trip Table data for a household (id: \kk{Y12H0000104}) of three members (ids: \kk{Y12H0000104P01}, \kk{Y12H0000104P02}, \kk{Y12H0000104P03}).
Here \kk{PERSID} is the person ID number, \kk{ORIGPURP1} is Origin Purpose (Summary), \kk{DESTPURP1} is Purpose at End of Trip (Summary), \kk{STARTIME} is Time of Starting Trip (in minutes, from midnight), \kk{ARRTIME} is Time of Ending Trip (in minutes, from midnight), and \kk{WDTRIPWGT} is Trip weight for an `Average weekday' of the combined 2012-18 dataset, using the Australian Standard Geographical Classification (ASGC)\footnote{\url{https://www.abs.gov.au/websitedbs/D3310114.nsf/home/Australian+Standard+Geographical+Classification+(ASGC)}}.
For brevity, we focus the following discussion on an average weekday, but the technique is applied in precisely the same way for the `Average weekend day' data rows, given by the corresponding \kk{WEJTEWGT} column.
\begin{table}[h]
\centering
\label{tab:vista-trips-sample}
\caption{Trips of an example household \kk{Y12H0000104} from the \acrshort{vista}\xspace Trip Table}
{\footnotesize
\begin{tabular}{|llllll|}
\hline
\kk{PERSID} & \kk{ORIGPURP1} & \kk{DESTPURP1} & \kk{STARTIME} & \kk{ARRTIME} & \kk{WDTRIPWGT}\\
\hline\hline
Y12H0000104P01& At Home & Work Related & 420 & 485 & 83.77\\
Y12H0000104P01& Work Related & Go Home & 990 & 1065 & 83.77\\
Y12H0000104P02& At Home & Work Related & 540 & 555 & 86.51\\
Y12H0000104P02& Work Related & Buy Something& 558 & 565 & 86.51\\
Y12H0000104P02& Buy Something& Go Home & 570 & 575 & 86.51\\
Y12H0000104P02& At Home & Buy Something& 900 & 905 & 86.51\\
Y12H0000104P02& Buy Something& Go Home & 910 & 915 & 86.51\\
Y12H0000104P03& At Home & Work Related & 450 & 480 & 131.96\\
Y12H0000104P03& Work Related & Go Home & 990 & 1020 & 131.96\\
\hline
\end{tabular}
}
\end{table}
Table~\ref{tab:vista-trips-sample} shows the complete set of Trip Table attributes that our algorithm uses to generate \acrshort{vista}\xspace-like activity/trip chains.
An important point to note here is that we completely disregard all geospatial information from the records, and focus only on the sequence of activities and trips.
This is because our intent is to generate location-agnostic \acrshort{vista}\xspace-like activity/trip chains initially, and then in subsequent steps of the algorithm place these activities and trips in the context of the geographical home location assigned to the virtual person.
Also note that no information about the mode of transportation is retained at this stage.
Again, this will be introduced in the context of the home location later in the process.
\begin{figure}[h]
\centering
\input{figs/vista-trip}
\caption{Ordered sequence of activities (circles) and trips (arrows) for anonymous person \kk{Y12H0000104P02} in the \acrshort{vista}\xspace Trip Table}
\label{fig:vista-person-trips}
\end{figure}
Figure~\ref{fig:vista-person-trips} gives a visual representation of the sequence of activities and trips of the example person \kk{Y12H0000104P02} of Table~\ref{tab:vista-trips-sample} with each row of the table represented by a numbered arrow in the figure.
Person \kk{Y12H0000104P02}'s day could be summarised as: left home at 9am (540 minutes past midnight) and 15 minutes later performed a quick work related activity that lasted three minutes (maybe to the local post office?); went back home via a quick seven minute stop to buy something (a morning coffee perhaps?); stayed at home from 9:35am (575 minutes past midnight) to 3pm (900 minutes past midnight); did another quick 15 minute round trip to the shops; then stayed home for the rest of the day.
The trips of sample person \kk{Y12H0000104P02} highlight some important choices that must be made when interpreting the data.
For instance, did the person go to different shops (as we suggest in Figure~\ref{fig:vista-person-trips}) or the same one?
Does the same assumption apply for all kinds of trips? In general, we apply the following rules to multiple trips for the same kind of activity during the day.
\begin{itemize}
\item All trips that start and end at a home related activity (\kk{ORIGPURP1} or \kk{DESTPURP1} contains the string `Home') are assumed to be associated with the same home location.
\item All sub-tours (sequences of activities starting and ending at home, such as the morning and afternoon sub-tours of person \kk{Y12H0000104P02}) that contain multiple work-related trips are assumed to be associated with a single work location, however the work locations between two sub-tours are allowed to be different.
\item All other trips (including shopping related trips) are assumed to be potentially associated with different locations, even if performed within the same sub-tour.
\end{itemize}
\subsubsection{Extracting daily activities from Trip Table}\label{sec:activity-table}
Each row of Table~\ref{tab:vista-trips-sample} gives the start and end time of a single trip, and it is easy to see that the difference between the start time of one trip and the end time of the \textit{preceding} trip of the person is in fact the duration of the activity between those two trips.
The first trip in the chain does not have a preceding trip of course, but the activity preceding it can safely be assumed to have a start time of midnight.
This knowledge can therefore be used to transform the trips-table into a corresponding \textit{activity-table}.
Table~\ref{tab:vista-activities-sample} shows this transformation for the trips of our sample household.
\begin{table}[h]
\centering
\label{tab:vista-activities-sample}
\caption{Activities of example household \kk{Y12H0000104} derived from Trip Table}
{\footnotesize
\begin{tabular}{|lllll|}
\hline
\kk{PERSID} & \kk{ACTIVITY} & \kk{START.TIME} & \kk{END.TIME} & \kk{WDTRIPWGT}\\
\hline\hline
Y12H0000104P01 & At Home & 0 & 420 & 83.77\\
Y12H0000104P01 & Work Related & 485 & 990 & 83.77\\
Y12H0000104P01 & Go Home & 1065 & 1439 & 83.77\\
Y12H0000104P02 & At Home & 0 & 540 & 86.51\\
Y12H0000104P02 & Work Related & 555 & 558 & 86.51\\
Y12H0000104P02 & Buy Something & 565 & 570 & 86.51\\
Y12H0000104P02 & At Home & 575 & 900 & 86.51\\
Y12H0000104P02 & Buy Something & 905 & 910 & 86.51\\
Y12H0000104P02 & Go Home & 915 & 1439 & 86.51\\
Y12H0000104P03 & At Home & 0 & 450 & 131.96\\
Y12H0000104P03 & Work Related & 480 & 990 & 131.96\\
Y12H0000104P03 & Go Home & 1020 & 1439 & 131.96\\
\hline
\end{tabular}
}
\end{table}
The activities table thus produced for the entire Trip Table then provides the raw input that forms the basis for our activity-based plan generation algorithm.
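This transformation is mechanical and can be scripted directly against the Trip Table; the following Python/pandas sketch illustrates one possible implementation, where the column names are those of the Trip Table but the function and variable names are illustrative only.
\begin{verbatim}
import pandas as pd

END_OF_DAY = 1439  # last minute of the day

def trips_to_activities(trips: pd.DataFrame) -> pd.DataFrame:
    """Derive the activity-table from VISTA-style trip records.

    Each activity starts when the preceding trip of the same person ends
    and ends when the next trip starts; the first activity of the day is
    assumed to start at midnight and the last one to end at minute 1439.
    """
    rows = []
    for persid, person in trips.sort_values("STARTIME").groupby("PERSID"):
        prev_end = 0  # midnight
        for _, trip in person.iterrows():
            rows.append({"PERSID": persid,
                         "ACTIVITY": trip["ORIGPURP1"],
                         "START.TIME": prev_end,
                         "END.TIME": trip["STARTIME"],
                         "WDTRIPWGT": trip["WDTRIPWGT"]})
            prev_end = trip["ARRTIME"]
        # the activity after the person's last trip lasts until end of day
        last = person.iloc[-1]
        rows.append({"PERSID": persid,
                     "ACTIVITY": last["DESTPURP1"],
                     "START.TIME": prev_end,
                     "END.TIME": END_OF_DAY,
                     "WDTRIPWGT": last["WDTRIPWGT"]})
    return pd.DataFrame(rows)
\end{verbatim}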
\subsubsection{Simplifying activity labels}
The number of unique activity labels present in the activity-table derived in the previous step is reduced by grouping related labels into simpler tags.
We also rename some labels for clarity.
Table~\ref{tab:vista-labels-replacement} shows the specific text replacements that we perform in the \kk{ACTIVITY} column of our activity-table from the previous step.
The resulting activity-table has $\mathcal{A}$\xspace unique activity types, being: Home, Mode Change, Other, Personal, Pickup/Dropoff/Deliver, Shop, Social/Recreational, Study, With Someone, Work.
\begin{table}[h]
\centering
\caption{Label simplification performed on the Trip Table derived activity names}
\label{tab:vista-labels-replacement}
{\footnotesize
\begin{tabular}{|l|l|}
\hline
\textbf{Original Trip Table label} & \textbf{Replacement label}\\
\hline\hline
At Home ; Go Home ; Unknown Purpose (at start of day) & Home\\
Social ; Recreational & Social/Recreational\\
Pick-up or Drop-off Someone ; Pick-up or Deliver Something & Pickup/Dropoff/Deliver\\
Other Purpose ; Not Stated & Other\\
Personal Business & Personal\\
Work Related & Work\\
Education & Study\\
Buy Something & Shop\\
Change Mode & Mode Change\\
Accompany Someone & With Someone\\
\hline
\end{tabular}
}
\end{table}
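Applying these replacements amounts to a simple lookup over the \kk{ACTIVITY} column of the activity-table; a possible Python sketch is shown below, with the mapping transcribed from Table~\ref{tab:vista-labels-replacement} and the function name being illustrative.
\begin{verbatim}
import pandas as pd

# original Trip Table labels -> simplified labels
RELABEL = {
    "At Home": "Home", "Go Home": "Home",
    "Unknown Purpose (at start of day)": "Home",
    "Social": "Social/Recreational", "Recreational": "Social/Recreational",
    "Pick-up or Drop-off Someone": "Pickup/Dropoff/Deliver",
    "Pick-up or Deliver Something": "Pickup/Dropoff/Deliver",
    "Other Purpose": "Other", "Not Stated": "Other",
    "Personal Business": "Personal",
    "Work Related": "Work",
    "Education": "Study",
    "Buy Something": "Shop",
    "Change Mode": "Mode Change",
    "Accompany Someone": "With Someone",
}

def simplify_labels(activities: pd.DataFrame) -> pd.DataFrame:
    """Replace the ACTIVITY labels with their simplified equivalents."""
    out = activities.copy()
    out["ACTIVITY"] = out["ACTIVITY"].replace(RELABEL)
    return out
\end{verbatim}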
\subsubsection{Calculating activity start/end time distributions in $\mathcal{T}$\xspace discrete time bins}
The final step in the \acrshort{vista}\xspace data processing is to calculate the start and end time distributions for each activity type throughout the day--where the day is split into $\mathcal{T}$\xspace discrete time bins of equal size.
The parameter $\mathcal{T}$\xspace gives a way of easily configuring the desired precision of the daily plan generation step of the algorithm (Section~\ref{sec:3-plan}).
Higher values of $\mathcal{T}$\xspace allow the algorithm to seek higher precision in the generated activity start/end times but can lead to more variance in error, while lower values seek coarser precision which is easier to achieve, resulting in lower error.
The value that gives a good balance between accuracy and error can be determined through experimentation.
In this work, we use $\mathcal{T}$\xspace$=48$, i.e., we break up the day into 48 time bins of 30 minutes each.
\begin{figure}[h]
\centering
\input{figs/matrix}
\caption{Structure of matrix $\mathcal{D}$\xspace for storing start(end) time distributions for $\vert\mathcal{A}\vert$ activities against $\mathcal{T}$\xspace time bins of the day}
\label{fig:matrix}
\end{figure}
Calculation of the activities' start(end) time distribution is done by first counting, for each activity, the number of instances of activity start(end) in every time bin.
To do this, we create a matrix $\mathcal{D}$\xspace (being $\mathcal{D}_{s}$\xspace for start time distributions and $\mathcal{D}_{e}$\xspace for end time distributions) with $\mathcal{A}$\xspace rows of unique activity types and $\mathcal{T}$\xspace columns for every time bin (Figure~\ref{fig:matrix}). Then for every row $r$ in the activity-table, we update the value of the corresponding cell in $\mathcal{D}$\xspace, given by the $\mathcal{D}$\xspace-row that matches the \kk{ACTIVITY} label in $r$ and the $\mathcal{D}$\xspace-column corresponding to the time bin for the start(end) time of the activity in $r$.
The value of this determined cell is then incremented by the value of the \kk{WDTRIPWGT} column in $r$, which gives the frequency of this activity in the full population (remembering that the \acrshort{vista}\xspace Trip Table represents a 1\% sample).
We save this output in \acrshort{csv}\xspace format, to be used by subsequent steps of the algorithm.
Note that the end-time distribution table $\mathcal{D}_{e}$\xspace saved has $\mathcal{A}$\xspace$\times$$\mathcal{T}$\xspace rows, since we save the end time for every activity type (Table~\ref{tab:vista-labels-replacement}) for every start time bin. In other words, for end times, we store $\mathcal{T}$\xspace matrices of the type shown in Figure~\ref{fig:matrix}.
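The construction of the start-time matrix $\mathcal{D}_s$ can be sketched as follows; the end-time matrices are built analogously, one per start-time bin. The column names are those of the activity-table derived earlier, while the function name and the clamping of out-of-range times are illustrative.
\begin{verbatim}
import numpy as np
import pandas as pd

BIN_MINUTES = 30            # T = 48 bins of 30 minutes each
T = 1440 // BIN_MINUTES

def start_time_matrix(activities: pd.DataFrame,
                      activity_types: list) -> np.ndarray:
    """Build the start-time distribution matrix D_s (|A| rows, T columns)."""
    d = np.zeros((len(activity_types), T))
    row_of = {a: i for i, a in enumerate(activity_types)}
    for _, r in activities.iterrows():
        time_bin = min(int(r["START.TIME"]) // BIN_MINUTES, T - 1)
        # increment by the trip weight, scaling the 1% sample to the population
        d[row_of[r["ACTIVITY"]], time_bin] += r["WDTRIPWGT"]
    return d
\end{verbatim}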
Figure~\ref{fig:vista-activity-time-bins} shows a consolidated view of the simplified activities of the Melbourne population across the day split into $\mathcal{T}$\xspace$=48$ discrete time bin distributions, computed separately for the weekday and weekend rows of the activity-table we derived from Trip Table. Each split bar shows the proportion of the population performing the different activities in $\mathcal{A}$\xspace during the corresponding time bin.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figs/vista-activity-time-bins.png}
\caption{Simplified \acrshort{vista}\xspace Trip Table derived activities in $\mathcal{T}$\xspace$=48$ discrete time bins}
\label{fig:vista-activity-time-bins}
\end{figure}
\subsubsection{Generating \acrshort{vista}\xspace population cohorts}\label{sec:cohorts}
The process explained so far in Section~\ref{sec:1-setup} shows how the \acrshort{vista}\xspace Trip Table, bar weekend trips, is used to compute distributions of activities in discrete time bins of the day. We now describe a slight modification to the process, to account for differences in activity profiles across population subgroups, or \textit{cohorts}.
It is well understood that the kinds of activities people do can be shaped by individual, social, and physical factors~\cite{bautista2020urban,grue2020exploring}.
Not all of these will be known and/or related variables available in the \acrshort{vista}\xspace data. Nevertheless, it is clear that some steps can be taken to classify observed behaviours into groups given the variables we do have.
We implemented a simple classification based on the demographic attributes of gender and age, to find distinct groupings that exhibit significantly different trip patterns. Specifically, participants were broken into five-year age groups, with the exception of groups 0-14 and 65 and over. Probabilities for the activities Work, Study, Shop, Personal, and Social/Recreational were then calculated for each of these 24 groups based on weekday trips. Hierarchical clustering was then applied to the dataset using Ward's method, producing the dendrogram shown in Figure \ref{fig:dendrogram}.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figs/dendrogram.pdf}
\caption{Dendrogram of hierarchical cluster analysis. Grey rectangles indicate clusters.}
\label{fig:dendrogram}
\end{figure}
The gap statistic of the clustering process indicated that five unique groupings would be optimal. The output of the classification is the gender and age range of those identified groups. Our final step then was to filter the \acrshort{vista}\xspace Trip Table records on those attributes and store the result as partial tables, one per group. The process described so far in Section~\ref{sec:1-setup} was therefore applied to each partial trip table separately rather than the \acrshort{vista}\xspace Trip Table directly--as suggested earlier to keep the explanation simple--giving activity distribution tables by time of day per subgroup.
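The clustering step itself is standard; a minimal sketch using SciPy's hierarchical clustering is shown below, where the input file name is illustrative and the matrix of per-group activity probabilities is assumed to have been computed from the weekday records as described above.
\begin{verbatim}
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# one row per gender-by-age group (24 rows), one column per activity
# (Work, Study, Shop, Personal, Social/Recreational); illustrative file name
group_profiles = np.loadtxt("group_activity_probabilities.csv", delimiter=",")

# Ward's method on the activity-probability vectors
z = linkage(group_profiles, method="ward")

# cut the tree into the five clusters suggested by the gap statistic
cohort_of_group = fcluster(z, t=5, criterion="maxclust")
\end{verbatim}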
\subsection{Creating a representative sample of census-like persons}\label{sec:2-sample}
This step of the algorithm is concerned with allocating the right kinds of persons to the right statistical areas in Greater Melbourne as per Australian Census 2016.
The output of this step is a list of uniquely identified synthetic persons, each described by a valid street address representing their home location, and demographic attributes of age and gender that, when aggregated, match the demographic distributions reported in the census at the \gls{sa2} level.
A full virtual population for Greater Melbourne based on ABS Census 2016 was created by \cite{wickramasinghe2020building}. Their output population is made available through their GitHub code repository\footnote{\url{https://github.com/agentsoz/synthetic-population}}. It consists of a relational database of unique persons in unique households assigned to known street addresses. For convenience, their database is supplied in two CSV files containing persons and households respectively, separately for each of the 307 \gls{sa2} areas in Greater Melbourne.
We obtain our census-based synthetic individuals by simply sampling a desired number of persons from \cite{wickramasinghe2020building}'s population files. So, for instance, to build a 10\% sample of the Greater Melbourne population, we randomly sample 10\% persons from each of the 307 \gls{sa2} level persons CSV files provided.
This gives us our base census-like individuals at home locations in Greater Melbourne.
Subsequent steps assign \acrshort{vista}\xspace-like trips to these persons, representing the kinds of trips persons of their demographic makeup in the given SA2 undertake in their day as per \acrshort{vista}\xspace data.
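A sketch of this sampling step is given below, assuming the per-\gls{sa2} persons files from the synthetic-population repository have been downloaded locally; the directory layout, sampling fraction and random seed are illustrative.
\begin{verbatim}
import glob
import pandas as pd

SAMPLE_FRACTION = 0.10   # e.g. a 10% sample of the Greater Melbourne population

frames = []
# one persons.csv per SA2 area (illustrative directory layout)
for path in sorted(glob.glob("melbourne-population/*/persons.csv")):
    sa2_persons = pd.read_csv(path)
    frames.append(sa2_persons.sample(frac=SAMPLE_FRACTION, random_state=12345))
sample = pd.concat(frames, ignore_index=True)
\end{verbatim}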
\subsection{Matching census-like persons to VISTA cohorts}\label{sec:4-match}
In this step of the algorithm we assign, to the census-like persons from the last step (Section~\ref{sec:2-sample}) and based on their demographic profile, one of the \acrshort{vista}\xspace groups, or cohorts, calculated previously (Section~\ref{sec:cohorts}). This is required so that in the subsequent step we can apply group-appropriate trip generation models for assigning representative trips and activities to those individuals.
The \textit{matching} of persons to cohorts itself is a straightforward process. Since each cohort is fully defined by a gender and age range, then the cohort of a person can be determined by a simple lookup table. The output of this step is the addition of a new attribute to each person, indicating the cohort they belong to.
In the final output of the algorithm, as shown in Table~\ref{tab:diary}, this step is responsible for populating column \kk{AgentId}, that gives a unique identifier for a synthetic census-like person sampled in the earlier step (Section~\ref{sec:2-sample}), and was matched to the \acrshort{vista}\xspace-like activity chain given by column \kk{PlanId}.
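Since each cohort is fully defined by a gender and an age range, the matching can be implemented as a small rule table; the sketch below uses placeholder ranges rather than the actual cluster boundaries obtained in Section~\ref{sec:cohorts}.
\begin{verbatim}
# placeholder (gender, min age, max age, cohort) rules; the real rules are the
# gender/age ranges of the five clusters found in the cohort analysis
COHORT_RULES = [
    ("any",    0,  14, 1),
    ("female", 15, 64, 2),
    ("male",   15, 64, 3),
    ("female", 65, 120, 4),
    ("male",   65, 120, 5),
]

def cohort_of(gender: str, age: int) -> int:
    """Map a person's gender and age to one of the VISTA cohorts."""
    for g, lo, hi, cohort in COHORT_RULES:
        if (g == "any" or g == gender) and lo <= age <= hi:
            return cohort
    raise ValueError(f"no cohort defined for {gender}, age {age}")
\end{verbatim}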
\subsection{Generating VISTA-like daily trips}\label{sec:3-plan}
The algorithm described here does not discuss groups; however, the reader should keep in mind that the process being described is applied separately to the cohorts identified in Section~\ref{sec:cohorts}.
Algorithm~\ref{alg:plan-generation} gives the pseudo-code of our algorithm for generating VISTA-like activity chains. The objective of the algorithm is to generate a sequence of $\mathcal{N}$\xspace activity chains in such a way that, when taken together, the list $\mathcal{C}$\xspace of generated activity chains achieves the target activity start(end) time distributions given by the $\mathcal{D}$\xspace-matrices $\mathcal{D}_{s}$\xspace and $\mathcal{D}_{e}$\xspace. Here, the rows of the $\mathcal{D}$\xspace-matrices give the activities and the columns give their distribution over the time bins of the day.
The general intuition behind the algorithm is to repeatedly revise the desired distributions by taking the difference $\Delta$ between the presently achieved distributions ($\acute{\mathcal{D}_{s}}$\xspace and $\acute{\mathcal{D}_{e}}$\xspace) and the target distributions ($\mathcal{D}_{s}$\xspace and $\mathcal{D}_{e}$\xspace) so that over-represented activities in a given time bin are less likely to be generated in subsequent iterations while under-represented activities become more likely. This allows for dynamic on-the-fly revision so that the algorithm is continuously looking to correct towards the moving target distributions with every new activity chain it generates. This approach works well in adapting output to the target $\mathcal{D}$\xspace-matrices, and the generation error decreases asymptotically as the number $\mathcal{N}$\xspace of generated activity chains increases.
\input{figs/algo-step3}
We describe in Algorithm~\ref{alg:plan-generation} the steps for matching to the target start time distribution matrix $\mathcal{D}_{s}$\xspace and note that the steps are the same for matching to the end time distribution matrix $\mathcal{D}_{e}$\xspace.
The process starts by initialising an empty list $\mathcal{C}$\xspace for storing the activity chains to generate, a corresponding empty matrix $\acute{\mathcal{D}_{s}}$\xspace for recording the start-time distributions of the activities in $\mathcal{C}$\xspace, and another empty matrix $\Delta$ for storing the difference from the target distributions in $\mathcal{D}_{s}$\xspace (lines 1--3). Both $\acute{\mathcal{D}_{s}}$\xspace and $\Delta$ have the same dimensions as $\mathcal{D}_{s}$\xspace as shown in Figure~\ref{fig:matrix}. The following steps (lines 4--34) are then repeated once per activity chain, to generate $\mathcal{N}$\xspace activity chains.
Prior to generating a new activity chain, the difference matrix $\Delta$ is updated to reflect the current deviation from the desired distribution $\mathcal{D}_{s}$\xspace (lines 5--9), ensuring that zero-value cells in $\mathcal{D}_{s}$\xspace are also zero-value in $\Delta$, and normalising the row vectors to lie in the range [0,1]. This means that any zero-value cells in $\mathcal{D}_{s}$\xspace are either so because the activity (row) does not occur in that time bin (column), or the proportion of the given activity in the given time bin for the population generated thus far either perfectly matches the desired or is over-represented. On the other hand, values tending to $1.0$ indicate increasing levels of under-representation.
The algorithm then traverses the time bins sequentially from beginning of the day to end (line 10, 12) as follows.
We first check if the proportion of activities, across all activities, that start in the current time bin, is already at or above the desired level, and if found to be so we skip to the next time bin (lines 13--17).
Otherwise we extract from $\Delta$, a vector $\delta$ to give, for the current time step, the difference from desired for all activities (line 18).
If $\delta$ is zero because it is also zero in the desired distribution $\mathcal{D}_{s}$\xspace then we skip to the next time bin (lines 19--20), else we use $\delta$ to probabilistically select a corresponding activity and set its start time to the current time bin (lines 22--23).
This procedure results in a higher chance of selection of under-represented activities. The end time for the selected activity is chosen probabilistically from the probabilities of the given activity ending in the remaining time bins of the day, when starting in the current time bin (lines 24--26).
The generated activity with its allocated start and end bins is added to the trip chain $\Phi$ (line 27) and we skip to the activity end time bin (line 28) to continue building the chain.
The trip chain $\Phi$ thus generated is compressed by collapsing consecutive blocks of the same activity into a single activity that starts in the time bin of the first occurrence and finishes in the time bin of the last (line 31). If $\Phi$ was empty to begin with, i.e., no activity was generated in the preceding loop, then we just assign a $Home$ activity lasting the whole day (line 30). The trip chain $\Phi$ is then added to our list $\mathcal{C}$\xspace (line 32). Finally, the counts of the generated activities in $\Phi$ are incremented in $\acute{\mathcal{D}_{s}}$\xspace, to be used to update the difference matrix $\Delta$ before generating the next activity chain.
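For illustration, a condensed Python sketch of the generation loop is given below. It keeps only the core idea of Algorithm~\ref{alg:plan-generation}, namely the probabilistic selection of under-represented activities per time bin, and omits some of the bookkeeping (the row-wise normalisation of $\Delta$, the per-bin proportion check and the compression step). Here \kk{ds} is the start-time matrix, \kk{de} holds the end-bin weights for every start bin, and all names are illustrative.
\begin{verbatim}
import numpy as np

def generate_chains(n, ds, de, home_row=0, seed=0):
    """Condensed sketch of the chain-generation loop (start-time matching only;
    the end bin of each activity is drawn from the end-bin weights in de)."""
    rng = np.random.default_rng(seed)
    A, T = ds.shape
    achieved = np.zeros_like(ds)
    chains = []
    for _ in range(n):
        # difference between target and achieved proportions, clipped at zero
        delta = np.maximum(ds / ds.sum() - achieved / max(achieved.sum(), 1.0), 0.0)
        delta[ds == 0] = 0.0              # never generate what never occurs
        chain, t = [], 0
        while t < T:
            w = delta[:, t]
            if w.sum() == 0:              # bin already at or above target
                t += 1
                continue
            act = rng.choice(A, p=w / w.sum())
            end_w = de[t][act, t:]        # end-bin weights when starting at t
            if end_w.sum() == 0:
                t += 1
                continue
            end = t + rng.choice(T - t, p=end_w / end_w.sum())
            chain.append((act, t, end))
            t = end + 1
        if not chain:                     # stay Home all day if nothing chosen
            chain = [(home_row, 0, T - 1)]
        for act, s, _ in chain:
            achieved[act, s] += 1
        chains.append(chain)
    return chains
\end{verbatim}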
In the final output of the algorithm, an example of which is given in Table~\ref{tab:diary}, this step is responsible for populating columns \kk{PlanId} (a unique identifier for the generated activity chain), \kk{Activity} (the sequence of activities in the chain), \kk{StartBin} (the start time bin for each activity, being an integer between 1 and 48, representing the 30-minute blocks of the day), and \kk{EndBin} (the corresponding end time bins for the activities).
\subsection{Assigning statistical areas (SA1) to activities}\label{sec:5-locate}
At this point, each agent now has a home \gls{sa1} and a list of activities they conduct throughout the day.
Candidate locations for these activities were selected from the endpoint nodes of non-highway edges (i.e., edges with a speed of 60km/h or less and accessible by all modes) of the OSM-derived transport network generated by (cite network paper here).
It is important to note that these non-highway edges were already densified, with additional nodes added every 500 meters.
These locations were selected to ensure that any activity location would be reachable from the network without any non-network travel movement (i.e. bushwhacking).
These locations were then classified according to the land use category of the mesh block they lie within.
In order to facilitate matching location types to activity types, the mesh block categories were simplified into five types: Home, Work, Education, Commercial, and Park (see Table \ref{tab:vista-category-reclass}).
\begin{table}[h]
\centering
\label{tab:vista-category-reclass}
\caption{VISTA category reclassification}
{\footnotesize
\begin{tabular}{|p{0.20\linewidth} p{0.26\linewidth} p{0.38\linewidth}|}
\hline
\kk{Location categories} & \kk{Meshblock categories} & \kk{\acrshort{vista}\xspace activities} \\
\hline\hline
Home & Residential, Other, Primary Production & Home \\
Work & Commercial, Education, Hospital/Medical, Industrial, Primary Production & Other, Pickup/Dropoff/Deliver, With Someone, Work \\
Education & Education & Other, Pickup/Dropoff/Deliver, Study, With Someone \\
Commercial & Commercial & Other, Personal, Pickup/Dropoff/Deliver, Shop, Social/Recreational, With Someone \\
Park & Parkland, Water & Other, Pickup/Dropoff/Deliver, Social/Recreational, With Someone \\
\hline
\end{tabular}
}
\end{table}
Transport mode is then assigned sequentially to each trip, along with an \gls{sa1} region for each non-home activity. The specifics are detailed in Algorithm \ref{alg:locate}, but broadly, transport mode is selected first. This then allows for region selection based on the likely travel distance for that mode, as well as the relative attractiveness of potential destinations for the chosen trip purpose.
\input{figs/algo-step5}
\subsubsection{Mode selection}\label{sec_modeSelection}
Transport mode selection is taken care of by the getMode function, which selects from the possibilities of walk, bike, pt (public transit), or car.
This function takes the current region as input to ensure that local variation in mode choice is present in the agents' behavior.
Specifically, some modes, such as walking and public transit, are more popular towards the inner city, whereas driving is preferred by residents of the outer suburbs.
The first run of the function for each agent sets their primary mode, which is the initial mode used when an agent leaves home.
Primary mode is used as an input of the getMode function to ensure vehicle use (i.e., car or bike) is appropriate. Specifically, if a vehicle is not initially used by an agent, then it is not possible to select one at any other point throughout the day.
It is however possible for agents to switch from a vehicle to walking or public transit.
To prevent agents leaving vehicles stranded, they must return to that region so they may use the vehicle to return home.
Additionally, walking and public transit may be freely switched between, so long as the final trip home utilizes public transit.
The mode choice probabilities used by the getMode function were generated for each \gls{sa1} region by analyzing the proportions of modes chosen by the participants of the \acrshort{vista}\xspace travel survey.
For this, the full travel survey was used, meaning that origin (\kk{ORIGLONG}, \kk{ORIGLAT}) and destination (\kk{DESTLONG}, \kk{DESTLAT}) coordinates were available.
Trips were filtered to weekdays (using \kk{TRAVDOW}) within the Greater Melbourne region and were recorded with their origin location, survey weight (\kk{CW\_WDTRIPWGT\_SA3}), and transport mode (\kk{LINKMODE}, reclassified to match the modes used in the virtual population).
Survey weight is a number that indicates how representative each entry is of the Victorian population during a weekday, and was used in calibrating accurate mode choice proportions.
To determine the mode proportions, Kernel Density Estimation (KDE) was calculated at each candidate location for each transport mode, using a weighted Gaussian kernel with a bandwidth of 750 meters; the resulting densities were then aggregated up to \gls{sa1} level and converted to percentages.
Calculating densities at the destination locations, rather than over a full density raster, is much faster and only produces density values at places agents are able to travel to.
Selecting an appropriate bandwidth for density calculations is important as smaller bandwidths show greater local variation but have fewer points to use, and larger bandwidths use more points, but show more general trends.
In this case, 750 meters was chosen based on calibration. Specifically, a variety of potential bandwidths were chosen, with their mode choice percentages aggregated to \gls{sa3} regions.
These percentages were then compared to values obtained by aggregating the weighted \gls{vista} mode choice proportions to \gls{sa3} region in order to select the bandwidth with the best fit.
\gls{sa3} regions were chosen for calibration as that is the statistical granularity the \gls{vista} travel survey was weighted at.
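As an illustration of the weighted kernel density step, the sketch below estimates mode shares at a single candidate location; the data layout (arrays of origin coordinates, weights, and reclassified modes) is an assumption, while the 750 m Gaussian bandwidth follows the description above. The same pattern applies to the destination attraction densities of the next subsection, with the per-category bandwidths given there.
\begin{verbatim}
import numpy as np
import pandas as pd

def weighted_gaussian_kde(points, weights, query_xy, bandwidth=750.0):
    """Unnormalized weighted Gaussian kernel density of `points` (metres) at `query_xy`.
    Constant factors are omitted because they cancel when converting to shares."""
    d2 = ((np.asarray(query_xy)[None, :] - points) ** 2).sum(axis=1)
    return float((weights * np.exp(-d2 / (2.0 * bandwidth ** 2))).sum())

def mode_shares_at(location_xy, trips, bandwidth=750.0):
    """Mode share percentages at one candidate location.

    `trips` is assumed to be a DataFrame with columns x, y (origin coordinates),
    weight (the survey weight) and mode (the reclassified transport mode).
    """
    dens = {mode: weighted_gaussian_kde(g[["x", "y"]].to_numpy(),
                                        g["weight"].to_numpy(),
                                        location_xy, bandwidth)
            for mode, g in trips.groupby("mode")}
    total = sum(dens.values()) or 1.0
    return {mode: 100.0 * v / total for mode, v in dens.items()}
\end{verbatim}
Aggregating these location-level shares to \gls{sa1} regions then gives the per-region mode choice probabilities used by the getMode function.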
\subsubsection{Destination selection}\label{sec_destinationSelection}
Selection of a destination region is handled by the getRegion function, which uses the current region, the location type of the destination region, the current mode, and the number of trips remaining until the home region is reached (hop count).
Because trips with multiple stages tend to move further away from the home region, the remaining leg home can be unnaturally long.
Hop count was used to filter candidate destination regions down to those from which the home region could still plausibly be reached by the chosen mode in the remaining number of trips, assuming each remaining trip is no longer than the 95th percentile of that mode's distance distribution.
This was to ensure that the final trip home will not be unreasonably long, which can be a particular issue for walking trips.
The getRegion function is primarily a trade-off between selecting a destination that is a likely distance for the mode selected, and the attraction of the destination.
Figure \ref{fig_selectingNextRegion} illustrates how the distance distribution and destination type probabilities are combined to create the probabilities needed to select the next region.
\begin{figure}[H]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.45\columnwidth]{figs/1_distance.png} &
\includegraphics[width=0.45\columnwidth]{figs/4_global_distance.png}\\
(a) & (b)\\
\includegraphics[width=0.45\columnwidth]{figs/2_attraction.png} &
\includegraphics[width=0.45\columnwidth]{figs/5_global_attraction.png}\\
(c) & (d)\\
\includegraphics[width=0.45\columnwidth]{figs/3_hop.png} &
\includegraphics[width=0.45\columnwidth]{figs/6_combined.png}\\
(e) & (f)\\
\end{tabular}
\caption{Selecting next region for a cycling trip from home (circle) to work (triangle) showing: region selection probability (Pr) for local and global distance distributions (a and b), region selection probability (Pr) for local and global destination attraction (c and d), number of trips (hop count) that would be reasonably required to reach home (e), and combined region likelihood (f).}
\label{fig_selectingNextRegion}
\end{figure}
In order to provide a set of distances for the algorithm to choose from, an origin-destination matrix was calculated between the population-weighted centroids of \gls{abs} \gls{sa1} regions using the OSM-derived transport network.
Population-weighted centroids were used as they are more representative of regions with uneven population distributions, and were calculated using the centroids of the \gls{abs} meshblocks with their 2016 census population.
The \gls{sa1} centroids were then snapped to the nearest non-highway node and the shortest distance was calculated between all locations to populate the OD matrix.
To calculate a distance distribution for each \gls{sa1} region, the population-weighted centroid was used as a base, selecting the closest 500 \gls{vista} trips for each transport mode.
A weighted log normal distribution was then calculated for each transport mode and centroid, recording their log-mean and standard deviation.
The log-normal distribution was chosen because it fit the distances better than a normal distribution: it assigns no probability to negative distances and better captures the sharp peak at short distances seen for every transport mode.
While it would be preferable to use a distance-based bandwidth as in the previous section to select trips to build the distributions, accurate representation of distributions requires more data than density-based measures.
The closest 500 trips were chosen instead as only driving, with its 67,769 trips, was able to build representative distributions, whereas there were only 14,621 walking, 7,166 public transport, and 1,515 cycling trips.
These distributions were calculated at the population-weighted centroids instead of the destination locations as the variation within the \gls{sa1} regions was not significant enough to warrant the additional computation costs.
Calculating the distance probabilities was performed by first filtering the OD-matrix to the current \gls{sa1} region, providing a set of distances to all other regions.
The log-mean and standard deviation for the chosen transport mode for the region was then used to filter potential destinations to only those with distances within the 5th and 95th percentile of the log-normal distribution, ensuring unlikely distances would not be selected.
Potential destinations where the attraction probability was zero were also filtered to ensure that there would be a suitable destination location present.
Probabilities were then calculated for the remaining destinations based on their distances and the log-mean and standard deviation for the nominated transport mode.
These were then normalized so that their total equaled one.
Because there are far fewer short distances than longer distances, the probabilities were further normalized by binning distances into 500 meter categories and then dividing each probability by the number of distances in that category.
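A minimal sketch of this distance-probability computation is shown below; it assumes SciPy's log-normal parameterisation and a plain NumPy array for the OD-matrix row, and it omits the attraction-based filtering for brevity.
\begin{verbatim}
import numpy as np
from scipy import stats

def distance_probabilities(distances, log_mean, log_sd, bin_size=500.0):
    """Selection probability for each candidate destination given its network distance.

    `distances` is the OD-matrix row for the current SA1 region (metres);
    `log_mean`/`log_sd` parameterise the region's log-normal for the chosen mode.
    """
    distances = np.asarray(distances, dtype=float)
    dist = stats.lognorm(s=log_sd, scale=np.exp(log_mean))
    lo, hi = dist.ppf(0.05), dist.ppf(0.95)   # drop unlikely distances
    keep = (distances >= lo) & (distances <= hi)
    p = np.zeros_like(distances)
    p[keep] = dist.pdf(distances[keep])
    # Down-weight heavily populated distance bands: divide each probability by
    # the number of candidate distances falling in the same 500 m bin.
    bins = (distances[keep] // bin_size).astype(int)
    counts = np.bincount(bins)
    p[keep] /= counts[bins]
    total = p.sum()
    return p / total if total > 0 else p
\end{verbatim}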
Destination attraction was calculated similarly to mode probability, with the probabilities retrieved for the nominated destination category. Kernel density estimation was again calculated at each candidate location for each destination category, using a weighted Gaussian kernel with a bandwidth of 3,200m for work, 300m for park, 200m for education, and 20,000m for commercial destinations. These were then aggregated up to \gls{sa1} and normalized so that the total attraction value equaled one for each category.
Bandwidth selection was again based on calibration, where a variety of bandwidths were chosen, with the resulting destination percentages aggregated to \gls{sa3} regions.
These percentages were then compared to values obtained by aggregating the weighted \gls{vista} trips by destination category to \gls{sa3} region in order to select the bandwidth with the best fit for each destination category.
While these criteria ensure that regions are selected that are locally representative, it is also important to ensure that the virtual population is representative for all of Greater Melbourne.
To account for this, distance distributions and destination attraction were tallied at the \gls{sa3} level, so that the destination region could also be selected based on how well it would improve the fit of the overall distributions.
For example, the distances of previous cycling trips have been on average longer than the expected distribution.
To account for this, in Figure \ref{fig_selectingNextRegion} the global distance probability is more likely to choose distances that are shorter than the local distance probability.
This is also the case for destination attraction, where a higher probability has been placed on the inner city as an insufficient number of work trips have been arriving there.
The final combined probability was then calculated by filtering the candidate regions to those within the hop count, adding the local distance and destination attraction probabilities to the global distance and destination attraction probabilities, and normalizing the probabilities so that their total equaled one.
From this point, a destination region could then be selected, as illustrated in Figure \ref{fig_selectingNextRegion}.
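The combination step itself can be sketched as follows; equal weights for the local and global components are assumed here for simplicity, whereas the weighting actually used is a calibration choice discussed in Section \ref{sec_discussion}.
\begin{verbatim}
import numpy as np

def combined_region_probabilities(local_dist, global_dist, local_attr, global_attr,
                                  reachable_within_hops):
    """Combine local/global distance and attraction probabilities over candidate
    SA1 regions, after masking out regions that the hop count rules out.

    All inputs are 1-D arrays aligned to the same list of candidate regions;
    `reachable_within_hops` is a boolean mask.
    """
    combined = local_dist + global_dist + local_attr + global_attr
    combined = np.where(reachable_within_hops, combined, 0.0)
    total = combined.sum()
    return combined / total if total > 0 else combined

# Example: sample the index of the next region from the combined probabilities.
rng = np.random.default_rng(0)
probs = combined_region_probabilities(
    np.array([0.5, 0.3, 0.2]), np.array([0.4, 0.4, 0.2]),
    np.array([0.2, 0.5, 0.3]), np.array([0.3, 0.4, 0.3]),
    np.array([True, True, False]))
next_region = rng.choice(len(probs), p=probs)
\end{verbatim}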
In the final output of the algorithm (Table~\ref{tab:diary}), this step is responsible for populating columns
\kk{SA1} (\gls{sa1} where this activity will take place), \kk{LocationType} (the type of location within the \gls{sa1} where the activity should take place, to be assigned in the subsequent step), \kk{Mode} (the travel mode by which the person will arrive at that activity; not applicable for the first activity of the day), and \kk{Distance} (the distance between the current and preceding \gls{sa1} regions derived from the OD-matrix).
\subsection{Assigning locations to activities in statistical areas}\label{sec:6-place}
Now that an SA1 region has been assigned for every stop, location coordinates can be assigned based on the SA1 region and stop category.
It is important to note that all home stops for an agent must share the same location.
Locations are drawn from the set of candidate locations, which are points on the transport network that agents may move between.
For the same destination category and SA1 region, certain locations will always be more popular than others.
For example, an office tower will have more employees, and therefore be a more attractive work destination, than a single building.
To account for this, addresses were selected from Vicmap Address, a geocoded database of property locations supplied by the Victorian government (\href{https://discover.data.vic.gov.au/dataset/address-vicmap-address}{Vicmap Address}) containing 2,932,530 addresses within the Greater Melbourne region.
These were then assigned a land use category based on the meshblock they were located within.
For meshblocks without any addresses within their boundaries, a single address was assigned at the centroid of their geometry.
In order to reduce the number of unique locations, the addresses were then snapped to the set of candidate locations mentioned in Section \ref{sec:5-locate}, which are points on the transport network that agents may move between.
The address counts were then used to create a selection probability by normalizing them within each SA1 region and destination category so that their totals equaled one.
This probability was then used to assign locations to each stop.
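This weighted draw amounts to the short sketch below, which assumes a candidates table holding the SA1 region, destination category, address count, and snapped coordinates of each candidate location.
\begin{verbatim}
import pandas as pd

def assign_location(candidates, sa1, category):
    """Sample X/Y coordinates for a stop from the candidate locations in its
    SA1 region and destination category, weighted by address count."""
    pool = candidates[(candidates["sa1"] == sa1) &
                      (candidates["category"] == category)]
    # pandas normalizes the weights internally, mirroring the per-region,
    # per-category normalization described above.
    chosen = pool.sample(n=1, weights=pool["address_count"]).iloc[0]
    return chosen["x"], chosen["y"]
\end{verbatim}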
In the final output of the algorithm (Table~\ref{tab:diary}), this step is responsible for populating columns
\kk{X} and \kk{Y}, representing the spatial coordinates of the location of activities, in the coordinate reference system of the input spatial data.
\subsection{Assigning start and end times to activities}\label{sec:7-time}
The final step of the algorithm converts the start and end times of activities, allocated initially at the coarse granularity of 30-min time bins (in Section~\ref{sec:3-plan}), into actual times of day. The main considerations in doing so are to
\begin{inparaenum}[(a)]
\item add some random noise so that start/end times are sufficiently dispersed within the 30-min duration of each time bin; and
\item ensure that start/end times are ordered correctly so that activities do not end before they start, and activities do not start before the previous one ends. This latter constraint becomes important where several starts and ends are being scheduled in the same time bin.
\end{inparaenum}
The method for achieving the desired time schedule is relatively straightforward. We first extract, for each person, the ordered vector of time bins corresponding to the sequence of start and end times for all activities. This is always a vector of even length, since every activity is represented by two sequential elements corresponding to its start and end. Further, this vector has values that increase monotonically, representing the progression of time as sequential activities start and end in the person's day. Next, this vector of time bin indices is converted to a time of day in seconds past midnight. We do this by taking the known start time of each time bin included in the vector and adding a randomly generated noise offset of at most 30 minutes. This gives, across the population, start/end times evenly distributed within the time bins of activities. The obtained vector of times is then sorted in increasing order. This is necessary because the addition of random noise in the previous step can result in an out-of-order sequence within time bins where more than one activity is starting and/or ending. Sorting the vector rectifies any such issues and guarantees that the final ordering of start/end times is plausible and not mathematically impossible. Finally, the time values, which represent offsets in seconds past midnight, are converted to a more convenient \textsc{\textbf{hh:mm:ss}} 24-hour format.
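The time-allocation logic can be sketched in a few lines of Python; the assumption that bin index 0 corresponds to midnight is ours, purely for illustration.
\begin{verbatim}
import random

def bins_to_times(time_bins, bin_minutes=30):
    """Convert an ordered vector of 30-min time-bin indices (0 assumed to be
    midnight) into plausible hh:mm:ss start/end times."""
    seconds = [b * bin_minutes * 60 + random.uniform(0, bin_minutes * 60)
               for b in time_bins]
    seconds.sort()   # noise may reorder events sharing a bin; restore monotonic order
    return ["{:02d}:{:02d}:{:02d}".format(int(s) // 3600, (int(s) % 3600) // 60,
                                          int(s) % 60)
            for s in seconds]

# e.g. bins_to_times([14, 18, 18, 34]) might give
# ['07:12:43', '09:04:10', '09:21:55', '17:08:31']
\end{verbatim}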
In the final output of the algorithm (Table~\ref{tab:diary}), this step is responsible for populating columns
\kk{StartTime} and \kk{EndTime}, representing the start and end times of activities in the day.
\section{Discussion}\label{sec_discussion}
In this paper, we presented an algorithm for creating a virtual population suitable for use in \glspl{abm} using a combination of machine learning, probabilistic, and gravity-based approaches.
While this work specifically focused on the Greater Melbourne region, our method is completely open and replicable, requiring only publicly available data.
In our case, we made use of the \gls{vista} travel survey and population demographics from the \gls{abs} Census, but such datasets are available in many regions.
The first innovation produced by our hybrid model was to dispense with the cloning of preexisting activity chains from the travel survey and instead generate individual activity chains for every agent, tailored to their cohort.
While cloning activity chains is compatible with splitting a travel survey population into cohorts based on behavior, it requires that there are sufficient trips within each cohort.
Specifically, there is a limit on the sorts of cohorts that can be generated: when a cohort contains fewer travel survey participants, any anomalous behavior is at higher risk of being duplicated.
This is a particular issue for active transport such as walking and cycling, as these trips are typically
underrepresented when compared to driving and public transport.
By moving from simply cloning trips to converting them into distributions from which any number of representative trips may be generated, we ensure that our method does not rely on the accuracy and replication of individual trips.
The second innovation presented in this work was to add a spatial context to the selection of destinations by agents. Specifically, local probabilities for mode choice, trip distance, and destination attraction were generated and calibrated for each of the 10,289 \gls{sa1} regions within Greater Melbourne.
While this approach ensures that local variation is represented in the virtual population, ensuring that the distance distributions of trip lengths and the activity-based attraction of destination locations are both accurately represented is a balancing act.
By altering the weights of the models contributing to destination selection, it is trivial to create a virtual population that almost perfectly fits either the distance distributions or the destination activities in a local or global context.
Ultimately, the weights chosen are a compromise ensuring that each of these factors are sufficiently accurate to allow agents to better consider trip length and destination location.
Specifically, destination attraction was given a higher weighting than the distance distributions to ensure that sufficient agents were traveling into the \gls{cbd}, which has resulted in more trips at the larger distances.
Our final innovation was to incorporate a hop-count measure to filter candidate destination regions.
By taking into account the number of trips remaining for an agent, we can ensure they do not select a destination that would be unreasonable to return home from, given their transport mode.
This is a particular issue for activity chains with several trips as they tend to move further away from the home region, which can potentially cause the final trip home to be unnaturally long.
Shorter transport modes, such as walking and cycling, are more susceptible to this, as their distance distributions are much shorter than public transport or driving.
In addition to ensuring that there are fewer anomalies in the distance distributions, eliminating unnaturally long trips home improves the results of simulations that use the virtual population, such as those run in MATSim.
Specifically, distance is a key factor for most mode choice algorithms, meaning that a walking activity chain with a long trip home will score poorly on its final trip and have to change modes for the entire chain.
If enough of these anomalies are present, this would disproportionately reduce the number of agents utilizing active transport modes such as walking and cycling.
In conclusion, the process presented in this work was able to successfully generate a virtual population with the demographic characteristics of the \gls{abs} Census and travel behavior of the \gls{vista} travel survey, in terms of distance distribution, mode choice, and destination choice.
\section{Introduction}
\label{sec_introduction}
\glspl{abm} have been extensively used in both private and public sectors to simulate network wide movement relating to mode choices and transportation \citep{milakis_what_2014,infrastructure_victoria_automated_2018,zhang2018integrated,bekhor2011integration,knapen2021activity,kpmg__arup_model_2017}.
In these examples, individual agents' travel behaviors are studied within an \gls{abm} simulation to assess the impact of policies and test scenarios on mode choices, travel itineraries and traffic flows.
In doing so, these models provide much needed evidence for understanding transport systems and land uses and for fine-tuning policies to support better planning and decision making \citep{miller2021activity}, prior to the implementation of interventions.
\glspl{abm} can also be used to evaluate competing policies, for example for moderating road network congestion and traffic flows, which are major issues recognized globally by transport planners and governments as impacting the livability and sustainability of growing cities \citep{victoria_state_government_plan_2014,auckland_council_auckland_2018,city_of_toronto_official_2015,city_of_portland_b_o_p_a_s_portland_2009}.
In this sense, \glspl{abm} provide policymakers with a virtual laboratory to enhance their decision-making.
\glspl{abm} to date have focused on transport flows with limited attention given to active modes of transport \citep{ziemke_bicycle_2018,kaziyeva_simulating_2021} and the benefits they confer.
Active transport - walking, cycling and public transport - is health-promoting because it involves physical activity, which reduces the risk of noncommunicable preventable disease \citep{giles-corti_city_2016}.
However, active transport has co-benefits across multiple sectors.
Indeed, active transport is viewed as both a solution to network-wide road congestion as well as being a more environmental, sustainable and healthy mode of transport conferring health and environmental co-benefits including reducing green-house gases \citep{watts2015health}.
When using an activity-based model for simulating individual-level active transport behavior, it is necessary for agents to be assigned individual-level demographic information such as age \citep{chang2013factors,haustein2012mobility}, sex \citep{cheng2017improving} or household characteristics such as income \citep{ko2019exploring,cui2019accessibility,allen2020planning} or the presence of children \citep{o2004constraints}, since these attributes are associated with transport mode choices and consequently travel behavior \citep{ha2020unraveling,ding2017exploring,manaugh2015importance,cervero2002built}.
Some simulation models include agent attributes such as car ownership or access \citep{liu_development_2020,scherr_towards_2020}, income for modeling the impact of fuel prices; possession of a concession card \citep{infrastructure_victoria_automated_2018}; or, whether an agent is delivering something or dropping someone off \citep{horl2020open}.
Whilst each of these is thought to function as a proxy for demographic attributes, they do not explicitly represent a demographic profile, which likely influences travel mode choices.
Indeed, some \glspl{abm} incorporate agent attributes within the simulation modeling process itself, through the inclusion of econometric logit or nlogit models to estimate mode choice and car ownership \citep{horl2021simulation}.
In an activity-based modeling environment, it is therefore important that key components of transport mode choices and subsequent travel behavior are included in detail, such as individual demographic attributes and features of the home- and work-related environments in which individuals circulate.
This is important if the simulation is to accurately model behavior, as these factors are likely to impact transport mode choices. Information on timing and trip segments, and on the activities that individuals undertake, is also important, since this reflects the behavior being simulated by the \gls{abm}.
Whilst such information can be obtained from travel survey diaries, it is nonetheless necessary to develop a process that does not replicate or clone agents from an existing sample of real-life individuals, but instead generates new versions of them according to the demographic, activity-based, location and trip attributes that are most likely to be present in an area, given the underlying survey or population data from which the agents are derived.
For clarity, we refer to these new agents as synthetic agents and collectively as a synthetic population.
One issue in deriving a synthetic population is that data on individuals and their travel behaviors can be expensive to collect or when data exists, may be aggregated or anonymized to protect the privacy of the individuals.
To overcome this, various techniques for creating a synthetic population with demographic information from limited existing data sources have been developed.
For example, \cite{wang_improved_2021} divide the process of creating a synthetic population into three main components: (i) generating agents with demographics; (ii) assigning activity patterns, and (iii) assigning locations to activities.
For generating \textbf{\textit{agent demographics}}, \cite{rahman_methodological_2010} classified approaches to generate synthetic agents with demographic characteristics into two main categories: synthetic reconstruction and re-weighting, with re-weighting being the more recent one \citep{hermes_review_2012}.
Synthetic reconstruction typically uses a list of agents and their basic demographics with home location derived from data sources such as a census and adds additional demographic attributes of interest to this initial list based on conditional probabilities and a sequential attribute adding process \citep{williamson_evaluation_2013}.
In re-weighting, rather than creating synthetic individuals, each observation from the travel survey is assigned a weight indicating how representative that observation is of each area.
For example, an observation might represent multiple individuals in one area and no one in another area.
These weights are calculated and adjusted so that the distribution of the synthetic population matches that from the observed data \citep{williamson_evaluation_2013,hermes_review_2012}.
\textbf{\textit{Assigning activity patterns}}, also referred to as an activity chain or itinerary, is where each agent is assigned a series of activities related to their travel behavior and timing for each trip segment of a journey between an origin and a destination (i.e., start time and duration for each trip segment) \citep{wang_improved_2021,lum_two-stage_2016}.
These activity chains are typically generated through sampling from either: a set of conditional probabilities based on travellers' attributes such as occupation \citep{he_evaluation_2020} or the demographic attributes \citep{balac_synthetic_2021}; based on econometrics and statistical models such as in CEMDAP \citep{bhat2004comprehensive} or from randomly selecting the activity chains from existing data \citep{felbermair_generating_2020}.
For \textbf{\textit{activity location assignment}}, gravity models are commonly used.
Gravity models select activity locations according to an inversely proportional distance from the origin or anchor locations (e.g., home/work), along with origin and destination matrices and random assignment \citep{lum_two-stage_2016}.
\cite{nurul_habib_comprehensive_2018} proposed a model where activities, timing, and location were jointly assigned based on random utility maximization theory.
Using these three components, \cite{sallard_synthetic_2020} generated a synthetic population for the city of Sao Paulo.
For home-related activities, they assigned a random residential location to each household.
For work-related activities, location assignment was based on the origin–destination work trip counts with travel distances extracted from a travel survey.
\cite{sallard_synthetic_2020} divided education trips into different groups based on the home location, gender, and age of each survey respondent.
The education destination location for each agent was then assigned based on the trip distance density function of its group.
Finally, secondary activity locations (e.g., leisure, shopping, other) were assigned using a gravity model in a way that they reach realistic travel distances.
A similar process was followed in \cite{balac_synthetic_2021} to assign secondary activity locations.
\cite{ziemke_matsim_2019} used the econometric model CEMDAP to create activity patterns and an initial location assignment, and then used the MATSim agent-based traffic simulation toolkit to adjust these assigned locations so that the resulting traffic best matched the observed data.
Another widely used travel demand and schedule generator, \gls{tasha}, was developed by \cite{roorda_validation_2008}.
They used demographics from the Greater Toronto transport survey to develop joint probability functions for activity type, demographics, household structure and trip schedules.
An additional probabilistic approach was applied to select time and durations for each activity.
The resulting 262 distributions were used to generate activity chains for each individual.
Inputs into \gls{tasha} include home and work locations, whilst other activities were assigned using entropy models based on distance, employment and population density and land use measures such as shopping centre floor space \citep{roorda_validation_2008}.
A more recent approach that has improved the accuracy and flexibility of synthetic population generation is machine learning \citep{koushik2020machine}.
Using a hybrid framework, \cite{hesam2021framework} combined machine learning with econometric techniques to create activity chains and travel diaries using a cohort-based synthetic pseudo-panel engine.
Similarly, \cite{allahviranloo2017modeling} used a k-means clustering algorithm to group activities according to trip attributes in order to synthesize activity chains.
In this paper, we have proposed an algorithm for creating a virtual population for the Greater Melbourne area using a combination of machine learning, probabilistic, and gravity-based approaches.
We combine these techniques in a hybrid model with three primary innovations: 1. when assigning activity patterns, we generate individual activity chains for every agent, tailored to their cohort; 2. when selecting destinations, we aim to strike a balance between the distance-decay of trip lengths and the activity-based attraction of destination locations; and 3. we take into account the number of trips remaining for an agent so as to ensure they do not select a destination that would be unreasonable to return home from.
In this way, our method does not rely on the accuracy and replication of individual travel survey participants, only that the surveys are demographically representative in aggregate.
Additionally, by selecting destinations in a way that considers trip length and destination location, our model aims to provide a greater spatial context to agent behavior.
In addressing these issues, this research developed an open-source process that generates a virtual population of agents compatible with commonly used activity-based modeling software such as MATSim.
To do this, we use publicly available data from metropolitan Melbourne, Australia.
Briefly, our process creates a virtual population of agents with demographic characteristics and activity chains derived from publicly available data from the \gls{vista} and from the \gls{abs} Census data drawing on location and mode attributes.
In the next section we present the methods.
We begin by detailing the activity chains undertaken by individuals using trip table data for weekday travel behavior from the VISTA survey in Section \ref{sec:1-setup}.
Section \ref{sec:2-sample} develops a representative sample of demographic attributes based on ABS census data.
Section \ref{sec:3-plan} generates trips based on the VISTA data matching activity chains to their time distributions.
Section \ref{sec:4-match} matches VISTA activity chains to Census-chains, and Sections \ref{sec:5-locate} and \ref{sec:6-place} detail how locational and spatial information is assigned to the synthetic agents.
Section \ref{sec:7-time} assigns timing to the agents.
Results are presented in Section \ref{sec_results} and Section \ref{sec_discussion} discusses key findings.
\section{Method}\label{sec:method}
\input{0-method-intro}
\input{1-setup}
\input{2-sample}
\input{3-match}
\input{4-plan}
\input{5-locate}
\input{6-place}
\input{7-time}
\input{results}
\input{discussion}
\bibliographystyle{apalike}
\section{Results}\label{sec_results}
This section compares a 10\% sample size population generated by our process to the real-world observations of the \gls{vista} travel survey in order to determine how well our model reflects the travel survey.
In order to do so, the distance distributions (Section \ref{sec_distance_distributions}), destination attraction (Section \ref{sec_destination_attraction}), and mode choice (Section \ref{sec_mode_choice}) of the synthetic population were analyzed.
Additionally, Section \ref{sec_sample_size_accuracy} compares synthetic populations of varying sizes to determine the effect of sample size on accuracy.
\subsection{Distance distributions}\label{sec_distance_distributions}
Given that the synthetic population created by this work provides locations for each agent's destination, but not routing information, distances will instead be calculated based on the distance between the \gls{sa1} regions provided by the OD-matrix of Section \ref{sec:5-locate} (i.e., the shortest path distance along the road network between the population-weighted centroids of the two \gls{sa1} regions).
To ensure consistency, the distances from the \gls{vista} trips dataset were also replaced with the distance between SA1 regions.
Figure \ref{fig:distance-histograms} shows the weighted expected distance distributions plotted alongside the actual distance distributions for the four transport modes.
Fitted log-normal distributions are also plotted as dashed lines.
In general, the actual distributions match the expected distributions closely, although the actual distributions appear to have larger values at the longer distances.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figs/analysis-distance-histograms.pdf}
\caption{Distance histograms. Dashed lines represent the fitted log-normal distributions.}
\label{fig:distance-histograms}
\end{figure}
In order to determine if the spatial variation of the distance distributions was captured, actual and weighted expected distance distributions were aggregated to the 40 \gls{sa3} regions comprising Greater Melbourne.
Figure \ref{fig:distance-distributions} shows the log-mean and log-standard deviation of these distance distributions.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figs/analysis-expected-versus-actual-distances-SA3.pdf}
\caption{Expected versus actual distance distributions aggregated to \gls{sa3} regions.}
\label{fig:distance-distributions}
\end{figure}
Cycling appeared to have the largest variation, which is to be expected given that it is based on only 1,515 cycling trips.
While walking had similar expected and actual values, there was little positive correlation.
This is reasonable given that how far people are willing to walk likely has little to no spatial variation.
In contrast, public transport and driving show far stronger positive correlation, meaning that the spatial variation of these modes is being captured.
Additionally, the actual log-mean was on average smaller than the expected log-mean for driving and public transport, whereas the log-standard deviation was larger.
This was consistent with Figure \ref{fig:distance-histograms} given these modes tended to have a lower peak and longer tail in their histograms.
\subsection{Destination attraction}\label{sec_destination_attraction}
In order to determine if the spatial variation of the destination activities were captured, Figure \ref{fig:destination-attraction} compares actual and weighted expected destination probabilities aggregated to \gls{sa3} regions.
It is important to note that the spatial variation of home locations was not plotted as the number of home locations for each \gls{sa1} is a value that is explicitly used when generating the synthetic population.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figs/analysis-destination-attraction-sa3.pdf}
\caption{Expected versus actual destination probabilities aggregated to \gls{sa3} regions.}
\label{fig:destination-attraction}
\end{figure}
It is important to note that for commercial, park, and work activities, one of the \gls{sa3} regions has a much larger chance of being selected as it is the region containing Melbourne's \gls{cbd}.
In general, \gls{sa3} regions had similar expected and actual probabilities regardless of destination type, indicating a good fit for destination selection.
The park destination type did however display larger variation, likely due to being a comparatively unpopular activity and therefore having fewer trips.
This was also the case to a lesser extent for the education activities.
Additionally, all destination types displayed a large variance in probabilities along with a positive correlation, indicating that the spatial variation is being represented.
\subsection{Mode choice}\label{sec_mode_choice}
In order to determine if the spatial variation of the transport mode choice was captured, Figure \ref{fig:mode-choice} compares actual and weighted expected mode choice probabilities that were aggregated to \gls{sa3} regions.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figs/analysis-mode-choice-sa3.pdf}
\caption{Expected versus actual mode choice probabilities aggregated to \gls{sa3} regions.}
\label{fig:mode-choice}
\end{figure}
Walking, public transport, and driving are represented accurately, with very similar expected and actual probabilities barring one outlier.
The variance and positive correlation indicate that the spatial variation is being represented.
The outlier present for these three modes is again the \gls{sa3} region containing the \gls{cbd}.
Specifically, driving has been overrepresented in the \gls{cbd}, causing the other regions to be underrepresented.
Likewise, this has caused walking and public transport to be underrepresented in the \gls{cbd}, and therefore overrepresented in the other regions.
Cycling is a comparatively unpopular transport mode but expected and actual values are similar.
There is also a moderate amount of variance in probabilities and a positive correlation, indicating that the spatial variation is at least being represented, although not as accurately as the other modes.
\subsection{Sample size accuracy}\label{sec_sample_size_accuracy}
So far, the results have been calculated using a 10\% sample population, as that is a common size used in \gls{abm} simulations. However, it is important to determine at what sample sizes a synthetic population will be representative of the underlying \gls{vista} travel survey.
\begin{table}[h]
\centering
\caption{Average difference between actual and expected results for various sample sizes.}
\label{tab:sample-size-table}
{\footnotesize
\begin{tabular}{llrrrr}
& & \multicolumn{4}{c}{Population sample size} \\ \cline{3-6}
& & \textbf{0.1\%} & \textbf{1\%} & \textbf{5\%} & \textbf{10\%} \\ \hline
\multirow{4}{*}{Destination attraction} & Commercial & 0.52\% & 0.17\% & 0.06\% & 0.04\% \\
& Education & 0.63\% & 0.25\% & 0.14\% & 0.12\% \\
& Park & 0.72\% & 0.34\% & 0.29\% & 0.29\% \\
& Work & 0.67\% & 0.32\% & 0.29\% & 0.28\% \\ \hline
\multirow{4}{*}{Mode choice} & Walking & 7.27\% & 8.68\% & 7.06\% & 6.07\% \\
& Cycling & 1.15\% & 0.88\% & 0.79\% & 0.73\% \\
& Public transport & 5.59\% & 5.96\% & 5.26\% & 4.73\% \\
& Driving & 12.24\% & 13.94\% & 11.64\% & 10.24\% \\ \hline
\end{tabular}
}
\end{table}
Table \ref{tab:sample-size-table} shows the average difference between the actual results and weighted expected results aggregated to \gls{sa3} regions for both destination attraction and mode choice. In general, there is a clear trend towards greater accuracy with increasing sample size. This is to be expected as the sample stage (Section \ref{sec:3-plan}) generates activity chains that better fit the \gls{vista} travel survey.
Likewise the global distance distribution and destination attraction of the locate stage (Section \ref{sec:5-locate}) also fit their choices to the distributions of the travel survey.
For both of these stages, each new trip represents a chance to better fit their distributions, so larger sample sizes should produce increasingly representative results.
It is interesting to note that walking, public transport, and driving are less accurate for the 1\% sample than the 0.1\% sample, suggesting that the results are not stable at these sample sizes, and that larger sample sizes should be used if a representative sample is required.
Additionally, the gains in accuracy diminish with increasing sample size, with there being little accuracy gain between the 5\% and 10\% samples.
\section*{Methods}
\subsection*{Data}
\subsubsection*{Traffic data}
To study our tasks, we employ a travel smart card dataset (collected by Shanghai Public Transportation Card Co., Ltd) covering 30 days in April 2015 in Shanghai, China. Items such as card number, transaction date, transaction time, station name, and transaction amount are recorded by the subway system. The dataset contains information on 313 subway stations, 11 million individuals, and 123 million trip events. When a person checks in to or out of a subway station with a smart card, the time and location of the event are recorded automatically by the subway system. A case study of this dataset can be found on the dataset website \cite{coltd2015}.
Previous studies \cite{long2015finding,chatterjee2015studies} focus on the existence of different community snapshots for weekdays and weekends. However, in reality, there may also be holidays in a month, which can result in different patterns triggered by abnormal passenger mobility behaviors. In our study, there are two holidays that need to be considered. One is the Qingming Festival, which lasts from April 4 to April 6. The other is Labor Day, with three days off starting from May 1.
We consider the contact matrices to reveal the community snapshots on working days without holidays. Specifically, the \textbf{flow matrix} of stations is collected for each hour of a day by identifying passengers' starting and ending subway stations for every trip within that hour, and these are then organized into period contact matrices of stations. There are two main peak periods (the morning peak and the evening peak) in the subway system of Shanghai \cite{sun2015research}. Thus we mainly consider community snapshots for four periods in a day, shown in Table~\ref{table2}.
The total passenger volume for each hour of a working day is shown in Fig. \ref{figCurvePassenger}. There are two traffic peaks (morning and evening). To cover the morning and evening peaks, the morning period is set from 5:30 to 09:59 and the evening period from 16:00 to 20:59, each lasting around 5 hours. The remaining two time gaps are assigned to the morning/afternoon and night periods.
A trip is assigned to a period only when both its start time and end time fall within the time range of that period.
\subsubsection*{Spatial facilities' data}
We use the Baidu APIs \cite{baidu_Shanghai} to get the spatial facilities within 1 km of each station. More than 2 million spatial facilities were collected with detailed business information (such as name, business type, and address).
To reflect the living comfort around each station \cite{liu2014residential}, the facilities are categorized into three types (entertainment, shopping, and food) according to their business type and then used as features to quantify individuals' perception of traveling comfort.
Specifically, the entertainment type covers facilities whose business type is entertainment, and the shopping type covers facilities whose business type is shopping.
The food type covers facilities whose business type is food, excluding related small businesses (such as cake shops, teahouses, and bars).
\subsubsection*{Microblog data}
Corresponding to the period covered by the traffic data, we use the Weibo APIs \cite{Weibo_Shanghai} to collect the microblogs posted near each station. Around 1 million microblogs were collected, containing around 15 thousand nominal keywords extracted with Chinese word segmentation methods \cite{nlpcn}.
For the emotion strength of each keyword, we map the keywords to the emotion dictionary of Chinese words \cite{yu2008constructing}.
Furthermore, to find the topics behind these microblogs, the Topic Expertise Model, an improved Latent Dirichlet Allocation (LDA) topic model for the detection of keywords, is used here \cite{yang2013cqarank}, taking the groups of microblogs around each station as documents.
Finally, we obtain the topic distribution for each station and the word distribution for each topic.
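As a hedged illustration of this step, the sketch below substitutes a standard LDA implementation (gensim) for the Topic Expertise Model; it takes the per-station bags of keywords as documents and returns the per-station topic distributions and per-topic word distributions described above.
\begin{verbatim}
from gensim import corpora
from gensim.models import LdaModel

def station_topics(station_keywords, num_topics=20):
    """station_keywords: one list of segmented nominal keywords per station."""
    dictionary = corpora.Dictionary(station_keywords)
    corpus = [dictionary.doc2bow(doc) for doc in station_keywords]
    lda = LdaModel(corpus, id2word=dictionary, num_topics=num_topics, passes=10)
    # Topic distribution for each station and word distribution for each topic.
    station_topic = [lda.get_document_topics(bow, minimum_probability=0.0)
                     for bow in corpus]
    topic_words = [lda.show_topic(k, topn=10) for k in range(num_topics)]
    return station_topic, topic_words
\end{verbatim}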
\begin{figure}[h]
\centering \includegraphics[scale=0.3]{CurvePassenger} \caption{\textbf{The temporal flows on working days.} The total passenger volume over all working days is shown for each hour. There are two traffic peaks (morning and evening). To cover the morning and evening peaks, the morning period is chosen from 5:30 to 09:59 and the evening period from 16:00 to 20:59, each lasting around 5 hours. The remaining two time gaps are assigned to the morning/afternoon and night periods.
}
\label{figCurvePassenger}
\end{figure}
\begin{table}[!ht]
\centering \caption{Periods involved in a working day}
\label{table2} %
\begin{tabular}{ll}
\hline
\textbf{Period Name} & \textbf{Time Range}\tabularnewline
\hline
Morning & 5:30-09:59\tabularnewline
Morning/Afternoon & 10:00-15:59\tabularnewline
Evening & 16:00-20:59\tabularnewline
Night & 21:00-23:59\tabularnewline
\hline
\end{tabular}
\end{table}
\begin{algorithm}
\caption{Key steps in the analysis of community snapshots } \label{alg1}
\textbf{Input}: Flow matrices of day and periods for each day
\textbf{Output}: Categories of daily community snapshots
\begin{itemize}
\item \textbf{Step} \textbf{1: Detecting the community snapshots for each
period in a day}
\begin{itemize}
\item \textbf{For} each working day $d_{i}$ \textbf{do}
\begin{enumerate}
\item \textbf{For} each period $pd_{i}$ \textbf{do}
\begin{enumerate}
\item Detect communities based on the period flow matrix
\item Take the resulting snapshot of communities as the period community snapshot
\end{enumerate}
\item Take the combination of all period community snapshots as the periods-day
community snapshot
\end{enumerate}
\end{itemize}
\item \textbf{Step 2: Consensus clustering of the community snapshots of all periods}
\begin{itemize}
\item Update each station's periods-day community identification in the periods-day
community patterns
\end{itemize}
\item \textbf{Step} \textbf{3: Measuring variability of community snapshots}
\begin{itemize}
\item Compute the correlation matrix for periods-day community snapshots
\end{itemize}
\item \textbf{Step 4: Clustering the community snapshots based on the correlation
matrices}
\begin{itemize}
\item Cluster the periods-day community snapshots into different categories
\end{itemize}
\end{itemize}
\end{algorithm}
\subsection*{Community snapshots mining}
To perform the community snapshot analysis for each period in a day, we address the technical challenges through community detection methods. As depicted in Alg. \ref{alg1}, the community snapshot analysis consists of four steps. For each day, flow matrices at different temporal scales represent the directed mobility flows between any two stations at that scale. Here, we mainly consider the temporal scale of periods within a day, so each day is represented by a set of period flow matrices. Through community detection and consensus analysis on these matrices, we obtain a community identification for each station, forming the community matrix of the periods-day community snapshot. With the help of clustering methods, we can then group the periods-day community snapshots into categories. Next, we describe these steps in more detail.
\subsubsection*{Community snapshot detection and consensus clustering}
In this study, we use community detection methods (CDMs) to perform the analysis of community snapshots in the UTN, with the period contact matrices of stations as input. A CDM divides the network into groups with dense intra-connections and sparser inter-connections, probing the underlying structure of complex networks and extracting useful information from them \cite{sobolevsky2014general}. CDMs are well suited for our study to examine the human mobility patterns in the UTN by observing how major communities form and change.
There are many types of CDM techniques, such as greedy agglomerative optimization by Newman \cite{PhysRevE.69.066133} and the faster Clauset-Newman-Moore heuristic \cite{PhysRevE.70.066111}. The major difference between them lies in the objective function used for partitioning. Most CDMs use modularity as the objective function \cite{sobolevsky2014general}; other objective functions include description code length, block-model likelihood, and surprise. Our study aims to find community snapshots in the UTN in general, for which the widely used modularity objective is suitable. Therefore, we employ a high-quality modularity-based CDM (called Combo) \cite{sobolevsky2014general} as the analysis technique for day community snapshots. For the analysis of community snapshots, the regularity of mobility patterns in the subway system varies across temporal scales within a day \cite{zhong2016variability}; hierarchical CDMs with modularity optimization are therefore suitable for detecting the communities, as they supply different granularities of community identification.
Since our study aims to find suitable community identifications for each station in every period, a high-quality hierarchical CDM (the Louvain method \cite{rosvall2008maps}) is employed for the period community analysis.
As for consensus clustering, the consensus of temporal networks has been well studied to track the evolution of nodes in dynamic communities \cite{lancichinetti2012consensus}, and many community detection algorithms (e.g., the Louvain method) have been combined with consensus clustering. We therefore adopt the consensus clustering method of \cite{lancichinetti2012consensus} for the community snapshots of all days.
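A minimal sketch of the consensus step, using the python-louvain and networkx packages and treating each flow matrix as an undirected weighted graph, is given below; it illustrates the Lancichinetti-Fortunato procedure rather than reproducing the exact implementation used in this work.
\begin{verbatim}
import numpy as np
import networkx as nx
import community as community_louvain  # python-louvain package

def consensus_partition(flow_matrix, n_runs=50, threshold=0.5, max_iter=10):
    """Run Louvain repeatedly, build a co-assignment matrix, and re-cluster it
    until the co-assignments stabilise."""
    # Louvain works on undirected graphs, so the directed flows are symmetrised.
    weights = (flow_matrix + flow_matrix.T) / 2.0
    n = weights.shape[0]
    for _ in range(max_iter):
        np.fill_diagonal(weights, 0.0)                # drop self-loops
        graph = nx.from_numpy_array(weights)
        co = np.zeros((n, n))
        for _ in range(n_runs):
            part = community_louvain.best_partition(graph, weight="weight")
            labels = np.array([part[i] for i in range(n)])
            co += labels[:, None] == labels[None, :]
        co /= n_runs
        co[co < threshold] = 0.0                      # prune weak co-assignments
        weights = co
        if np.all((co == 0.0) | (co == 1.0)):         # consensus reached
            break
    np.fill_diagonal(weights, 0.0)
    return community_louvain.best_partition(nx.from_numpy_array(weights),
                                            weight="weight")
\end{verbatim}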
\subsubsection*{Variability measure of daily community snapshots}
The dynamics of the daily community snapshots over the working days (without holidays) in the studied month are represented as a succession of periods-day community snapshots produced by the above community detection methods. To identify regularity and determine whether two days have the same community snapshot, we need to measure the variability between them. For human mobility patterns in the subway system, the degree of regularity between any two days has been measured through the correlation of their temporal feature vectors from the normalized covariance matrix \cite{zhong2016variability}. Following this approach, we use a nominal correlation to measure variability. There are many types of nominal correlation, such as the Pearson contingency coefficient, Spearman's rho, and Cohen's kappa. Since the community identifications are nominal labels, a nominal correlation measure is appropriate, and we take the probability-distribution-based Pearson contingency coefficient as the nominal correlation in this work.
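For concreteness, the Pearson contingency coefficient between the community labellings of two days can be computed as in the following sketch (assuming the two label vectors list the stations in the same order).
\begin{verbatim}
import numpy as np
from scipy.stats import chi2_contingency

def contingency_coefficient(labels_day_a, labels_day_b):
    """Pearson contingency coefficient C between two nominal label vectors,
    here the community identifications of the stations on two days."""
    a_levels = {v: i for i, v in enumerate(sorted(set(labels_day_a)))}
    b_levels = {v: i for i, v in enumerate(sorted(set(labels_day_b)))}
    table = np.zeros((len(a_levels), len(b_levels)))
    for a, b in zip(labels_day_a, labels_day_b):
        table[a_levels[a], b_levels[b]] += 1          # build the contingency table
    chi2, _, _, _ = chi2_contingency(table)
    n = table.sum()
    return np.sqrt(chi2 / (chi2 + n))
\end{verbatim}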
\subsubsection*{Clustering the daily community snapshots}
With the above variability matrix of daily community snapshots, the next step is to cluster them in order to identify different categories of daily community snapshots. One disadvantage of popular existing clustering algorithms (e.g., hierarchical clustering and k-means) is that they are largely heuristic and not based on formal models. Model-based clustering is an alternative \cite{banfield1993model,melnykov2010finite}, in which parameterized Gaussian mixture models are initialized by hierarchical clustering. Therefore, we employ a model-based clustering method to cluster the daily patterns.
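A hedged sketch of this clustering step, using scikit-learn's Gaussian mixture implementation with the number of components selected by BIC, is shown below; the hierarchically-initialized implementation referred to above may differ in detail, so this is illustrative only.
\begin{verbatim}
import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_snapshots(variability_matrix, max_clusters=6):
    """Cluster daily community snapshots with Gaussian mixture models, choosing
    the number of components by BIC.  Each row of the variability (correlation)
    matrix is treated as the feature vector of one day."""
    X = np.asarray(variability_matrix)
    best_model, best_bic = None, np.inf
    for k in range(1, max_clusters + 1):
        gmm = GaussianMixture(n_components=k, covariance_type="full",
                              n_init=5, random_state=0).fit(X)
        bic = gmm.bic(X)
        if bic < best_bic:
            best_model, best_bic = gmm, bic
    return best_model.predict(X)   # category label for each day
\end{verbatim}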
\subsection*{Spatial and temporal model analysis of the where and when questions}
\textcolor{black}{We further take a dual-perspective view of passengers' activity patterns to investigate the models of individuals' traveling decision making at both spatial and temporal scales, by answering where and when individuals go for entertainment in the evening.}
In this respect, we introduce an individual-based model, as described in Fig. \ref{Alg_framework}.
Specifically, by using the microblogs to quantify the \textbf{heterogeneous spatial popularity} $X_j^i$ (the attraction exerted on individuals by the $j$-th topic at station $i$ when they spend their leisure time) for each topic at every station, the \textbf{spatial} model is studied through a location-based mobility model that considers individuals' perceptions of popularity and distance.
Moreover, the \textbf{temporal} model of individuals' time schedules of activities is studied through difference equations of temporal flows, modeling the dynamic process of balancing traveling comfort and delay cost (both correlated with the environment and distance). Next, we describe these models in more detail.
\begin{figure}[h]
\centering \includegraphics[scale=0.35]{Alg_framework} \caption{\textbf{The proposed model of individuals' traveling decision making.} We consider individuals' traveling decision making from the dual perspective of spatial and temporal models.
In doing so, two models are constructed to describe where and when an individual travels under various social influences, taking individuals' perceptions into account.
\textcolor{black}{
In the \textbf{spatial} model of the \textbf{where} question, we apply the microblog topics to characterize the spatial popularity (denoted by $X_j^i$). The model of spatial decision making is then constructed by combining individuals' popularity perception (denoted by $\Theta_j$) with the spatial popularity to obtain the spatial attraction.
Here, $p^{s\rightarrow i}$ denotes the spatial probability, i.e., the probability that an individual chooses station $i$ as the destination.
We also integrate the \textbf{temporal} model of the \textbf{when} question with the influence of spatial facilities by characterizing people's perceptions of traveling discomfort (denoted by $\mu_i$) and delay cost (denoted by $\tau_i$). Here $p_t^i$ refers to the temporal probability, i.e., the probability that an individual chooses time $t$ to leave. }
}
\label{Alg_framework}
\end{figure}
\subsubsection*{Spatial model analysis to explore the where question}
Let $X_{j}^{i}$ denote the popularity of the $j$-th topic at the $i$-th station. For simplicity, we take $X_{j}^{i}$ to be the sum, over the words observed near the $i$-th station, of their probabilities in the $j$-th topic:
\begin{equation}
X_{j}^{i}=\sum_{\omega=1}^{N_{i}} p_{j}^{\omega}
\end{equation}
Here $p_{j}^{\omega}$ is the probability of the $\omega$-th word in the $j$-th topic, and $N_{i}$ is the number of words observed near the $i$-th station.
Then we assume that the relative attraction $A^{s \rightarrow i}$ of the destination station $i$ to passengers at the origin station $s$ depends on the spatial popularity of the topics and on the travel distance through a logistic function:
\begin{equation}
A^{s \rightarrow i}=o_{i}\frac{1}{1+e^{-(\sum _{j=1}^{M} \Theta _{j} X_{j}^{i} + \Theta _{d} d_{si}+ \epsilon ) }}
\end{equation}
Let $\epsilon$ and $o_{i}$ denote the relative residual error and the total opportunities of destination $i$, respectively.
$\Theta_{d}$ refers to the normalized impact of the distance $d_{si}$ between stations $s$ and $i$.
Suppose there are $M$ categories of topics, and let $\Theta_{j}$ describe the impact of $X_{j}^{i}$ as the normalized emotion strength $E_{j}$ of the $j$-th topic.
Both $\Theta_{d}$ and $\Theta_{j}$ are normalized with the upper bound $ub$ and the lower bound $lb$. For instance, mathematically, $\Theta_{j}$ can be written as the following form:
\begin{equation}
\Theta _{j} = \frac{ E_{j}-lb }{ub-lb}
\end{equation}
Let $e_{k}$ be the emotion strength of the $k$-th word. $E_{j}$ is taken as the sum of the emotion strengths of the words in the $j$-th topic (which contains $D_{j}$ words):
\begin{equation}
E_{j} = \sum_{k=1}^{D_{j}} e_{k}
\end{equation}
Further, assuming that the traveling probability $p^{s \rightarrow i}$ from origin $s$ to destination $i$ is proportional to the attraction of $i$, $p^{s \rightarrow i}$ can be described as:
\begin{equation}
p^{s \rightarrow i}=\frac{A^{s \rightarrow i}}{\sum_{k=1}^{N_s}A^{s \rightarrow {k}}}
\end{equation}
where $N_{s}$ is the set of candidate destination stations for the origin station $s$.
If an individual is at station $s$, the probability that he/she chooses station $i$ as the destination is given by the traveling probability $p^{s \rightarrow i}$.
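Putting the above together, the destination-choice probabilities can be computed as in the following sketch; the array layout and variable names are assumptions for illustration, with the code variables corresponding to $\Theta_j$, $\Theta_d$, $d_{si}$, $o_i$ and $\epsilon$ above.
\begin{verbatim}
import numpy as np

def destination_probabilities(X, theta, theta_d, distances, opportunities, eps=0.0):
    """Traveling probabilities p(s -> i) over the candidate destination stations.

    X:             (n_stations, n_topics) matrix of topic popularities X_j^i
    theta:         (n_topics,) normalized emotion strengths Theta_j
    theta_d:       scalar normalized distance impact Theta_d
    distances:     (n_stations,) distances d_si from the origin station s
    opportunities: (n_stations,) total opportunities o_i of each destination
    The arrays cover the candidate destinations for the origin station s.
    """
    z = X @ theta + theta_d * distances + eps
    attraction = opportunities / (1.0 + np.exp(-z))
    return attraction / attraction.sum()
\end{verbatim}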
\subsubsection*{ Temporal model analysis to explore the when question}
\textcolor{black}{When an individual begins his/her trip, he/she is influenced not only by traveling comfort and the time already spent waiting, but also by the trip's distance and the food facilities near the starting station.}
Let $Y_{t}^{i}$ denote the traveling volume from the $i$-th station at time $t$.
We assume $Y_{t}^{i}$ is affected by individuals' perceptions of traveling discomfort and delay cost in the temporal model. Specifically,
(1) the traveling discomfort reflects passengers' feeling of comfort and is correlated with the traffic volume \cite{de2015discomfort}; for simplicity, we use $Y_{t-1}^{i}$ to represent the traveling discomfort; and
(2) the delay cost is related to the waiting time $\Delta t$, which is the time gap between the beginning time and the current time.
Thus $Y_{t}^{i}$ is modeled as:
\begin{equation}
Y_{t}^{i}=N_{i}(\tau_i \Delta t - \mu_i Y_{t-1}^i + C_i)
\end{equation}
Here $\mu_{i}$ refers to individuals' perception of the traveling discomfort at the $i$-th station, correlated with the nearby economic/social buildings, and $\tau_i$ refers to individuals' perception of the delay cost, correlated with the distances between stations. $N_i$ denotes the total volume of passengers at station $i$ in the evening. Thus an individual's temporal choice of time $t$ occurs with probability $p_t^i=Y_{t}^{i}/N_{i}$.
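As a quick illustration of this recursion, the sketch below simulates $Y_t^i$ for a single station; all parameter values are placeholders chosen only to produce a plausible evening curve, not values from the fitted model.
\begin{verbatim}
import numpy as np

def simulate_temporal_flow(N_i, tau_i, mu_i, C_i, P0, n_steps):
    """Iterate Y_t^i = N_i * (tau_i * dt - mu_i * Y_{t-1}^i + C_i)."""
    Y = [N_i * P0]                      # volume at the start time
    for dt in range(1, n_steps):
        y = N_i * (tau_i * dt - mu_i * Y[-1] + C_i)
        Y.append(max(y, 0.0))           # volumes cannot be negative
    return np.array(Y)

# placeholder parameters for one station over 14 half-hour steps (17:00-24:00)
N_i = 5000
Y = simulate_temporal_flow(N_i, tau_i=0.002, mu_i=0.0001, C_i=0.01,
                           P0=0.02, n_steps=14)
p_t = Y / N_i                           # temporal choice probabilities p_t^i
print(np.round(p_t, 4))
\end{verbatim}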
In addition, we use the generalized additive model to describe the correlations between parameters (such as $\mu_i$ and $\tau_i$) and features (such as economic/social buildings and distances).
Specifically, $E(\mu_{i})$, the expectation of $\mu_{i}$, is related to the features of economic/social buildings through a link function $g$ (here, the log function) via the following structure:
\begin{equation}
g(E(\mu_i))= \sum_{k=1}^{K} f_k(x_k^i)
\end{equation}
where $x_k^i$ denotes the $k$-th feature of the economic/social buildings around station $i$, and the functions $f_k$ are smooth functions. $\tau_i$ is modeled analogously, with distance as the predictor variable.
In addition, let $P_0$ denote the initial value of $(\tau_i \Delta t - \mu_i Y_{t-1}^i + C_i)$ at the start time.
For simplicity, the constant values of $C_i$ and $P_0$ are assumed to correlate with all features (such as economic/social buildings and distances) through the generalized additive model, with the identity function as the link function $g$.
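A minimal sketch of this fitting step is given below. It assumes the Python library pygam (the original analysis may well have used a different GAM implementation) and purely synthetic feature/parameter data; the log link is emulated by fitting the smooth terms to $\log \mu_i$.
\begin{verbatim}
import numpy as np
from pygam import LinearGAM, s

# synthetic data: one row per station, one column per building feature
rng = np.random.default_rng(0)
features = rng.uniform(0, 2000, size=(200, 3))   # e.g. food/shopping/entertainment counts
mu = np.exp(2e-4 * features[:, 0]) + rng.normal(0, 0.05, 200)   # synthetic mu_i values

# log link: fit a GAM with one smooth term per feature to log(mu_i)
gam = LinearGAM(s(0) + s(1) + s(2)).fit(features, np.log(mu))
gam.summary()                                    # p-values and edf per smooth term

mu_hat = np.exp(gam.predict(features[:5]))       # predicted E(mu_i)
print(np.round(mu_hat, 3))
\end{verbatim}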
\section*{Introduction}
Urban transport plays an important role in shaping and reflecting the evolution of cities \cite{batty2013theory,batty2008size}. To create the desired social-economic outputs, urban transport planners use human mobility flows to understand the spatial-temporal interactions of people \cite{batty2013theory} on transportation systems.
An urban public transport network (UTN) consists of the mobility flows of an urban transportation system, where the nodes represent the public transit stand locations and the directional edges denote the mobility flows from one node to another \cite{rodrigue2009geography}. For example, in a subway network, each node denotes a subway station and each edge denotes the mobility flow between two nodes, weighted by the volume.
The mobility flows change {dynamically} over time as people's reasons for travelling, i.e. their {activities} (e.g. work or entertainment), change \cite{zhong2015measuring}.
To measure the {variability of spatial-temporal mobility flows}, {communities} in a UTN (a community is a set of densely interconnected nodes that have few connections to outside nodes \cite{newman2003structure}) are used to show the dynamic changes in UTN over time.
Furthermore, a {community snapshot} is a snapshot of the communities in a UTN at a single point in time (e.g. a period, day or year). Community snapshots have many applications in transport planning, e.g. in urban development analysis, experts use them to quantify the influence of urban development on transportation networks \cite{sun2015quantifying,zhong2014detecting}; in urban dynamics analysis, experts use them to measure the variability of human mobility patterns \cite{zhong2015measuring}; and in urban area analysis, experts use them to identify functional zones \cite{kim2014analysis}.
The community method is not the only means of observing dynamic mobility flows; {driven spatial-temporal models with high-order structures of activity patterns} are also valuable in many scenarios. For example, in public transit scheduling and pricing, ticket prices are optimised to ease traffic congestion by influencing passengers' driven models of temporal scheduling \cite{de2015discomfort}; in urban planning, cities can be planned according to valuable factors, i.e. factors that attract individuals' decisions to live in or visit a particular place \cite{paldino2015urban}.
To further understand the dynamic decision-making processes that shape individuals' spatial-temporal movements, we study dynamic mobility flows by measuring the variability in a heterogeneous sample of community snapshots in a UTN and explore the models that drive these patterns.
We observe the dynamic mobility flows using community snapshots of different spatial stations over time. These flows exist in many public transit systems (e.g. shared bicycle systems \cite{austwick2013structure,borgnat2013dynamical}, public bus systems \cite{chatterjee2015studies,zhang2015evaluation} and taxi systems \cite{liu2015revealing,kang2013exploring}).
Here, we use a subway system, an important part of a public transport network \cite{rodrigue2009geography}, to demonstrate the use of community snapshots.
As current static or aggregated mobility measures are robust to perturbations and cannot clearly reveal the variability of temporal community structures \cite{zhong2015measuring}, community patterns are commonly studied retrospectively to gain a deeper understanding of how people's activities change over time. Here, we study changes in the use of a subway system by detecting and comparing community snapshots from different time periods \cite{zhong2015measuring,sun2015quantifying}.
Retrospective studies of community snapshots improve our ability to measure the mobility flows from the perspective of the network. However, they do not provide insight into the {spatial-temporal} models that drive the mobility dynamics. For the current day, we cannot determine where individuals would like to go, let alone when they would like to begin any of their activities.
To address this issue, we also study the patterns of passengers' movements between places \cite{diao2015inferring,gong2015inferring,alexander2015origin,jiang2012discovering} as a high-order spatial-temporal structure.
The examination of the activity patterns should answer two questions: {
(1) where do individuals travel to for their activities; and
(2) when do people start their activities? }
For example, in the evening, can our models {infer} where and when a person is likely to go for entertainment after work?
These studies will raise planners' awareness of human movements between stations over different time periods.
Specifically, the likelihood of an individual participating in an activity in a particular spatial-temporal space is associated with both the station's spatial characteristics (such as population density and number of retail venues) and temporal characteristics (such as the day of the week and time of day) \cite{diao2015inferring}.
Previous studies of the {where (spatial) }dimension of intra-urban spatial mobility in public transit have typically focused on regional populations (such as Beijing \cite{liang2013unraveling,yan2014universal}, Shenzhen \cite{yan2014universal}, London \cite{liang2013unraveling}, Chicago \cite{liang2013unraveling,yan2014universal}, Los Angeles \cite{liang2013unraveling} and Abidjan \cite{yan2014universal}) and travelling distance (e.g. in Seoul \cite{goh2012modification}).
However, the correlation between public transit and its surrounding economic and social environment is obvious (such as in Biscay \cite{mendiola2014link}), especially the subway system (e.g. in Sao Paulo\cite{haddad2015underground}).
Most of the existing studies focus on universal laws of human intra-urban mobility, but neglect the {influence of a heterogeneous spatial environment} on activities over periods of time, and thus ignore the dynamics of universal mobility laws \cite{goh2012modification,perkins2014theory}.
Thus,
the {where (spatial) }dimension can be studied by analysing individuals' spatial movements as revealed by measurements of stations' {popularity in a heterogeneous spatial environment}.
Static information (such as population) cannot reflect dynamic spatial popularity over periods of time. Some studies have used individual digital traces to detect the urban magnetism of different places (e.g. in New York City \cite{paldino2015urban}). For example, location-based {microblogs} such as Twitter \cite{preoctiuc2015analysis,preoctiuc2015studying} have been found to correlate with individuals' profiles, spatial-temporal behaviour and preferences\cite{yuan2013and}. Thus, in this study, each station's spatial popularity is measured by the volume of spatial microblogs associated with it.
The study uses a location-based mobility model based on the spatial microblogs of neighbouring stations to describe the {heterogeneous spatial popularity} of each station, i.e. its attractiveness to individuals.
Existing studies of the {when (temporal) }dimension of passengers' activities focus on factors in the traffic system such as trip fares, delay cost and travel distance \cite{mohring1972optimization}, and on travel discomfort or congestion \cite{kraus1991discomfort,de2015analyzing,de2015discomfort}. They use a variety of methods such as the equilibrium equation \cite{kraus1991discomfort,de2015analyzing,de2015discomfort} to measure the effect of these factors.
However, these factors are not enough to explain the dynamic {uncertainty} in individuals' scheduling decisions, especially under the influence of a particular {social and economic environment}.
A passenger's travelling objective is to find an equilibrium between travelling comfort and schedule delay cost \cite{de2015discomfort}. Thus, differential equations can be used to model individuals' decision-making processes regarding temporal activities that balance travelling comfort and delay cost.
More specifically, a station's perceived travelling comfort can be measured by the number of {business buildings} surrounding the station (for Seoul see \cite{bae2003impact} and for Shanghai see \cite{jiwei2006railway}).
In addition, the perception of delay cost correlates with the distance between two places \cite{de2015discomfort}. Thus distance can be used to infer individuals' perception of delay cost.
\textcolor{black}{
In this way, we study the {when (temporal)} dimension of flows by considering the balance between travelling discomfort and delay cost. To characterise individuals' perceptions, features such as spatial facilities are correlated with perceptions through a generalised additive model\cite{hastie1990generalized,wood2006generalized}.
}
In this study, we take the subway system of Shanghai as a case study. This dataset provides the detailed trace information of 11 million individuals over a one-month period, including check-in and check-out times for each subway trip. The aggregated data on the subway system are collected by the Shanghai Public Transportation Card Co. Ltd, and released by the organising committee of the Shanghai Open Data Apps \cite{coltd2015}.
To examine the stations' environments, we collect microblog data and information about the spatial facilities around each station from Baidu APIs \cite{baidu_Shanghai} and Weibo APIs\cite{Weibo_Shanghai} for the studied month.
\textcolor{black}{
In our case study, we {first} measure the variability of mobility flows by investigating the dynamic spatial-temporal community snapshots of the mobility flows.
The community snapshots taken at different periods (morning, morning/afternoon and evening) do not agree with each other; the evening snapshots are particularly distinct.
We further investigate the high-order structure of the evening activity patterns.
The findings show that most individuals return home after work. Activity patterns with more edges are less common.
{\textit{In addition}}, we use spatial and temporal models to examine the effects of social factors on activity patterns. Specifically, in our examination of {the where dimension}, we find that the city centre has a higher social influence. This influence is better described by the spatial model, which illustrates heterogeneous spatial popularity.
{\textit{Finally}}, in the exploration of {the when dimension}, we find that individuals tend to start their trips earlier when they travel shorter distances. Interestingly, if there are more food-related facilities (but no more than 1563) near the starting station, people are more likely to slow down their trip to avoid travelling discomfort.
}
Our results deepen the understanding of {spatial-temporal} mobility flows in urban public transport networks by helping to model and estimate the spatial-temporal mobility flows. Specifically, we highlight the effects of social influences as measured by microblogs and spatial facilities.
\begin{figure}[th]
\centering \includegraphics[scale=1.60]{Figure-1_Du2.eps}
\caption{{Snapshots of communities at different times, embedded in geographical regions.}
\textcolor{black}{Taking one working day as an example, these snapshots of communities at four different times of day (morning, morning/afternoon, evening and night) show the structure of the subway system.
The various communities are indicated by the colour of the nodes in the subway system.
A comparison of any two adjacent periods shows that the communities move, as shown by the shifting nodes, especially in the evening and at night. Due to the low passenger volume at night, we focus on the evening period. An extended evening snapshot, shown in the middle of the diagram, combines the eight administrative divisions of Shanghai \cite{wiki_Shanghai}.
The dynamic community snapshots reveal passengers' various decisions about where and when they travel for different activities over a one-day period. These data are useful for exploring the where and when dimensions of mobility flows.
The spatial map was created using the OpenStreetMap online platform (\href{http://www.openstreetmap.org/}{http://www.openstreetmap.org/}) (© OpenStreetMap contributors) under the license of CC BY-SA (\href{http://www.openstreetmap.org/copyright}{http://www.openstreetmap.org/copyright}). More details of the licence can be found at \href{http://creativecommons.org/licenses/by-sa/2.0/}{http://creativecommons.org/licenses/by-sa/2.0/}.
Line graphs were drawn using Tableau Software for Desktop version 9.2.15 (\href{https://www.tableau.com/zh-cn/support/releases/9.2.15}{https://www.tableau.com/zh-cn/support/releases/9.2.15}).
The layouts were modified with Keynote version 6.6.2 (\href{http://www.apple.com/keynote/}{http://www.apple.com/keynote/}).
}
}
\label{fig4layers}
\end{figure}
\begin{figure}[bth]
\centering \includegraphics[scale=0.60,angle=90]{Figure-2_Du.eps} \caption{{Changes in communities over one working day. }
\textcolor{black}{ Using the same day as in Fig. \ref{fig4layers}, this figure illustrates the changes in the communities over the day. The colours indicate the station's different community classes. The x-axis is indexed by the four time periods (morning, morning/afternoon, evening and night) in a day. The numbers on the vertical axes denote the subway lines (as described in Tab. S1) in the relevant communities. There are 11 communities in the morning and night periods, but only 10 in the other periods. Clearly, the subway lines associated with the city centre tend to construct communities with other subway lines. For example, in the morning, 5 of the 6 communities with more than 2 subway lines include subway lines 10 and 13, which are associated with the city centre.
Furthermore, the snapshots of communities in any two adjacent periods differ from each other; the differences are especially strong between the evening and night.
}
}
\label{figAllu}
\end{figure}
\begin{figure}[th]
\centering \includegraphics[width=0.35\textwidth,angle=90]{Figure-3_Du.eps} \caption{{Overview of Activity Patterns} Each activity type is marked as either H (Home), W (Workplace) or E (Entertainment). W1 and W2 denote workplace activities. Although an individual may have only one job, he/she may also go to and from work through different stations. Similarly, E1 and E2 denote entertainment activities.
\textcolor{black}{
(a) We label each activity pattern according to the number of nodes and edges. For example, the activity pattern N2E2 denotes an activity pattern with two {N}odes and two {E}dges.
The percentages beside each pattern show the ratio of the number of individuals engaged in this kind of activity pattern to the total number of individuals.
The data on individuals' digital traces can be divided into six main activity patterns that each account for more than 1\% of the sample. }
}
\label{figActivityPatterns}
\end{figure}
\begin{figure}[th]
\centering \includegraphics[scale=0.60,angle=90]{Figure-4_Du.eps}
\caption{{ Activity Patterns.} The red and blue edges denote where and when passengers go for entertainment in the evening. Note that in the calculation of when they travel for entertainment, we include the trip from the workplace to home, due to its similarity to travel from the workplace to an entertainment venue. Both trips are the first activities after work. Taking the time of leaving work as the beginning time, we use these trips to estimate wait times.}
\label{fig:subfig}
\end{figure}
\section*{Results}
\subsection*{Mining community snapshots for activity patterns}
To characterise the properties of community snapshots, we start by analysing the daily community snapshots (which are the aggregate of the community snapshots of the four periods) for the studied month. Of the 18 working days, the daily community snapshots are the same for 17 days; the exception is the first working day after a holiday (Qing-ming Day).
The common daily community snapshots for each period are shown in Fig. \ref{fig4layers}. Each period is represented by a snapshot; the changing colours of the station nodes show the communities joining and splitting over the four periods.
\textcolor{black}{
To be more specific, Fig. \ref{figAllu} shows the community snapshots of the subway system that appear to represent 94\% of working days. The subway lines that make up each community are displayed. The relationships between subway lines and administrative divisions are given in Tab. S1.
The subway lines that pass through the city centre are likely to construct communities with other subway lines that are associated with residential, tourist and business areas. For example, in the morning, five of the six communities with more than two subway lines include subway lines 10 and 13, which cross the city centre, as described in Tab. S1.
Furthermore, the number of stations in each community class is relatively stable, with one more or less between adjacent periods. However, the snapshots of the communities over all four periods (morning, morning/afternoon, evening and night) do not conform to this pattern, as stations in the last period join other communities.
The mixing of communities is particularly obvious in the evening. For example, the black community (subway line 8) in the evening combines with the purple community (subway line 8, 10 and 12).
Thus, no single community snapshot accurately shows the interactions between stations throughout a working day.
The changes reveal passengers' various activities, and the decisions they make regarding where and when to travel. }
Thus we further study passengers' activities.
Due to the relatively small passenger volume in the night, as shown in Fig. S2, we mainly study evening behaviour; a snapshot of evening communities is shown in Fig. \ref{fig4layers}, which also has additional information on administrative divisions.
\textcolor{black}{
Our analysis focuses on workers who start the first trip before 10:00 and make other trips after 17:00, as their activity patterns reflect regular behaviour. The high-order structure of their activity patterns is described in Fig. \ref{figActivityPatterns}.
We label each activity pattern by its number of nodes and edges. For example, the pattern N2E2 denotes an activity pattern with two {N}odes and two {E}dges.
We find that the activity pattern of N2E2 is the most common, followed by N3E2 (with three nodes and two edges) and others.
Each activity pattern describes a daily activity scenario.
For example, the pattern N3E2 describes an individual who goes from home H to workplace W1, then back home after work from place W2.
The activity patterns in the subway system show that most (85\%) individuals tend to go home after work, following pattern N2E2. Only a few people (around 15\%) go to other places after work.
We conclude that activity patterns with more edges are less likely to occur.
This is consistent with previous studies of other cities, such as Singapore \cite{jiang2015activity}, Paris \cite{schneider2013unravelling} and Chicago \cite{schneider2013unravelling}.
}
\subsection*{Exploring the where and when dimensions}
The community snapshot approach can improve our ability to measure the dynamics of mobility flows. However, the {spatial-temporal} models that drive the mobility dynamics are still unclear.
\textcolor{black}{
From morning to afternoon, individuals' spatial and temporal activities are stable, as they have fixed homes and workplaces. However, in the evening, travel to entertainment places is unstable. Thus we further analyse the evening activities by exploring the where and when dimensions, indicated by the coloured edges in Fig. \ref{fig:subfig} (a) and Fig. \ref{fig:subfig} (b), respectively. Next, we describe them separately.
}
\begin{figure}[t]
\centering \includegraphics[scale=0.60,angle=270]{Figure-5_Du.eps}
\caption{{Left: Map of emotions. Right: Map of food-related facilities }
(a) This is a map of emotions. The redder the colour, the more positive the emotions in the station's environment. (b) This is a map of food facilities. The bluer the colour, the greater the density of food-related facilities.
The spatial map was created using the OpenStreetMap online platform (\href{http://www.openstreetmap.org/}{http://www.openstreetmap.org/}) (© OpenStreetMap contributors) under the license of CC BY-SA (\href{http://www.openstreetmap.org/copyright}{http://www.openstreetmap.org/copyright}). More details of the licence can be found at \href{http://creativecommons.org/licenses/by-sa/2.0/}{http://creativecommons.org/licenses/by-sa/2.0/}.
Line graphs were drawn using Tableau Software for Desktop version 9.2.15 (\href{https://www.tableau.com/zh-cn/support/releases/9.2.15}{https://www.tableau.com/zh-cn/support/releases/9.2.15}).
The layouts were modified with Keynote version 6.6.2 (\href{http://www.apple.com/keynote/}{http://www.apple.com/keynote/}).
}
\label{figMapEmotion}
\end{figure}
\begin{figure}[th]
\centering
\includegraphics[scale=0.50,angle=270]{Figure-6_Du.eps}
\caption{{Left: An example of two mobility flows. Right: Temporal series of mobility flows } (a) The two mobility flows have the same destination but two different origins with different environments. (b) The relevant temporal flows in the evening are shown with the same colours corresponding to the two samples of mobility flows.
The spatial map was created using the OpenStreetMap online platform (\href{http://www.openstreetmap.org/}{http://www.openstreetmap.org/}) (© OpenStreetMap contributors) under the license of CC BY-SA (\href{http://www.openstreetmap.org/copyright}{http://www.openstreetmap.org/copyright}). More details of the licence can be found at \href{http://creativecommons.org/licenses/by-sa/2.0/}{http://creativecommons.org/licenses/by-sa/2.0/}.
Line graphs were drawn using Tableau Software for Desktop version 9.2.15 (\href{https://www.tableau.com/zh-cn/support/releases/9.2.15}{https://www.tableau.com/zh-cn/support/releases/9.2.15}).
The wave figures in (b) were drawn with Matlab version 2016a (\href{https://www.mathworks.com/}{https://www.mathworks.com/}).
The layouts were modified with Keynote version 6.6.2 (\href{http://www.apple.com/keynote/}{http://www.apple.com/keynote/}).
}
\label{fig_example2waves}
\end{figure}
\subsubsection*{Exploring the where dimension }
First, we analyse the emotional strength of each station in the eight administrative divisions of Shanghai. Specifically,
the distribution of emotion for each station is described in Fig. \ref{figMapEmotion} (a) which shows a heterogeneous spatial distribution.
The colours vary from blue to red. For each station, the redder the colour, the more positive the emotion.
We find that the stations in the city centre around the 'Jing'an' and 'Huangpu' divisions have higher positive emotions than the outer divisions.
Then, we apply the microblog-based spatial model to different groups of administrative divisions, as shown in Tab. \ref{tableGuassin}. The correlation is a value between 0 and 1; the larger the correlation, the more accurately the model describes the real flows.
We find that the divisions near the city centre have higher correlations than divisions on the edge of the city. Specifically,
the model of the stations in the city centre has the highest correlation with reality.
In contrast, the gravity mobility model \cite{Alonso1976} is much less accurate, with correlations around 0.1 in all three groups shown in Tab. \ref{tableGuassin}.
\textcolor{black}{
These results suggest that considering the heterogeneous spatial popularity of stations in addition to the influence of distance can improve the description of individuals' spatial mobility flows.}
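For reference, the comparison against the gravity baseline reduces to computing Pearson correlations between observed and predicted flows over station pairs; the sketch below uses made-up numbers, not the values reported in Tab. \ref{tableGuassin}.
\begin{verbatim}
import numpy as np
from scipy import stats

# illustrative arrays: observed and predicted flows for a few station pairs
observed          = np.array([120., 80., 45., 200., 60., 150.])
predicted_spatial = np.array([110., 95., 50., 180., 70., 140.])   # microblog-based model
predicted_gravity = np.array([ 90., 90., 90., 100., 85.,  95.])   # gravity baseline

for name, pred in [("spatial", predicted_spatial), ("gravity", predicted_gravity)]:
    r, p = stats.pearsonr(observed, pred)
    print(f"{name}: Pearson r = {r:.2f} (p = {p:.3f})")
\end{verbatim}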
\begin{table}[!ht]
\centering \caption{{Correlations of the real mobility flows between stations with the predictions of the spatial model. } There are two groups ('Centre' and 'Outer') of administrative divisions. 'Centre' divisions are divisions 1 to 4 in Fig. \ref{fig4layers}, whereas 'Outer' divisions are 5 to 8 in Fig. \ref{fig4layers}. A third group, which combines the two groups, is labelled 'Centre\&Outer'.
}
\label{tableGuassin} %
\begin{tabular}{lll}
\toprule
& Pearson's correlations & 95\% confidence interval of Pearson's correlations \\ \midrule
Centre\&Outer & 0.32 & 0.31 $\sim$ 0.32 \\
Centre & 0.35 & 0.34 $\sim$ 0.36 \\
Outer & 0.30 & 0.29 $\sim$ 0.31 \\ \bottomrule
\end{tabular}
\end{table}
\subsubsection*{Exploring the when dimension }
\textcolor{black}{
To investigate the temporal mobility flow, especially in the evening, we further explore the underlying temporal model by considering the influence of the environment. }
The model examines the influence of different types of places by varying the facilities, such as the food-related facilities shown in Fig. \ref{figMapEmotion} (b).
Fig. \ref{fig_example2waves} presents two samples of mobility flows that each have two origins and one destination.
The two origins are in different environments with different facilities around the stations, which may play a role in individuals' temporal decision making.
To identify the meaningful characteristics of facilities,
we first use the training samples (80\% of all samples) to analyse the correlations between the spatial facilities and the parameters of the temporal model; the corresponding features are shown on the x-axes of Fig. \ref{figR_para4}.
A $p$-value of a model's smooth term that is less than or equal to 5\% or 1\% indicates that the chosen smooth term is significant.
\textcolor{black}{ Moreover, the {e}stimated {d}egree of {f}reedom (edf) for the smooth terms' significance is estimated. }
When the value of the edf is far from 1, the smooth item tends to reflect a nonlinear relationship.
In this study, keeping only the significant features ($p$-value $<$ 0.1), we find that only the food-related feature, with a $p$-value of 0.06, has an obvious correlation with $\mu$. Its edf is far from 1, indicating a nonlinear relationship between individuals' perception of travelling discomfort and the presence of food-related facilities, as shown in Fig. \ref{figR_para4}.
$\mu$ peaks when the number of food-related facilities is near 1563; individuals near such stations show the strongest tendency to avoid travelling discomfort.
As the number of food-related facilities increases, $\mu$ first increases and then decreases, indicating that a moderate number of food-related facilities has the greatest influence.
As for the perception ($\tau$) of delay cost, distance plays an obvious role, with a $p$-value of $<$ 0.05. $\tau$'s edf of the smooth term on distance is near 1, indicating a linear relationship between perception of delay cost and distance. Specifically, as shown in Fig. \ref{figR_para4}, individuals near stations with higher $\tau$ values tend to start their travel earlier.
The other two parameters ($C$ and $P_0$) are only related to distance, with which they have a positive linear relationship.
In addition, applying the correlations learned by the training samples, we use the testing samples (20\% of all samples) to examine whether it is possible to simulate the temporal flows with the inferred parameters.
There are 17 pairs of stations in the testing sample. We mainly study their temporal flows in the evening after work (from 17:00 to 24:00). The simulation result is shown in Fig. S1.
As we can see, the simulation based on the temporal model, which is driven by the desire to balance travelling comfort and delay cost, is close to the real temporal flows.
\textcolor{black}{
This result shows that our temporal model describes to some extent the real temporal mobility flows, and can take into account the influences of the environment on individuals' perceptions.}
\begin{figure}[h]
\centering \includegraphics[scale=0.50,angle=90]{Figure-7_Du.eps} \caption{{Relationship between parameters and model smooth terms.} Four parameters are analysed here. Specifically, $\mu_i$ denotes individuals' perception of travelling discomfort. A higher $\mu_i$ indicates a higher tendency of individuals to avoid travelling discomfort.
$\tau$ represents passengers' perceptions of delay cost. A higher $\tau$ indicates a higher tendency of individuals to begin their travel as quickly as possible.
$C_i$ is a constant, denoting the basic dynamic of individuals' temporal travelling.
$P_0^i$ refers to the initial probability of individuals' temporal travelling at the starting time.
The mean value of the model smooth term is plotted with solid lines and the confidence intervals are plotted with dashed lines.
In addition, on the x-axis, the $P$-value and edf of the significance of the model smooth terms are shown.
(a) The nonlinear relationship between $\mu_i$ and the number of food-related facilities, with the peak at (1563, 0.3578). (b-d) The nearly linear relationships between the parameters ($\tau_i$, $C_i$ and $P_0^i$) and distance. }
\label{figR_para4}
\end{figure}
\section*{Discussion}
To study the dynamics of mobility flows in urban transport, it is important to estimate the spatial and temporal interactions of human mobility flows between stations. Understanding these flows helps planners to evaluate traffic congestion \cite{ceapa2012avoiding} and avoid the dangers of overcrowding \cite{pearl2015crowd}.
Taking the subway system of Shanghai as a case study, we used community snapshots to investigate the {variability of spatial-temporal} mobility flows on working days. The results show that the flows change at different times of day, but the patterns are similar on each working day.
Recognising these dynamics will help to predict human movements between stations on spatial-temporal scales, which may help planners to efficiently schedule subway cars.
To further understand the models, we investigate the {high-order structure of activity patterns} in both the temporal and spatial dimensions. Specifically,
we use the patterns in passengers' travelling activities to determine passengers' lifestyle needs and then investigate (1) the {where (spatial)} dimension of mobility flows between subway stations by correlating microblog topics with spatial popularity and
(2) the {when (temporal)} dimension of individual schedules of activities by correlating spatial facilities and travelling distance with the perceptions of travelling discomfort and delay cost.
We argue that correlations between the stations and their environments may to some extent be explained by urban catalyst theory (first proposed in 1960 \cite{jacobs1961death} and fully developed in 2000 \cite{davis2009urban}).
According to this theory, railway stations can act as urban catalysts and have positive effects on their surroundings, but are also influenced by their surroundings \cite{geng2009effect,jiwei2006railway,papa2008rail}. Therefore, a boom in living/business buildings or popular locations around a subway station can increase its traffic flows, as appears to be the case in Shanghai\cite{jiwei2006railway}, Seoul\cite{bae2003impact}, Kun Dae Yeok\cite{lee1997accessibility} and Toronto \cite{dewees1976effect}.
Note that the nonlinear relationship between $\mu_i$ and food-related facilities, shown in Fig. \ref{figR_para4}, indicates that although the overall influence of food-related facilities is related to their quantity, more is not necessarily better. Once the number of food-related facilities in a specified spatial area exceeds a threshold, which may vary with the size of the area, many small but low-level food-related facilities may appear, thus decreasing the total influence of food-related facilities.
This study has some limitations.
Community snapshots of the Shanghai subway system are used as a case study, and the dataset contains information for only one month. Other patterns may be found with data covering a longer time period; however, the same analysis methods could be used. In this study, we focus on community snapshots of the subway system; future studies could consider the similarities and differences of the patterns in other public transit systems.
\textcolor{black}{
{Furthermore}, in this study we only analyse the correlations of individual spatial-temporal perceptions with the environment. The causal relationships should be examined in future studies. In this study we characterise the environment by counting the number of various kinds of facilities, but do not consider their scales. Thus we do not sufficiently capture the correlation with entertainment and shopping facilities, as their effect on individuals' perceptions cannot be measured by simply counting the facilities. Better measures are needed to further analyse their effect.
}
\section*{Methods}
\subsection*{Data}
To study our tasks, we use the travel smart card dataset from Shanghai, China for the month of April 2015. Items (such as card number, transaction date, transaction time, station name and transaction amount) are recorded by the subway system. The dataset contains information about 313 subway stations, 11 million individuals and around 120 million trips. We define the morning period as from 5:30 to 09:59, and the evening period from 16:00 to 20:59. Both periods are approximately 5 hours. The two time periods between these periods are labelled morning/afternoon and night. A trip is only counted as belonging to a given period if both the starting time and end time occur within the same period.
More details can be found in the Supplement $\S 2.1$.
The Baidu APIs \cite{baidu_Shanghai} are used to identify the 2 million spatial facilities within 1 km of a subway station. The spatial facilities are assigned to one of three categories (entertainment, shopping and food) according to their business type. We also use the Weibo APIs \cite{Weibo_Shanghai} to identify 1 million microblogs generated near the stations.
To identify the topics of these microblogs, we use the Topic Expertise Model, an improved Latent Dirichlet allocation (LDA) topic model for the detection of keywords \cite{yang2013cqarank}; we use the groups of microblogs generated near each station as files.
Finally, we determine the distribution of topics across the subway system using the word distribution in each topic. See Supplement $\S 2.1$ for details.
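The topic extraction itself relies on the Topic Expertise Model; as a simplified stand-in, a plain LDA run with gensim over per-station microblog word lists would look roughly as follows (the station documents below are placeholders).
\begin{verbatim}
from gensim import corpora, models

# one "document" per station: tokenized microblogs posted near that station
station_docs = [
    ["coffee", "shopping", "mall", "movie"],
    ["office", "meeting", "lunch", "subway"],
    ["park", "running", "river", "sunset"],
]

dictionary = corpora.Dictionary(station_docs)
corpus = [dictionary.doc2bow(doc) for doc in station_docs]

lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=2,
                      random_state=0, passes=10)

# per-station topic distribution, usable for the spatial popularity X_j^i
for i, bow in enumerate(corpus):
    print(f"station {i}:", lda.get_document_topics(bow, minimum_probability=0.0))
\end{verbatim}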
\subsection*{Mining community snapshots }
To address the technical challenges of the community snapshot analysis for each period, we use community detection methods. The community snapshot analysis comprises four steps. For each day, we build flow matrices at different temporal scales, representing the directed mobility flows between any two stations at a given temporal scale. In this study, we focus on the temporal scale of four periods per day. Through community detection and consensus analysis for each period, we build a matrix of community identification for each station, producing a community matrix for each period. With the help of clustering methods, we then identify the categories of these community snapshots. See Supplement $\S 2.2$ for details.
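As an illustrative sketch of the community detection step only (the full pipeline additionally includes consensus analysis and clustering of the snapshots), Louvain communities can be extracted from a single period's flow matrix with networkx; the flow matrix below is synthetic.
\begin{verbatim}
import numpy as np
import networkx as nx
from networkx.algorithms.community import louvain_communities

# synthetic directed flow matrix for 6 stations (rows: origin, cols: destination)
rng = np.random.default_rng(1)
flows = rng.integers(0, 50, size=(6, 6))
np.fill_diagonal(flows, 0)

# build a weighted graph; Louvain here operates on the symmetrised flows
G = nx.Graph()
for s in range(6):
    for d in range(6):
        if flows[s, d] > 0:
            w = G.get_edge_data(s, d, {"weight": 0})["weight"]
            G.add_edge(s, d, weight=w + int(flows[s, d]))

communities = louvain_communities(G, weight="weight", seed=0)
print(communities)   # e.g. [{0, 2, 5}, {1, 3, 4}]
\end{verbatim}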
\subsection*{Spatial and temporal models for the where and when dimensions}
\textcolor{black}{We adopt a dual perspective on passengers' activity patterns to investigate the spatial and temporal dimensions of individuals' travelling decision making. Specifically, we examine where and when individuals go for entertainment in the evening, by introducing an individual-based model.}
Specifically, we use microblogs, which reflect individuals' relevant perceptions of popularity and distance, to determine the {heterogeneous spatial popularity} $X_j^i$ of each topic at every station. This results in a {spatial} model based on the location-based mobility model.
To study the {temporal} model of individual schedules of activities, we use the difference equations of temporal flows to model the dynamic evolution of balancing travelling comfort and delay cost, which are correlated with environment and distance.
Let $Y_{t}^{i}$ denote the travelling volume from the $i$-th station at time $t$.
Thus $Y_{t}^{i}$ is modelled as $
Y_{t}^{i}=N_{i}(\tau_i \Delta t - \mu_i Y_{t-1}^i + C_i)$.
Here $\mu_{i}$ refers to individuals' perceptions of the travelling discomfort at the $i$-th station, correlated with the nearby economic/social buildings, and $\tau_i$ refers to individuals' perceptions of the delay cost, correlated with the distances between stations. $N_i$ denotes the total volume of passengers at station $i$ in the evening. Thus an individual's temporal choice at time $t$ has probability $p_t^i=Y_{t}^{i}/N_{i}$.
In addition, these parameters (such as $\mu_i$ and $\tau_i$ ) are further correlated with features (such as economic/social buildings and distances) by the generalised additive model.
See Supplement $\S 2.3$ for more details.
\section{Introduction}
Performance analysis of athletes in sports is increasingly driven by quantitative methods, with key performance parameters being recorded, monitored and analyzed on a large scale.
This is facilitated by the availability of video and other sensory hardware as well as the opportunities for automation with recent advances in machine learning.
Video-based methods are of special interest, where athletes are tracked by one or multiple cameras to infer parameters and statistics in individual and team sports. This form of external monitoring does not rely on direct sensor instrumentation of athletes \cite{Fasel18, Ismail18}, which could otherwise affect the athletes' performance or limit measurements to very specific training sites.
In this work we propose a vision-based system specifically for event detection in athlete motion.
The main objective is to detect points in time, where characteristic aspects of an athlete's motion occur.
Our focus lies on practical solutions that can be adopted to different sports, event definitions and visual environments.
In recent years, many approaches to visual (motion) event detection, or more general to action recognition, are full-stack vision models \cite{Giancola18, Luvizon18, Victor17}.
They directly try to solve the mapping from images or video clips to very specific motion or action related objectives.
The drawback of this end-to-end paradigm is that transferring and adapting to other domains usually requires a large amount of annotated image or video material and carefully tuning many parameters.
In contrast, we encourage the effective decoupling of the video-to-motion-event translation, by using temporal human pose sequences as a suitable intermediate motion representation.
This allows us to built upon existing state-of-the-art vision models, in this case for human pose estimation, as much as possible.
The compact pose sequences serve as the basis for much smaller and simpler event detection models.
We address the problem of detecting motion events for performance analytics in two domains: swimming as well as long and triple jump. In both cases the goal is to precisely extract timestamps of certain motion aspects that in turn can be used to infer timing and frequency parameters.
We purposely choose rather different domains to show the generality of our approach.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.95\linewidth]{qualitative}
\caption{Examples of event detections obtained with our method.
Left: Events detected in swim start recordings, including the jump-off, the dive-in and the start of dolphin kicks.
Right: Detections of begin and end of ground contact for a triple jump athlete.}
\end{center}
\label{fig:qualitative}
\end{figure*}
For swimming, we consider an instrumented swimming pool with synchronized cameras mounted above and under water at the side of the pool.
The four cameras cover the first 20 meters of the pool and are specifically used for swim start training.
They record the jump from the starting block, the dive-in into the water and the initial under water phase of a single athlete under test.
Due to the fixed viewpoints we have strong prior knowledge of what motion will occur where, \ie in which camera, and in which order.
At the same time, only a limited number of recordings with labeled temporal events is available.
For this scenario, temporal motion events can be detected from 2D pose sequences by directly extracting \emph{pose statistics} and deciding on event occurrences with simple \emph{heuristics}.
We show that despite the lack of data, domain specific knowledge can be employed to still obtain robust event detections.
For long- and triple jump, we use recordings of individual athletes with a single pannable camera.
The camera is positioned at the side of the running track and records the run-up, the jump(s) and the landing in the sand pit.
Compared to swimming, this scenario is rather complementary.
The recordings cover many different tracks during training and competitions, leading to varying viewpoints and scales.
Due to the pannable camera, the viewpoint is also changing during a video. Additionally, there is more variability in timing and location of the observed motion, depending on the camera tracking, the length of the run-up and the step frequency. We show how a moderately sized set of event-annotated recordings can be used to learn a \emph{CNN-based sequence-to-sequence translation}. The idea is to map estimated 2d pose sequences extracted from the recordings to a \emph{timing estimate of event occurrences}. We enhance this approach with a novel pose normalization and augmentation strategy for more robust and stable estimates. Figure~\ref{fig:qualitative} shows examples of video recordings and detected events.
\section{Related work}
A lot of prior literature focuses on the tasks of visual motion event detection and action segmentation in sports. We briefly review existing approaches based on their methodology, and in particular the type of motion representation they use, if any.
One possibility is to use semantically rich motion capture data to infer action type and duration of an athlete.
\cite{Vicente16} use 3D pose data from an RGB-D sensor to identify characteristic poses in martial arts with a probabilistic model.
Similarly, \cite{DeDios13} use motion capture data to segment the motion during general physical exercises into different actions.
Limited to regular monocular video recordings, \cite{Li10} describe the usage of low-level video segmentation features in high-diving. They fit a simple body model on the segmented video areas and use a probabilistic model to infer the most likely action.
\cite{Wu02} use estimated athlete velocities from motion segmentation for a high-level semantic classification of long jump actions.
More similar to our approach, \cite{Yagi18} use noisy 2D pose estimates of athletes in sprints. They align lane markers with the 2D poses to infer step frequencies, but with a much lower precision compared to our work.
\cite{Lienhart18} use pose similarity and temporal structure in pose sequences to segment athlete motion into sequential phases, but again with limited temporal precision.
Directly related to our work, \cite{Einfalt19} introduces a deep learning based framework for sparse event detection in 2D pose sequences. Parts of our work build and improve on that foundation.
Lastly, there are multiple approaches that omit an intermediate motion representation and directly infer events from video recordings. \cite{Giancola18} use temporally pooled CNN features directly from soccer broadcasts to classify player actions.
\cite{Sha14} use highly specific body part detectors for swimmers in hand-held camera recordings.
They extract video frames with certain body configurations that mark the start of a swimming stroke.
\cite{Hakozaki2018, Victor17, woinoski20} consider a similar task, but propose a video-based CNN architecture to detect swimming athletes in specific body configurations.
Our work shares the notion of event detection based on body configuration, but we aim at a more modular and flexible solution due to the intermediate pose representation.
\section{Method}
The main motivation behind our approach is to use the human pose as a compact description of an athlete's motion over time.
It decouples the highly specific objective of motion event detection in particular sports from the low-level problem of inferring information from video data directly.
Given a video of length $N$, our goal is to describe it with a pose sequence of the athlete of interest.
Each pose $p \in \mathbb{R}^{K \times 2}$ is the configuration of $K$ 2D keypoints in a specific video frame that describe characteristic points of the human body.
Depending on the body model, the $K$ keypoints usually coincide with joints of the human skeleton.
Each keypoint is represented by its image coordinates.
From a suitable camera viewpoint, such a sequence of 2D keypoints configurations captures the essence of motion of the human body.
In the following, we describe our proposed approach to track a single athlete and his pose over time and to map the resulting pose sequence to the desired motion events.
\subsection{Motion representation with pose estimates}
\label{sec:motion_representation}
In order to infer the pose of the athlete of interest in every video frame, we build upon the vast and successful work on human pose estimation in images.
The CNN architectures for human pose estimation that emerged over the last years have reached levels of accuracy that enable their direct usage in practical applications, including sports.
In this work we use a modified variant of Mask R-CNN \cite{He17}, fine-tuned on sampled and annotated video frames from our application domains.
The main advantage of Mask R-CNN lies in the single, end-to-end pipeline for multi-person detection and per-person pose estimation.
From a practical point of view it is easier to implement, train and embed only a single CNN into an application.
We use the common version of Mask R-CNN with a ResNet-101 \cite{He16} and Feature Pyramid Network (FPN) resolution aggregation \cite{Lin17}.
We additionally evaluate a high resolution variant of Mask R-CNN \cite{Einfalt19}, which estimates keypoints at double the usual resolution.
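For orientation, per-frame candidate poses can be obtained with an off-the-shelf keypoint detector; the snippet below uses torchvision's COCO-pretrained Keypoint R-CNN (ResNet-50 FPN), which is only a rough stand-in for the fine-tuned ResNet-101 Mask R-CNN variant described here, and the image path is a placeholder.
\begin{verbatim}
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# generic COCO-pretrained detector, not the fine-tuned model from the paper
model = torchvision.models.detection.keypointrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = Image.open("frame_0001.jpg").convert("RGB")    # placeholder frame path
with torch.no_grad():
    out = model([to_tensor(frame)])[0]

keep = out["scores"] > 0.7                 # keep confident person detections
boxes = out["boxes"][keep]                 # (D, 4) candidate bounding boxes
poses = out["keypoints"][keep]             # (D, 17, 3): x, y, visibility per keypoint
print(boxes.shape, poses.shape)
\end{verbatim}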
\subsection{Tracking and merging pose sequences}
Recordings from individual sports typically depict multiple persons: the athlete of interest, as well as other athletes and bystanders.
Our swimming recordings often show additional swimmers in the background. Videos from athletics show observers and other athletes surrounding the running track. We therefore need to track the athlete of interest by assigning the correct detection and pose to each video frame.
Compared to general pose tracking, \ie finding pose sequences for all people in a video, we only need to find a pose sequence for a single athlete. We therefore propose a generic and adaptable tracking strategy that can include domain knowledge about the expected pose and motion of the athlete, with a simple \emph{track, merge and rank} mechanism.
\subsubsection{Initial pose tracks}
We start by processing a video frame-by-frame with Mask R-CNN.
During fine-tuning to our application domains, we train Mask R-CNN to actively suppress the non-relevant persons.
However, this is not always possible, as the athlete of interest can be similar to other persons in a video frame with respect to appearance and scale. Therefore, we obtain up to $D$ person detections in every frame.
We denote the detection candidates at frame $t$ as $\mathbf{c}_t = \left \lbrace d_{t,1}, \dotsc, d_{t,D} \right \rbrace$.
Each detection is described by $d_{t,i} = \left ( x_{t,i}, y_{t,i}, w_{t,i}, h_{t,i}, s_{t,i} \right )$, with the center, width, height and detection score of the bounding box enclosing the detected person.
Each detected person has its corresponding pose estimate $p_{t,i}$.
Given $\mathbf{c}_t$ for every video frame, we want to find the detection (and pose) track belonging to the athlete of interest.
We start to build initial tracks by linking temporally adjacent detections that are highly likely to belong to the same person.
We employ an intersection over union (IoU) criterion, since we expect the changes of the athlete's position and scale from one frame to another to be small.
By greedily and iteratively linking detections throughout the video, we gain an initial set of detection tracks $\mathbf{T}_1, \dotsc, \mathbf{T}_L$.
Each track has start and end times $t_1$ and $t_2$ and consists of sequential detections, with $\mathbf{T}_j = \left ( d_{t_1}, d_{t_1 + 1}, \dotsc, d_{t_2} \right )$, where
\begin{equation}
d_t \in \mathbf{c}_t \; \forall d_t \in \mathbf{T}_j
\end{equation}
and
\begin{equation}
\iou \left ( d_t, d_{t+1} \right ) > \tau_{\text{IoU}} \; \forall (d_t, d_{t+1}) \in \mathbf{T}_j.
\end{equation}
We use a very strict IoU threshold of $\tau_{\text{IoU}} = 0.75$, since we do not use backtracking to re-link individual detections later.
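A simplified sketch of this greedy linking step is shown below; detections are assumed to be given per frame as $(x, y, w, h, s)$ tuples in center format, and the helper names are illustrative.
\begin{verbatim}
def iou(a, b):
    """IoU of two detections given as (x, y, w, h, score) in center format."""
    ax1, ay1 = a[0] - a[2] / 2, a[1] - a[3] / 2
    ax2, ay2 = a[0] + a[2] / 2, a[1] + a[3] / 2
    bx1, by1 = b[0] - b[2] / 2, b[1] - b[3] / 2
    bx2, by2 = b[0] + b[2] / 2, b[1] + b[3] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def build_initial_tracks(candidates, tau_iou=0.75):
    """candidates: list over frames, each a list of detection tuples."""
    tracks, open_tracks = [], []               # open_tracks: (track, last detection)
    for frame_dets in candidates:
        used, still_open = set(), []
        for track, last in open_tracks:
            # pick the unused detection with the highest IoU to the track's last box
            best_i, best_v = None, 0.0
            for i, d in enumerate(frame_dets):
                if i not in used and iou(last, d) > best_v:
                    best_i, best_v = i, iou(last, d)
            if best_i is not None and best_v > tau_iou:
                track.append(frame_dets[best_i])
                used.add(best_i)
                still_open.append((track, frame_dets[best_i]))
            else:
                tracks.append(track)            # no match: close the track
        for i, d in enumerate(frame_dets):
            if i not in used:
                still_open.append(([d], d))     # unmatched detections start new tracks
        open_tracks = still_open
    tracks.extend(t for t, _ in open_tracks)
    return tracks
\end{verbatim}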
\subsubsection {Track merging}
Due to imperfect or missing detections we need to merge tracks that are divided by small detection gaps.
Two tracks $\mathbf{T}_i, \mathbf{T}_j$ can only be considered for merging, if they are temporally close, but disjoint, \ie $t_{1,j} - t_{2,i} \in [1, \tau_{\text{gap}}]$.
We apply up to three criteria to decide whether to merge the tracks.
First, there has to be some spatial overlap between the detections at the end of $\mathbf{T}_i$ and the beginning of $\mathbf{T}_j$, \ie an IoU $>0$.
Second, both tracks should contain detections of similar scale.
Since the athlete moves approximately parallel to the image plane, his size should be roughly the same throughout a video, and more so in the two separate detection tracks:
\begin{equation}
\frac{\lvert \mathbf{T}_j \rvert}{\lvert \mathbf{T}_i \rvert} \cdot \frac{\sum_{d \in \mathbf{T}_i} d_w \cdot d_h}{\sum_{d \in \mathbf{T}_j} d_w \cdot d_h} \in [\frac{1}{\tau_\text{scale}}, \tau_\text{scale}]
\end{equation}
Lastly, in the case of swimming, we expect the athlete to always move in the same horizontal direction through the fixed camera views. Both tracks should therefore move in the same horizontal direction:
\begin{equation}
\sgn \left ( d_{t_2, i, x} - d_{t_1, i, x} \right ) = \sgn \left ( d_{t_2, j, x} - d_{t_1, j, x} \right )
\end{equation}
After greedily merging the initial tracks according to those criteria, we get our final track candidates $\mathbf{T}_1^\prime, \dotsc, \mathbf{T}_M^\prime$.
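The corresponding merge decision can be sketched as follows, reusing the iou helper from the previous snippet. The thresholds $\tau_{\text{gap}}$ and $\tau_{\text{scale}}$ are not specified above, so the default values here are illustrative assumptions.
\begin{verbatim}
def mean_area(track):
    return sum(d[2] * d[3] for d in track) / len(track)

def same_direction(track_a, track_b):
    # compare the sign of the horizontal displacement of both tracks
    return (track_a[-1][0] - track_a[0][0]) * (track_b[-1][0] - track_b[0][0]) >= 0

def can_merge(track_a, end_a, track_b, start_b,
              tau_gap=10, tau_scale=1.5, check_direction=True):
    """Decide whether track_b (starting at frame start_b) continues track_a."""
    gap = start_b - end_a
    if not (1 <= gap <= tau_gap):                    # temporally close but disjoint
        return False
    if iou(track_a[-1], track_b[0]) <= 0.0:          # criterion 1: spatial overlap
        return False
    ratio = mean_area(track_a) / mean_area(track_b)  # criterion 2: similar scale
    if not (1.0 / tau_scale <= ratio <= tau_scale):
        return False
    if check_direction and not same_direction(track_a, track_b):
        return False                                 # criterion 3 (swimming only)
    return True
\end{verbatim}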
\subsubsection {Track ranking}
In order to select the detection track belonging to the athlete of interest, we rank all final track candidates according to the expected pose and motion of the athlete.
We impose four different rankings $r_k (\mathbf{T}_i^\prime) \rightarrow [1, M]$ on the tracks: (1) The largest bounding box, (2) the highest average detection score, (3) the longest track (long and triple jump only) and (4) the most horizontal movement (swimming only).
The final track is selected according to the best average ranking:
\begin{equation}
\mathbf{T}_{\text{final}} = \arg \min_{\mathbf{T}_i^{\prime}} \sum_{k=1}^{4} r_k \left ( \mathbf{T}_i^\prime \right )
\end{equation}
It also determines the pose sequence of the athlete. Figure~\ref{fig:tracking} depicts an example for track ranking in an under water swimming recording.
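Finally, the ranking-based selection can be written compactly as below; the four criteria follow the list above, the domain-specific ones are toggled by flags, and the tie-breaking behaviour is an illustrative choice.
\begin{verbatim}
def rank(tracks, key):
    """Rank (1 = best) per track index for a given scoring function."""
    order = sorted(range(len(tracks)), key=lambda i: key(tracks[i]), reverse=True)
    return {idx: r + 1 for r, idx in enumerate(order)}

def select_final_track(tracks, use_length=True, use_horizontal=False):
    rankings = [
        rank(tracks, lambda t: max(d[2] * d[3] for d in t)),        # largest bounding box
        rank(tracks, lambda t: sum(d[4] for d in t) / len(t)),      # mean detection score
    ]
    if use_length:                                                  # long/triple jump only
        rankings.append(rank(tracks, len))
    if use_horizontal:                                              # swimming only
        rankings.append(rank(tracks, lambda t: abs(t[-1][0] - t[0][0])))
    best = min(range(len(tracks)), key=lambda i: sum(r[i] for r in rankings))
    return tracks[best]
\end{verbatim}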
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.95\linewidth]{tracking_example}
\end{center}
\caption{Swimmer detections after track merging in an under water camera. Track ranking selects the correct track (red). Suppressed tracks of background swimmers are superimposed in different colors (yellow $\rightarrow$ blue). Only every 15th detection is shown.
}
\label{fig:tracking}
\end{figure}
\subsection{Event detection on pose statistics}
The pose sequence allows us to extract timestamps that mark important motion events.
For swimming, we propose to directly identify events based on pose statistics, as the number of training examples is limited and hinders learning a pose-to-event model purely from data. We employ robust decision rules on an observed pose sequence that leverage the fixed temporal structure of the expected motion and the known and fixed camera setting.
In our case, we detect three different categories of events. (1) Position-based events occur when the athlete reaches a certain absolute position in a camera view. In our case, we detect the timestamps of the athlete reaching fixed horizontal distance markings in the calibrated under water cameras. They are used to measure the time for the athlete to cover the first five, ten and fifteen meters. The detection is simply based on the estimated head position surpassing the distance markings. (2) Presence-based events occur when a specific body part is visible for the first or last time in a camera view. Specifically, we detect the begin of the dive-in after jumping from the starting block. It is defined as the first timestamp where the athlete's head touches the water in the above water camera view. We identify it by a clear reduction in confidence of the head detection due to its invisibility. (3) Pose-based events are defined by the athlete or a subset of his body parts appearing in a certain pose configuration. In our specific scenario we detect the timestamp of last contact of the foot and the starting block as well as the first under water dolphin kick after dive-in.
The former can typically be inferred from the knee angle being maximal when the foot leaves the starting block.
The dolphin kick can be inversely detected by the smallest knee angle after dive-in.
For robust event detections we require the above-mentioned pose characteristics (\eg low detection confidence or small knee angle) to be present for multiple frames, to avoid misdetections due to single, erroneous pose estimates. Additionally, we enforce all event detections to be in the correct order and to appear in the correct camera view.
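As an illustration of such a pose-based rule, the knee angle can be computed from the hip, knee and ankle keypoints, and an event is only accepted if the extremum is supported by its temporal neighbourhood; the persistence window and the toy angle sequence below are illustrative choices, not values from the system.
\begin{verbatim}
import numpy as np

def knee_angle(hip, knee, ankle):
    """Angle (degrees) at the knee joint, given 2D keypoints."""
    a = np.asarray(hip) - np.asarray(knee)
    b = np.asarray(ankle) - np.asarray(knee)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def first_persistent_minimum(angles, window=5):
    """First frame whose knee angle is minimal within +/- window frames,
    e.g. the first under-water dolphin kick after dive-in."""
    angles = np.asarray(angles)
    for t in range(window, len(angles) - window):
        if np.argmin(angles[t - window:t + window + 1]) == window:
            return t
    return None

# toy sequence of knee angles around a kick
angles = [170, 168, 160, 140, 120, 100, 95, 110, 135, 155, 165, 170, 171, 170, 169]
print(first_persistent_minimum(angles))   # -> 6, the frame with the 95 degree angle
\end{verbatim}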
\subsection{Event detection via sequence-to-sequence translation}
In the domain of long- and triple jump recordings, our goal is to precisely detect stride related events.
Specifically, we want to detect every begin and end of ground contact of an athlete's foot on the running track.
Given a video of length $N$, we denote the set of event occurrences of type $c \in C$ as $\mathbf{e}_c = (e_{c,1}, \dotsc, e_{c,E})$.
Each occurrence $e_{c,i}$ is simply a video frame index. We do not explicitly distinguish between ground contact of the left and right foot, \ie $C = \lbrace \textit{step begin}, \textit{step end} \rbrace$.
In contrast to swimming, directly inferring these events from 2D pose sequences with simple decision rules is difficult due to varying camera viewpoints and the a priori unknown number of event occurrences \cite{Yagi18}. Instead, we use a set of annotated videos and the extracted pose sequences to train a CNN for event inference.
Based on prior work \cite{Dauphin17, Gehring17, Li18}, we adopt the notion of a temporal convolutional neural network that translates compact input sequences into a target objective. We build upon the concept of representing discrete event detection in human motion as a continuous translation task \cite{Einfalt19}. Given the input pose sequence $\mathbf{p}$, the objective is to predict a timing indicator $f_c(t)$ for every frame index $t$, that represents the duration from $t$ to the next event occurrence of type $c$:
\begin{equation}
f_c (t) = \min_{\substack{e_{c,i} \in \mathbf{e}_c \\ e_{c,i} \geq t}} \frac{e_{c,i} - t}{t_{\max}}.
\end{equation}
The duration is normalized by a constant $t_{\max}$ to ensure that the target objective is in $[0, 1]$. Analogously, a second objective function $b_c(t) \in [-1, 0]$ is defined to represent the backwards (negative) duration to the closest previous event of type $c$. Every event occurrence is identified by $f_c(t) = b_c(t) = 0$. This type of objective encoding circumvents the huge imbalance of event and non-event examples. It provides a uniformly distributed label with semantic meaning for every frame index and every input pose.
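For clarity, both timing indicators can be computed from the annotated event frames as in the following Python sketch. The clipping of values for frames with no following (or preceding) event is an assumption made for illustration.
\begin{verbatim}
# Sketch: compute f_c(t) and b_c(t) for one event type c.
# events: sorted list of annotated frame indices e_{c,i}; N: video length.
# Frames without a next/previous event are clipped to 1.0 / -1.0
# (an assumption; their exact handling is not shown here).
def timing_indicators(events, N, t_max):
    f = [1.0] * N
    b = [-1.0] * N
    for t in range(N):
        nxt = [e for e in events if e >= t]
        prv = [e for e in events if e <= t]
        if nxt:
            f[t] = min(1.0, (nxt[0] - t) / t_max)
        if prv:
            b[t] = max(-1.0, -(t - prv[-1]) / t_max)
    return f, b
\end{verbatim}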
\subsubsection{Network architecture}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.92\linewidth]{tcn}
\end{center}
\caption{Overview of our CNN architecture for translating pose sequences to event timing indicators. All convolutions except the last have $n=180$ kernels and do not use padding. The residual connection in each TCN block slices its input along the temporal axis to obtain matching dimensions.}
\label{fig:tcn}
\end{figure}
The proposed CNN architecture for learning this sequence translation task follows the generic temporal convolutional network architecture (TCN) \cite{Bai18}. It is designed to map a compact sequential input via repeated convolutions along the temporal axis to a target sequence. The network consists of $B$ sequential residual blocks. Each block consists of two convolutions, each followed by batch normalization, rectified linear activation and dropout. It is surrounded by a residual connection from block input to block output. Each block uses dilated convolutions, starting with a dilation rate of $d=1$ and increasing it with every block by a factor of two.
The output of the final block is mapped to the required output dimension with a non-temporal convolution of kernel size one. In our case, we train a single network to jointly predict the forward and backward timing indicators for both event types.
The network does not require the poses from an entire video to infer the frequently repeating stride related events. We limit the temporal receptive field of the network with $B=3$ blocks and convolution kernels of size $w=3$ along the temporal axis. This leads to a temporal receptive field of $s=29$ time steps. Additionally, the TCN blocks only use valid convolutions without any zero-padding \cite{Pavllo19}. Figure~\ref{fig:tcn} gives an overview of the architecture.
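A minimal PyTorch-style sketch of a single residual block is shown below. The layer ordering follows the description above; the dropout rate and the exact slicing of the residual connection are illustrative assumptions rather than our exact configuration.
\begin{verbatim}
import torch.nn as nn

class TCNBlock(nn.Module):
    """One residual block with two dilated, unpadded 1D convolutions.
    The residual connection slices its input along the temporal axis
    so that input and output lengths match."""
    def __init__(self, channels=180, kernel=3, dilation=1, p_drop=0.1):
        super().__init__()
        # Frames lost by the two valid (unpadded) convolutions.
        self.crop = 2 * (kernel - 1) * dilation
        conv = lambda: nn.Conv1d(channels, channels, kernel,
                                 dilation=dilation)
        self.net = nn.Sequential(
            conv(), nn.BatchNorm1d(channels), nn.ReLU(), nn.Dropout(p_drop),
            conv(), nn.BatchNorm1d(channels), nn.ReLU(), nn.Dropout(p_drop))

    def forward(self, x):  # x: (batch, channels, time)
        half = self.crop // 2
        res = x[:, :, half: x.shape[2] - half]
        return self.net(x) + res
\end{verbatim}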
During training, we randomly crop pose sequences of size $s$ from the training videos. The output for these examples consists of only the predictions at the central sequence index $m=\lceil s/2 \rceil$. Additionally, sequences in a minibatch are sampled from different videos to avoid correlated batch statistics \cite{Pavllo19}. We train the network using a smooth-$L_1$ (or Huber) loss \cite{Girshick15, Huber73} on the difference between predicted timing indicators $\hat{f}_c(m), \hat{b}_c(m)$ and the ground truth. Figure~\ref{fig:sequence_translation} depicts the exemplary output on a triple jump video.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.95\linewidth]{net_output}
\end{center}
\caption{Raw output of our CNN-based pose sequence translation for the last seven \textit{step begin} events in a triple jump video. Events are extracted from the predicted timing indicators, with $\hat{f}_c(t) \approx \hat{b}_c(t) \approx 0$. The example matches the one in Figure~\ref{fig:qualitative}.}
\label{fig:sequence_translation}
\end{figure}
\subsubsection{Pose normalization}
\label{sec:pose_rep}
Due to its fully convolutional nature, the input to our network can be a sequence of pose estimates of variable length. A pose at time $t$ is represented as a 1D vector $\in \mathbb{R}^{3K}$. In contrast to \cite{Einfalt19} we include the detection score of each keypoint, as it contains information about detection uncertainty and possible keypoint occlusion. We mask keypoints with zeros if their score is below a minimal value $c_{\min}$.
The input poses are normalized in order to train the network on pose sequences with varying scales and different video resolutions. We analyze three different normalization strategies. Given a video and its pose sequence $\mathbf{p} = (p_1, \dotsc, p_N)$, we denote the normalized pose at time $t$ as $p_t^\prime$. Our first variant normalizes input poses on a global video level. We set
\begin{equation}
p^{\prime}_{t} = \norm \left ( p_t, \mathbf{p} \right ), \label{eq:global_norm}
\end{equation}
where
$\norm ( p_t, \mathbf{p})$ min-max normalizes the image coordinates in $p_t$ to $[-1, 1]$ with respect to the observed coordinates in $\mathbf{p}$. This retains the absolute motion of athlete and camera throughout the video, but is susceptible to single outlier keypoints. The second variant limits normalization to a minimal temporal surrounding equal to the receptive field of the network, with
\begin{equation}
p_t^{\prime} = \norm \left ( p_t, \left ( p_{t-m}, \dotsc, p_{t+m} \right ) \right ). \label{eq:local_norm}
\end{equation}
Due to its locality it is more robust and largely removes absolute motion on video level. However, since each pose is normalized with respect to its own, different temporal surrounding, adjacent normalized poses $p_t^\prime, p_{t+1}^\prime$ are no longer directly comparable. Finally, the third variant tries to combine the advantages of the former strategies with a sequence based normalization. We manually extract overlapping sub-sequences of size $s$ from the video and normalize all poses within equally. This effectively changes the operation of our network during inference. In order to compute the network output at a single time index $t$, we extract the surrounding sub-sequence and jointly normalize it:
\begin{equation}
p_i^{\prime} = \norm \left ( p_i, \left ( p_{t-m}, \dotsc, p_{t+m} \right ) \right ) \; \forall_{i=t-m:t+m} \label{eq:seq_norm}
\end{equation}
This retains a local normalization and keeps adjacent poses inside of a sub-sequence comparable. The drawback is that for each output, a differently normalized sub-sequence has to be processed. This removes the computational efficiency of a convolutional network.
However, the small size of the network still keeps processing time for an entire video pose sequence in the order of seconds.
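The three variants differ only in the set of poses over which the min-max statistics are computed. The following NumPy sketch illustrates this; keypoint score masking and the handling of boundary frames are simplified assumptions for illustration.
\begin{verbatim}
import numpy as np

def minmax_norm(pose_xy, ref_xy):
    """Min-max normalize image coordinates in pose_xy to [-1, 1] with
    respect to the coordinates observed in ref_xy."""
    lo = ref_xy.min(axis=(0, 1))
    hi = ref_xy.max(axis=(0, 1))
    return 2.0 * (pose_xy - lo) / np.maximum(hi - lo, 1e-6) - 1.0

def normalize(poses, variant, m):
    """poses: (N, K, 2) keypoint coordinates of one video.
    variant: 'global' or 'local'; m = half receptive field."""
    N = len(poses)
    if variant == 'global':      # statistics over the whole video
        return np.stack([minmax_norm(p, poses) for p in poses])
    out = np.empty_like(poses, dtype=float)
    for t in range(N):           # statistics over a local window
        lo_t, hi_t = max(0, t - m), min(N, t + m + 1)
        out[t] = minmax_norm(poses[t], poses[lo_t:hi_t])
    # The sequence variant normalizes the whole sub-sequence around a
    # query index jointly before every forward pass (not shown).
    return out
\end{verbatim}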
\subsubsection{Pose sequence augmentation}
During training and inference, the input poses to our sequence translation network are estimates themselves, including missing, wrong or imprecise pose estimates. The network needs to learn to cope with imperfect data.
We therefore want to enhance the training data to reflect the variability and the error modes of pose estimates that the model encounters during inference.
Simply augmenting the training data by randomly perturbing input poses is not a convincing solution.
Adding pose- and motion-agnostic noise might introduce error modes that are not present at test time.
Our proposal is to extract pose sequences from the training videos multiple times with slightly different pose estimation models. In our case we use the original Mask R-CNN as well as its high spatial precision variant.
Additionally, we use multiple different checkpoints from the fine-tuning of both models, each leading to unique pose sequences with potentially different, but realistic error modes.
\section{Experimental setting}
We evaluate our approach to human pose estimation, athlete tracking and subsequent event detection on real world video recordings. For swimming, our dataset consists of 105 recordings of swim starts, each comprising four synchronized camera views at $50$ fps: one above water and three under water. All recordings are annotated with the event types $C=$ \{\textit{jump off}, \textit{dive-in}, \textit{first kick}, \textit{5m}, \textit{10m}, \textit{15m}\}. Each event occurs exactly once per recording. We use 23 recordings for optimization of tracking and event detection parameters.
For long and triple jump, we use 167 monocular recordings at 200 fps from various training and competition sites, of which 117 are used for training and validation. They are labeled with event occurrences of $C=\lbrace \textit{step begin}, \textit{step end}\rbrace$. Due to the repetitive motion, each event type occurs nine times per video on average.
\textbf{Extraction of 2D pose candidates}$\;$We use Mask R-CNN, pre-trained on COCO \cite{Coco14}, and separately fine-tune the model on sampled and annotated video frames from both domains. For swimming, we use 2500 frames, annotated with a standard $K=14$ body model. For long and triple jump, a total of 3500 frames are annotated with $K=20$ keypoints, specifically including the feet of the athlete. The Mask R-CNN model is fine-tuned with a batch size of $8$ for $140$ epochs, a base learning rate of $0.1$ and a reduction by $0.1$ after $120$ epochs. We process all videos and camera views frame-by-frame and extract the $D=3$ highest scoring athlete detections and their pose estimates.
\textbf{Athlete tracking}$\;$
Given the multiple detections per video frame, we apply our athlete tracking strategy to obtain a single detection and pose sequence per video. For swimming, each camera view is processed independently and therefore has its own pose sequence. We speed up pose inference and tracking by only processing a camera view if the athlete already appeared in the previous camera.
All tracking parameters $\tau$ are optimized with a grid search on the training videos. For long- and triple jump videos, tracking is optimized for athlete detection performance on the pose-annotated video frames. For swimming, we jointly optimize tracking parameters and the hand-crafted event detection decision rules directly for event detection performance.
\textbf{Long and triple jump event timing estimation}$\;$
The temporal convolutional network for event timing prediction in long and triple jump is trained on the inferred 2D pose sequences from all training videos. They contain $2167$ step events, equally distributed among \textit{step begin} and \textit{step end}. We extract training sequences of length $s=29$ from the per-video pose sequences, leading to a total training set of 65k different input pose sequences. The network is trained with a batch size of $512$, dropout rate of $0.1$ and a base learning rate of 1e-2 using the Adam optimizer for 20 epochs or until convergence. The learning rate is reduced by $0.3$ after $10$ epochs. Discrete event occurrences are extracted as shown in Figure~\ref{fig:sequence_translation}.
\textbf{Evaluation protocols}$\;$
After extracting event predictions for each recording in the test set, we exclusively assign every prediction to its closest ground truth event. A prediction is correct if its absolute temporal distance to the assigned ground truth does not exceed a maximum frame distance $\Delta t$. We report detection performance at maximum frame distances of $\Delta t \in [1,3]$. We do not consider $\Delta t = 0$, since even annotations by humans often deviate by one frame. At the same time, event detection performance usually saturates at $\Delta t = 3$ despite the different frame rates in swimming and long and triple jump. Given a maximum frame distance, we report \textit{precision}, \textit{recall} and the combined $F_1$ score. Figure~\ref{fig:qualitative} shows qualitative examples of detected events.
We additionally measure pose estimation performance with the standard \textit{percentage of correct keypoints} (PCK) metric \cite{Sapp13} on a pose-annotated test set of 600 frames for swimming and 1000 frames for long and triple jump. For athlete detection, we report the standard \textit{average precision} (AP) metric \cite{Coco14} at a required bounding box IoU of $0.75$ on the same set of frames. For swimming, we also report the \textit{false positive rate} (FPR) of our tracking approach on a separate set of video frames that do not depict the athlete of interest. Note that all metrics are reported as a percentage.
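The matching step of this protocol can be summarized by the following sketch, which exclusively assigns each predicted event to its closest, still unassigned ground truth event and computes precision, recall and $F_1$ at a maximum frame distance. It is one possible greedy implementation, not the exact evaluation script used in our experiments.
\begin{verbatim}
def match_events(predictions, ground_truth, max_dist):
    """Exclusively assign each prediction (frame index) to its closest,
    still unassigned ground truth event; count it as correct if the
    absolute frame distance does not exceed max_dist."""
    unassigned = set(range(len(ground_truth)))
    tp = 0
    for p in sorted(predictions):
        if not unassigned:
            break
        j = min(unassigned, key=lambda i: abs(ground_truth[i] - p))
        if abs(ground_truth[j] - p) <= max_dist:
            tp += 1
        unassigned.discard(j)
    precision = tp / max(len(predictions), 1)
    recall = tp / max(len(ground_truth), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-9)
    return precision, recall, f1
\end{verbatim}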
\section{Results}
\subsection{Per-frame pose estimation}
Table~\ref{tab:pose_estimation} shows the pose estimation results on test set frames.
We report these results as reference for the pose estimation fidelity on which the subsequent tracking and event detection pipeline operates.
The table shows PCK results at thresholds of $0.05$, $0.1$ and $0.2$, which correspond to a very high, high and low spatial precision in keypoint estimates. With the original Mask R-CNN architecture we achieve PCK values of $70.0$, $88.8$ and $95.1$ for long and triple jump. Especially the value at PCK@$0.1$ indicates that the model produces reliable and precise keypoint estimates for the vast majority of test set keypoints. The high resolution variant of Mask R-CNN leads to another gain in PCK of up to $+2.4$ at the highest precision level.
For swimming, we achieve a base result of $50.7$, $78.0$ and $92.5$ for the respective PCK levels, with a notable drop in high precision keypoint estimation compared to the athletics videos. The main differences are the aquatic environment, the static cameras leading to truncated poses, and the lower number of annotated training frames. Especially the underwater recordings are known to pose unique challenges like visual clutter due to bubbles and low contrast \cite{zecha19} (see Figure~\ref{fig:tracking}). High resolution Mask R-CNN leads to a small gain in high precision PCK of up to $+1.1$, but otherwise seems to suffer from the same difficulties.
\begin{table}[]
\begin{center}
\begin{tabular}{@{}lcc@{}}
\toprule
& Swimming & \multicolumn{1}{l}{Long/triple jump} \\ \midrule
& \multicolumn{2}{c}{[email protected] / 0.1 / 0.2} \\ \midrule
\multicolumn{1}{l|}{Mask R-CNN} & 50.7 / 78.0 / \textbf{92.5} & 70.0 / 88.8 / \textbf{95.1} \\
\multicolumn{1}{l|}{+ high res.} & \textbf{51.8} /\textbf{ 78.4} / 92.4 & \textbf{72.4} / \textbf{89.1} / \textbf{95.1} \\ \bottomrule
\end{tabular}
\end{center}
\caption{Results on per-frame human pose estimation on test videos. We compare the original Mask R-CNN architecture and a high resolution variant.}
\label{tab:pose_estimation}
\end{table}
\subsection{Athlete tracking}
Table~\ref{tab:tracking} shows results on athlete bounding box detection.
The reported AP is measured on the same set of test set frames that is used in pose estimation evaluation.
These frames are all positive examples, \ie the athlete of interest is known to be visible.
We evaluate our tracking approach by processing the complete video recordings and filtering for the detections in those specific frames.
We compare this result to an optimistic baseline, where we directly apply Mask R-CNN to only those frames, avoiding the necessity for tracking.
For long and triple jump, tracking is on par with the optimistic baseline. Despite tracking being guaranteed to find a single, temporally consistent detection sequence, it is still able to retain the same recall. But with an average precision of $97.9$, the detection quality of Mask R-CNN alone is already very high for this domain. In contrast, the detection performance on swimming is considerably lower. Tracking slightly surpasses the baseline by $+0.7$ with an AP of $76.0$. It retains the recall of the optimistic baseline and also improves the suppression of irrelevant detections in the positive test set frames.
One main difference to the long and triple jump recordings is the multi-camera setup, where the athlete is usually only visible in one or two cameras at the same time.
Tracking therefore also needs to suppress detections in a large number of negative frames, where the athlete of interest is not visible.
Table~\ref{tab:tracking} shows the false positive rate (FPR) on our set of negative frames. With tracking we obtain false detections in $2.1$ percent of the negative frames, which is considerably lower than the $6.2$ FPR of the baseline.
\begin{table}[]
\begin{center}
\begin{tabular}{@{}lccc@{}}
\toprule
& \multicolumn{2}{c}{Swimming} & \multicolumn{1}{l}{Long/triple jump} \\ \midrule
& AP$_{0.75}$ & FPR & AP$_{0.75}$ \\ \midrule
\multicolumn{1}{l|}{Mask R-CNN} & 75.3 & 6.2 & \textbf{97.9} \\
\multicolumn{1}{l|}{+ tracking} & \textbf{76.0} & \textbf{2.1} & \textbf{97.9} \\ \bottomrule
\end{tabular}
\end{center}
\caption{Results on athlete detection in positive (AP) and negative (FPR) frames with our tracking strategy. Performance is compared to an optimistic baseline with per-frame Mask R-CNN results.}
\label{tab:tracking}
\end{table}
\subsection{Event detection in swimming}
Table~\ref{tab:simming_events} shows the results on event detection in swimming recordings when applying our hand-crafted decision rules on pose sequences obtained via Mask R-CNN and tracking. We only report the recall of event detections, as exactly one event is detected per type and recording.
At $\Delta t=1$ we already achieve a recall of at least $91.3$ for the jump-off, dive-in and the distance-based events. The majority of remaining event occurrences is also detected correctly when we allow a frame difference of $\Delta t=3$, with a recall of at least $97.1$. This shows that our approach of using hand-crafted decision rules on pose statistics is capable of precisely detecting the vast majority of those event types.
The only exception is the recall for the first dolphin kick, which saturates at $89.4$ even for frame differences $\Delta t > 3$. The respective decision rule thus sometimes generates false positives that are distant from the actual event occurrence. We observed that nearly all of those false positives are detections of a small knee angle during the second dolphin kick. The main cause for this seems to be the unstable detection of hip, knee and ankle keypoints during the first kick, when large amounts of bubbles are in the water from the dive-in.
\begin{table}[]
\begin{center}
\begin{tabular}{@{}lcc@{}}
\toprule
\multicolumn{1}{r}{Recall at} & $\Delta t=1$ & \multicolumn{1}{l}{$\Delta t =3$} \\ \midrule
\multicolumn{1}{l|}{Jump-off} & 91.4 & 97.1 \\
\multicolumn{1}{l|}{Head dive-in} & 92.9 & 98.6 \\
\multicolumn{1}{l|}{First kick} & 84.8 & 89.4 \\
\multicolumn{1}{l|}{5m/10m/15m} & 91.3 & 99.0 \\ \bottomrule
\end{tabular}
\end{center}
\caption{Results on event detection in swimming recordings at different temporal precision levels $\Delta t$.}
\label{tab:simming_events}
\end{table}
\subsection{Event detection in long and triple jump}
Table~\ref{tab:athletics_events} shows the results on event detection in long and triple jump recordings.
Our base model operates on pose sequences obtained with high resolution Mask R-CNN and our temporal athlete tracking. Input poses are sequence normalized (Equation~\ref{eq:seq_norm}). No additional data augmentation is applied. The results on the event types \textit{step begin} and \textit{step end} are averaged, as they do not show distinct differences. The base model already achieves a $F_1$ score of $95.5$ even at the strictest evaluation level with $\Delta t = 1$. It correctly detects the vast majority of the step-related events, with true positives having a mean deviation of only $2ms$ from the ground truth. The $F_1$ score improves to $98.4$ at the more relaxed evaluation with $\Delta t = 3$. With a precision of $99.5$, the only remaining error modes are false negatives, \ie events that simply do not get detected no matter the temporal precision $\Delta t$.
We also compare our base model to a variant that uses pose estimates from a regular Mask R-CNN model without the high resolution extension. The loss in high precision pose estimation from Table~\ref{tab:pose_estimation} translates to a reduction of $-1.1$ in $F_1$ score at $\Delta t=1$. There are only marginal differences at lower temporal precision. This shows that despite regular Mask R-CNN already achieving very reliable pose estimates in this domain, additional improvements in keypoint precision can still be leveraged by our pose sequence model.
We additionally explore the effects of different pose normalizations, as proposed in Section~\ref{sec:pose_rep}. Table~\ref{tab:athletics_events} (mid) shows a large drop in $F_1$ score of up to $-5.8$ when input poses are normalized globally (Equation~\ref{eq:global_norm}). This clearly indicates that retaining information about absolute motion in a video hinders precise event detection in this domain.
Consequently, performance largely recovers when using the local pose normalization from Equation~\ref{eq:local_norm}. But the fact that the poses in an input sequence are all normalized differently still leads to a loss of up to $1.5$ in $F_1$ score compared to the base model.
This indicates that the sequence-based normalization indeed combines the advantages of the other two normalization methods, leading to a highly suitable pose representation for event detection.
Finally, Table~\ref{tab:athletics_events} (bottom) shows the result when using pose sequence augmentation.
The CNN-based sequence translation is trained with pose sequences extracted with different pose estimation models. We use three different checkpoints during fine-tuning of original and high resolution Mask R-CNN.
This leads to our best performing model, with additional gains for all precision levels $\Delta t$ of up to $+1.2$ in $F_1$ score. The further improvement confirms that augmenting pose sequences with realistic error modes from various pose estimation models is a valid strategy.
If needed, it could even be extended to include pose estimates from entirely different models.
\begin{table}[]
\begin{center}
\begin{tabular}{@{}lccc@{}}
\toprule
\multicolumn{1}{r}{$F_1$ score at} & $\Delta t = 1$ & \multicolumn{1}{l}{$\Delta t = 2$} & \multicolumn{1}{l}{$\Delta t = 3$} \\ \midrule
\multicolumn{1}{l|}{Base model} & 95.5 & 98.1 & 98.4 \\
\multicolumn{1}{l|}{w/o high res.} & 94.4 & 98.3 & 98.6 \\ \midrule
\multicolumn{1}{l|}{w/ global norm.} & 88.6 & 94.5 & 96.6 \\
\multicolumn{1}{l|}{w/ local norm.} & 94.0 & 97.7 & 98.3 \\ \midrule
\multicolumn{1}{l|}{w/ pose augmentation} & \textbf{96.7} & \textbf{98.4} & \textbf{98.8} \\ \bottomrule
\end{tabular}
\end{center}
\caption{Results on event detection in long and triple jump, with different variants of our CNN-based event detection.}
\label{tab:athletics_events}
\end{table}
\section{Conclusion}
In this paper, we have presented a practical approach to motion event detection in athlete recordings.
It avoids the need to develop a complete end-to-end vision model for this highly domain-dependent task.
Instead, we build on the state-of-the art in human pose estimation to obtain a compact description of an athlete's motion in a video.
Our first contribution is a flexible tracking strategy for a single athlete of interest that suppresses hardly avoidable misdetections of other athletes and bystanders.
We showed how domain knowledge about appearance and motion can be leveraged to obtain consistent detection and pose sequences.
Our second contribution consists of two different approaches to event detection on pose sequences.
For swimming, we showed how robust decision rules on pose statistics can already achieve convincing results, despite limited data.
With a sufficient set of annotated event occurrences, we additionally showed how a CNN-based sequence translation can be used to learn event inference in the domain of long and triple jump.
We focused on finding appropriate pose normalization and augmentation strategies, leading to a highly reliable model with hardly any error modes.
Both approaches are not strictly limited to the specific domains we applied them to, but rather show the flexibility of human pose sequences as a foundation for motion event detection.
\textbf{Acknowledgments}$\;$
This work was funded by the Federal Institute for Sports Science based on a resolution of the German Bundestag.
We would like to thank the Olympic Training Centers Hamburg/Schleswig-Holstein and Hessen for collecting and providing the video data.
{\small
\bibliographystyle{ieee_fullname}
Player ranking is one of the most studied subjects in sports analytics \cite{Swartz}. In this paper we consider predicting success in the National Hockey League (NHL) from junior league data, with the goal of supporting draft decisions. The publicly available junior league data aggregate a season's performance into a single set of numbers for each player. Our method can be applied to any data of this type, for example also to basketball NBA draft data (\url{www.basketball-reference.com/draft/}). Since our goal is to support draft decisions by teams, we ensure that the results of our data analysis method can be easily explained to and interpreted by sports experts.
Previous approaches for analyzing hockey draft data take a regression approach or a similarity-based approach. Regression approaches build a predictive model that takes as input a set of player features, such as demographics (age, height, weight) and junior league performance metrics (goals scored, plus-minus), and output a predicted success metric (e.g. number of games played in the professional league). The current state-of-the-art is a generalized additive model \cite{Schuckers2016}. Cohort-based approaches divide players into groups of comparables and predict future success based on a player's cohort. For example, the PCS model \cite{PCS} clusters players according to age, height, and scoring rates. One advantage of the cohort model is that predictions can be explained by reference to similar known players, which many domain experts find intuitive. For this reason, several commercial sports analytics systems, such as Sony's Hawk-Eye system, identify groups of comparables for each player. Our aim in this paper is to describe a new model for draft data that achieves the best of both approaches, regression-based and similarity-based.
Our method uses a model tree \cite{Friedman00, GUIDE}. Each node in the tree defines a new yes/no question, until a leaf is reached. Depending on the answers to the questions, each player is assigned a group corresponding to a leaf. The tree builds a different regression model for each leaf node. Figure 1 shows an example model tree. A model tree offers several advantages.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=1.0\textwidth]{LMT_tree.png}
\caption{Logistic regression model tree for the 2004--2006 NHL draft cohort. The tree was built using the LogitBoost algorithm implemented in the LMT package of the Weka program \cite{Weka1, Hall2}.}
\end{center}
\end{figure}
\begin{itemize}
\item {Compared to a single regression model, the tree defines an ensemble of regression models, based on non-linear thresholds. This increases the expressive power and predictive accuracy of the model. The tree can represent complex interactions between player features and player groups. For example, if the data indicate that players from different junior leagues are sufficiently different to warrant building distinct models, the tree can introduce a split to distinguish different leagues.}
\item {Compared to a similarity-based model, tree construction learns groups of players from the data, without requiring the analyst to specify a similarity metric. Because tree learning selects splits that increase predictive accuracy, the learned distinctions between the groups are guaranteed to be predictively relevant to future NHL success. Also, the tree creates a model, not a single prediction, for each group, which allows it to differentiate players from the same group.}
\end{itemize}
A natural approach would be to build a linear regression tree to predict NHL success, which could be measured by the number of games a draft pick plays in the NHL. However, only about half the draft picks ever play a game in the NHL \cite{Tingling}. As observed by \cite{Schuckers2016}, this creates a zero-inflation problem that limits the predictive power of linear regression. We propose a novel solution to the zero-inflation problem, which applies logistic regression to predict whether a player will play at least one game in the NHL. We learn a logistic regression model tree, and rank players by the probability that the logistic regression model tree assigns to them playing at least one game. Intuitively, if we can be confident that a player will play at least one NHL game, we can also expect the player to play many NHL games. Empirically, we found that on the NHL draft data, the logistic regression tree produces a much more accurate player ranking than the linear regression tree.
Following \cite{Schuckers2016}, we evaluate the logistic regression ranking by comparing it to ranking players by their future success, measured as the number of NHL games they play after 7 years. The correlation of the logistic regression ranking with future success is competitive with that achieved by the generalized additive model of \cite{Schuckers2016}. We show in case studies that the logistic model tree adds information to the NHL's Central Scouting Service Rank (CSS). For example, Stanley Cup winner \textit{Kyle Cumiskey} was not ranked by the CSS in his draft year, but was ranked as the third draft prospect in his group by the model tree, just behind \textit{Brad Marchand} and \textit{Mathieu Carle}. Our case studies also show that the feature weights learned from the data can be used to explain the ranking in terms of which player features contribute the most to an above-average ranking. In this way the model tree can be used to highlight exceptional features of a player for scouts and teams to take into account in their evaluation.
\textit{Paper Outline.} After we review related work, we show and discuss the model tree learned from the 2004-2006 draft data. The rank correlations are reported to evaluate predictive accuracy. We discuss in detail how the ensemble of group models represents a rich set of interactions between player features, player categories, and NHL success. Case studies give examples of strong players in different groups and show how the model can used to highlight exceptional player features.
\section{Related Work}
Different approaches to player ranking are appropriate for different data types. For example, with dynamic play-by-play data, Markov models have been used to rank players \cite{Cervone2014,Thomas2013,Oliver2017,Kaplan2014}. For data that record the presence of players when a goal is scored, regression models have also been applied to extend the classic plus-minus metric \cite{Macdonald2011,Gramacy2013}. In this paper, we utilize player statistics that aggregate a season's performance into a single set of numbers. While this data is much less informative than play-by-play data, it is easier to obtain, interpret, and process.
\textit{Regression Approaches.} To our knowledge, this is the first application of model trees to hockey draft prediction, and the first model for predicting whether a draftee plays any games at all. The closest predecessor to our work is due to Schuckers \cite{Schuckers2016}, who uses a single generalized additive model to predict future NHL game counts from junior league data.
\textit{Similarity-Based Approaches} assume a similarity metric and group similar players to predict performance. A sophisticated example from baseball is the nearest neighbour analysis in the PECOTA system \cite{PECOTA}. For ice hockey, in the Prospect Cohort Success (PCS) model \cite{PCS}, cohorts of draftees are defined based on age, height, and scoring rates. Model tree learning provides an automatic method for identifying cohorts with predictive validity. We refer to cohorts as groups to avoid confusion with the PCS concept. Because tree learning is computationally efficient, our model tree is able to take into account a larger set of features than age, height, and scoring rates. Also, it provides a separate predictive model for each group that assigns group-specific weights to different features. In contrast, PCS makes the same prediction for all players in the same cohort. So far, PCS has been applied to predict whether a player will play more than 200 career NHL games. Tree learning can easily be modified to make predictions for any game count threshold.
\section{Dataset}
Our data were obtained from public-domain on-line sources, including \url{nhl.com}, \url{eliteprospects.com}, and \url{draftanalyst.com}. We are also indebted to David Wilson for sharing his NHL performance dataset \cite{Wilson2016}. The full dataset is posted on GitHub (\url{https://github.com/liuyejia/Model_Trees_Full_Dataset}). We consider players drafted into the NHL between 1998 and 2008 (excluding goalies). Following \cite{Schuckers2016}, we took as our dependent variable \textbf{the total number of games $g_i$ played} by a player $i$ after 7 years under an NHL contract. The first seven seasons are chosen because NHL teams have at least seven-year rights to players after they are drafted \cite{Schucker2013}. Our dataset also includes the total time on ice after $7$ years. The results for time on ice were very similar to those for the number of games, so we discuss only the results for the number of games. The independent variables include demographic factors (e.g., age), performance metrics for the year in which a player was drafted (e.g., goals scored), and the rank assigned to a player by the NHL Central Scouting Service (CSS). If a player was not ranked by the CSS, we assigned (1 + the maximum rank for his draft year) to his CSS rank value. Another preprocessing step was to pool all European countries into a single category. If a player played for more than one team in his draft year (e.g., a league team and a national team), we added up the counts from the different teams. Table 1 lists all data columns and their meaning. Figure 2 shows an excerpt from the dataset.
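As an illustration of these preprocessing steps, a short pandas sketch is given below. Column names follow Table 1 where available; the file name and the \texttt{DraftYear} column are hypothetical and used only for illustration.
\begin{verbatim}
import pandas as pd

# Illustrative preprocessing; the file name is hypothetical.
df = pd.read_csv("nhl_draft_1998_2008.csv")

# Pool all European countries into a single category.
df["Country"] = df["Country"].where(
    df["Country"].isin(["CAN", "USA"]), "EURO")

# Unranked players get (1 + maximum CSS rank of their draft year).
# "DraftYear" is an assumed column not listed in Table 1.
max_rank = df.groupby("DraftYear")["CSS_rank"].transform("max")
df["CSS_rank"] = df["CSS_rank"].fillna(max_rank + 1)

# Binary target: played at least one NHL game in the first 7 years.
df["GP_7yr_greater_than_0"] = (df["sum_7yr_GP"] > 0).astype(int)
\end{verbatim}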
\begin{table}[!h]
\begin{center}
\begin{tabular}{ | l | p{10cm} |}
\hline
Variable Name & Description \\ \hline
id & nhl.com id for NHL players, otherwise Eliteprospects.com id \\ \hline
DraftAge & Age in Draft Year \\ \hline
Country & Nationality. Canada -> 'CAN', USA -> 'USA', countries in Europe -> 'EURO' \\ \hline
Position & Position in Draft Year. Left Wing -> 'L', Right Wing -> 'R', Center -> 'C', Defencemen -> 'D' \\ \hline
Overall & Overall pick in NHL Entry Draft \\ \hline
CSS\_rank & Central scouting service ranking in Draft Year \\ \hline
rs\_GP & Games played in regular seasons in Draft Year \\ \hline
rs\_G & Goals in regular seasons in Draft Year \\ \hline
rs\_A & Assists in regular seasons in Draft Year \\ \hline
rs\_P & Points in regular seasons in Draft Year \\ \hline
rs\_PIM & Penalty Minutes in regular seasons in Draft Year \\ \hline
rs\_PlusMinus & Goal Differential in regular seasons in Draft Year\\ \hline
po\_GP & Games played in playoffs in Draft Year \\ \hline
po\_G & Goals in playoffs in Draft Year \\ \hline
po\_A & Assists in playoffs in Draft Year \\ \hline
po\_P & Points in playoffs in Draft Year \\ \hline
po\_PIM & Penalty Minutes in playoffs in Draft Year \\ \hline
po\_PlusMinus & Goal differential in playoffs in Draft Year \\ \hline
sum\_7yr\_GP & Total NHL games played in player's first 7 years of NHL career \\ \hline
sum\_7yr\_TOI & Total NHL Time on Ice in player's first 7 years of NHL career \\ \hline
GP\_7yr\_greater\_than\_0 & Played a game or not in player's first 7 years of NHL career\\ \hline
\end{tabular}
\caption{Player Attributes listed in dataset \textit{(excluding weight and height)}.}
\end{center}
\end{table}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.9\textwidth]{excerp_NHL_datasets.png}
\caption{Sample Player Data for their draft year. rs = regular season. We use the same statistics for the playoffs \textit{(not shown)}.}
\end{center}
\end{figure}
\section{Model Tree Construction}
Model trees are a flexible formalism that can be built for any regression model. An obvious candidate for a regression model would be linear regression; alternatives include a generalized additive model \cite{Schuckers2016}, and a Poisson regression model specially built for predicting counts \cite{Ryder}. We introduce a different approach: a logistic regression model to predict whether a player will play any games at all in the NHL ($g_i>0$). The motivation is that many players in the draft never play any NHL games at all (up to 50\% depending on the draft year) \cite{Tingling}. This poses an extreme zero-inflation problem for any regression model that aims to predict directly the number of games played. In contrast, for the classification problem of predicting whether a player will play any NHL games, zero-inflation means that the data set is balanced between the classes. This classification problem is interesting in itself; for instance, a player agent would be keen to know what chances their client has to participate in the NHL. The logistic regression probabilities $p_i=P(g_i>0)$ can be used not only to predict whether a player will play any NHL games, but also to rank players such that the ranking correlates well with the actual number of games played. Our method is therefore summarized as follows.
\begin{enumerate}
\boxitem{
\item[1.]
Build a tree whose leaves contain a logistic regression model.
\item[2.]
The tree assigns each player $i$ to a unique leaf node $l_i$, with a logistic regression model $m(l_i)$.
\item[3.]
Use $m(l_i)$ to compute a probability $p_i= P(g_i>0)$.
}
\end{enumerate}
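A minimal sketch of steps 2 and 3, assuming the tree's leaf assignment and the per-leaf logistic regression models are available as Python objects, is given below. The interface is illustrative; our actual models were fitted with the LMT implementation in Weka.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

def rank_players(X, leaf_of, leaf_models):
    """X: (n, m) feature matrix; leaf_of: function mapping a feature
    vector to its leaf id; leaf_models: dict of leaf id -> fitted
    LogisticRegression. Returns player indices sorted by P(g_i > 0)."""
    probs = np.array([
        leaf_models[leaf_of(x)].predict_proba(x.reshape(1, -1))[0, 1]
        for x in X])
    return np.argsort(-probs), probs
\end{verbatim}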
Figure 1 shows the logistic regression model tree learned for our second cohort by the LogitBoost algorithm. It places CSS rank at the root as the most important attribute. Players ranked better than $12$ form an elite group, of whom almost $82\%$ play at least one NHL game. For players ranked $12$ or worse, the tree considers next their regular season points total. Players ranked $12$ or worse with a points total below $12$ form an unpromising group: only $16\%$ of them play an NHL game. Players ranked $12$ or worse whose points total is $12$ or higher are divided by the tree into three groups according to whether their regular season plus-minus score is positive, negative, or $0$. (A three-way split is represented by two binary splits.) If the plus-minus score is negative, the prospects of playing an NHL game are fairly low at about $37\%$. For a neutral plus-minus score, this increases to $61\%$. For players with a positive plus-minus score, the tree uses the number of playoff assists as the next most important attribute. Players with a positive plus-minus score and more than $10$ playoff assists form a small but strong group that is $92\%$ likely to play at least one NHL game.
\section{Results: Predictive Modelling}
Following \cite{Schuckers2016}, we evaluated the predictive accuracy of the LMT model using the Spearman Rank Correlation (SRC) between two player rankings: $i)$ the performance ranking based on the actual number of NHL games that a player played, and $ii)$ the ranking of players based on the probability $p_i$ of playing at least one game (Tree Model SRC). We also compared it with $iii)$ the ranking of players based on the order in which they were drafted (Draft Order SRC). The draft order can be viewed as the ranking that reflects the judgment of NHL teams. We provide the formula for the Spearman correlation in the Appendix. Table 2 shows the Spearman correlation for different rankings.
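For reference, the SRC between the model ranking and the games-played ranking can be computed as in the following sketch; scipy's implementation assigns average ranks to the many tied zero game counts.
\begin{verbatim}
from scipy.stats import spearmanr

def tree_model_src(p_hat, games_played):
    """Spearman rank correlation between the ranking by P(g_i > 0)
    and the ranking by actual NHL games played after 7 years."""
    rho, _ = spearmanr(p_hat, games_played)
    return rho
\end{verbatim}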
\begin{table}[!h]
\centering
\begin{tabular}{|l|c|c|c|r|}
\hline
\begin{tabular}{@{}c@{}} Training Data \\ NHL Draft Years \end{tabular} & \begin{tabular}{@{}c@{}} Out of Sample \\ Draft Years\end{tabular} & \begin{tabular}{@{}c@{}} Draft Order \\ SRC\end{tabular} & \begin{tabular}{@{}c@{}} LMT \\ Classification Accuracy\end{tabular} & \begin{tabular}{@{}c@{}} LMT \\ SRC\end{tabular} \\ \hline
1998, 1999, 2000 & 2001 & 0.43 & 82.27\% & 0.83 \\ \hline
1998, 1999, 2000 & 2002 & 0.30 & 85.79\% & 0.85 \\ \hline
2004, 2005, 2006 & 2007 & 0.46 & 81.23\% & 0.84 \\ \hline
2004, 2005, 2006 & 2008 & 0.51 & 63.56\% & 0.71 \\ \hline
\end{tabular}
\caption{Predictive Performance (our Logitic Model Trees, over all draft ranking) using Spearman Rank Correlation. Bold indicates the best values.}
\end{table}
\textit{Other Approaches.} We also tried designs based on a linear regression model tree, using the M5P algorithm implemented in the Weka program. The result is a decision stump that splits on CSS rank only, which had substantially worse predictive performance (i.e., a Spearman correlation of only $0.4$ for the $2004-2006$ cohort). For the generalized additive model (gam), the reported correlations were $2001: 0.53, 2002: 0.54, 2007: 0.69, 2008: 0.71$ \cite{Schuckers2016}. Our correlation is not directly comparable to the gam model because of differences in data preparation: the gam model was applied only to drafted players who played at least one NHL game, and the CSS rank was replaced by the Cescin conversion factors: for North American players, multiply CSS rank by $1.35$, and for European players, by $6.27$ \cite{Fyffe}. The Cescin conversion factors represent an interaction between the player's country and the player's CSS rank. A model tree offers another approach to representing such interactions: by splitting on the player location node, the tree can build a different model for each location. Whether the data warrant building different models for different locations is a data-driven decision made by the tree building algorithm. The same point applies to other sources of variability, for example the draft year or the junior league. Including the junior league as a feature has the potential to lead to insights about the differences between leagues, but would make the tree more difficult to interpret; we leave this topic for future work. In the next section we examine the interaction effects captured by the model tree in the different models learned in each leaf node.
\section{Results: Learned Groups and Logistic Regression Models}
We examine the learned group regression models, first in terms of the dependent success variable, then in terms of the player features.
\subsection{Groups and the Dependent Variable}
Figure 3 shows boxplots for the distribution of our dependent variable $g_i$. The strongest groups are, in order, 1, 6, and 4. The other groups show weaker performance on the whole, although in each group some players reach high numbers of games. Most players in Groups 2, 3, 4, and 5 have GP equal to zero, while Groups 1 and 6 represent the strongest cohorts in our prediction, where over $80\%$ of the players played at least 1 game in the NHL. The tree identifies that among the players who do not have a very high CSS rank (worse than $12$), the combination of regular season points $\geq 12$, plus-minus $> 0$, and playoff assists $> 10$ is a strong indicator of playing a substantive number of NHL games (median $g_i = 128$).
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.9\textwidth]{nhl_boxplot.png}
\caption{Boxplots for the dependent variable $g_i$ , the total number of NHL games played after $7$ years under an NHL contract. Each boxplot shows the distribution for one of the groups learned by the logistic regression model tree. The group size is denoted $n$.}
\end{center}
\end{figure}
\subsection{Groups and the Independent Variables}
Figure 4 shows the average statistics by group and for all players. The CSS rank for Group 1 is by far the highest. The data validate the high ranking in that $82\%$ of the players in this group went on to play an NHL game. Group 6 in fact attains an even higher proportion of $92\%$. The average statistics of this group are even more impressive than those of group 1 (e.g., $67$ regular season points in group $6$ vs. $47$ for group 1). But the average CSS rank is the lowest of all groups. So this group may represent a small group of players ($n = 13$) overlooked by the scouts but identified by the tree. Other than Group 6, the group with the lowest CSS rank on average is Group 2. The data validate the low ranking in that only $16\%$ of players in this group went on to play an NHL game. The group averages are also low (e.g., $6$ regular season points, much lower than in the other groups).
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.9\textwidth]{mean_points_nhl.png}
\caption{Statistics for the average players in each group and all players.}
\end{center}
\end{figure}
\section{Group Models and Variable Interactions}
Figure 5 illustrates the logistic regression weights by group. A positive weight implies that an increase in the covariate value increases the predicted probability of playing at least one game relative to the probability of playing zero games. Conversely, a negative weight implies that an increase in the covariate value decreases this predicted probability. Bold numbers show the groups for which an attribute is most relevant. The table exhibits many interesting interactions among the independent variables; we discuss only a few. Notice that if the tree splits on an attribute, the attribute is assigned a high-magnitude regression weight by the logistic regression model for the relevant group. Therefore our discussion focuses on the tree attributes.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=1.0\textwidth]{nhl_weights_new.png}
\caption{Illustration of the group weights for the 2004--2008 cohort. E = Europe, C = Canada, U = USA, rs = Regular Season, po = Playoff. Largest-magnitude weights are in bold. Underlined weights are discussed in the text.}
\end{center}
\end{figure}
At the tree root, \textit{CSS rank} receives a large negative weight of $-17.9$ for identifying the most successful players in Group 1, where all CSS ranks are better than $12$. Figure 6a shows that the proportion of above-zero to zero-game players decreases quickly in Group 1 with worse CSS rank. However, the decrease is not monotonic. Figure 6b is a scatterplot of the original data for Group 1. We see a strong linear correlation of $-0.39$, and also a large variance within each rank. The proportion aggregates the individual data points at a given rank, thereby eliminating the variance. This makes the proportion a smoother dependent variable than the individual counts for a regression model.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=1.0\textwidth]{CSS_rank_NHL_plot.png}
\caption{Proportion and scatter plots for CSS\_rank vs. sum\_7yr\_GP in Group 1.}
\end{center}
\end{figure}
Group 5 has the smallest logistic regression coefficient of $-0.65$. Group 5 consists of players whose CSS ranks are worse than $12$, regular season points above $12$, and plus-minus above $1$. Figure 7a plots CSS rank vs. above-zero proportion for Group 5. As the proportion plot shows, the low weight is due to the fact that the proportion trends downward only at ranks worse than $200$. The scatterplot in Figure 7b shows a similarly weak linear correlation of $-0.12$.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=1.0\textwidth]{Group_5_CSSrank.png}
\caption{Proportion and scatter plots for CSS\_rank vs.sum\_7yr\_GP in Group 5.}
\end{center}
\end{figure}
\textit{Regular season points} are the most important predictor for Group 2, which comprises players with CSS rank worse than $12$, and regular season points below $12$. In the proportion plot Figure 8, we see a strong relationship between points and the chance of playing more than 0 games (logistic regression weight $14.2$). In contrast in Group 4 (overall weight $-1.4$), there is essentially no relationship up to $65$ points; for players with points between $65$ and $85$ in fact the chance of playing more than zero games slightly decreases with increasing points.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=1.0\textwidth]{rs_points_nhl.png}
\caption{Proportion\_of\_Sum\_7yr\_GP\_greater\_than\_0 vs. rs\_P in Group 2\&4.}
\end{center}
\end{figure}
In Group 3, players are ranked at level $12$ or worse, have collected at least $12$ regular season points, and show a negative plus-minus score. The most important feature for Group $3$ is the \textit{regular season plus-minus} score (logistic regression weight $13.16$), which is negative for all players in this group. In this group, the chances of playing an NHL game increase with plus-minus, but not monotonically, as Figure 9 shows.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=1.0\textwidth]{rs_plusminus_nhl.png}
\caption{Proportion and scatter plots for rs\_PlusMinus vs.sum\_7yr\_GP in group 3.}
\end{center}
\end{figure}
For \textit{regular season goals}, Group 5 assigns a high logistic regression weight of $3.59$. However, Group 2 assigns a surprisingly negative weight of $-2.17$. Group 5 comprises players at CSS rank worse than $12$, regular season points $12$ or higher, and a positive plus-minus greater than $1$. About $64.8\%$ of the players in this group are offensive players (see Figure 10). The positive weight therefore indicates that successful forwards score many goals, as we would expect.
Group 2 contains mainly defensemen ($61.6\%$; see Figure 10). The typical strong defenseman scores $0$ or $1$ goals in this group. Players with more goals tend to be forwards, who are weaker in this group. In sum, the tree assigns weights to goals that are appropriate for different positions, using statistics that correlate with position (e.g., plus-minus), rather than the position directly.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.9\textwidth]{position_nhl_plot.png}
\caption{Distribution of defensemen vs. forwards in Groups 5 and 2. The group size is denoted as $n$.}
\end{center}
\end{figure}
\section{Identifying Exceptional Players}
Teams make drafting decisions not based on player statistics alone, but drawing on all relevant source of information, and with extensive input from scouts and other experts. As Cameron Lawrence from the Florida Panthers put it, \lq the numbers are often just the start of the discussion\rq \cite{Joyce}. In this section we discuss how the model tree can be applied to support the discussion of individual players by highlighting their special strengths. The idea is that the learned weights can be used to identify which features of a highly-ranked player differentiate him the most from others in his group.
\subsection*{Explaining the Rankings: Identifying Weak Points and Strong Points}
Our method is as follows. For each group, we find the average feature vector of the players in the group, which we denote by $\overline{x_{g1}}, \overline{x_{g2}}, ..., \overline{x_{gm}}$ (see Figure 4). We denote the features of player $i$ as $x_{i1}, x_{i2}, ..., x_{im}$. Then, given a weight vector $(w_1, \ldots, w_m)$ for the logistic regression model of group $g$, the log-odds difference between player $i$ and the average player in the group is given by
\begin{center}
$\sum_{j=1}^{m}w_j(x_{ij} - \overline{x_{gj}})$
\end{center}
We can interpret this sum as a measure of how highly the model ranks player $i$ compared to other players in his group. This suggests defining as the player's strongest features those $x_{ij}$ that maximize $w_j(x_{ij} - \overline{x_{gj}})$, and as his weakest features those that minimize $w_j(x_{ij} - \overline{x_{gj}})$. This approach highlights features that are $i$) relevant to predicting future success, as measured by the magnitude of $w_j$, and $ii$) different from the average value in the player's group of comparables, as measured by the magnitude of $x_{ij} - \overline{x_{gj}}$.
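The procedure for highlighting a player's strongest and weakest features can be summarized by the following sketch; the interface is illustrative, and the feature names correspond to the columns in Table 1.
\begin{verbatim}
import numpy as np

def exceptional_features(x_i, x_group_mean, w, names, top=3):
    """Rank the features of player i by their contribution
    w_j * (x_ij - mean_gj) to the log-odds of playing an NHL game."""
    contrib = w * (x_i - x_group_mean)
    order = np.argsort(contrib)
    strongest = [names[j] for j in order[::-1][:top]]
    weakest = [names[j] for j in order[:top]]
    return strongest, weakest
\end{verbatim}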
\subsection*{Case Studies}
Figure 11 shows, for each group, the three strongest points for the most highly ranked players in the group. We see that the ranking for individual players is based on different features, even within the same group. The figure also illustrates how the model allows us to identify a group of comparables for a given player. We discuss a few selected players and their strong points. The most interesting cases are often those where our ranking differs from the scouts' CSS rank. We therefore discuss the groups with lower rank first.
Among the players who were not ranked by CSS at all, our model ranks \textit{Kyle Cumiskey} at the top. Cumiskey was drafted in place $222$, played $132$ NHL games in his first $7$ years, represented Canada in the World Championship, and won a Stanley Cup in $2015$ with the Blackhawks. His strongest points were being Canadian, and the number of games played (e.g., $27$ playoff games vs. $19$ group average).
In the lowest CSS-rank group 6 (average $107$), our top-ranked player \textit{Brad Marchand} received CSS rank $80$, even below his Boston Bruin teammate Lucic's. Given his Stanley Cup win and success representing Canada, arguably our model was correct to identify him as a strong NHL prospect. The model highlights his superior play-off performance, both in terms of games played and points scored. Group 2 (CSS average $94$) is a much weaker group. \textit{Matt Pelech} is ranked at the top by our model because of his unusual weight, which in this group is unusually predictive of NHL participation. In group 4 (CSS average $86$), \textit{Sami Lepisto} was top-ranked, in part because he did not suffer many penalties although he played a high number of games. In group 3 (CSS average $76$), \textit{Brandon McMillan} is ranked relatively high by our model compared to the CSS. This is because in this group, left-wingers and shorter players are more likely to play in the NHL. In our ranking, \textit{Milan Lucic} tops Group 5 (CSS average $71$). At $58$, his CSS rank is above average in this group, but much below the highest CSS rank player (Legein at $13$). The main factors for the tree model are his high weight and number of play-off games played. Given his future success (Stanley Cup, NHL Young Stars Game), arguably our model correctly identified him as a star in an otherwise weaker group. The top players in Group 1 like \textit{Sidney Crosby} and \textit{Patrick Kane} are obvious stars, who have outstanding statistics even relative to other players in this strong group.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.58\textwidth]{NHL_exceptional_player.png}
\caption{Strongest Statistics for the top players in each group. Underlined players are discussed in the text.}
\end{center}
\end{figure}
\section{Conclusion and Future Work}
We have proposed building a regression model tree for ranking draftees in the NHL, or other sports, based on a list of player features and performance statistics. The model tree groups players according to the values of discrete features, or learned thresholds for continuous performance statistics. Each leaf node defines a group of players that is assigned its own regression model. Tree models combine the strength of both regression and cohort-based approaches, where player performance is predicted with reference to comparable players. An obvious approach is to use a linear regression tree for predicting our dependent variable, the number of NHL games played by a player within $7$ NHL years. However, we found that a linear regression tree performs poorly due to the zero-inflation problem (many draft picks never play any NHL game). Instead, we introduced the idea of using a logistic regression tree to predict whether a player plays any NHL game within $7$ years. Players are ranked according to the model tree probability that they play at least $1$ game.
Key findings include the following. 1) The model tree ranking correlates well with the actual success ranking according to the actual number of games played: better than draft order and competitive with the state-of-the-art generalized additive model \cite{Schuckers2016}. 2) The model predictions complement the Central Scouting Service (CSS) rank. For example, the tree identifies a group whose average CSS rank is only $107$, but whose median number of games played after $7$ years is $128$, including several Stanley Cup winners. 3) The model tree can highlight the exceptionally strong and weak points of draftees that make them stand out compared to the other players in their group.
Tree models are flexible and can be applied to other prediction problems to discover groups of comparable players as well as predictive models. For example, we can predict future NHL success from past NHL success, similar to Wilson \cite{Wilson2016} who used machine learning models to predict whether a player will play more than $160$ games in the NHL after $7$ years. Another direction is to apply the model to other sports, for example drafting for the National Basketball Association.
\bibliographystyle{alpha}
\subsection{Background of the Study}
According to the Philippine Statistics Authority, tourism accounted for 12.7\% of the country's Gross Domestic Product in 2018 \cite{psa-2019-report}. Moreover, the National Economic Development Authority reported that 1.5\% of the country's GDP in 2018 was attributed to international tourism, with Korea, China, and the USA having the largest numbers of incoming tourists \cite{}. In addition, the Department of Tourism recorded that 7.4\% of the total tourists, or an estimated 3.97 million tourists both foreign and domestic, were in the Davao Region in 2018 \cite{dot-report}. Also, employment in the tourism industry was estimated at roughly 5.4 million in 2018, which constitutes 13\% of the total employment in the country, according to the Philippine Statistics Authority \cite{psa-2018-report}.
Hence, estimating the total earnings of the tourism industry in the Philippines will be very helpful in formulating the interventions and strategies needed to mitigate the effects of the COVID-19 pandemic. This paper serves as a baseline study that describes and estimates the earnings lost by the industry.
\subsection{Problem Statement}
The objective of this research is to forecast the monthly earnings loss of the tourism industry during the COVID-19 pandemic by forecasting the monthly foreign visitor arrivals using Seasonal Autoregressive Integrated Moving Average. Specifically, it aims to answer the following questions:
\begin{enumerate}
\item What is the order of the seasonal autoregressive integrated moving average model for the monthly foreign visitor arrivals in the Philippines?
\item How much in earnings did the tourism industry lose during the COVID-19 pandemic?
\end{enumerate}
\subsection{Scope and Limitations}
The study covers a period of approximately eight years, from January 2012 to December 2019. Also, the modeling techniques considered in this research are limited to the autoregressive integrated moving average (ARIMA) and the seasonal autoregressive integrated moving average (SARIMA). Other modeling techniques were not tested or considered.
\section{Methodology}
\subsection{Research Design}
The research utilized a longitudinal research design wherein the monthly foreign visitor arrivals in the Philippines are recorded and analyzed. A longitudinal research design is an observational research method in which data are gathered for the same subject repeatedly over a period of time \cite{research-design}. A forecasting method, specifically the Seasonal Autoregressive Integrated Moving Average (SARIMA), was used to forecast future monthly foreign visitor arrivals.
In selecting the appropriate model to forecast the monthly foreign visitor arrivals in the Philippines, the Box-Jenkins methodology was used. The data set was divided into two sets: the training set, composed of 86 data points from January 2012 to December 2018, and the testing set, composed of 12 data points from January 2019 to December 2019. The training set was used to identify the appropriate SARIMA order, whereas the testing set was used to measure the accuracy of the selected model via the root mean squared error. The best model, in the context of this paper, was characterized by a low Akaike's Information Criterion and a low root mean squared error.
\subsection{Source of Data}
The data were extracted from the Department of Tourism website and consist of monthly foreign visitor arrivals from January 2012 to December 2019, comprising 98 data points.
\subsection{Procedure for Box-Jenkins Methodology}
The Box-Jenkins methodology refers to a systematic method of identifying, fitting, checking, and using SARIMA time series models. The method is appropriate for time series of medium to long length, i.e., at least 50 observations. The Box-Jenkins approach is divided into three stages, Model Identification, Model Estimation, and Diagnostic Checking, followed by Forecast Evaluation.
\begin{enumerate}
\item \textit{Model Identification}
In this stage, the first step is to check whether the data are stationary. If not, differencing is applied until the series becomes stationary. A stationary series fluctuates around a constant mean with constant variance and no seasonality over time. Plotting the sample autocorrelation function (ACF) and sample partial autocorrelation function (PACF) can be used to assess whether the series is stationary, and the Augmented Dickey$-$Fuller (ADF) test can be applied as a formal check. The next step is to check whether the variance of the series is constant; if it is not, data transformations such as differencing and/or a Box-Cox transformation (e.g., logarithm or square root) may be applied. Once done, the orders $p$ and $q$ are identified using the ACF and PACF. (An end-to-end sketch of all four stages is given after this list.)
If there are two or more candidate models, Akaike's Information Criterion (AIC) can be used to select which among the models is better; the model with the lowest AIC is selected.
\item \textit{Model Estimation}
In this stage, the parameters are estimated by finding the values of the model coefficients which provide the best fit to the data. In this research, a combination of Conditional Sum of Squares and Maximum Likelihood estimation was used: conditional sum of squares was utilized to find the starting values, and maximum likelihood estimation was then applied.
\item \textit{Diagnostic Checking}
Diagnostic checking performs residual analysis. This stage involves testing the assumptions of the model to identify any areas where the model is inadequate and to check whether the residuals are uncorrelated. The Box-Pierce and Ljung-Box tests may be used for this purpose. Once the model is a good fit, it can be used for forecasting.
\item \textit{Forecast Evaluation}
\hspace{5mm} Forecast evaluation involves generating forecasts over the time frame of the model validation set and comparing them against the held-out observations. The root mean squared error was used to check the accuracy of the model. Moreover, the ACF and PACF plots were used to check whether the residuals behave like white noise, while the Shapiro-Wilk test was used to test for normality.
\end{enumerate}
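As an illustration only, the four stages above can be carried out end-to-end as in the following Python/statsmodels sketch; the analysis in this study was performed in R, and the file name and column name below are placeholders.
\begin{verbatim}
# Illustrative Box-Jenkins workflow (Python/statsmodels sketch; the study
# itself used R). "arrivals.csv" and the column name are placeholders.
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.statespace.sarimax import SARIMAX
from statsmodels.stats.diagnostic import acorr_ljungbox
from scipy.stats import shapiro

y = pd.read_csv("arrivals.csv", index_col=0, parse_dates=True)["arrivals"]
train, test = y[:"2018-12"], y["2019-01":"2019-12"]

# 1) Identification: stationarity of the differenced series (ADF test).
print("ADF p-value:", adfuller(train.diff().dropna())[1])

# 2) Estimation: fit a candidate order, e.g. (1,1,1)x(1,0,1)_12.
fit = SARIMAX(train, order=(1, 1, 1),
              seasonal_order=(1, 0, 1, 12)).fit(disp=False)
print("AIC:", fit.aic)

# 3) Diagnostics: residuals should be uncorrelated and roughly normal.
resid = fit.resid.dropna()
print(acorr_ljungbox(resid, lags=[12]))
print("Shapiro-Wilk p-value:", shapiro(resid)[1])

# 4) Forecast evaluation: out-of-sample RMSE on the 2019 hold-out set.
pred = fit.forecast(steps=len(test))
rmse = ((pred.values - test.values) ** 2).mean() ** 0.5
print("RMSE:", rmse)
\end{verbatim}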
\subsection{Data Analysis}
The following statistical tools were used in the data analysis of this study.
\begin{enumerate}
\item Sample Autocorrelation Function
\hspace{5mm} Sample autocorrelation function measures how correlated past data points are to future values, based on how many time steps these points are separated by. Given a time series $X_t$, we define the sample autocorrelation function, $r_k$, at lag $k$ as \cite{time-series-book-01}
\begin{equation}
r_k = \dfrac{\displaystyle\sum_{t=1}^{N-k} (X_t - \bar{X})(X_{t+k} - \bar{X}) }{\displaystyle\sum_{t=1}^{N} (X_t - \bar{X})^2} \qquad \text{for } k = 1,2, ...
\end{equation}
where $\bar{X}$ is the average of the $N$ observations.
\item Sample Partial Autocorrelation Function
\hspace{5mm} The sample partial autocorrelation function measures the correlation between two points separated by a number of periods, with the effect of the intervening correlations removed. Given a time series $X_t$, the partial autocorrelation at lag $k$ is the autocorrelation between $X_t$ and $X_{t+k}$ with the linear dependence of $X_t$ on $X_{t+1}$ through $X_{t+k-1}$ removed. The sample partial autocorrelation function is defined as \cite{time-series-book-01}
\begin{equation}
\phi_{kk} = \dfrac{r_k - \displaystyle\sum_{j = 1}^{k-1} \phi_{k-1,j} r_{k-j}}{1 - \displaystyle\sum_{j = 1}^{k - 1} \phi_{k-1,j} r_j }
\end{equation}
where $\phi_{k,j} = \phi_{k-1,j} - \phi_{k,k} \phi_{k-1,k-j}$ for $j = 1,2, \ldots, k-1$, and $r_k$ is the sample autocorrelation at lag $k$. (A direct implementation of this recursion, together with the sample ACF above, is sketched after this list.)
\item Root Mean Square Error (RMSE)
\hspace{5mm} The RMSE is a frequently used measure of the difference between the values predicted by a model and the values actually observed. These individual differences are also called residuals, and the RMSE aggregates them into a single measure of predictive power. The RMSE of the model predictions $\hat{y}_i$ with respect to the observed values $y_i$ is defined as the square root of the mean squared error \cite{LSTM-book-01}
\begin{equation}
RMSE = \sqrt{\dfrac{1}{n}\displaystyle\sum_{i=1}^{n} (\hat{y}_{i} - y_i)^2}
\end{equation}
where $\hat{y}_i$ are the predicted values, $y_i$ are the actual values, and $n$ is the number of observations.
\item Akaike's Information Criterion (AIC)
\hspace{5mm} The AIC is a measure of how well a model fits a dataset, penalizing models that are so flexible that they would also fit unrelated datasets just as well. The general form for calculating the AIC is \cite{time-series-book-01}
\begin{equation}
AIC_{p,q} = \dfrac{-2 \ln(\text{maximized likelihood}) + 2r}{n}
\end{equation}
where $n$ is the sample size and $r = p + q + 1$ is the number of estimated parameters, including a constant term.
\item Ljung$-$Box Q* Test
\hspace{5mm} The Ljung$-$Box statistic, also called the modified Box-Pierce statistic, is a function of the accumulated sample autocorrelations, $r_j$, up to any specified time lag $m$. This statistic is used to test whether the residuals of a series of observations over time are random and independent. The null hypothesis is that the model does not exhibit lack of fit, and the alternative hypothesis is that it does. The test statistic is defined as \cite{time-series-book-01}
\begin{equation}
Q^* = n (n+2) \displaystyle\sum_{k = 1}^{m} \dfrac{ \hat{r}^2_k }{n - k}
\end{equation}
where $\hat{r}^2_k$ is the squared estimated autocorrelation of the series at lag $k$, $m$ is the number of lags being tested, $n$ is the sample size, and the statistic is approximately chi-square distributed with $h$ degrees of freedom, where $h = m - p - q$.
\item Conditional Sum of Squares
\hspace{5mm} Conditional sum of squares was utilized to find the starting values in estimating the parameters of the SARIMA process. The formula is given by \cite{forecast}
\begin{equation}
\hat{\theta}_n = \arg \min\limits_{\theta \in \Theta} s_n (\theta)
\end{equation}
where $s_n(\theta) = \dfrac{1}{n}\displaystyle\sum_{t=1}^{n}e^2_t(\theta) \ , e_t(\theta) = \displaystyle\sum_{j=0}^{t-1} \alpha_j(\theta)x_{t-j}$, and $\Theta \subset \mathbb{R}^p$ is a compact set.
\item Maximum Likelihood
\hspace{5mm} According to \cite{forecast}, once the model order has been identified, maximum likelihood is used to estimate the parameters $c$, $\phi_1, ..., \phi_p, \theta_1, ..., \theta_q$. This method finds the values of the parameters which maximize the probability of obtaining the data that have been observed. For SARIMA models, the process is very similar to the least squares estimates that would be obtained by minimizing
\begin{equation}
\displaystyle\sum_{t=1}^{T} \epsilon^2_t
\end{equation}
where $\epsilon_t$ is the error term.
\item Box$-$Cox Transformation
\hspace{5mm} The Box$-$Cox transformation is applied to stabilize the variance of a time series. It is a family of transformations, including the logarithm and power transformations, that depends on the parameter $\lambda$ and is defined as follows \cite{Daimon2011}
\begin{center}
$y^{(\lambda)}_i =
\begin{cases}
\dfrac{y^\lambda_i - 1}{\lambda} & \text{, if } \lambda \neq 0\\
\ln y_i & \text{, if } \lambda = 0
\end{cases}
$
$
\qquad \qquad
$
$
w_i =
\begin{cases}
y_i^{\lambda} & \text{, if } \lambda \neq 0\\
\ln y_i & \text{, if } \lambda = 0
\end{cases}
$
\end{center}
where $y_i$ are the original time series values, $y^{(\lambda)}_i$ and $w_i$ are the Box-Cox and simple power transformed values, respectively, and $\lambda$ is the parameter of the transformation.
\end{enumerate}
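As an illustration, the sample ACF and the recursive PACF formulas above translate directly into code; the NumPy sketch below is a literal implementation of those two definitions (in practice, library routines would be used instead).
\begin{verbatim}
# Direct translation of the sample ACF and PACF formulas above (illustrative).
import numpy as np

def sample_acf(x, max_lag):
    x = np.asarray(x, dtype=float)
    xbar = x.mean()
    denom = np.sum((x - xbar) ** 2)
    return np.array([np.sum((x[:len(x) - k] - xbar) * (x[k:] - xbar)) / denom
                     for k in range(1, max_lag + 1)])

def sample_pacf(x, max_lag):
    r = sample_acf(x, max_lag)                    # r[k-1] = r_k
    phi = np.zeros((max_lag + 1, max_lag + 1))    # phi[k, j] = phi_{k,j}
    pacf = np.zeros(max_lag)
    phi[1, 1] = r[0]
    pacf[0] = r[0]
    for k in range(2, max_lag + 1):
        num = r[k - 1] - np.sum(phi[k - 1, 1:k] * r[k - 2::-1])
        den = 1.0 - np.sum(phi[k - 1, 1:k] * r[:k - 1])
        phi[k, k] = num / den
        for j in range(1, k):
            phi[k, j] = phi[k - 1, j] - phi[k, k] * phi[k - 1, k - j]
        pacf[k - 1] = phi[k, k]
    return pacf
\end{verbatim}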
\subsection{Statistical Software}
R is a programming language and free software environment for statistical computing and graphics that is supported by the R Foundation for Statistical Computing \cite{R-software}. R supports linear and nonlinear modeling, classical statistical tests, time-series analysis, classification, clustering, and more. The `forecast' package \cite{forecast} was utilized to generate time series plots and autocorrelation/partial autocorrelation function plots and to produce forecasts. Also, the `tseries' package \cite{tseries} was used to perform the Augmented Dickey-Fuller (ADF) test for stationarity. Moreover, the `lmtest' package \cite{lmtest} was used to test the parameters of the SARIMA model. Finally, the `ggplot2' \cite{ggplot2}, `tidyr' \cite{tidyr}, and `dplyr' \cite{dplyr} packages were used to plot the time series data considered during the conduct of the research.
\section{Results and Discussion}
\begin{figure}[h]
\includegraphics[width=3.4in]{figure/img01}
\caption{Monthly Foreign Visitor Arrivals}
\label{rd01}
\end{figure}
A line plot was used to describe the behavior of the monthly foreign visitor arrivals in the Philippines. Figure~\ref{rd01} shows that there is an increasing trend and a seasonal pattern in the time series. Specifically, there is a seasonal increase in monthly foreign visitor arrivals every December and a seasonal decrease every September. These patterns suggest a seasonal autoregressive integrated moving average (SARIMA) approach to modeling and forecasting the monthly foreign visitor arrivals in the Philippines.
\begin{table}[h]
\captionof{table}{AIC and RMSE of the Two Models Considered}
\label{rd02}
\renewcommand{\arraystretch}{1}
\begin{tabularx}{3.35in}{Xcc} \hline
\textbf{Model} & \textbf{AIC} & \textbf{RMSE} \\ \hline
ARIMA (0,1,2)$\times$(1,0,1)$_{12}$ & $-414.56$ & 49517.48 \\
ARIMA (1,1,1)$\times$(1,0,1)$_{12}$ & $-414.51$ & 47884.85 \\ \hline
\end{tabularx}
\end{table}
The Akaike Information Criterion and the Root Mean Squared Error were used to select the model for forecasting the monthly foreign visitor arrivals in the Philippines. Table~\ref{rd02} shows the top two SARIMA models based on the AIC generated using R. ARIMA (0,1,2)$\times$(1,0,1)$_{12}$ has the lowest AIC with a value of $-414.56$, followed by ARIMA (1,1,1)$\times$(1,0,1)$_{12}$ with an AIC value of $-414.51$. Model estimation was performed on both models and yielded significant parameters for both (refer to Appendix A.2). Moreover, diagnostic checking was performed to assess the models. Both models passed the checks based on the residual versus time plot, residual versus fitted plot, normal Q-Q plot, ACF and PACF graphs, Ljung-Box test, and Shapiro-Wilk test (refer to Appendix A.3). Finally, forecast evaluation was performed to measure the accuracy of the models on an out-of-sample data set (refer to Appendix A.4). ARIMA (1,1,1)$\times$(1,0,1)$_{12}$ produced a lower RMSE than ARIMA (0,1,2)$\times$(1,0,1)$_{12}$; hence, the former was used to forecast the monthly foreign visitor arrivals in the Philippines.
\subsection{How Much Foreign Tourism Earnings Were Lost during the COVID-19 Pandemic Crisis}
\begin{figure}[h]
\includegraphics[width=3.4in]{figure/img02}
\caption{Expected Monthly Earnings Loss}
\label{rd03}
\end{figure}
Figure~\ref{rd03} shows the estimated earnings loss (in billion pesos) of the tourism industry of the Philippines for every month from April 2020 to December 2020. According to the Department of Tourism, the Average Daily Expenditure (ADE) for the period in review is \PHP 8,423.98 and the Average Length of Stay (ALoS) of tourists in the country is recorded at 7.11 nights. The figures were generated by multiplying the forecasted monthly foreign visitor arrivals, the ADE, and the ALoS (rounded to 7) \cite{dot-report}. Moreover, it is forecast that, under community quarantine, recovery will take around four to five months (up to July) \cite{forecast-covid}. Under this assumption, the estimated earnings loss of the country in terms of tourism will be around 170.5 billion pesos.
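For transparency, each monthly figure in Figure~\ref{rd03} follows from a simple product of the forecasted arrivals, the ADE, and the rounded ALoS; the short Python sketch below reproduces that arithmetic with placeholder forecast values rather than the actual SARIMA forecasts.
\begin{verbatim}
# Back-of-the-envelope computation of the monthly earnings loss:
#   loss_month = forecast_arrivals_month * ADE * ALoS
# ADE = PHP 8,423.98 and ALoS = 7 nights (7.11 rounded), as stated above.
ADE = 8423.98    # average daily expenditure, PHP
ALOS = 7         # average length of stay, nights

def monthly_loss_php(forecast_arrivals):
    return [a * ADE * ALOS for a in forecast_arrivals]

# Example with placeholder forecasts (not the paper's numbers):
losses = monthly_loss_php([350_000, 360_000, 370_000])
print([round(x / 1e9, 2) for x in losses], "billion PHP")
\end{verbatim}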
\section{Conclusions and Recommendations}
\subsection{Conclusions}
Based on the results presented on the study, the following findings were drawn:
\begin{enumerate}
\item The SARIMA model used to forecast the monthly foreign visitor arrivals is ARIMA (1,1,1)$\times$(1,0,1)$_{12}$, since it produced a relatively low AIC of $-414.51$ and the lowest out-of-sample RMSE of 47884.85. This means that the model performs best among the SARIMA models considered in forecasting the monthly foreign visitor arrivals in the Philippines.
\item If the COVID-19 pandemic lasts up to five months, the tourism industry of the Philippines will incur an estimated earnings loss of about \PHP 170.5 billion. Assumptions about the average daily expenditure and average length of stay of tourists were based on Department of Tourism reports.
\end{enumerate}
\subsection{Recommendations}
The projected \PHP 170.5 billion loss in the Philippines' foreign tourism is a substantial amount. Attempting to regain this loss as soon as possible, however, would only jeopardize the lives of the Filipino people. On the other hand, the government can, perhaps, reopen the Philippines' domestic tourism. This would partially, though not fully, help recover the country's lost tourism revenue.
The following recommendations, presented as scenarios/options below, may be helpful in recovering tourism revenue, both foreign and domestic, while ensuring the safety of Filipinos.
\begin{enumerate}
\item Option 1: Stop foreign tourism until a vaccine becomes available, but gradually reopen domestic tourism starting July 2020. In this scenario, the following considerations may be adhered to:
\begin{enumerate}
\item domestic tourism shall not be reopened in the entire country, but only in areas with zero COVID-19 cases;
\item for areas where domestic tourism is allowed/reopened, appropriate guidelines should be strictly implemented by the concerned departments/agencies to prevent COVID-19 transmission; and
\item a digital code that would help in tracing the contacts and whereabouts of domestic tourists, as used in China and Singapore, should be in place before the reopening of domestic tourism.
\end{enumerate}
\item Option 2: Gradual reopening of foreign tourism starting July 2020 and full reopening of domestic tourism in the first semester of 2021, or once COVID-19 cases in the Philippines reach zero. The following considerations should be satisfied:
\begin{enumerate}
\item only tourists from countries with zero COVID-19 cases are allowed to enter the Philippines;
\item appropriate guidelines should be strictly implemented by the concerned departments/agencies, for both foreign and domestic tourism, to prevent the spread of the virus; and
\item a digital code that would help in tracing the contacts and whereabouts of foreign tourists, as used in China and Singapore, should be in place before reopening foreign tourism in the Philippines.
\end{enumerate}
\end{enumerate}
\bibliographystyle{asmems4}
\section{Simulations}\label{section:simulation}
\subsection{Simulation Setting}
We choose a 5km$\times$5km region in London as the simulation area, where tasks and users are randomly distributed in the area.
We simulate a time period of $2$ hours, e.g., [10:00, 12:00], within which each task is initiated randomly and uniformly.
The task valid (survival) time of each task is selected randomly according to the (truncated) normal distribution with the expected value of $30$ minutes.
Each task needs to be started (but not necessarily to be completed) within its valid time period.
The reward of each task is selected randomly according to the (truncated) normal distribution with the expected value of $\$10$ (in dollar).
In the simulations, we fix the number of tasks to $8$, while changing the number of users from $2$ to $20$ (to capture different levels of competition among users).
We consider three types of users according to their travelling modes: (i) \emph{Walking users}, who travel by walking, with a relatively low travelling speed of $5$km/h and a cost of $\$0.2$/km; (ii) \emph{Bike users}, who travel by bike, with a medium travelling speed of $15$km/h and a cost of $\$0.5$/km; and (iii) \emph{Driving users}, who travel by driving, with a relatively high travelling speed of $45$km/h and a cost of $\$1$/km.
Each user takes on average $10$ minutes to execute a task, incurring an execution cost of $\$1$ (in dollars) on average.
Both the execution time and cost follow the (truncated) normal distribution as other parameters.
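For concreteness, the simulation setting above can be instantiated as in the following Python sketch (this is not the simulator used to generate our results); the standard deviations and truncation bounds of the truncated normal distributions are assumptions, since only their means are specified above.
\begin{verbatim}
# Illustrative generator for the simulation setting (not the authors' simulator).
# Means follow the text; standard deviations and truncation bounds are assumed.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)

def tnorm(mean, sd, lo, hi, size):
    a, b = (lo - mean) / sd, (hi - mean) / sd
    return truncnorm.rvs(a, b, loc=mean, scale=sd, size=size)

AREA_KM = 5.0
USER_TYPES = {"walking": (5.0, 0.2),    # (speed km/h, cost $/km)
              "bike": (15.0, 0.5),
              "driving": (45.0, 1.0)}

def make_tasks(n_tasks=8):
    return {"loc": rng.uniform(0.0, AREA_KM, size=(n_tasks, 2)),   # km
            "start": rng.uniform(0.0, 120.0, size=n_tasks),        # minutes
            "valid": tnorm(30.0, 10.0, 5.0, 60.0, n_tasks),        # mean 30 min
            "reward": tnorm(10.0, 3.0, 1.0, 20.0, n_tasks),        # mean $10
            "exec_time": tnorm(10.0, 3.0, 2.0, 20.0, n_tasks),     # mean 10 min
            "exec_cost": tnorm(1.0, 0.3, 0.1, 2.0, n_tasks)}       # mean $1

def make_users(n_users, user_type="walking"):
    speed, cost_per_km = USER_TYPES[user_type]
    return {"loc": rng.uniform(0.0, AREA_KM, size=(n_users, 2)),
            "speed": speed, "cost_per_km": cost_per_km}
\end{verbatim}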
\subsection{Social Welfare Gap}
We first show the social welfare gap between the NE and SE, which captures the efficiency loss of the NE.
Figure \ref{fig:mcs-simu-sw} presents the expected social welfare under SE and NE with three different types of users.
In the first subfigure (a), users are walking users with low travelling speed and cost.
In the second subfigure (b), users are bike users with medium travelling speed and cost.
In the third subfigure (c), users are driving users with high travelling speed and cost.
From Figure \ref{fig:mcs-simu-sw}, we have the following observations:
\emph{1)}
\emph{The social welfare under SE always increases with the number of users in all three scenarios (with walking, bike, and driving users).}
This is because with more users, it is more likely that tasks can be executed by lower cost users, hence resulting in a higher social welfare.
\emph{2)}
\emph{The social welfare under NE decreases with the number of users in most cases.}
This is because with more users, the competition among users becomes more intensive, and the probability that multiple users choose the same task increases, which leads to a higher total execution/travelling cost and hence a lower social welfare.
\emph{3)}
\emph{The social welfare under NE may also increase with the number of users in some cases.}
For example, in scenario (a) with walking users, when the number of walking users is small (e.g., less than 5), the social welfare under NE increases slightly with the number of users.
This is because the walking users often have a limited serving region due to the small travelling speed, hence there is almost no competition among users when the number of users is small.
In this case, increasing the number of users will not introduce much competition among users, but will increase the probability that tasks are executed by lower cost users, hence resulting in a higher social welfare.
\ifodd 0
This coincides with the numerical results in Figure \ref{fig:simu-1}, where the social welfare under NE increases with the number of users when the user number is small.
\fi
\subsection{Social Welfare Ratio}
Figure \ref{fig:mcs-simu-ratio} further presents the social welfare \emph{ratio} between NE and SE with three different types of users.
We can see that in all three scenarios, the social welfare ratio decreases with the number of users.
This is because the social welfare increases with the number of users under SE, while (mostly) decreases with the number of users under NE, as illustrated in the previous Figure \ref{fig:mcs-simu-sw}.
We can further see that the social welfare ratio with walking users is higher than those with bike users and driving users.
This is because the competition among walking users is less intensive than that among bike/driving users, due to the limited traveling speed and serving region of walking users.
Hence, the degradation of social welfare under NE (comparing with SE) is smaller with walking users.
More specifically, we can find from Figure \ref{fig:mcs-simu-ratio} that when the number of users changes from $2$ to $20$, the social welfare ratio decreases from $95\%$, $85\%$, and $75\%$ to approximately $30\%$ for walking, bike, and driving users, respectively.
\begin{figure}
\vspace{-3mm} \centering
\includegraphics[width=2.8in]{MCS-simu-ratio}
\vspace{-3mm}
\caption{Ratio of Social Welfare between SE and NE.}
\label{fig:mcs-simu-ratio}
\vspace{-3mm}
\end{figure}
\ifodd 0
\subsection{Fairness}
We now show the fairness at the NE, which captures how the generated social welfare is distributed among users.
We evaluate the fairness by the widely-used Jain's fairness index \cite{Jain}, which is defined as follows:
$$
J(\bs) = \frac{\left( \sum_{i\in\N} \u_i(\bs) \right)^2}{N \cdot \sum_{i\in\N} \u_i(\bs)^2 } .
$$
By the above definition, it is easy to see that the maximum Jain's fairness is $1$, which can be achieved when all users share the social welfare equally, and the minimum Jain's fairness is $\frac{1}{N}$, which can be achieved when one user gets all of the social welfare while all other users get a zero payoff.
Figure \ref{fig:mcs-simu-jane} presents the Jain's fairness index under NE with three different types of users.
We can see that the Jain's fairness index decreases with the number of users, and can be down to $0.8$, $0.6$, and $0.48$ for walking, bike, and driving users, respectively, when the total number of users is 20.
This implies that a larger number of users will lead to a poor fairness under NE.
This is because a larger number of users will lead to a more intensive competition among users, hence it is more likely that some highly competitive users occupy most of the social welfare, resulting in a lower fairness index.
Figure \ref{fig:mcs-simu-jane} also shows that under NE, the Jain's fairness index with walking users is often higher than those with bike users and driving users.
This is because the competition among walking users is less intensive than that among bike or driving users, due to the limited traveling speed and serving region of walking users.
Thus, with bike or driving users, it is more likely that some highly competitive users occupy most of the social welfare, resulting in a lower fairness index.
\fi
\subsection{Proof for Lemma \ref{lemma:mcs:potential}}\label{app:5}
\begin{proof}
To prove that the Task Selection Game $\mathcal{T}$ is a potential game with the potential function $\Phi(\bs)$ given in \eqref{eq:xxx:pf}, we need to show that for every user $i\in \N$ and any strategies $\bs_i,\bs_i' \subseteq \S_i $ of user $i$, the following equation holds:
\begin{equation}\label{eq:pf-proof}
\u_i (\bs_i, \bs_{-i} ) - \u_i (\bs_i', \bs_{-i} )
=
\Phi (\bs_i, \bs_{-i} ) - \Phi (\bs_i', \bs_{-i} ),
\end{equation}
under any feasible strategy profile $\bs_{-i}$ of other users.
We first show that \eqref{eq:pf-proof} holds when user $i$ only changes the execution order,
while not changing the task selection. Namely, $\bs_i $ and $\bs_i' $ contain the same task set (but different execution orders).
It is easy to see that
\begin{equation}
M_k (\bs_i, \bs_{-i}) = M_k (\bs_i', \bs_{-i}), \quad \forall k\in\S,
\end{equation}
that is, the number of users executing each task $k \in \S$ does not change when user $i$ only changes the execution order. This implies that the achieved reward and the incurred execution cost of user $i$ will not change.
Thus, by \eqref{eq:xxx:payoff}, we have:
\begin{equation}
\begin{aligned}
& \u_i (\bs_i, \bs_{-i} ) - \u_i (\bs_i', \bs_{-i} ) =
c_i^{\textsc{tr}} (\bs_i) - c_i^{\textsc{tr}} (\bs_i'),
\end{aligned}
\end{equation}
where only the travelling cost of user $i$ is changed.
For the potential function in \eqref{eq:xxx:pf}, we also have:
\begin{equation}
\begin{aligned}
\Phi (\bs_i, \bs_{-i} ) - \Phi (\bs_i', \bs_{-i} ) =
c_i^{\textsc{tr}} (\bs_i) - c_i^{\textsc{tr}} (\bs_i').
\end{aligned}
\end{equation}
This is because the change of user $i$'s execution order does not affect the execution costs and travelling costs of other users. Based on the above, we can see that \eqref{eq:pf-proof} holds when user $i$ changes the execution order only.
We then show that \eqref{eq:pf-proof} holds when player $i$ changes his strategy from $\bs_i $ to $\bs_i' $ by removing a task $\tau \in \bs_i$, i.e., $\bs_i' = \bs_i / \{ \tau \}$.
According to \eqref{eq:xxx:payoff}, we have:
\begin{equation*}
\begin{aligned}
& \u_i (\bs_i, \bs_{-i} ) - \u_i (\bs_i', \bs_{-i} )
\\
= & \ \frac{V_\tau}{M_\tau (\bs_i , \bs_{-i})} +
c_i^{\textsc{ex}} (\bs_i') + c_i^{\textsc{tr}} (\bs_i') -
c_i^{\textsc{ex}} (\bs_i) - c_i^{\textsc{tr}} (\bs_i) .
\end{aligned}
\end{equation*}
For the potential function in \eqref{eq:xxx:pf}, we also have:
\begin{equation*}\label{eq:proofxxxx}
\begin{aligned}
& \Phi (\bs_i, \bs_{-i} ) - \Phi (\bs_i', \bs_{-i} )
\\
= & \ \frac{V_\tau}{M_\tau (\bs_i , \bs_{-i})} +
c_i^{\textsc{ex}} (\bs_i') + c_i^{\textsc{tr}} (\bs_i') -
c_i^{\textsc{ex}} (\bs_i) - c_i^{\textsc{tr}} (\bs_i),
\end{aligned}
\end{equation*}
which follows because $ M_\tau(\bs_i , \bs_{-i}) = M_\tau(\bs_i', \bs_{-i}) + 1$ and $ M_k(\bs_i , \bs_{-i}) = M_k(\bs_i', \bs_{-i}) $ for all $k \neq \tau$.
Based on the above, we can see that \eqref{eq:pf-proof} holds when user $i$ removes a task from his strategy.
Similarly, we can show that \eqref{eq:pf-proof} holds when user $i$ adds a task into his strategy.
Using the above results iteratively, we can show that
the equation \eqref{eq:pf-proof} holds when user $i$ changes his strategy from an arbitrary $\bs_i $ to an arbitrary $\bs_i' $.
\end{proof}
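The exact-potential identity \eqref{eq:pf-proof} can also be checked numerically. The sketch below does so for a small random instance, writing the potential in the standard congestion-game form $\Phi(\bs)=\sum_{k}\sum_{m=1}^{M_k(\bs)} V_k/m - \sum_i c_i(\bs_i)$; this form is an assumption made for the sketch (the paper's \eqref{eq:xxx:pf} is not restated here), but it is consistent with the payoff differences used in the proof above.
\begin{verbatim}
# Numerical check of u_i(s_i,s_-i) - u_i(s_i',s_-i) = Phi(s_i,s_-i) - Phi(s_i',s_-i)
# for equal reward sharing and an arbitrary per-user, selection-dependent cost.
import itertools, random

random.seed(1)
N, S = 3, 4
V = [random.uniform(5, 15) for _ in range(S)]              # task rewards V_k
cost = {(i, frozenset(s)): random.uniform(0, 5)            # c_i(s_i), any subset s_i
        for i in range(N)
        for r in range(S + 1)
        for s in itertools.combinations(range(S), r)}

def counts(profile):                                        # M_k(s) for each task k
    return [sum(1 for s in profile if k in s) for k in range(S)]

def utility(i, profile):
    M = counts(profile)
    return sum(V[k] / M[k] for k in profile[i]) - cost[(i, profile[i])]

def potential(profile):
    M = counts(profile)
    share = sum(V[k] / m for k in range(S) for m in range(1, M[k] + 1))
    return share - sum(cost[(i, profile[i])] for i in range(N))

profile = [frozenset({0, 1}), frozenset({1, 2}), frozenset({3})]
for i in range(N):
    alt = list(profile); alt[i] = frozenset({0, 3})         # arbitrary deviation
    lhs = utility(i, profile) - utility(i, alt)
    rhs = potential(profile) - potential(alt)
    assert abs(lhs - rhs) < 1e-9
\end{verbatim}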
\subsection{Proof for Lemma \ref{lemma:mcs:NE}}\label{app:6}
\begin{proof}
We will show that the outcome $\bs^*$ given by \eqref{eq:xxx:NE} is an NE of $\mathcal{T}$.
According to \eqref{eq:xxx:NE}, we find that for any $i \in \N$,
$$
\Phi(\bs_i^*, \bs_{-i}^*) \geq \Phi(\bs_i ', \bs_{-i}^*), \quad \forall \bs_i ' \subseteq \S_i.
$$
By Lemma \ref{lemma:mcs:potential}, we further have: for any $i\in \N$ and $\bs_i ' \subseteq \S_i$,
$$
\u_i (\bs_i^*, \bs_{-i}^* ) - \u_i (\bs_i', \bs_{-i}^* )
=
\Phi (\bs_i^*, \bs_{-i}^* ) - \Phi (\bs_i', \bs_{-i}^* ).
$$
Hence, we have the following observations:
$$
\u_i (\bs_i^*, \bs_{-i}^* ) \geq \u_i (\bs_i', \bs_{-i}^* ), \quad \forall \bs_i ' \subseteq \S_i, i\in\N,
$$
which implies that the outcome $\bs^*$ in \eqref{eq:xxx:NE} is an NE of $\mathcal{T}$.
\end{proof}
\subsection{Proof for Lemma \ref{lemma:mcs:tax}}\label{app:7}
\begin{proof}
This lemma can be proved by Theorem \ref{thm:efficiency} and Lemma \ref{lemma:fair1} directly.
\end{proof}
\section{Introduction}
\subsection{Background and Motivations}
With the development and proliferation of smartphones with rich built-in sensors and advanced computational capabilities,
we are witnessing a new sensing network paradigm known as \emph{participatory sensing (PS)} or \emph{mobile crowd sensing (MCS)} \cite{bg1,bg2,bg3}, which relies on the active participation of smartphone users, who contribute their smartphones as sensors.
Compared with the traditional approach of deploying sensor nodes and sensor networks, this new sensing scheme can achieve higher sensing coverage with a lower deployment cost, and hence it adapts better to the changing requirements of tasks and the varying environment.
Therefore, it has found a wide range of applications in environment, infrastructure, and community monitoring \cite{App-WeatherLah,App-OpenSignal, App-Atmos1,App-Waze}.
A typical PS framework often consists of (i) a service platform (\emph{server}) residing in the cloud and (ii)
a set of participating smartphone \emph{users} distributed and travelling on the ground \cite{bg1,bg2,bg3}.
The service platform launches many sensing tasks, possibly initiated by different requesters with different data requirements for different purposes;
and users subscribe to one or multiple task(s) and contribute their sensing data.
Due to the location-awareness and time-sensitivity of tasks
and the geographical distribution of users,
a proper \emph{scheduling} of tasks among users is critical for a PS system.
For example, if a task is scheduled to a user far away from its target location, the user may not be able to travel to the target location in time so as to complete the task successfully.
Depending on who (i.e., the server or each user) will make the task scheduling decision, there are two types of different PS models: \emph{Server-centric Participatory Sensing (SPS)} \cite{Luo-infocom2014, yang-mobicom12, Duan-infocom2012,Gao-2015,add1,add2} and \emph{User-centric Participatory Sensing (UPS)} \cite{ups-0,ups-1,ups-2,ups-3,add3}. In the SPS model, the server will make the task scheduling decision and determine the joint scheduling of all tasks among all users, often in a centralized manner with complete information (as in \cite{Luo-infocom2014, yang-mobicom12, Duan-infocom2012,Gao-2015,add1,add2}).
In the UPS model, each participating user will make his individual task scheduling decision and determine the tasks he is going to execute, often in a distributed manner with local information (as in\cite{ups-0,ups-1,ups-2,ups-3,add3}).
Clearly, the SPS model assigns more control to the server to make the (centralized) joint scheduling decision, hence can better satisfy the requirements of various tasks.
The UPS model, however, distributes the control among the participating users and enables each user to make the (distributed) individual scheduling decision.
Hence, it can adapt faster to the varying environment and the changing requirements of individual users.
{In this work, we focus on \emph{the task scheduling in the UPS model}},
where the task scheduling decision is made by each user distributively.
Compared with SPS, the UPS model has the appealing features of (i) low communication overhead and (ii) low computational complexity,
by distributing the complicated central control (and computation) among numerous participating users, hence it is more scalable.
Therefore, UPS is particularly suitable for a fast changing environment (where the information exchange in SPS may become a heavy burden) and a large-scale system (where the centralized task scheduling in SPS may be too complicated to compute in real-time), and hence it has been adopted in
some commercial PS systems, such as Field Agent \cite{example-1}
and Gigwalk \cite{example-2}.
\subsection{Related Work}
Many existing works have studied the task scheduling problem in different UPS models, aiming at either minimizing the energy consumption (e.g., \cite{add3, ups-1,ups-2}) or maximizing the social surplus (e.g., \cite{ups-0,ups-3}).
Specifically,
in \cite{add3}, Jiang \emph{et al.} studied the peer-to-peer based data sharing among users in mobile crowdsensing, but they considered neither the location-dependence nor the time-sensitivity of tasks.
In \cite{ups-1}, Sheng \emph{et al.} studied the opportunistic energy-efficient collaborative sensing for location-dependent road information.
In \cite{ups-2}, Zhao \emph{et al.} studied the fair and energy-efficient
task scheduling in mobile crowdsensing with location-dependent tasks.
In \cite{ups-0}, He \emph{et al.} studied the social surplus maximization for location-dependent task scheduling in mobile crowdsensing.
However, the above works did not consider the time-sensitivity of tasks, where each task can be executed at any time.
In this work, we will consider both the location-dependence and the time-sensitivity of tasks.
Cheung \emph{et al.} in \cite{ups-3} studied the social surplus maximization scheduling for both location-dependent and {time-sensitive} tasks, where each task must be executed at a particular time.
Inspired by \cite{ups-3}, in this work, we will consider a more general task model, where each task can be executed within a \emph{valid time period} (instead of the particular time in \cite{ups-3}).
Clearly, \emph{our task model generalizes the existing models in \cite{ups-0,ups-1,ups-2,ups-3,add3}}, as it degenerates to the models in \cite{ups-0,ups-1,ups-2,add3} by simply choosing an infinitely large valid time period for each task, and degenerates to the model in \cite{ups-3} by simply shrinking the valid time period of each task to a single point.
\subsection{Solution and Contributions}
\begin{figure}
\vspace{-2mm}
\centering
\caption{An UPS Model with Location-Dependent Time-Sensitive Tasks. Each route denotes the task selection and execution order of each user.}
\label{fig:mcs-model}
\vspace{-3mm}
\end{figure}
In this work, we consider a general UPS model consisting of multiple tasks and multiple smartphone users, where each user will make his individual task scheduling decision distributively (e.g., deciding the set of tasks he is going to execute). Tasks are (i) \emph{location-dependent}, each associated with one or multiple target location(s) at which the task will be executed, and (ii) \emph{time-sensitive}, each associated with a valid time period within which the task must be executed.
Moreover, users are {geographically dispersed} (i.e., each associated with an initial location) and can travel to different locations for executing different tasks.
As different tasks may have different valid time periods, each user needs to decide not only the \emph{task selection} (i.e., the set of tasks he is going to execute) but also the \emph{execution order} of the selected tasks.
This is also the key difference between the task scheduling in our work and that in \cite{ups-3}, which focused on the task selection only, without considering the execution order.\footnote{In \cite{ups-3}, each task is associated with a particular time, hence the execution order is inherently given as long as the tasks are selected.}
Note that our task scheduling problem (i.e., task selection and order optimization) is much more challenging than that in \cite{ups-3} (i.e., task selection only), as even if the task selection is given, the execution order optimization is still an NP-hard problem.
Figure \ref{fig:mcs-model} illustrates an example of such a task scheduling decision in a UPS model with location-dependent time-sensitive tasks.
Each route denotes the task scheduling decision (i.e., task selection and execution order) of each user.
For example, user $1$ chooses to execute tasks $\{1,2,3,4\}$ in order, user $2$ chooses to execute tasks $\{5,1,6\}$ in order, and user 3 chooses to execute tasks $\{7,8,9\}$ in order.
\emph{Game Formulation}:
When a user executes a task successfully (i.e., at the target location and within the valid time period of the task), the user will obtain a certain \emph{reward} provided by the task owner.
When multiple users execute the same task, they will share the reward \emph{equally} as in \cite{ups-3}.
This makes the task scheduling decisions of different users coupled with each other, leading to a \emph{strategic game} situation.
We formulate such a game, called \emph{Task Scheduling Game (TSG)}, and perform a comprehensive game-theoretic analysis.\footnote{Game theory has been widely used in wireless networks (e.g., \cite{gao-1,gao-2,gao-3,gao-4,gao-5,gao-6}) for modeling and analyzing the competitive and cooperative interactions among different network entities.}
Specifically, we first prove that the TSG game is a {potential game} \cite{potential}, which guarantees the existence of Nash equilibrium (NE).
Then we analyze the social efficiency loss at the NE (comparing with the socially optimal solution) induced by the selfish behaviors of users.
We further show how the efficiency loss changes with the user number and user type.
In summary, the main results and key contributions of this work are as follows.
\begin{itemize}
\item \emph{General Model:}
We consider a general UPS model with location-dependent time-sensitive tasks, which generalizes the existing task models in the literature.
\item \emph{Game-Theoretic Analysis:}
We perform a comprehensive game-theoretic analysis for the task scheduling in the proposed UPS model, by using a potential game.
\item \emph{Performance Evaluation:}
We evaluate the efficiency loss and the fairness index at the NE under different situations.
Our simulations in practical scenarios with different types of users (walking, bike, and driving users) show that the efficiency loss can be up to $70\%$ due to the selfish behaviors of users.
\item \emph{Observations and Insights:}
Our analysis shows that the NE performance may increase or decrease with the number of users, depending on the level of competition.
This implies that it is not always better to employ more users in the UPS system, which provides guidance for the system designer in determining the optimal number of users to employ in a practical system.
\end{itemize}
The rest of the paper is organized as follows.
In Section \ref{section:model}, we present the system model.
In Section \ref{section:game} and \ref{section:analysis}, we formulate the task scheduling game and analyze the Nash equilibrium.
We present the simulation results in Section \ref{section:simulation}, and finally conclude in Section \ref{section:conclusion}.
\section{System Model}\label{section:model}
We consider a user-centric UPS system consisting of a sensing platform and a set $\N = \{1,\cdots,N\}$ of mobile smartphone users.
The platform announces a set $\S = \{1,\cdots,S\}$ of sensing tasks.
Each task can represent a specific sensing event at a particular time and location, or a set of periodic sensing events within a certain time period, or a set of sensing events at multiple locations.
Each task $k \in \S$ is associated with a \emph{reward} $V_k$, denoting the money to be paid to the users who execute the task successfully.
Each user can choose the set of tasks he is going to execute.
When multiple users execute the same task, they will share the reward \emph{equally} as in \cite{ups-3}.
This makes the task scheduling decisions of different users coupled with each other, resulting in a \emph{strategic game} situation.
\subsection{Task Model}
We consider a general task model, where tasks are (i) \emph{location-dependent}: each task $k \in \S$ is associated with a target location $L_k $ at which the task will be executed;\footnote{Note that our analysis can be easily extended to the task model with multiple target locations, by simply dividing each task into multiple sub-tasks, each associated with one target location.}
and (ii) \emph{time-sensitive}: each task $k \in \S$ is associated with a valid time period $T_k \triangleq [T_k^{\dag},\ T_k^{\ddag}]$ within which the task must be executed.
Examples of such tasks include the measurement of traffic speed at a particular road junction or the air quality at a particular location within a particular time interval.
When enlarging the valid time period of each task to infinity, our model will degenerate to those in \cite{ups-0,ups-1,ups-2};
when shrinking the valid time period of each task to a single point, our model will degenerate to the model in \cite{ups-3}.
Thus, our model generalizes the existing models in \cite{ups-0,ups-1,ups-2,ups-3}.
\subsection{User Model}
Each user $i \in \N $ can choose one or multiple tasks (to execute) from a set of tasks $\S_i \subseteq \S$ available to him.
The availability of a task to a user depends on factors such as the user's device capability, time availability, mobility, and experience.
When executing a task successfully, the user can get the task reward solely or share the task reward equally with other users who also execute the task.
The \emph{payoff} of each user is defined as the difference between the achieved \emph{reward} and the incurred \emph{cost}, mainly including the execution cost and the travelling cost (to be described below).
When executing a task, a user needs to consume some time and device resources (e.g., energy, bandwidth, and CPU cycles), and hence incurs a certain \emph{execution cost}.
Such execution cost and time depend on both the task nature (e.g., one-shot or periodic sensing) and the user characteristics (e.g., experienced or inexperienced, resource limited or adequate).
Let $T_{i, k}$ and $C_{i, k}$ denote the time and cost of user $i \in \N $ for executing task $ k \in \S_i $.
Each user $i$ has a total budget $C_i$ of resources that can be used for executing tasks.
Moreover, in order to execute a task, a user needs to move to the target location of the task (in a certain travelling speed), which may incur certain \emph{travelling cost}.
After executing a task, the user will stay at that location (to save travelling cost) until he starts to move to a new location to execute a new task.
By abuse of notation, we denote $L_i $ as the initial location of user $i\in \N$.
The travelling cost and speed mainly depend on the type of transportation that the user takes.
For example, a walking user has a low speed and cost, while a driving user may have a high speed and cost.
Let $\widetilde{C}_i$ and $R_i$ denote the travelling cost (per unit of travelling distance) and speed (m/s) of user $i\in \N$, respectively.
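To keep the above notation concrete, the following minimal sketch collects the task parameters $(V_k, L_k, T_k^{\dag}, T_k^{\ddag})$ and user parameters $(L_i, C_i, \widetilde{C}_i, R_i, T_{i,k}, C_{i,k})$ in simple containers. The code is illustrative only (it is not part of the original model description), all names are chosen here, and locations are treated as scalars so that the distance $|L_i - L_k|$ is an absolute difference.
\begin{verbatim}
from dataclasses import dataclass
from typing import Dict

# Illustrative containers for the model parameters; all names are ours.
@dataclass
class Task:
    reward: float        # V_k
    location: float      # L_k (scalar location, so distance = |difference|)
    valid_start: float   # T_k^dag, start of the valid time period
    valid_end: float     # T_k^ddag, end of the valid time period

@dataclass
class User:
    location: float              # L_i, initial location
    budget: float                # C_i, total resource budget
    travel_cost: float           # C~_i, travelling cost per unit distance
    speed: float                 # R_i, travelling speed
    exec_time: Dict[int, float]  # T_{i,k} for each available task k in S_i
    exec_cost: Dict[int, float]  # C_{i,k} for each available task k in S_i
\end{verbatim}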
\subsection{Problem Description}
As different tasks may have different valid time periods,
each user needs to consider not only the \emph{task selection} (i.e.,
the set of tasks to be executed) but also the \emph{execution order} of
the selected tasks. As shown in Figure \ref{fig:mcs-model}, the task execution
order is important, as it affects not only the user's travelling
cost but also whether the selected tasks can be executed within
their valid time periods. For example, if user $1$ executes task
$4$ first (within the time period [13:00, 14:00], say 13:30), he
cannot execute tasks $\{1,2,3\}$ any more within their valid time
periods (all of which are earlier than 13:30). {This is also one of the key differences between our problem and
that in \cite{ups-3}, which only focused on the task selection, without
considering the task execution order.}
\section{Game Formulation}\label{section:game}
As mentioned before, when multiple users choose to execute the same task, they will share the task reward \emph{equally}.
This makes the decisions of different users coupled with each other, leading to a \emph{strategic game} situation. In this section, we will provide the formal definition for such a game.
\subsection{Strategy and Feasibility}
As discussed in Section \ref{section:model}, the strategy of each user $i\in \N$ is to choose a set of available tasks (to execute) and the execution order of the selected tasks, aiming at maximizing his payoff.
Such a strategy of user $i$ can be formally characterized by an \emph{ordered} task set, denoted by
\begin{equation}
\bs_i \triangleq \{ k_i^1, \cdots, k_i^{|\bs_i|} \} \subseteq \S_i
\end{equation}
where the $j$-th element $k_i^j $ denotes the $j$-th task selected and executed by user $i$.
A strategy $\bs_i = \{ k_i^1, \cdots, k_i^{|\bs_i|} \} $ of user $i$ is feasible, only if the time-sensitivity constraints of all selected tasks in $\bs_i$ are satisfied,
or equivalently, there exists a reasonable execution time vector such that all selected tasks in $\bs_i$ can be executed within their valid time periods.
Let $[T_i^1,\cdots,T_i^{|\bs_i|}]$ denote a potential execution time vector of user $i$, where $T_i^j$ denotes the execution time for the $j$-th task $k_{i}^j$.
Then, $[T_i^1,\cdots,T_i^{|\bs_i|}]$ is feasible, only if (i) it satisfies the time-sensitivity constraints of all selected tasks, i.e.,
\begin{equation}\label{eq:mcs-fs1}
T_{k_i^j}^{\dag} \leq T_i^j \leq T_{k_i^j}^{\ddag}, \quad j=1,\cdots, |\bs_i|,
\end{equation}
and meanwhile (ii) it is
reasonable in the temporal logic, i.e.,
\begin{equation}\label{eq:mcs-fs2}
\left\{
\begin{aligned}
T_i^{1} & \textstyle
\geq \frac{D({i} , {k_i^{1}})}{R_i},
\\
T_i^{j } &\textstyle \geq T_i^{j-1} + T_{i, k_i^{j-1}} + \frac{D( {k_i^{j-1}}, {k_i^j}) }{R_i}, \ j = 2,\cdots, |\bs_i| ,
\end{aligned}
\right.
\end{equation}
where $D({i} , {k_i^{1}}) = |L_{i}- L_{k_i^{1}}| $ denotes the distance between user $i$'s initial location $L_i$ and the first task $k_i^{1}$, $D({k_i^{j-1}}, {k_i^{j}}) = |L_{k_i^{j-1}}- L_{k_i^{j}}| $ denotes the distance between tasks $k_i^{j-1}$ and $k_i^{j}$, and $T_{i, k_i^{j-1}}$ denotes the time for executing task $k_i^{j-1}$.\footnote{The first condition denotes that user $i$ needs to take at least the time $\frac{D({i} , {k_i^{1}})}{R_i}$ to reach the first task $k_i^{1}$,
and the following conditions mean that user $i$ needs to take at least the time $T_{i, k_i^{j-1}} + \frac{D( {k_i^{j-1}}, {k_i^j}) }{R_i}$ to reach task $k_i^{j}$, where the first part of time is used for completing the previous task $k_i^{j-1}$ and the second part of time is used for travelling from task $k_i^{j-1}$ to task $k_i^{j}$.}
Moreover, a strategy $\bs_i $ of user $i$ is feasible, only if it satisfies the resource budget constraint. That is,
\begin{equation}\label{eq:mcs-fs3}
\textstyle
\sum \limits_{k \in \bs_i} C_{i, k} \leq C_i.
\end{equation}
Based on the above, we express in the following lemma the feasibility conditions for the user strategy $\bs_i $.
\begin{lemma}[Feasibility]\label{lemma:mcs-feasibility}
A strategy $\bs_i $ of user $i$ is feasible, if and only if the conditions \eqref{eq:mcs-fs1}-\eqref{eq:mcs-fs3} are satisfied.
\end{lemma}
Intuitively, if one of the conditions in \eqref{eq:mcs-fs1}-\eqref{eq:mcs-fs3} is not satisfied, then the strategy $\bs_i $ is not feasible, as explained above. Thus, if $\bs_i $ is feasible, the conditions \eqref{eq:mcs-fs1}-\eqref{eq:mcs-fs3} must be satisfied.
On the other hand, if \eqref{eq:mcs-fs1}-\eqref{eq:mcs-fs3} are satisfied, the strategy $\bs_i $ can be implemented successfully, and hence is feasible.
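As an illustration, the conditions \eqref{eq:mcs-fs1}-\eqref{eq:mcs-fs3} can be verified with a greedy pass that executes each selected task at its earliest admissible time; if this earliest schedule already misses a deadline, no feasible execution time vector exists. The sketch below is ours (reusing the illustrative containers from Section \ref{section:model}) and is not taken from the paper's implementation.
\begin{verbatim}
def is_feasible(user, tasks, schedule):
    """Check whether an ordered task set `schedule` is feasible for `user`:
    (i) every task is executed within its valid time period,
    (ii) execution times respect travel and execution delays, and
    (iii) the total execution cost stays within the resource budget."""
    # Resource budget constraint.
    if sum(user.exec_cost[k] for k in schedule) > user.budget:
        return False
    loc, t = user.location, 0.0
    for k in schedule:
        task = tasks[k]
        t += abs(loc - task.location) / user.speed  # earliest arrival time
        t = max(t, task.valid_start)   # wait if the window is not yet open
        if t > task.valid_end:         # deadline already missed
            return False
        t += user.exec_time[k]         # time spent executing task k
        loc = task.location            # user stays at the task location
    return True
\end{verbatim}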
\subsection{Payoff Definition}
Given a feasible strategy profile $\bs \triangleq (\bs_1,\cdots,\bs_N)$, i.e., the feasible strategies of all users,
we can compute the number of users executing task $k \in \S$, denoted by $M_k (\bs)$, that is,
\begin{equation}\label{eq:xxx:mk}
\textstyle
M_k (\bs) = \sum\limits_{i \in \N} \mathbf{1}_{(k \in \bs_i)}, \quad \forall k\in \S,
\end{equation}
where the indicator $\mathbf{1}_{(k \in \bs_i)} = 1$ if $ k \in \bs_i$, and $ 0$ otherwise.
Then, the total reward of each user $i \in \N $ can be computed~by
\begin{equation}\label{eq:xxx:reward}
\textstyle
r_i (\bs) \triangleq r_i (\bs_i, \bs_{-i}) = \sum\limits_{k \in \bs_i} \frac{V_k}{M_k (\bs)} ,
\end{equation}
which depends on both his own strategy $\bs_i$ and the strategies of other users, i.e., $\bs_{-i} \triangleq (\bs_1,\cdots,\bs_{i-1},\bs_{i+1},\cdots,\bs_N) $.
The total execution cost of user $i \in \N $ can be computed by
\begin{equation}\label{eq:xxx:excost}
\textstyle
c_i^{\textsc{ex}} (\bs_i ) = \sum \limits_{k \in \bs_i} C_{i,k},
\end{equation}
which depends only on his own strategy $\bs_i$.
The total travelling cost of user $i \in \N$ can be computed by
\begin{equation}\label{eq:xxx:trcost}
\textstyle
c_i^{\textsc{tr}} (\bs_i ) = \sum\limits_{j = 1}^{ |\bs_i| } D({k_i^{j-1}}, {k_i^{j}}) \cdot \widetilde{C}_{i} ,
\end{equation}
which depends only on his own strategy $\bs_i$.
Here we use the index $k_i^0$ to denote user $i$'s initial location.
Based on the above, the payoff of each user $ i \in \N $ can be written as follows:
\begin{equation}\label{eq:xxx:payoff}
\begin{aligned}
\u_i (\bs) \triangleq \u_i (\bs_i, \bs_{-i})
& = r_i (\bs ) - c_i^{\textsc{ex}} (\bs_i ) - c_i^{\textsc{tr}} (\bs_i )
\\
& \textstyle = \sum \limits_{k \in \bs_i} \frac{V_k}{M_k (\bs)} - c_i^{\textsc{ex}} (\bs_i ) - c_i^{\textsc{tr}} (\bs_i ).
\end{aligned}
\end{equation}
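For concreteness, the quantities in \eqref{eq:xxx:mk}-\eqref{eq:xxx:payoff} can be computed as in the following sketch (ours, reusing the illustrative containers from Section \ref{section:model}); a strategy profile is represented as a dictionary mapping each user index to his ordered task list.
\begin{verbatim}
from collections import Counter

def payoff(i, users, tasks, profile):
    """Payoff u_i(s) = r_i(s) - c_i^ex(s_i) - c_i^tr(s_i) of user i."""
    # M_k(s): number of users executing each task k.
    m = Counter(k for sched in profile.values() for k in sched)
    user, sched = users[i], profile[i]
    reward = sum(tasks[k].reward / m[k] for k in sched)   # shared rewards
    exec_cost = sum(user.exec_cost[k] for k in sched)     # execution cost
    # Travelling cost along the route L_i -> first task -> ... -> last task.
    stops = [user.location] + [tasks[k].location for k in sched]
    distance = sum(abs(a - b) for a, b in zip(stops, stops[1:]))
    return reward - exec_cost - distance * user.travel_cost
\end{verbatim}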
\subsection{Task Scheduling Game -- TSG}
Now we define the Task Scheduling Game (TSG) and the
associated Nash equilibrium (NE) formally.
\begin{definition}[\textbf{Task Scheduling Game -- TSG}]\label{def:task-game}
The Task Scheduling Game $\mathcal{T} \triangleq (\N, \{\S_i\}_{i\in\N}, \{\u_i\}_{i\in\N})$ is defined by:
\begin{itemize}
\item \textbf{Player}: the set of participating users $\N = \{1, \cdots , N\}$;
\item \textbf{Strategy}: an ordered set of available tasks $\bs_i \subseteq \S_i$ for each participating user $i\in\N$;
\item \textbf{Payoff}: a payoff function $ \u_i (\bs_i, \bs_{-i})$ defined in \eqref{eq:xxx:payoff} for each participating user $i\in\N$.
\end{itemize}
\end{definition}
A feasible strategy profile $\bs^* \triangleq (\bs_1^*,\cdots,\bs_N^*)$ is an NE of the Task Scheduling Game $\mathcal{T}$, if
\begin{equation}\label{eq:xxx:nene}
\begin{aligned}
\bs_i^* = \arg \max_{\bs_i \subseteq \S_i} \ & \u_i (\bs_i, \bs_{-i}^*) \\
s.t.\ & \bs_i \mbox{ satisfies \eqref{eq:mcs-fs1}-\eqref{eq:mcs-fs3},}
\end{aligned}
\end{equation}
for every user $i\in\N$.
\section{Game Equilibrium Analysis}\label{section:analysis}
We now analyze the NE of the Task Scheduling Game.
\subsection{Potential Game}
We first show that the Task Scheduling Game $\mathcal{T}$ is~a~\emph{potential game} \cite{potential}.
A game is called an (exact) potential game, if there exists an (exact) \emph{potential function}, such that for any user, when changing his strategy, the change of his payoff is equivalent to that of the potential function.
Formally,
\begin{definition}[Potential Game \cite{potential}]\label{def:potential-game}
A game $\mathcal{G} = (\N,$ $\{\S_i\}_{i\in\N},\ \{\u_i\}_{i\in\N})$ is called a potential game, if it admits a potential function $\Phi(\bs)$ such that for every player $i\in \N$ and any two strategies $\bs_i,\bs_i' \subseteq \S_i $ of player $i$,
\begin{equation}\label{eq:pf-proof}
\u_i (\bs_i, \bs_{-i} ) - \u_i (\bs_i', \bs_{-i} )
=
\Phi (\bs_i, \bs_{-i} ) - \Phi (\bs_i', \bs_{-i} ),
\end{equation}
under any strategy profile $\bs_{-i}$ of players other than $i$.
\end{definition}
\begin{lemma}\label{lemma:mcs:potential}
The Task Scheduling Game $\mathcal{T}$ is a potential game, with the following potential function $\Phi(\bs)$:
\begin{equation}\label{eq:xxx:pf}
\begin{aligned}
\textstyle
\Phi(\bs) = \sum\limits_{k \in \S} \sum \limits_{m=1}^{M_k(\bs)} \frac{V_k }{m}
- \sum \limits_{i \in \N } c_i^{\textsc{ex}} (\bs_i ) - \sum \limits_{i \in \N } c_i^{\textsc{tr}} (\bs_i ),
\end{aligned}
\end{equation}
where $M_k(\bs)$ is defined in \eqref{eq:xxx:mk}, i.e., the total number of users executing task $k$ under the strategy profile $\bs$.
\end{lemma}
The above lemma can be proved by showing that the condition \eqref{eq:pf-proof} holds under the following two situations.
First, a user changes the execution order, but not the task selection.
Second, a user changes the task selection by adding an additional task or removing an existing task.
Due to space limit, we put the detailed proof in our online technical report \cite{report}.
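As a sanity check, the potential function \eqref{eq:xxx:pf} can be evaluated directly and the defining identity \eqref{eq:pf-proof} verified numerically on small instances; the sketch below is again illustrative and follows the same conventions as the earlier snippets.
\begin{verbatim}
from collections import Counter

def potential(users, tasks, profile):
    """Potential function Phi(s) of the Task Scheduling Game."""
    m = Counter(k for sched in profile.values() for k in sched)
    # Harmonic reward term: sum_k V_k * (1 + 1/2 + ... + 1/M_k(s)).
    harmonic = sum(tasks[k].reward * sum(1.0 / j for j in range(1, cnt + 1))
                   for k, cnt in m.items())
    total_cost = 0.0
    for i, sched in profile.items():
        user = users[i]
        stops = [user.location] + [tasks[k].location for k in sched]
        distance = sum(abs(a - b) for a, b in zip(stops, stops[1:]))
        total_cost += (sum(user.exec_cost[k] for k in sched)
                       + distance * user.travel_cost)
    return harmonic - total_cost
\end{verbatim}
For tiny instances, one can enumerate the unilateral deviations of a user and check \eqref{eq:pf-proof} numerically with this helper.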
\begin{figure*}
\hspace{-5mm}
\centering
\includegraphics[width=2.3in]{MCS-simu-sw1}
~~
~~
\includegraphics[width=2.3in]{MCS-simu-sw3}
\vspace{-3mm}
\caption{Social Welfare under SE and NE: (a) Walking Users; (b) Bike Users; (c) Driving Users.}
\label{fig:mcs-simu-sw}
\vspace{-3mm}
\end{figure*}
\subsection{Nash Equilibrium -- NE}
We now analyze the NE of the proposed game. As shown in \cite{potential},
an appealing property of a potential game is that it
always admits an NE. In addition, any strategy profile $\bs^*$ that maximizes the potential function $\Phi(\bs)$ is an NE. Formally,
\begin{lemma}\label{lemma:mcs:NE}
The Task Scheduling Game $\mathcal{T}$ has at least one NE $\bs^* \triangleq (\bs_1^*,\cdots,\bs_N^*)$, which is given by
\begin{equation}\label{eq:xxx:NE}
\begin{aligned}
\bs^* \triangleq \arg \max_{\bs} &\ \ \Phi(\bs)
\\
s.t.\ & \bs_i \mbox{ satisfies \eqref{eq:mcs-fs1}-\eqref{eq:mcs-fs3}}, \ \forall i\in \N,
\end{aligned}
\end{equation}
where $\Phi(\bs)$ is the potential function defined in \eqref{eq:xxx:pf}.
\end{lemma}
This lemma can be easily proved by observing that
$$
\u_i (\bs_i^*, \bs_{-i}^* ) - \u_i (\bs_i', \bs_{-i}^* )
=
\Phi (\bs_i^*, \bs_{-i}^* ) - \Phi (\bs_i', \bs_{-i}^* ) \geq 0,
$$
for any user $i\in \N$ and $\bs_i ' \subseteq \S_i$. The last inequality follows because $(\bs_i^*, \bs_{-i}^*)$ is the maximizer of
$\Phi (\bs)$ by \eqref{eq:xxx:NE}.
\subsection{Efficiency of NE}
Now we show that the NE of the Task Scheduling Game
$\mathcal{T}$, especially the one given by \eqref{eq:xxx:NE}, is often \emph{not} efficient.
Specifically, a strategy profile $\bs^\circ$ is socially efficient (SE), if it maximizes the following social welfare:
\begin{equation}\label{eq:xxx:sw}
\textstyle
\W(\bs) = \sum\limits_{k \in \S} V_k \cdot \big( \mathbf{1}_{ M_k(\bs) \geq 1 } \big)
- \sum \limits_{i \in \N } \big( c_i^{\textsc{ex}} (\bs_i ) + c_i^{\textsc{tr}} (\bs_i ) \big) ,
\end{equation}
where $\mathbf{1}_{ M_k(\bs) \geq 1 } = 1$ if $M_k(\bs) \geq 1$, and $0$ otherwise. The first term denotes the total reward collected by all users, where the reward $V_k$ of a task $k$ is collected if at least one user executes the task successfully (i.e., $M_k(\bs) \geq 1$).
The second term denotes the total cost incurred on all users.
Formally, the socially efficient solution $\bs^\circ$ is given by:
\begin{equation}\label{eq:xxx:soooo}
\begin{aligned}
\bs^\circ \triangleq \arg \max_{\bs} &\ \ \W(\bs)
\\
s.t.\ & \bs_i \mbox{ satisfies \eqref{eq:mcs-fs1}-\eqref{eq:mcs-fs3}}, \ \forall i\in \N.
\end{aligned}
\end{equation}
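The social welfare \eqref{eq:xxx:sw} can be computed in the same style as the potential function above; the sketch is again ours and only for illustration.
\begin{verbatim}
def social_welfare(users, tasks, profile):
    """Social welfare W(s): each reward V_k is collected once if at least
    one user executes task k, minus all execution and travelling costs."""
    executed = {k for sched in profile.values() for k in sched}
    total_reward = sum(tasks[k].reward for k in executed)
    total_cost = 0.0
    for i, sched in profile.items():
        user = users[i]
        stops = [user.location] + [tasks[k].location for k in sched]
        distance = sum(abs(a - b) for a, b in zip(stops, stops[1:]))
        total_cost += (sum(user.exec_cost[k] for k in sched)
                       + distance * user.travel_cost)
    return total_reward - total_cost
\end{verbatim}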
By comparing $\W(\bs) $ in \eqref{eq:xxx:sw}
and
$ \Phi(\bs)$ in \eqref{eq:xxx:pf}, we can see that both functions have a similar structure, except for the
coefficients of $ V_k, k\in \S $ in the first term, i.e.,
$
\sum_{m=1}^{M_k(\bs)} \frac{1 }{m}
$
and
$ \mathbf{1}_{( M_k(\bs) \geq 1 )}
$.
We can further see that
$$
\textstyle
\sum\limits_{m=1}^{M_k(\bs)} \frac{1 }{m}
\geq \mathbf{1}_{( M_k(\bs) \geq 1 )},
$$
where the equality holds only when $M_k(\bs) = 1$ (both sides are $1$) or $0$ (both sides are $0$).
Namely, the coefficient for each $ V_k$ in $ \Phi(\bs) $ is no smaller than that in $\W(\bs) $.
This implies that for any task, users are more likely to execute the task at the NE than at the SE.
This leads to the following observation.
\begin{observation}
The task selections at the NE, especially those resulting from \eqref{eq:xxx:NE}, are more {aggressive}, compared with those at the SE resulting from maximizing $ \W(\bs) $.
\end{observation}
\ifodd 0
To illustrate this, we provide a simple example with one task and two users. Suppose that the reward of the task is $10$, the travelling costs of both users are zero, and the sensing costs of users are $4.8$ and $4.9$, respectively.
The SE outcome is: only user 1 executes the task, leading to a social welfare of $10-4.8 = 5.2$.
Under the NE, however, both users will choose to execute the task (as both can achieve a positive payoff even if they share the reward), leading to a social welfare of $10-4.8-4.9 = 0.3$.
\begin{figure}
\centering
\includegraphics[width=2.8in]{Figures/simu-1new}
\vspace{-2mm}
\caption{Social Welfare under SE and NE.}
\label{fig:simu-1}
\vspace{-4mm}
\end{figure}
In order to obtain more useful engineering insights, we also perform numerical studies to illustrate the efficiency loss at the NE (comparing with the SE) under different situations.
Figure \ref{fig:simu-1} illustrates the normalized social welfare under the NE (the blue bars) and the SE (the hollow bars with black borders).
Each bar group denotes the results under a particular task reward (e.g., $V_k = 0.2$ for the first group and $V_k = 1$ for the last group), and different bars in the same group correspond to different numbers of users (e.g., $N=2$ for the first bar and $N=14$ for the last bar in each group).
From Figure \ref{fig:simu-1}, we have the following observations.
\begin{observation}\label{ob:2}
The social welfare under SE increases with the number of users (for any possible task reward).
\emph{The reason is that with more users, it is more likely to choose users with smaller costs to execute the tasks.}
\end{observation}
\begin{observation}\label{ob:3}
When the task reward is small (e.g., $V_k = 0.2$), the social welfare under NE increases with the number of users.
\emph{The reason is similar as that for SE in Observation \ref{ob:2}: with more users, it is more likely to choose users with smaller costs to execute the tasks.}
\end{observation}
\begin{observation}\label{ob:5}
When the task reward is large (e.g., $V_k = 1$), the social welfare under NE decreases with the number of users.
\emph{The reason is that with more users it is more likely to choose multiple users to execute the same task, which will reduce the social welfare.}
\end{observation}
\begin{observation}\label{ob:4}
When the task reward is medium (e.g., $V_k \in [0.4,0.8]$), however, the social welfare under NE first increases and then decreases with the number of users.
\emph{The reason is similar as those in Observations \ref{ob:3} and \ref{ob:5}.}
\end{observation}
\fi
\section{Conclusion}\label{section:conclusion}
In this work, we study the task scheduling in the user-centric
UPS system by using a game-theoretic analysis. We formulate
the strategic interaction of users as a task scheduling game, and
analyze the NE by using a potential game. We further analyze
the efficiency loss at the NE under
different situations. There are several interesting directions for
future research. First, our analysis and simulations show
that the social efficiency loss at the NE can be up to 70\%.
Thus, it is important to design mechanisms to reduce
the efficiency loss. Second, the current model does not consider
the different efforts of users in executing a task. It is important
to incorporate such effort into the user decision.
\section{Introduction}
Over the past three decades, there has been rapid growth in the size of betting markets for professional and college sports. A survey by the American Gaming Association found that between 2018 and 2020 the legal U.S. sports betting market surged from \$6.6 billion to \$25.5 billion \citep{betting_growth}. Alongside the launch of several new sports betting markets, there was also adoption on a state level, most recently in Illinois and Colorado. Despite this increase in the popularity of sports betting, the general consensus has been that betting markets have stayed efficient. There have been some studies finding specific inefficiencies embedded within a small set of features or match-ups \citep{berkowitz,borghesi,gandar}, but nothing systematically inefficient. It is unlikely that these markets are profitable for amateur bettors, unless there was some extreme exogenous event that disrupted either these markets or the sports themselves.
The COVID-19 pandemic was just such an extreme event that affected nearly every professional and college sports league in the world in 2020 and 2021. The National Basketball Association (NBA) and National Hockey League (NHL) had to suspend their on-going seasons when the pandemic began \citep{nba,nhl}. Later, in the summer of 2020, the NBA resumed its season in an isolated environment where players were quarantined and no fans were allowed \citep{bubble}. This isolation \emph{bubble} was the NBA's attempt to let the season continue safely. While Major League Baseball (MLB) and the NHL resumed their 2020 seasons without fans, the National Football League (NFL) resumed with 13 teams allowing fans at partial capacity. The NBA's 2020-2021 season began with no or very limited fans in attendance, and then gradually fans were allowed to attend the games, depending on the hometown city's policies. COVID-19 affected other aspects of the sports besides fan attendance. Game schedules were altered to minimize the amount of travel for the teams. Players who tested positive for COVID-19 (or had contact exposure) were excluded from participating in games. This often resulted in games where superstars or key players were missing.
An interesting element of sports is their respective betting markets. One popular type of bet in these markets is known as the \emph{moneyline}. In this bet, the odds makers give payouts for each team in a game. The team with the higher payout is known as the \emph{underdog} because they have a lower implied probability of winning, assuming the payouts are chosen to make the bet have non-positive mean payout. In efficient markets, one cannot make a consistent profit by betting solely on underdogs or favorites. An interesting question is what impact COVID-19 had on the efficiency of betting markets. If COVID-19 created inefficiencies, in what sport did this occur, what was the nature of the inefficiencies, and how much could one earn by betting in these inefficient markets?
In this work we analyze the impact of COVID-19 on the efficiency of moneyline betting markets for a variety of sports. Our main finding is that the NBA experienced incredible inefficiencies in its betting markets during COVID-19, while other major sports had markets that stayed efficient. The inefficiency in the NBA market was so stark that simple betting strategies were profitable. For instance, consider the strategy where each day one bets an equal amount on the underdog in each game. Using this simple underdog strategy from when the NBA COVID-19 bubble began until the end of the 2020 season gave a return of 16.7\%. More clever strategies that reinvest the winnings gave returns as high as 2,666\%. We conduct a deeper analysis of the data to explain the nature of the NBA inefficiencies. We also test different betting strategies to see which ones are most profitable in this market.
This paper is organized as follows. We begin by reviewing related literature in Section \ref{sec:review}. Then in Section \ref{sec:data} we describe our dataset on moneyline odds for various sports. In Section \ref{sec:covid} we present a detailed analysis of the efficiency of the moneyline betting markets for all sports across multiple seasons, including during the COVID-19 pandemic. We study different betting strategies given the moneyline inefficiencies in Section \ref{sec:betting}. Finally, we conclude in Section \ref{sec:conclusion}.
\section{Related Literature}\label{sec:review}
Many studies have looked at the efficiency in the sports betting markets. \cite{gandar} find that the home-field advantage is efficient for the NFL, MLB, and NBA in both the regular season and playoff games. However, \cite{borghesi} finds that in the NFL the realized winning probability for the home-underdog is significantly larger during the later part of a season. \cite{woodland} find for college basketball and football that there is clear evidence of the favorite-longshot bias in moneyline betting markets and that betting on heavy favorites offers a near zero return over several years, suggesting that the markets for these two sports are efficient within transaction costs. \cite{gray} find that in select seasons a few linear models can be used to generate significant profits in NFL betting markets, but for most years the market is efficient.
Much of the COVID-19 related sports research focused not on betting markets, but on the potential impacts of the virus on athletes at both the microscopic and macroscopic level. For example, \cite{verwoert} consider a tree-based protocol dictating the stratification of athletes from a cardiovascular perspective. The logic branches on asymptomatic/regional/local symptoms and hospitalization. Similarly, researchers have provided a cardio-vascular based risk-mitigation approach for the eligibility of an athlete to return to a sport depending on his or her state (positive test or waiting for test) \citep{schellhorn,baggish}. Meanwhile, \cite{wong} studied the transmission risk of COVID-19 in soccer based on the degree of contact between players and their actions. For example, using forwards and mid-fielders, they detail the contagion risk for a player over various time intervals of the game.
\section{Moneyline Definitions and Datasets}\label{sec:data}
\subsection{Moneyline Bets}
We begin by presenting a brief description of moneyline bets. In a game between two teams, oddsmakers will set moneyline odds for each team. If one team's odds are strictly greater than the other team's odds, we refer to the team with the larger odds as the \emph{underdog} and the other team as the \emph{favorite}. If both teams' odds are equal, we refer to them both as underdogs. The moneyline bet works as follows. Consider a game where team $u$ has moneyline odds $o_u$. In typical moneyline bets we either have $o_u\leq-100$ or $o_u\geq 100$, and each case has a different payout. First, we consider $o_u\geq 100$. In this case, if one bets 100 USD on $u$ and the team wins, then one receives a payout of $100+o_u$ USD (a net profit of $o_u$ USD). If on the other hand, $u$ loses, then one incurs a loss of 100 USD. Second, we consider $o_u\leq -100$. In this case, if one bets $|o_u|$ USD on $u$ and the team wins, then one receives a payout of $100+|o_u|$ USD (a net profit of $100$ USD). If $u$ loses, then one incurs a loss of $|o_u|$ USD.
The implied probability of a team winning is determined by the moneyline odds and the payout of the moneyline bet. Let us define the implied probability of winning for a team $u$ as $p_u$. We can calculate the implied probability by setting the expected profit of a moneyline bet on the team equal to zero. For $o_u\geq 100$, this gives $o_up_u -100(1-p_u)=0$ and for $o_u\leq -100$, this gives $100p_u +o_u(1-p_u)=0$. Solving for $p_u$ gives
\begin{align}
p_u &= \frac{100}{100+|o_u|}, ~~~o_u\geq 100.\label{eqn:pdog}\\
p_u &= \frac{|o_u|}{100+|o_u|}, ~~~o_u\leq -100.\label{eqn:pfav}
\end{align}
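For example, the conversion from moneyline odds to implied probabilities in \eqref{eqn:pdog}-\eqref{eqn:pfav} can be written as a small helper; this is an illustrative sketch, not code used to produce the results below.
\begin{verbatim}
def implied_probability(odds):
    """Implied win probability from American moneyline odds:
    p = 100/(100+|o|) for o >= 100, and p = |o|/(100+|o|) for o <= -100."""
    if abs(odds) < 100:
        raise ValueError("moneyline odds are expected to satisfy |odds| >= 100")
    return (100.0 / (100 + abs(odds)) if odds >= 100
            else abs(odds) / (100 + abs(odds)))

# Example: an underdog at +250 and a favorite at -300.
print(implied_probability(250))   # 0.2857... = 100/350
print(implied_probability(-300))  # 0.75      = 300/400
\end{verbatim}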
\subsection{Underdog Profit Margin}
We now consider measures to quantify the efficiency of betting markets. There are multiple ways to do this, but in this work we focus on one simple measure which will prove useful. We assume a bettor places a 1 USD bet on the underdog in each game in a set of games. Then the efficiency measure, which we refer to as the \emph{average underdog profit margin}, is the average profit margin per game using this underdog betting strategy. To provide an expression for this measure, we define some terms. For a team $u$, let $p_u$ be the implied win probability based on the moneyline odds, and let $q_u$ be the actual win probability. Let $W_u$ be a Bernoulli random variable that is one if the team wins the game. With our notation we have $\mathbf E[W_u] = q_u$. Assume we bet 1 USD on the underdog in $n$ different games. Then a simple calculation shows that the average underdog profit margin, which we define as $\alpha$, is equal to
\begin{align}
\alpha = -1 + \frac{1}{n}\sum_{u=1}^n \frac{W_u}{p_u}\label{eq:upm}.
\end{align}
To gain insights into this measure, we take the expectation over the uncertainty in game outcome to obtain
\[
\mathbf E[\alpha] = -1 + \frac{1}{n}\sum_{u=1}^n \frac{q_u}{p_u}.
\]
We see from this expression that $\alpha$ measures how much the mean of the ratio $q_u/p_u$ deviates from one. If the oddsmakers set the odds accurately so $p_u=q_u$, then $\alpha$ is zero. If the oddsmakers are efficiently setting underdog moneyline odds, $\alpha$ should be negative, implying one cannot make a consistent profit by betting on underdogs. However, if there are many upsets and underdogs win more frequently than implied by the odds, $\alpha$ will be positive.
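In code, the average underdog profit margin \eqref{eq:upm} can be estimated directly from game outcomes and implied probabilities; the following sketch is ours, with placeholder inputs.
\begin{verbatim}
import numpy as np

def underdog_profit_margin(won, implied_prob):
    """alpha = -1 + (1/n) * sum_u W_u / p_u, for 1 USD underdog bets."""
    won = np.asarray(won, dtype=float)          # W_u: 1 if the underdog won
    p = np.asarray(implied_prob, dtype=float)   # p_u: implied win probability
    return -1.0 + np.mean(won / p)

# Toy example: one upset in three games, each with implied probability 0.25,
# gives alpha = -1 + (1/3)*(1/0.25) = 1/3.
print(underdog_profit_margin([1, 0, 0], [0.25, 0.25, 0.25]))
\end{verbatim}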
\subsection{Data}
We collected the moneyline odds for thousands of games from a website that archived this data for multiple sports across multiple seasons \citep{sportsbookreviewsonline}. Our dataset covers professional sports leagues such as the NFL, NBA, NHL, and MLB, along with college sports such as NCAA Football (NCAAF) and NCAA Basketball (NCAAB). For each game, we have the initial moneyline odds for both teams, the date of the game, and the game outcome. In total we have moneyline data for over 130 thousand games spanning 14 years. The complete dataset can be obtained from our project repository \citep{github}.
Our data covers seasons as far back as 2007, but the data for MLB begins at 2010. For this reason, in our analysis we consider seasons between 2010 and 2021 for all sports. Also, in order to exclude potentially erroneous records in our dataset, we require the following conditions for a game to be included:
\begin{enumerate}
\item The favorite odds must be less than or equal to -100
\item The underdog odds must be either greater than 100 or between -200 and -100.
\end{enumerate}
There are 109,249 games which satisfy these constraints in our data set and 40 games which do not. Table \ref{table:data} contains a summary of the dataset.
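As an illustration only, the two conditions can be applied to a table of games as in the following sketch; the column names \texttt{fav\_odds} and \texttt{dog\_odds} are placeholders and not the actual field names in the archived data.
\begin{verbatim}
import pandas as pd

def filter_games(games: pd.DataFrame) -> pd.DataFrame:
    """Keep games whose moneyline odds satisfy the two filtering conditions.
    The column names are placeholders for this sketch."""
    fav_ok = games["fav_odds"] <= -100
    dog_ok = (games["dog_odds"] > 100) | games["dog_odds"].between(-200, -100)
    return games[fav_ok & dog_ok]
\end{verbatim}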
\begin{table}
\caption{The number of games for each sport in our moneyline odds dataset, along with the dates covered by each sport.}\label{table:data}
\centering
\begin{tabular}{@{}|l|l|l|c|@{}}
\toprule
\textbf{} & \textbf{Start Date} & \textbf{End Date} & \textbf{Number of Games} \\ \midrule
\textbf{MLB} & 2010-04-04 & 2020-10-27 & 25,599 \\\hline
\textbf{NBA} & 2007-10-30 & 2021-03-04 & 17,196 \\\hline
\textbf{NCAAB} & 2007-11-05 & 2021-02-16 & 56,052 \\\hline
\textbf{NCAAF} & 2007-08-30 & 2020-09-26 & 10,672 \\\hline
\textbf{NFL} & 2007-09-06 & 2021-02-07 & 3,740 \\\hline
\textbf{NHL} & 2007-09-29 & 2021-03-22 & 17,080 \\ \bottomrule
\end{tabular}
\end{table}
\section{Statistical Analysis of Betting Market Inefficiencies During COVID-19}\label{sec:covid}
COVID-19 disrupted certain games in our dataset, both in terms of game outcomes and betting market efficiency. In this section we conduct a statistical analysis comparing these properties with respect to underdogs for COVID-19 games and normal season games. To begin this analysis we had to determine which games to designate as COVID-19 games. The NBA and NHL experienced mid-season pauses due to COVID-19. The COVID-19 games for these sports were played after the resumption of the season. For the NFL and NCAAF, the season was not paused, so their COVID-19 games started in the fall of 2020. The NCAAB only cancelled the March Madness Tournament, but all regular season games were played before COVID-19. Table \ref{table:pause} contains the start dates of the COVID-19 games and the number of COVID-19 games played for each sport, along with the number of games in a normal season for reference. There is a range of games due to some teams receiving different treatment based on their win-loss records when COVID-19 began impacting game schedules.
\begin{table}[h]
\centering
\begin{tabular}{|l|l|l|l|}
\hline
\textbf{Sport} & \textbf{COVID-19 Games } & \textbf{Number of Normal } & \textbf{Number of COVID-19} \\
& \textbf{Start Date } & \textbf{ Season Games} & \textbf{Games} \\ \hline
\textbf{NBA} & 2020-07-30 & 81 & 64-72 \\ \hline
\textbf{NFL} & 2020-09-10 & 16 & 17 \\ \hline
\textbf{NHL} & 2020-08-01 & 68 & 57-63 \\ \hline
\textbf{MLB} & 2020-07-23 & 162 & 60 \\ \hline
\textbf{NCAAF} & 2020-09-03 & 10-13 & 4-12 \\ \hline
\textbf{NCAAB} & N/A & 25-35 & 25-35 \\ \hline
\end{tabular}
\caption{Start date of COVID-19 games, number of games per season, and number of COVID-19 games played for each sport in our dataset.}\label{table:pause}
\end{table}
\subsection{Underdog Profit Margin and Win Probability }
For each season we study two properties of underdogs: their actual win probability, and the efficiency of their moneyline odds. The underdog win probability shows how frequently upsets occurred. The underdog profit margin provides a monetary value to these upsets. Figure \ref{fig:covid_sport} shows the average underdog win probability and profit margin for the sports during normal and COVID-19 games. It can be seen that the NBA shows the largest increase in win probability and profit margin for underdogs during COVID-19. The NBA underdog win probability goes from approximately 0.3 to 0.4. For all other sports, there is not such a visible difference. In fact, if we look at the underdog profit margin in Figure \ref{fig:covid_sport}, we see that the NBA is the only sport showing a substantial positive mean during COVID-19.
We tested the significance of the differences in both underdog win probability and profit margin using multiple statistical tests. Significance was assessed using the Holm-Bonferroni correction \citep{holm} due to the testing of hypotheses for multiple sports. We used non-parametric tests such as the Kolmogorov-Smirnov (KS) and Mann-Whitney U (MW) test. Non-parametric tests are more appropriate for the underdog profit margin, which exhibits multi-modal behavior and has substantial skew.
The test results are shown in Table \ref{table:covid_sport}. The difference in underdog win probability for the NBA is statistically significant for all tests at a 1\% level. The only other sport showing a significant difference in win probability is the NCAAB, but this is only for the MW test. The NBA also shows a significant difference in its underdog profit margin for both non-parametric tests. The NCAAB underdog profit margin has a significant difference only for the MW test. However, from Figure \ref{fig:covid_sport} we see that the mean value is negative, which is less interesting from a betting perspective.
The average NBA underdog profit margin during COVID-19 is the only substantially positive value among all sports. To verify that this value is truly positive and not just a random fluctuation, we conduct a Wilcoxon signed-rank test \citep{wilcoxon1945} on the NBA COVID-19 games. This test is a non-parametric version of the standard paired t-test without any assumptions on normality. The null hypothesis is that the distribution of the values is symmetric about zero. We find that according to this test, we can reject the null hypothesis at the 1\% level (p-value $\leq 10^{-5}$). This indicates that the average underdog profit margin is positive for the NBA during COVID-19. We also carried out the test on a season by season basis, and found that the only two seasons for which this result holds was during the 2019-2020 and 2020-2021 COVID-19 seasons. We could not reject the null hypothesis for the remaining seasons.
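All of the tests above are available in standard scientific Python libraries. The sketch below (ours, with placeholder arrays of per-game underdog profit margins) indicates how the two-sample tests, the Holm-Bonferroni correction, and the one-sided signed-rank test could be applied; it is not the exact analysis script.
\begin{verbatim}
from scipy.stats import ks_2samp, mannwhitneyu, wilcoxon
from statsmodels.stats.multitest import multipletests

def compare_periods(margins_normal, margins_covid):
    """Two-sample KS and MW tests on per-game underdog profit margins."""
    ks_p = ks_2samp(margins_normal, margins_covid).pvalue
    mw_p = mannwhitneyu(margins_normal, margins_covid,
                        alternative="two-sided").pvalue
    return ks_p, mw_p

def positive_margin_test(margins):
    """One-sided Wilcoxon signed-rank test: margins centred above zero."""
    return wilcoxon(margins, alternative="greater").pvalue

# Holm-Bonferroni correction across the six sports (placeholder p-values).
pvals = [0.001, 0.29, 0.05, 0.08, 0.0006, 0.03]
reject, adjusted, _, _ = multipletests(pvals, alpha=0.05, method="holm")
\end{verbatim}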
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{figures/figure_uwp_upm_sport.pdf}
\caption{Plot of (left) mean underdog win probability and (right) mean underdog profit margin grouped by sport and COVID-19 status. The error bars represent 95\% confidence intervals.}
\label{fig:covid_sport}
\end{center}
\end{figure}
\begin{table}
\caption{P-values for different statistical tests for the underdog win probability and profit margin during COVID-19 versus normal time periods segmented by sport ($^*$ indicates significance at a 5\% level and $^{**}$ indicates significance at a 1\% level under the Holm-Bonferroni correction). The tests are Kolmogorov-Smirnov (KS) and Mann-Whitney U (MW).}
\centering
\begin{tabular}{|l|c|c||c|c|}
\toprule
\multirow{2}{*}{Sport} &
\multicolumn{2}{|c|}{Underdog win probability} &
\multicolumn{2}{|c|}{Underdog profit margin} \\
&KS & MW & KS & MW \\\midrule
NBA & $0.0010^{**}$ & $0.0000^{**}$ & $0.0010^{**}$ & $0.0000^{**}$ \\\hline
NFL & 1.0000 &0.2937 & 0.9957& 0.2693 \\\hline
NHL & 0.5572 &0.0541 & 0.5572& 0.1895\\\hline
MLB & 0.8812 &0.1173 & $0.0001^{**}$& 0.0838\\\hline
NCAAB & 0.0332 & $0.0006^{**} $ & 0.0207& $ 0.0010^{**} $ \\\hline
NCAAF & 0.5449 &0.0319 & 0.5449& 0.0326 \\\hline
\bottomrule
\end{tabular}\label{table:covid_sport}
\end{table}
\subsection{Underdog Profit Margin Versus Moneyline Odds for NBA COVID-19 Games}
We now examine NBA COVID-19 games more closely to understand which games have the highest underdog profit margin. The underdog moneyline odds cover a wide range. An interesting question is which segments of this range showed a high underdog profit margin. To answer this question, we segment the NBA games by their underdog implied win probabilities into bins ranging from zero to one. Table \ref{table:implied_prob_count} shows the bin intervals and the number of games in each bin.
We plot the mean underdog win probability and profit margin versus implied underdog probability bin in Figure \ref{fig:nba_bin}. We see that both quantities are greater during COVID-19 than during normal seasons for several bins. To assess the statistical significance of these differences, we conducted multiple tests. The results are shown in Table \ref{table:covid_nba_bins}. Because we are testing multiple bins simultaneously, we used the Holm-Bonferroni correction to assess significance. We find that only the $(0.2,0.3]$ bin has a significant difference in both metrics for the MW test at a 1\% level.
From this analysis we find evidence that games with implied underdog win probabilities between 0.2 and 0.3 provided much of the inefficiency in the NBA betting markets during COVID-19. This corresponds to games where the underdog odds are between 233 and 400. From Table \ref{table:implied_prob_count} we see that there are 153 games in this bin. This represents 21.7\% of NBA games played during COVID-19. Therefore, we see that a small fraction of the NBA games played during COVID-19 are responsible for the positive underdog profit margin during this period.
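The segmentation by implied underdog win probability can be reproduced with a simple binning step, sketched below with placeholder column names (assuming a DataFrame with one row per NBA game).
\begin{verbatim}
import pandas as pd

def bin_by_implied_probability(nba: pd.DataFrame) -> pd.DataFrame:
    """Summarize underdog outcomes by implied-probability bin and COVID-19
    status. Column names (dog_implied_prob, dog_won, dog_profit, covid)
    are placeholders for this sketch."""
    bins = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 1.0]
    nba = nba.assign(prob_bin=pd.cut(nba["dog_implied_prob"], bins=bins))
    return (nba.groupby(["prob_bin", "covid"])
               .agg(win_rate=("dog_won", "mean"),
                    profit_margin=("dog_profit", "mean")))
\end{verbatim}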
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{figures/figure_nba_upm_bucket.pdf}
\caption{Plot of (left) mean underdog win probability and (right) mean underdog profit margin for the NBA grouped by underdog implied win probability (the value on the x-axis is the inclusive upper bound of the implied underdog probability bin.) The error bars represent 95\% confidence intervals.}
\label{fig:nba_bin}
\end{center}
\end{figure}
\begin{table}
\caption{Number of NBA games in each implied probability bin for normal and COVID-19 time periods.}\label{table:implied_prob_count}
\centering
\begin{tabular}{|c|c|c|}
\toprule
Implied probability & Number of NBA games & Number of NBA games \\
bin & (Normal) & (COVID-19) \\\hline
$(0.0,0.1]$ & 539 & 5 \\\hline
$(0.1,0.2]$ & 1,955 & 67 \\\hline
$(0.2,0.3]$ & 2,603 & 153 \\\hline
$(0.3,0.4]$ & 3,627 & 240 \\\hline
$(0.4,0.5]$ & 3,445 & 210 \\\hline
$(0.5,1.0]$ & 374 & 30 \\\hline
\bottomrule
\end{tabular}
\end{table}
\begin{table}
\caption{P-values for different statistical tests for the NBA underdog win probability and profit margin during COVID-19 versus normal time periods segmented by underdog implied win probability ($^*$ indicates significance at a 5\% level and $^{**}$ indicates significance at a 1\% level under the Holm-Bonferroni correction). The tests are Kolmogorov-Smirnov (KS) and Mann-Whitney U (MW).}\label{table:covid_nba_bins}
\centering
\begin{tabular}{|c|c|c||c|c|}
\toprule
\multirow{2}{*}{Underdog implied } &
\multicolumn{2}{|c|}{Underdog win probability} &
\multicolumn{2}{|c|}{Underdog profit margin} \\
win probability&KS & MW & KS & MW \\ \midrule
$(0.0,0.1]$ & 0.5524 & $0.0025^{*}$ & 0.5524 & 0.0025 \\\hline
$(0.1,0.2]$ & 0.8359 & 0.0514 & 0.8359 & 0.0537 \\\hline
$(0.2,0.3]$ & 0.0191 & $0.00021^{**}$ & 0.0186 & $0.0003^{**}$\\\hline
$(0.3,0.4]$ & 0.8743 &0.1118 & 0.8457 & 0.1139 \\\hline
$(0.4,0.5]$ & 0.6038 & 0.0652 & 0.5935 & 0.0862 \\\hline
$(0.5,1.0]$ & 1.0000 & 0.4333 & 0.9999 & 0.2974 \\\hline
\bottomrule
\end{tabular}
\end{table}
\subsection{Post-COVID-19 NBA Games}
Though the COVID-19 pandemic continued throughout the NBA 2020-2021 season, there was a point when the league began changing its policies.
The first return from COVID-19 in the 2019-2020 season was played in the isolated bubble. The beginning of the 2020-2021 season was played in the normal arenas, but with no or very limited fans in attendance. After the NBA All-Star Game, many teams began allowing larger numbers of fans to attend the games \citep{fans_return}. We wish to understand the impact of fans on the performance of the betting markets.
We plot the average underdog profit margin versus season for the NBA in Figure \ref{fig:NBA_post_covid}. We see that the average profit margin became very positive in the COVID-19 seasons, but then became negative during the post-COVID-19 season when fans returned. We conduct a one-sided Wilcoxon signed-rank test on the two COVID-19 seasons and the post-COVID-19 season using a Holm-Bonferroni correction to verify the sign of these profit margins. The resulting average underdog profit margins and p-values are shown in Table \ref{table:postcovid}. As can be seen, the average underdog profit margin is negative for the post-COVID-19 games when fans returned in attendance, while it is positive for the COVID-19 games with no fans (significant at a 1\% level).
\begin{figure}
\begin{center}
\includegraphics[scale = 0.5]{figures/figure_nba_upm_post_COVID.pdf}
\caption{Plot of mean underdog profit margin for the NBA versus season and COVID-19 game status. }
\label{fig:NBA_post_covid}
\end{center}
\end{figure}
\begin{table}
\caption{Average underdog profit margin and one-sided p-values for the Wilcoxon signed-rank test for the NBA underdog profit margin during the 2019-2020 and 2020-2021 COVID-19 seasons and the 2020-2021 post-COVID-19 season ($^*$ indicates significance at a 5\% level and $^{**}$ indicates significance at a 1\% level under the Holm-Bonferroni correction). }
\label{table:postcovid}
\centering
\begin{tabular}{|c|c|}
\hline
Season & Average Underdog \\
&Profit Margin (p-value) \\\hline
2019-2020 COVID-19 & 0.17 ($0.001^{**}$) \\\hline
2020-2021 COVID-19 & 0.10 ($0.001^{**}$) \\\hline
2020-2021 Post-COVID-19 & -0.9 ($0.001^{**}$) \\\hline
\end{tabular}
\end{table}
\subsection{Discussion}
Our analysis has shown that the NBA had large inefficiencies in their moneyline betting markets during COVID-19, while other sports did not. If we assume that the oddsmakers are good at incorporating known information to set the odds for the games, then any inefficiency must come from some unaccounted factors. These factors may arise from a combination of the structure of the sports leagues, the nature of the gameplay, and the impact of COVID-19 on the gameplay.
We begin with the structure of the leagues. By structure, we refer to the tendency for underdogs to win games. In leagues with a concentrated distribution of talent, one would expect underdogs to win more frequently. In fact, for a league where all teams are equally skilled, we would expect the underdog win probability to be near 0.5. From Figure \ref{fig:covid_sport}, we see that the sports with the lowest underdog win probabilities are the two college sports (NCAAF and NCAAB). The spectrum of talent is quite wide in college sports, as they are not professional. Therefore, one would expect in some games upsets to be incredibly unlikely. While we did see a small increase in the underdog win probability for the NCAAB, the average underdog profit margin was not positive for both of these college sports. Therefore, it seems that COVID-19 was not able to impact the games enough to overcome the wide talent spectrum in the college leagues.
We next consider the nature of gameplay. The highest average underdog win probabilities during normal season games belong to the NHL and MLB, with values exceeding 0.4. This may be due to how these sports are played. Table \ref{table:ppg} shows statistics for the sum of points scored by both teams in a game, which we refer to as \emph{total points}. We see that the NHL and MLB have the lowest average total points and highest coefficient of variation (standard deviation divided by mean) for total points. This suggests that scoring is rare and the points scored in a game can fluctuate greatly. These fluctuations may be due to random factors which cannot be incorporated into the odds. Baseball involves hitting a small ball thrown at a high velocity with a narrow bat. Hockey involves hitting a puck into a small net guarded by a goalie. Randomness plays a large role in both of these sports. This is likely why the underdog win probability is so large. We saw that the average underdog profit margin of the MLB and NHL were both negative. Therefore, it appears that the random factors arising from COVID-19 could not increase the overall randomness such that the betting markets became inefficient.
The NBA's average underdog win probability was near 0.3 during normal seasons, and increased to 0.38 during COVID-19. From Table \ref{table:ppg} we see that the NBA has the lowest coefficient of variation of total points. Therefore, NBA basketball is inherently less random than other sports. The major impact of COVID-19 was to remove live audiences from the games. In the bubble, there was also no travel and all teams lived in a closed environment. However, we saw in Figure \ref{fig:NBA_post_covid} that when the NBA went from the bubble to their own arenas, but without fans, the average underdog profit margin stayed positive. However, once audiences were allowed to attend the games after the All-Star Game, the average underdog profit margin became negative again. Therefore, it seems likely that the absence of fans at the games was a cause of the betting market inefficiency. It is not clear why this is the case. One hypothesis is that when fans are removed, the home team advantage is eliminated. The NBA is a professional league, so it is likely the skill level of the players is concentrated. If this is the case, then when the home team advantage is absent, the game becomes more susceptible to randomness. In fact, the average underdog win probability increased during COVID-19 to a value comparable to more random sports such as hockey or baseball. Our analysis shows that the oddsmakers were not able to account for this, resulting in the inefficiency in the markets.
\begin{table}[t]
\centering
\begin{tabular}{|l|l|r|r|r|r|r|}
\hline
Total Points per Game & NBA & NFL & NHL & MLB & NCAAB & NCAAF \\ \hline
Mean & 205.3 & 45.3 & 5.6 & 8.8 & 139.6 & 55.6 \\ \hline
Standard deviation & 22.1 & 14.1 & 2.3 & 4.4 & 19.8 & 18.4 \\ \hline
Coefficient of variation & 0.11 & 0.31 & 0.41 & 0.50 & 0.14 & 0.33 \\ \hline
\end{tabular}
\caption{Statistics of the total points per game (sum of the points scored by each team in a game) for different sports.}\label{table:ppg}
\end{table}
\section{Underdog Betting Strategies}\label{sec:betting}
The analysis in Section \ref{sec:covid} showed that the NBA average underdog profit margin was positive during COVID-19. This suggests that one could have made a profit by betting on NBA underdogs during this period. From Figure \ref{fig:covid_sport} we see that just betting an equal amount of money on the underdog in each game results in a 16.7\% profit margin. In this section we explore how much more profit could be achieved with more complex betting strategies.
\subsection{Betting Scenario}
We consider a scenario where we place daily bets on the underdogs in each game. The bets for games played in a single day are placed simultaneously. This is close to what would be done in practice as many games are played at the same time. The bankroll on day $t$ is denoted $M_t$ and we begin with a bankroll of $M_0$. We use the following strategy to determine the amount of the bankroll to bet each day. We select a value $\lambda\in[0,1]$, and each day we only reinvest a fraction $\lambda$ of the total bankroll on the games.
After designating $\lambda M_t$ for betting on day $t$, we must decide how to allocate these funds across the available games. In practice we would not know a priori which games are profitable. Therefore, we must consider strategies which only utilize information available before the games begin. For an underdog team $u$ in a game, the information we consider is the implied underdog win probability $p_u$ which is set by the betting market. We must choose a strategy that maps the $p_u$ to a bet amount. To do this, we define $w_u\in [0,1]$ as the fraction of the allocated bankroll to bet on the underdog. With this notation the wager on $u$ is given by $ \lambda M_tw_u$. To specify $w_u$, we select a non-negative weight function $f(p_u)$. Then $w_u$ is given by
\begin{align*}
w_u = \frac{f(p_u)}{\sum_{v\in G} f(p_v)},
\end{align*}
where we denote the set of games played on the day as $G$.
There are many possibilities for $f(p_u)$. If we assume that the implied underdog win probability $p_u$ equals the true win probability $q_u$, then the underdog bet has a mean payout of zero. In this case, underdog bets can be distinguished by other statistics, such as their variance. For a 1 USD moneyline bet on an underdog with implied and true win probabilities both equal to $p_u$, the variance is $(1-p_u)/p_u$. This means that games with lower underdog win probabilities have a higher variance. Risk-averse strategies would place smaller wagers on high variance bets, given that the bets have equal means. This translates to weight functions $f(p_u)$ that are monotonically increasing in $p_u$. In contrast, risk-seeking strategies would have monotonically decreasing weight functions. We list several different choices for the weight functions in Table \ref{table:weights}. We consider both risk-averse and risk-seeking strategies. The weight functions for risk-seeking strategies are the inverse of the implied underdog win probability, the inverse standard deviation of a Bernoulli random variable for the underdog winning (with $p_u\leq 0.5$), and the standard deviation of a 1 USD underdog moneyline bet. The risk-averse weight functions are simply the inverse of these functions. We also consider a uniform weight function, which has no risk-preference.
\begin{table}[h]
\centering
\begin{tabular}{|l|l|}
\hline
\textbf{Name} & \textbf{Weight Function } \\ \hline
\textbf{Uniform} & $1$ \\ \hline
\textbf{Probability} & ${p_u}$ \\\hline
\textbf{Inverse Probability} & $\frac{1}{p_u}$ \\ \hline
\textbf{Bernoulli } & $\sqrt{p_u(1-p_u)}$ \\ \hline
\textbf{Inverse Bernoulli } & $\frac{1}{\sqrt{p_u(1-p_u)}}$ \\ \hline
\textbf{Moneyline } & $\sqrt{\frac{1-p_u}{p_u}}$ \\ \hline
\textbf{Inverse Moneyline } & $\sqrt{\frac{p_u}{1-p_u}}$ \\ \hline
\end{tabular}
\caption{Weight functions $f(p_u)$ used in different underdog betting strategies.}\label{table:weights}
\end{table}
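To make the procedure concrete, the sketch below (ours, not the original simulation code) implements the daily bankroll dynamics for a given reinvestment fraction $\lambda$ and one of the weight functions in Table \ref{table:weights}.
\begin{verbatim}
import numpy as np

# Weight functions f(p_u) from the table above; the keys are ours.
WEIGHTS = {
    "uniform":             lambda p: np.ones_like(p),
    "probability":         lambda p: p,
    "inverse_probability": lambda p: 1.0 / p,
    "bernoulli":           lambda p: np.sqrt(p * (1.0 - p)),
    "inverse_bernoulli":   lambda p: 1.0 / np.sqrt(p * (1.0 - p)),
    "moneyline":           lambda p: np.sqrt((1.0 - p) / p),
    "inverse_moneyline":   lambda p: np.sqrt(p / (1.0 - p)),
}

def simulate(days, lam, weight="inverse_probability", bankroll=100.0):
    """Bet a fraction `lam` of the bankroll on each day's underdogs.
    `days` is a sequence of (implied_probs, outcomes) pairs, one per day."""
    f = WEIGHTS[weight]
    for probs, won in days:
        probs = np.asarray(probs, dtype=float)
        won = np.asarray(won, dtype=float)
        w = f(probs)
        w = w / w.sum()                 # fraction of the daily stake per game
        stakes = lam * bankroll * w
        # A 1 USD bet on an underdog with implied probability p returns
        # (1 - p) / p in profit if the underdog wins, and -1 otherwise.
        profit = np.where(won == 1, stakes * (1.0 - probs) / probs, -stakes)
        bankroll += profit.sum()
        if bankroll <= 0:               # ruin
            return 0.0
    return bankroll
\end{verbatim}
With $\lambda = 1$ and the inverse-probability weight, this corresponds to the most aggressive reinvestment strategy evaluated in the next subsection.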
\subsection{Betting Performance}
We test different betting strategies by varying $\lambda$ and $f(p_u)$. For $\lambda$ we use values between zero and one spaced 0.1 apart. We start with a bankroll of $M_0=100$ USD and bets are placed on all games in the COVID-19 portion of the 2019-2020 season, and all games in the 2020-2021 season before the All-Star Game. Figure \ref{fig:return_f_lambda} shows the return for each strategy. We see that the return is maximized for the strategy where $\lambda = 1.0$ and the weight function is inverse probability. The corresponding return is 2,666 USD, or nearly a 26-fold gain in the initial investment. The inverse probability weight is a risk-seeking strategy and places more bets on games with high underdog odds. However, all weight functions give similar returns when the entire bankroll is bet, except for the probability weight, which ends up losing the entire bankroll. We compare the time evolution of the returns for each weighting function with $\lambda = 1.0$ in Figure \ref{fig:covid_weighting_reinvest}. As can be seen, the probability weight function loses the entire bankroll within a month. All other weight functions show very similar growth for the return over the course of the COVID-19 seasons.
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{figures/figure_returns.jpg}
\caption{Returns [USD] from an initial investment of 100 USD of NBA underdog betting strategies during COVID-19 for different weight functions and values of the reinvestment fraction $\lambda$. A value of zero indicates ruin. }
\label{fig:return_f_lambda}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale = 0.33]{figures/figure_nba_cumulative_returns_timeseries.pdf}
\caption{Plot of the cumulative returns of NBA underdog betting strategies during COVID-19 for different weight functions and a reinvestment fraction of $\lambda=1.0$. The initial investment is 100 USD. }
\label{fig:covid_weighting_reinvest}
\end{center}
\end{figure}
The final return is not the only metric we can use to evaluate the betting strategies. Often, one is more concerned with the risk taken to achieve a given return. One way to quantify this is the Sharpe ratio, which equals the mean return divided by the standard deviation of the return. Strategies with large Sharpe ratios achieve a return with very little risk. The Sharpe ratio assumes the returns are normally distributed. However, in the case of moneyline bets, the returns have a bimodal distribution (in fact, for a given game the returns take two values, one negative and one positive). For this reason, we define a more robust version of the Sharpe ratio as
\begin{equation}
\gamma = \frac{\text{median}(R)}{\text{MAD}(R)} \label{eq:gamma}
\end{equation}
where $R$ is the daily return, and MAD is the median absolute deviation. Both the numerator and denominator of this modified Sharpe ratio are robust to outliers, making this measure more appropriate for the returns associated with moneyline bets.
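In code, the robust ratio of Equation~(\ref{eq:gamma}) and the ordinary Sharpe ratio are both short computations over the daily return series; here MAD is the raw median absolute deviation, without the usual normal-consistency factor.
\begin{verbatim}
import numpy as np

def robust_sharpe(daily_returns):
    """gamma = median(R) / MAD(R)."""
    r = np.asarray(daily_returns, dtype=float)
    mad = np.median(np.abs(r - np.median(r)))
    return np.median(r) / mad

def sharpe(daily_returns):
    """Ordinary Sharpe ratio (no risk-free rate), for comparison."""
    r = np.asarray(daily_returns, dtype=float)
    return r.mean() / r.std(ddof=1)
\end{verbatim}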
We show the robust and normal Sharpe ratios for different betting strategies in Figures \ref{fig:sharpe_robust} and \ref{fig:sharpe}. We see that $\lambda = 1.0$ does not have the highest Sharpe ratio. Rather, the highest Sharpe ratios come from the Bernoulli weight with $\lambda$ = 0.1. This is a risk-averse strategy as most money is not placed on low probability games and only a small fraction of the bankroll is reinvested each day. The return is 291.73 USD for this strategy, which is much lower than the 26-fold gain achieved with a more risk-seeking approach.
\begin{figure}
\begin{center}
\includegraphics[scale = 0.8]{figures/figure_sharpe_robust.jpg}
\caption{Robust Sharpe ratios of NBA underdog betting strategies during COVID-19 for different weight functions and values of the reinvestment fraction $\lambda$. }
\label{fig:sharpe_robust}
\end{center}
\end{figure}
\begin{figure}[]
\begin{center}
\includegraphics[scale = 0.8]{figures/figure_sharpe.jpg}
\caption{Sharpe ratios of NBA underdog betting strategies during COVID-19 for different weight functions and values of the reinvestment fraction $\lambda$. }
\label{fig:normal_sharpe}
\label{fig:sharpe}
\end{center}
\end{figure}
\section{Conclusion}\label{sec:conclusion}
COVID-19 affected nearly all professional and college sports. However, our analysis found that the betting markets for most sports remained efficient, except for the NBA. Here we saw that there were many more upsets than predicted by the odds makers. We are not able to precisely identify the reasons for this inefficiency, and it remains an open question as to why it occurred. However, we do have supporting evidence that it may be due to the more frequent scoring in basketball combined with the absence of fans when the NBA season resumed during COVID-19. For whatever reason, odds makers were not able to adjust for the impact of these factors.
This NBA market inefficiency provided a lucrative opportunity for bettors. We found that most of the inefficiency was concentrated around a small percentage of the games where the underdog had odds between 233 and 400. Simply betting an equal amount on every underdog resulted in a 16.7\% return, while more complex strategies which combine game specific allocations and reinvestment of winnings resulted in a 26-fold gain in the initial investment. Our work shows that sports betting markets are generally efficient, but occasionally odds makers are not able to correctly account for the impact of extreme events.
\newpage
\section{Introduction}\label{intro}
Although it has been around for over a century, margarine was not
always the preferred tablespread in the United States. In 1930, per
capita consumption of margarine was only 2.6 pounds (vs. 17.6 pounds
of butter). Times have changed for the better, though (if you're a
margarine manufacturer, that is). Today, per capita consumption of
margarine in the United States is 8.3 pounds (including vegetable
oil spreads) whereas butter consumption is down to about 4.2 pounds.
Furthermore, as shown in Figure \ref{frontier}, it is always butter,
not margarine, that is traded off against guns. This leads to the announcement of our result.
\begin{theorem}
\label{marg-butt-th}
In a reverse dictionary, $(\mbox{\bi marg}\succ\mbox{\bi butt\/}\; \land\;
\mbox{\bi arine}\succ\mbox{\bi er})$.
Moreover, continuous reading of a compact subset of the dictionary
attains the minimum of patience at the moment of giving up.
\end{theorem}
The proof will be given in the e-companion to this paper.
\begin{figure}[t]
\begin{center}
\includegraphics[height=2in]{Sample-Figure}
\caption{Production Possibilities Frontier.} \label{frontier}
\end{center}
\end{figure}
\section{Motivation}
Margarine or butter? According to the website of the
\cite{namm}, ``Despite the
recommendations of health professionals and leading health
organizations to choose margarine, many consumers are confused.''
But whether or not they are confused, consumers are voting with
their pocketbooks. The
\cite{abi}, whose
slogan is ``Things are better with butter!'', presents many tempting
recipes on its website, but also reports declining sales in its
marketing releases.
\begin{hypothesis}
Things are better with butter.
\end{hypothesis}
Indeed, even though a reputed chain email letter claims that margarine is ``but
one molecule from being plastic''
\citep{btc},
American consumers appear to be
sliding away from butter. Given this trend, a historical review of
margarine is in order.
\begin{lemma}
Many consumers are confused.
\end{lemma}
\begin{lemma}
Whether or not the consumers are confused, they are voting with
their pocketbooks.
\end{lemma}
\begin{proposition}
American consumers are sliding away from butter.
\end{proposition}
\section{Historical Timeline}
The following are milestones in the history of margarine as
reported by the
\cite{namm2}.
Note that they have been transcribed verbatim here, which
is generally bad practice. Even if the material is explicitly
indicated as a quotation, having this much content from another
source will almost certainly result in rejection of the paper for
lack of originality.
But if not called out {\em as a quotation}, lifting even a single
sentence (or less) from another source is plagiarism, even if the
source is cited. Plagiarism is a very serious offense, which will
not only lead to rejection of a paper, but will also bring more
serious sanctions, such as being banned from the journal,
notification of your dean or department chair, etc. So don't do it!
There are many on-line resources to help determine what constitutes
plagiarism and how to avoid it (see, e.g., CollegeBoard.com). But the
simplest rule to follow is ``when in doubt, call it out.'' That is,
make very plain what comes from other sources, in properly cited
word-for-word quotations or paraphrases.
\section{1800s}
\begin{quotation}
\begin{description}
\item[\bf 1870] Margarine was created by a Frenchman from Provence,
France -- Hippolyte M\`ege-Mouriez -- in response to an offer by
the Emperor Louis Napoleon III for the production of a satisfactory
substitute for butter. To formulate his entry, M\`ege-Mouriez used
margaric acid, a fatty acid component isolated in 1813 by Michael
Chevreul and named because of the lustrous pearly drops that reminded
him of the Greek word for pearl -- margarites. From this word,
M\`ege-Mouriez coined the name margarine for his invention that
claimed the Emperor's prize.
\item[\bf 1873] An American patent was granted to M\`ege-Mouriez who
intended to expand his French margarine factory and production to
the United States. While demand for margarine was strong in northern
Europe and the potential equally as promising in the U.S.,
M\`ege-Mouriez's operations nevertheless failed and he died obscurely.
\item[\bf 1878] Unilever began manufacturing margarine in Europe.
\item[\bf 1871-73] The U. S. Dairy Company in New York City began
production of "artificial butter."
\item[\bf 1877] State laws requiring identification of margarine were
passed in New York and Maryland as the dairy industry began to feel
the impact of this rapidly growing product
\item[\bf 1881] Improvements to M\`ege-Mouriez's formulation were made;
U.S. Dairy created a subsidiary, the Commercial Manufacturing Company,
to produce several million pounds annually of this new product.
\item[\bf 1885] When a court voided a ban on margarine in New York,
dairy militants turned their attention to Washington, resulting in
Congressional passage of the Margarine Act of 1886. The Act imposed
a tax of two cents per pound on margarine and required expensive
licenses for manufacturers, wholesalers and retailers of margarine.
President Grover Cleveland, from the dairy state of New York, signed
the law, describing it as a revenue measure. However, the 1886 law
failed to slow the sale of margarine principally because it did not
require identification of margarine at the point of sale and
margarine adversaries turned their attention back to the states.
\item[\bf 1886] More than 30 manufacturing facilities were reported to
be engaged in the production of margarine. Among them were Armour
and Company of Chicago and Lever Brothers of New York. Seventeen
states required the product to be specifically identified as
margarine. Various state laws to control margarine were passed in a
number of states, but were not enforced. Later that year, New York
and New Jersey prohibited the manufacture and sale of yellow-colored
margarine.
\end{description}
\end{quotation}
\section{1900s}
\subsection{Before the End of WWII}
\begin{quotation}
\begin{description}
\item[\bf 1902] 32 states and 80\% of the U.S. population lived under
margarine color bans. While the Supreme Court upheld such bans, it
did strike down forced coloration (pink) which had begun in an
effort to get around the ban on yellow coloring. During this period
coloring in the home began, with purveyors providing capsules of
food coloring to be kneaded into the margarine. This practice
continued through World War II.
\item[\bf 1902] Amendments to the Federal Margarine Act raised the tax
on colored margarine five-fold, but decreased licensing fees for
white margarine. But demand for colored margarine remained so
strong, that bootleg colored margarine flourished.
\item[\bf 1904] Margarine production suffered and consumption dropped
from 120 million pounds in 1902 to 48 million.
\item[\bf 1910] Intense pressure by competitors to keep prices low and
new product innovations, as well as dairy price increases, returned
production levels of margarine back to 130 million pounds. The
Federal tax remained despite many efforts to repeal it, but
consumption grew gradually in spite of it.
\item[\bf 1920] With America's entry into World War I, the country began
to experience a fat shortage and a sharp increase in the cost of
living, both factors in driving margarine consumption to an annual
per capita level of 3.5 pounds.
\item[\bf 1930] The Margarine Act was again amended to place the Federal
tax on naturally-colored (darkened with the use of palm oil) as well
as artificially-colored margarine. During the Depression dairy
interests again prevailed upon the states to enact legislation
equalizing butter and margarine prices. Consumers reacted and
consumption of margarine dropped to an annual per capita level of
1.6 pounds.
\item[\bf 1932] Besides Federal taxes and licenses, 27 states prohibited
the manufacture or sale of colored margarine, 24 imposed some kind
of consumer tax and 26 required licenses or otherwise restricted
margarine sales. The Army, Navy and other Federal agencies were
barred from using margarine for other than cooking purposes.
\item[\bf 1941] Through production innovations, advertising and improved
packaging, margarine consumption regained lost ground. A Federal
standard was established recognizing margarine as a spread of its
own kind. With raised awareness of margarine's health benefits from
a 1941 National Nutrition Conference, consumers began to take notice
of restrictions on margarine that were keeping the product from them
and artificially inflating the price.
\item[\bf 1943] State taxes on margarine were repealed in Oklahoma. The courts
removed color barriers in other states shortly after World War II (see \citealt{tjp}).
\end{description}
\end{quotation}
\subsection{After the End of WWII}
\begin{quotation}
\begin{description}
\item[\bf 1947] Residual war shortages of butter sent it to a dollar a pound
and Margarine Act repeal legislation was offered from many politicians.
\item[\bf 1950] Some of the more popular brands prior up until now were Cloverbloom,
Mayflower, Mazola, Nucoa, Blue Plate, Mrs. Filbert's, Parkay,
Imperial, Good Luck, Nu-Maid, Farmbelle, Shedd's Safflower,
Churngold, Blue Bonnet, Fleischmann's, Sunnyland and Table Maid.
\item[\bf 1950] Margarine taxes and restrictions became the talk of the country.
Finally, following a significant effort by the National Association
of Margarine Manufacturers, President Truman signed the Margarine
Act of 1950 on March 23 of that year.
\item[\bf 1951] The Federal margarine tax system came to an end. Pre-colored
margarine was enjoyed by a consumer also pleased with lower prices.
Consumption almost doubled in the next twenty years. State color
bans, taxes, licenses and other restrictions began to fall.
\item[\bf 1960s] The first tub margarine and vegetable oil spreads were
introduced to the American public.
\item[\bf 1967] Wisconsin became the last state to repeal restrictions on
margarine \citep{w}.
\item[\bf 1996] A bill introduced by Rep. Ed Whitfield would signal an end
to the last piece of legislation that adversely affects the sale of
margarine. Currently, federal law prohibits the retail sale of
margarine in packages larger than one pound, as well as detailed
requirements regarding the size and types of labeling of margarine
and a color requirement. This new legislation would remove these
restrictions from the Federal Food, Drug, and Cosmetic Act (FFDCA).
Rep. Whitfield's bill, the Margarine Equity Act, is part of HR 3200,
the Food and Drug Administration (FDA) reform package and addresses
dated requirements that are not applicable to the marketplace.
\item[\bf 1998] 125th anniversary of the U.S. patent for margarine
\noindent{{\em Source:}
\cite{namm}.}
\end{description}
\end{quotation}
\section{Introduction} \label{sec:intro}
The \emph{min-max multiple traveling salesmen problem} (or \emph{m$^3$TSP}) is a generalization of the traveling salesman problem (TSP). Instead of one traveling salesman, the {m$^3$TSP} involves multiple salesmen who together visit a group of designated cities with the objective of minimizing the maximum distance traveled by any member of the group. We consider the problem in the Euclidean setting, where the cities to be visited are points in $\RE^2$, and the distance between cities is the Euclidean distance. We support a formulation where each salesman begins and ends at the same designated depot. We present a randomized polynomial-time approximation scheme (PTAS) for this problem.
While the majority of research on the multiple TSP problem has focused on minimizing the sum of travel distances, the min-max formulation is most appropriate for applications where the tours are executed in parallel, and hence the natural objective is to minimize the maximum service time (or \emph{makespan}). We assume that all salesmen travel at the same speed, and thus the objective is to minimize the maximum distance traveled by any of the salesmen.
\subsection{Definitions and Results} \label{subsec:defs}
The input to {m$^3$TSP} consists of a set $C = \{c_1, \ldots, c_n\}$ of $n$ points (or ``cities'') in $\RE^2$, the number $k$ of salesmen, and a depot $d \in C$. The output is a set of $k$ \emph{tours}, where the $h$th tour is a sequence of cities that starts and ends at the depot. All of the cities of $C$ must be visited by at least one salesman. The length of a tour is just the sum of inter-city distances under the Euclidean metric, and the objective is to minimize the maximum tour length over all $k$ tours. In the approximate version, we are also given an approximation parameter $\eps > 0$, and the algorithm returns a set of $k$ tours such that the maximum tour length is within a factor of $(1 + \eps)$ of the optimal min-max length. Throughout, we assume that $k$ is a fixed constant, and we treat $n$ and $\eps$ as asymptotic quantities. Here is our main result.
\begin{theorem} \label{thm:main}
Given a set of $n$ points in $\RE^2$, a fixed integer constant $k \geq 1$, and an approximation parameter $\eps > 0$, there exists a randomized algorithm that computes a $(1+\eps)$-approximation for {m$^3$TSP} and runs in expected time $O\big(n ((1/\eps) \log (n/\eps))^{O(1/\eps)} \big)$.
\end{theorem}
Note that the big-O notation in the polynomial's exponent conceals a constant factor that depends on $k$.
\subsection{Related Work} \label{subsec:related_work}
The multiple traveling salesmen problem with the min-max objective has been studied from both a theoretical and practical perspective. Arkin, Hassin, and Levin describe approximation algorithms for a variety of vehicle routing problems within graphs~\cite{Ark06}. This includes the minimum path cover, where we are given a bound on each path length and the objective is to minimize the number of salesmen required to visit each node. Most notably, they provide a 3-factor approximation to the min-max path cover, which is equivalent to our {m$^3$TSP} problem (in graphs, not Euclidean space) when depot locations are assigned to each salesman. Xu and Rodrigues give a $3/2$-factor approximation for the min-max multiple TSP in graphs where there are multiple distinct depots~\cite{Xu10}. In both cases, the number of salesmen $k$ is taken to be a constant.
Researchers have also explored different heuristic approaches to solving the min-max problem. França, Gendreau, Laporte, and Müller provide two exact algorithms for {m$^3$TSP} in graphs with a single depot. They also present an empirical analysis of their algorithms with cases for $\leq 50$ cities and $m = 2,3,4,5$ ~\cite{Franc95}. Other approaches include neural networks and tabu search (see, e.g., \cite{Mats14, Franc95}). Carlsson, \textit{et al.} apply a combination of linear programming with global improvement and a region partitioning heuristic~\cite{Car07}.
Even, Garg, Könemann, Ravi, and Sinha present constant factor approximations for covering the nodes of a graph using trees with the min-max objective~\cite{Even04}. The algorithms run in polynomial time over the size of the graph and $\log({1/\eps})$. In this case, the trees are either rooted or unrooted, and there is a given upper bound on the number of trees involved. The authors describe the instance as the ``nurse station location'' problem, where a hospital needs to send multiple nurses to different sets of patients such that the latest completion time is minimized.
Becker and Paul considered multiple TSP in the context of navigation in trees~\cite{Be19}. Their objective is the same as ours, but distances are measured as path lengths in a tree. They present a PTAS for the min-max routing problem in this context. In their formulation, the root of the tree serves as the common depot for all the tours. They strategically split the tree into clusters that are covered by only one or two salesmen. Their algorithm is based on dynamic programming, where each vertex contains a limited configuration array of rounded values representing the possible tour lengths up to that point. This rounding reduces the search space and allows for a polynomial run-time. Our approach also applies a form of length rounding, though our strategy differs in that it rounds on a logarithmic scale and performs dynamic-programming on a quadtree structure.
Asano, Katoh, Tamaki, and Tokuyama considered a multi-tour variant of TSP, where for a given $k$ the objective is to cover a set of points by tours each of which starts and ends at a given depot and visits at most $k$ points of the set~\cite{Asano97}. They present a dynamic programming algorithm in the style of Arora's PTAS whose running time is $(k/\eps)^{O(k/\eps^3)} + O(n \log n)$.
\subsection{Classic TSP PTAS}
In his landmark paper, Arora presented a PTAS for the standard TSP problem in the Euclidean metric~\cite{Arora98}. Since we will adopt a similar approach, we provide a brief overview of his algorithm. It begins by rounding the coordinates of the cities so they lie on points of an integer grid, such that each pair of cities is separated by a distance of at least eight units. (See our version of perturbation in Section~\ref{subsec:perturbation}.) This step introduces an error of at most $\frac{\eps \OPT}{4}$ to the tour length, where $\OPT$ is the length of the optimum tour. Let $L$ denote the side length of the bounding box after perturbation; note that $L = O(n/\eps)$.
The box is first partitioned through a quadtree dissection, in which a square cell is subdivided into four child squares, each of half the side length. This continues until a cell contains at most one point. Due to rounding, each cell has side length at least $1$ (see Figure~\ref{fig:quadtree}, left). The dissection is randomized by introducing an $(a, b)$-shift. The values $a$ and $b$ are randomly chosen integers in $[0, L)$; an $(a,b)$-shift adds $a$ to the $x$ coordinates and $b$ to the $y$ coordinates of the dissection's lines and reduces the results modulo $L$ (see Figure~\ref{fig:quadtree}, right). We identify each square in the quadtree based on its level in the recursion, so that a level $i$ node is a square created after performing our recursive split $i$ times.
\begin{figure}[htbp]
\centerline{\includegraphics[scale=1]{Figs/shift_quadtree.pdf}}
\caption{An example of a quadtree structure incurred on a set of points. The left shows the original quadtree structure, while the right displays an $(a,b)$-shift applied to the same set of points.}
\label{fig:quadtree}
\end{figure}
The meat of Arora's algorithm lies in his notion of portals. He defines an $m$-regular set of portals for the shifted dissection as a set of $m$ evenly spaced points, called \emph{portals}, placed on each of the four edges of a quadtree square. Moreover, one portal is placed on each corner. He defines a salesman tour as \emph{$(m,r)$-light} if it crosses each edge of the square only at portals, and at most $r$ times. His DP algorithm proceeds bottom-up from the leaves of the quadtree, calculating the optimal TSP path within each square subject to the $(m,r)$-light restriction. In the end, this approach takes time polynomial in $n$.
Arora proves in his \emph{Structure Theorem} that, given a randomly-chosen $(a,b)$-shift, with probability at least $1/2$ the $(m,r)$-light solution has a cost of at most $(1 + \eps) \OPT$ (where $m = O\left(\frac{\log L}{\eps}\right)$ and $r = O\left(\frac{1}{\eps}\right)$). The proof shows that perturbing the optimal tour to be $(m,r)$-light with respect to each quadtree square adds only a small amount of error. The probability of cost is calculated through the expected value of extra length charged to each line in the quadtree.
Our algorithm shares elements in common with Arora's algorithm, including quadtree-based dissection, random shifting, and $(m,r)$-light portal restrictions, all within a dynamic-programming approach. However, to handle multiple tours, we alter the DP formulation and develop a rounding strategy to balance the lengths of the various tours.
\section{Algorithm} \label{sec:algorithm}
In this section we present our algorithm for {m$^3$TSP}.
\subsection{Perturbation} \label{subsec:perturbation}
Before computing the dissection on which the DP is based, we begin by perturbing and rounding the points to a suitably sized square grid in the same manner as Arora. That is, we require that (i) all points have integral coordinates, (ii) any nonzero distance between two points is at least eight units, and (iii) the maximum distance between two points is $O(n/\eps)$. Take $L_0$ as the original size of the bounding box of the point set, and let $\OPT$ denote the optimal solution's makespan (bearing in mind that $\OPT \geq L_0/k$). To accomplish (i), a grid of granularity $\frac{\eps L_0}{8 k n}$ is placed and all points are moved to the closest coordinate. This means that, for a fixed order of nodes visited, the maximum tour length is increased by at most $2 n \frac{\eps L_0}{8 k n} = \frac{\eps L_0}{4 k} < \frac{\eps \OPT}{4}$. To accomplish (ii) and (iii), all distances are divided by $\frac{\eps L_0}{64 k n}$. This step leads to nonzero internode distances of at least $\frac{\eps L_0}{8 k n} / \frac{\eps L_0}{64 k n} = 8$. Moreover, we have that $L = \frac{64kn}{\eps} = O(n/\eps)$, so the maximum internode distance is $O(n/\eps)$. Note that the error incurred on $\OPT$ by snapping points to the grid could be as much as $\frac{\eps \OPT}{4}$.
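The perturbation can be summarized by the following sketch (our own illustration; it mirrors the constants used above):
\begin{verbatim}
import numpy as np

def perturb(points, eps, k):
    """Snap points to a grid of granularity eps*L0/(8*k*n), then rescale so
    that distinct points are at least 8 apart and the box side is 64*k*n/eps."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    L0 = (pts.max(axis=0) - pts.min(axis=0)).max()   # original bounding-box side
    g = eps * L0 / (8.0 * k * n)                     # grid granularity
    snapped = np.round((pts - pts.min(axis=0)) / g) * g
    return snapped / (eps * L0 / (64.0 * k * n))     # coordinates become multiples of 8
\end{verbatim}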
\subsection{DP Program Structure} \label{subsec:dp_structure}
We define the same $(m,r)$-light portal restriction for each of our tours; however, in our case we have $m = O\left(\frac{k\log L}{\eps}\right)$ and $r = O\left(\frac{k}{\eps}\right)$ (see Lemma~\ref{lem:structure_arora} for the specific values).
On top of Arora's portal-pairing argument for each quadtree node, which leads to a DP lookup table of tours, we add another restriction for our $k$ tours. For each tour and each quadtree node, we record which path lengths, rounded on a certain scale, are achievable, storing one boolean per rounded value. We need this rounding to control our algorithm's running time; otherwise, the number of possible tour lengths would explode as we climb further up our DP table.
\begin{figure}[htbp]
\centerline{\includegraphics[scale=0.75]{Figs/DPscale.pdf}}
\caption{A visualization of the rounding scale}
\label{fig:dp_scale}
\end{figure}
Our algorithm rounds any given path length within a level $i$ node up to the nearest tick mark. This scale, shown in Figure~\ref{fig:dp_scale}, increases by a multiplicative factor of $1+\alpha$. Its lower bound $\delta$, equal to Arora's interportal distance within the level, is $\frac{L}{2^i m}$ (where $L = \frac{64kn}{\eps}$ is our bounding box side length). Arora's single tour TSP solution allows for $\delta$ amount of error in adjusting the tour to only cross edges at portals. Meanwhile, $\alpha = \frac{\eps}{2 \log L}$ controls the granularity of the rounding scale (the rounded value of a length $l$ can be no larger than $(1 + \alpha)l$).
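Concretely, rounding a path length $l$ at a level-$i$ node means moving it up to the nearest tick $\delta(1+\alpha)^j$ of the scale, as in the following illustrative sketch:
\begin{verbatim}
import math

def round_up(length, level, L, m, alpha):
    """Round a path length up to the nearest tick delta*(1+alpha)^j, where
    delta = L / (2**level * m) is the interportal distance at this level."""
    delta = L / (2 ** level * m)
    if length <= delta:
        return delta                 # lengths below the scale round up to delta
    j = math.ceil(math.log(length / delta, 1.0 + alpha))
    return delta * (1.0 + alpha) ** j
\end{verbatim}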
The upper bound on the scale is obtained by applying a basic algorithm that visits every point (all of which are snapped to a grid by the perturbation step in Section~\ref{subsec:perturbation}). A single salesman simply zig-zags along the grid lines, which leads to a tour length bounded by $L^2$ (see Figure~\ref{fig:upper_bound}).
\begin{figure}[htbp]
\centerline{\includegraphics[scale=0.75]{Figs/upperbound.pdf}}
\caption{The single salesman tour that leads to our upper bound on the TSP tour length}
\label{fig:upper_bound}
\end{figure}
\subsubsection{Configurations} \label{subsubsec:configurations}
To describe our configuration, we first introduce some notation. Let $\{ \cdots \}$ denote a multiset, let $\ang{\cdots}$ denote a list, and let $(\cdots)$ denote an ordered tuple. Also, given positive integers $f$ and $g$, define
\[
\{p\}_{f,g} = \{p_{f,1}, p_{f,2}, \ldots, p_{f,g}\} \qquad\text{and}\qquad
\ang{(s,t)_{f,g}} = \ang{(s_{f,1},t_{f,1}), (s_{f,2},t_{f,2}), \ldots, (s_{f,g},t_{f,g})}.
\]
For a node in the quadtree, all possible groupings of the $k$ rounded paths are represented as a configuration $(A, B, C)$ where:
\begin{flalign*}\label{eq:configuration}
A & ~ = ~ \big( \{p\}_{1,2 e_1}, \{p\}_{2,2 e_2}, \ldots, \{p\}_{k,2 e_k} \big),\\
B & ~ = ~ \big( \ang{(s,t)_{1,e_1}}, \ang{(s,t)_{2,e_2}}, \ldots, \ang{(s,t)_{k,e_k}} \big), \\
C & ~ = ~ \big(l_1, l_2, \ldots l_k\big)
\end{flalign*}
where
\begin{itemize}
\item $A$ is a $k$-element ordered tuple of multisets containing portals. The $h$th multiset is limited to no more than $r$ portals on each of the square's four edges. The total size of each multiset should be an even number $2e_h \leq 4r$.
\item $B$ is a $k$-element ordered tuple of lists, the $h$th list representing all the entry-exit pairings for the $h$th tour. Each tuple $(s_{h,j},t_{h,j})$ represents a pairing of two distinct portals from the $h$th multiset in $A$.
\item $C$ is an ordered tuple of $k$ lengths indicating that the $h$th path within this configuration has a rounded length of $l_h$.
\end{itemize}
The DP lookup table is indexed by quadtree node and configuration. A particular value in the table is set to true when the algorithm finds that the specified configuration is achievable within the node. Otherwise, the value is by default set to false.
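As a data structure, a configuration is simply a nested tuple and the lookup table a boolean map keyed by (square, configuration). The sketch below is only meant to fix ideas; the names are our own.
\begin{verbatim}
from collections import defaultdict

# A configuration (A, B, C) for one quadtree square with k tours:
#   A: tuple of k portal multisets, each stored as a sorted tuple,
#   B: tuple of k tuples of (entry, exit) portal pairings,
#   C: tuple of k rounded path lengths.
# Every component is hashable, so (square_id, (A, B, C)) can be a dict key.
dp_table = defaultdict(bool)   # (square_id, config) -> achievable?

def mark_achievable(square_id, A, B, C):
    dp_table[(square_id, (A, B, C))] = True

def is_achievable(square_id, A, B, C):
    return dp_table[(square_id, (A, B, C))]
\end{verbatim}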
\subsubsection{Single Node Runtime} \label{subsubsec:single_node_runtime}
Throughout, we use ``$\log$'' and ``$\ln$'' to denote the base-2 and natural logarithms, respectively. We will make use of the following easy bounds on $\ln (1+x)$, which follow directly from the well-known inequality $1 - \frac{1}{x} \leq \ln x \leq x-1$ for all $x > 0$.
\begin{lemma} \label{lem:natural_log_estimate}
For $0 < x < 1$, $\frac{x}{2} < \ln (1+x) < x$.
\end{lemma}
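For completeness, we spell out the short argument.
\begin{proof}
Apply the inequality with $y = 1+x$. The upper bound $\ln y \leq y - 1$ gives $\ln(1+x) \leq x$, with equality only at $x = 0$, so the bound is strict for $x > 0$. The lower bound gives
\[
\ln (1+x) ~ \geq ~ 1 - \frac{1}{1+x} ~ = ~ \frac{x}{1+x} ~ > ~ \frac{x}{2},
\]
where the last inequality uses $1 + x < 2$ for $x < 1$.
\end{proof}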
\begin{lemma} \label{lem:runtime-lem}
The number of rounded values at a level $i$ node is $O \left( \frac{1}{\eps} \log^2{\left( \frac{n}{\eps} \right) } \right)$.
\end{lemma}
\begin{proof}
An upper bound on the number of rounded values can be obtained by solving for the value of $z$ at which the scale reaches its upper bound,
$\delta(1 + \alpha)^z = L^2$. We take logarithms of both sides and solve for $z$:
\begin{align*}
\delta(1 + \alpha)^z
& ~ = ~ L^2 \\
\frac{L}{2^i m} (1 + \alpha)^z
& ~ = ~ L^2 \\
(1 + \alpha)^z
& ~ = ~ 2^i L m \\
z
& ~ = ~ \frac{\ln \left(2^i L m \right)}{\ln \left( 1 + \alpha \right)}.
\end{align*}
We can assume that $\eps < 1$, which means $\alpha = \frac{\eps}{2\log(L)} < \frac{1}{2\log(64 k n)} \leq \frac{1}{10}$. Lemma~\ref{lem:natural_log_estimate} can then be applied to $\ln(1 + \alpha)$ so that we have
\[
z
~ = ~ \frac{\ln \left(2^i L m \right)}{\ln \left( 1 + \alpha\right)}
~ < ~ \frac{2 \ln \left( 2^i L m \right)}{\alpha}.
\]
Moreover, since $i \leq \log{L}$, we have $2^i \leq 2^{\log{L}} = L$, and thus
\[
\frac{2 \ln \left( 2^i L m \right)}{\alpha}
~ \leq ~ \frac{2}{\alpha} \ln \left( L^2 m \right).
\]
We finally plug in values for $L = \frac{64 k n}{\eps}$, $m = O\left(\frac{k\log{L}}{\eps} \right)$, and $\alpha = \frac{\eps}{2\log(L)}$, yielding
\begin{align*}
\frac{2}{\alpha} \ln{\left( L^2 m \right) }
& ~ = ~ \frac{4\log{L}}{\eps} \ln{\left( L^2 m \right) } \\
& ~ = ~ \frac{4}{\eps} \log{\left( \frac{64 k n}{\eps} \right)} \ln{\left( \left( \frac{64 k n}{\eps} \right)^2 O \left(\frac{k \log{\frac{k n}{\eps}}}{\eps} \right) \right) } \\
& ~ = ~ \frac{4}{\eps} \log{\left( \frac{64 k n}{\eps} \right)} \frac{1}{\log{e}} \left[ 2\log{ \left( \frac{64 k n}{\eps} \right)} + \log{ \left( O \left( \log{\frac{k n}{\eps}} \right) \right)} + \log{\left( O \left( \frac{k}{ \eps} \right) \right)} \right] \\
& ~ = ~ O \left( \frac{1}{\eps} \left[ \log^2{\left( \frac{k n}{\eps} \right) } + \log{\left( \frac{k n}{\eps} \right) } \log{\left(\frac{k}{\eps}\right)} \right] \right).
\end{align*}
We can assume, by the nature of the problem, that the number of tours is bounded by the number of points. Therefore, since $k \leq n$, we simplify our big-O bound to
\[
O \left( \frac{1}{\eps} \log^2 \left( \frac{n}{\eps} \right) \right).
\]
\end{proof}
\subsection{DP Recursion} \label{subsec:dp_recursion}
We define the $(m,r)$-multipath-multitour problem in the same way as Arora, but with the addition of our multiple tours. That is, an instance of this problem is specified by the following:
\begin{enumerate}[label=(\alph*)]
\item[\stepcounter{enumi}\theenumi] A nonempty square in the shifted quadtree.
\item For our $h$th tour, a multiset $\{p\}_{h,2 e_h}$ containing $\leq r$ portals on each of the square's four sides. The total amount of such portals in the $h$th multiset will be $2e_h \leq 4r$.
\item A list of tuples $\ang{(s,t)_{h,e_h}}$ indicating ordered pairings between the $2e_h$ portals specified in (b).
\item An instance of (b) and (c) for each of our $k$ tours.
\end{enumerate}
As shown earlier (in Section~\ref{subsubsec:configurations}), a configuration represents one possible combination of rounded tour lengths for a particular instance of the $(m,r)$-multipath-multitour problem.
\subsubsection{Base Case} \label{subsubsec:base_case}
Given a fixed choice in (1) the $2e_h$ portals for each of our $k$ multisets, containing at most $r$ portals from each square edge, and (2) the $\leq e_h$ entry-exit pairings for each of the $k$ tours, we have two cases for each leaf in our quadtree.
\begin{enumerate}
\item There is no point to visit in the leaf: then we simply take the lengths of the given entry-exit pairing paths for each tour, setting all configurations that they round up to as true in the DP table. This takes $O(1)$.
\item There is one point in the leaf: then we iterate over all assignments of the point to each tour within the node \emph{and} each entry-exit pairing within that tour (see Figure \ref{fig:leaves}). The path between the chosen entry-exit pairing within that tour is bent to visit the point, and we set that particular configuration of tours to true. We have $k$ tours and $\leq 2r$ pairings to choose for each tour, so this takes $2kr = O(kr)$.
\end{enumerate}
\begin{figure}[htbp]
\centerline{\includegraphics[scale=1]{Figs/leaves.pdf}}
\caption{A visualization of case (2) in our base case with $m=2$. The top row shows our two choices in which entry-exit path bends to visit the point when the orange tour is picked. The bottom row represents the possible configurations when the green tour is picked.}
\label{fig:leaves}
\end{figure}
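The path-bending step in case (2) is a constant-time computation: if the point $p$ is assigned to the pairing $(s,t)$ of some tour, the straight segment from $s$ to $t$ is replaced by the two segments from $s$ to $p$ and from $p$ to $t$. An illustrative sketch:
\begin{verbatim}
import math

def bent_length(s, t, p):
    """Length of the entry-exit path after bending it to visit p: s -> p -> t."""
    return math.dist(s, p) + math.dist(p, t)

def leaf_tour_length(pairings, p=None, bend_index=None):
    """Total length of one tour inside a leaf, given its entry-exit pairings;
    if the leaf contains a point p, exactly one pairing is bent to visit it."""
    total = 0.0
    for i, (s, t) in enumerate(pairings):
        total += bent_length(s, t, p) if i == bend_index else math.dist(s, t)
    return total
\end{verbatim}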
\subsubsection{Root Case} \label{subsubsec:root_case}
We consider the case where we have one depot, chosen from our $n$ locations, assigned for all of our $k$ tours. Then we require that each tour visits the depot within its leaf node. This involves $2r$ choices for each tour in deciding which entry-exit path bends to the point, and thus $O(r^k)$ different combinations of tour paths within the leaf.
\subsubsection{General Case} \label{subsubsec:general_case}
Assume by induction that for level depth $> i$, the $(m,r)$-multipath-multitour problem has already been solved (that is, all possible configurations for these squares are stored in the lookup table). Let $S$ be any square at depth $i$, with children $S_1, S_2, S_3,$ and $S_4$. We have
\begin{itemize}
\item $(m+4)^{4rk}$ choices in (a), our multisets containing $\leq r$ portals on each of the square's four sides, for each of our tours
\item $(4r)!^k$ choices in (b), the associated portal pairings for our k tours
\end{itemize}
For each of these choices in (a) and (b) for the outer edges of $S$, we have another set of choices within the four inner edges created by $S$'s children. Specifically, we have
\begin{itemize}
\item $(m+4)^{4rk}$ choices in (a'), the multisets of $\leq r$ portals for each of our tours within the inner edges of $S_1, S_2, S_3, S_4$
\item $(4r)!^k (4r)^{4rk}$ choices in (b'), the portal pairings for each tour within the inner edges (the term $(4r)^{4rk}$ represents the number of ways of placing the inner edge portal chosen in (a') within the entry-exit ordering of $S$'s tour. There are $\leq 4r$ outer edge portals to choose from, and the inner edge portal can only be crossed $\leq 4r$ times per tour).
\end{itemize}
\begin{figure}[htbp]
\centerline{\includegraphics[scale=1]{Figs/DPrecursion.pdf}}
\caption{An example of the recursion step for one of our $k$ tours. The red portals represent our fixed choice in (a) and (b), while the green display a particular configuration of our inner edge portals leading to a valid tour. The extension to $k$ tours works similarly, except we iterate through all rounded lengths for each fixed order of portals.}
\label{fig:dp_recursion}
\end{figure}
Combining our choices in (a) and (b) for the outer edges with our choices in (a') and (b') for the inner edges leads to an $(m,r)$-multipath-multitour problem in the four children, whose solutions (by induction) exist in the lookup table with corresponding rounded lengths. By Lemma~\ref{lem:runtime-lem}, each child square stores up to $O \left( \frac{1}{\eps} \log^2{\left( \frac{n}{\eps} \right) } \right)$ rounded lengths per tour, so there are up to $O \left( \left( \frac{1}{\eps} \log^2{\left( \frac{n}{\eps} \right) } \right)^{4k} \right)$ combinations of rounded lengths over the $k$ tours and the four children. For each combination, we add the four children's rounded tour lengths together for each tour, and further round this value on level $i$'s scale. Finally, we set the resulting configurations of portal multisets, pairings, and tour lengths to true in our DP table.
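Schematically, the length bookkeeping of this combination step looks as follows; the consistency checks on portals and pairings are omitted, and the names are our own.
\begin{verbatim}
from itertools import product

def combine_children(child_length_tuples, round_up_at_level_i):
    """child_length_tuples: for each of the four children (under one fixed,
    consistent choice of portals and pairings), the set of achievable k-tuples
    of rounded tour lengths.  Returns the re-rounded k-tuples for the parent."""
    parent_tuples = set()
    for combo in product(*child_length_tuples):   # one k-tuple per child
        k = len(combo[0])
        summed = [sum(child[h] for child in combo) for h in range(k)]
        parent_tuples.add(tuple(round_up_at_level_i(l) for l in summed))
    return parent_tuples
\end{verbatim}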
Our runtime, then, is expressed as the product of the number of quadtree nodes ($T = O(n\log{L})$) and the total number of choices outlined above:
\[
O\left( T (m+4)^{8rk} (4r)^{4rk} (4r)!^{2k} \left( \left( 1/\eps \right) \log^2{\left( n/\eps \right) } \right)^{4k} \right)
~ = ~ O \left(n \left( \left( k/\eps \right) \log {\left( n/\eps \right) } \right)^{O\left(k^2/\eps \right)} \right)
\]
Since we are treating $k$ as a constant, then, we simplify this bound to
\[
O \left(n \left( (1/\eps) \log {\left( n/\eps \right) } \right) ^ {O(1/\eps)} \right).
\]
It is evident that the above runtime of our DP algorithm is polynomial in $n$.
\section{Error Analysis} \label{sec:structure_thm}
We have three different sources of error in our algorithm. The first derives from our enforcement of $(m,r)$-light crossings within the quadtree, as this requires us to shift each tour so that it satisfies the portal pairing requirement. Moreover, our rounding incurs a small amount of cost on the min-max tour length. Finally, the perturbation step adds error to the solution as well. The following lemmas bound these costs, and we show that compounding them on top of each other in our algorithm still leads to a min-max length bounded by $(1 + \eps) \OPT$.
\subsection{Arora's Error} \label{subsec:arora_structure}
\begin{lemma} \label{lem:structure_arora}
With probability at least $1/2$ over the random $(a,b)$-shift, the makespan after enforcing the tours to be $(m,r)$-light is at most $(1 + \eps') \OPT$.
\end{lemma}
\begin{proof}
Let there be a minimum internode distance of 8, with $(a, b)$ shifts applied randomly to the quadtree (see Arora). Then for $\eps' > 0$, Arora proves that the expected cost of enforcing one tour to be $(m,r)$-light is no more than $\frac{6g l}{s}$, where $l$ is the optimal length of the tour, $g = 6$ and $s = 12g/\eps'$.
Take $l_{max}$ as the length of the optimal solution's makespan. Unlike Arora's problem, we have multiple tours. Thus we need to make sure that each one has a small enough expected cost so its length does not overpower $(1 + \eps') l_{max}$. We take $s = \frac{12gk}{\eps'}$ (where $m \geq 2s\log{L}$ and $r = s + 4$). This means that $\mathbb{E}(X_h) \leq \frac{\eps' l_h}{2k}$, where $X_h$ is a random variable representing the $h$th tour's incurred cost and $l_h$ is that tour's optimal solution length.
We know $P\left(\bigcap\limits_{h=1}^{k}{(X_h < \eps' l_h)}\right) = 1 - P\left(\bigcup\limits_{h=1}^{k}{(X_h \geq \eps' l_h)}\right)$. By Boole's Inequality,
\[
P\left( \bigcup\limits_{h=1}^{k}{(X_h \geq \eps' l_h)}\right)
~ \leq ~ \sum_{h=1}^k P(X_h \geq \eps' l_h).
\]
By Markov's Inequality, we know $P(X_h \geq \eps' l_h) \leq \frac{\mathbb{E}(X_h)}{\eps' l_h} \leq \frac{1}{2k}$, and therefore we have
\[
P\left(\bigcap\limits_{h=1}^{k}{(X_h < \eps' l_h)}\right)
~ \geq ~ 1 - k\left(\frac{1}{2k}\right)
~ = ~ \frac{1}{2}.
\]
We can derandomize this result by iterating over all $O(L^2)$ possible $(a,b)$ shifts and taking the lowest resulting makespan, at the cost of an additional $O(L^2) = O((n/\eps)^2)$ factor in the running time.
\end{proof}
\subsection{Rounding Argument Error} \label{subsec:rounding_error}
\begin{lemma} \label{lem:error}
Our rounding argument increases each tour's length at the root-level quadtree node by a factor of at most $(1 + \alpha)^{\log{L}}$; in particular, the rounded makespan is at most $(1 + \alpha)^{\log{L}} \OPT$.
\end{lemma}
\begin{proof}
We take a look at each tour separately. At our root level, the total rounded length of the tour can be represented as $l_1^{(1)}$. But $l_1^{(1)}$ is at most $(1 + \alpha)$ times the sum of the rounded lengths corresponding to the tour in each of the root's children, since that sum is rounded up once more on the root's scale. That is, $l_1^{(1)} \leq (1 + \alpha) \left(l_1^{(2)} + l_2^{(2)} + l_3^{(2)} + l_4^{(2)}\right)$. Our notation indicates that $l_1^{(2)}$ is the first indexed square at level 2, $l_2^{(2)}$ is the second, and so on. In general we have
\[
l_j^{(i)}
~ \leq ~ (1 + \alpha) \left(l_{4j-3}^{(i+1)} + l_{4j-2}^{(i+1)} + l_{4j-1}^{(i+1)} + l_{4j}^{(i+1)} \right).
\]
If we recursively expand each term in our initial inequality, then, our root level rounded tour length is bounded by
\[
l_1^{(1)}
~ \leq ~ (1 + \alpha)^{\log{L} - 1} \left(l_1^{(\log{L})} + l_2^{(\log{L})} + \cdots + l_{4^{\log{L}}}^{(\log{L})} \right),
\]
where $\log{L}$ is the number of levels we have in our quadtree.
But $l_j^{(\log{L})}$ just represents the rounded length of the tour inside a leaf, which is bounded by $\left( 1 + \alpha\right) t_j^{(\log{L})}$ (where $t_j^{(\log{L})}$ denotes the tour's true length in that leaf). Since these true leaf lengths sum to the tour's true length, which is at most $\OPT$, we have
\begin{align*}
l_1^{(1)}
& ~ \leq ~ (1 + \alpha)^{\log{L}} \left(t_1^{(\log{L})} + t_2^{(\log{L})} + \cdots + t_{4^{\log{L}}}^{(\log{L})} \right) \\
& ~ \leq ~ \left( 1 + \alpha \right)^{\log{L}} \OPT.
\end{align*}
\end{proof}
\begin{lemma} \label{lem:structure_rounding}
For $\alpha = \frac{\eps'}{2\log{L}}$, our rounding argument increases the minimum makespan by a factor of at most $(1 + \eps')$.
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:error}, the rounded length of each of the $k$ tours is bounded by $(1 + \alpha)^{\log{L}}$ times its true length. Thus we need $(1 + \alpha)^{\log{L}} \leq (1 + \eps')$ to prove our error bound. Taking logarithms, we need to show:
\[
\log{(L)} \ln{(1 + \alpha)}
~ \leq ~ \ln{(1 + \eps')}.
\]
By Lemma~\ref{lem:natural_log_estimate}, $\ln(1 + \alpha) \leq \alpha$ since $\alpha > 0.$ Thus we have
\[
\log{(L)} \ln{(1 + \alpha)}
~ \leq ~ \alpha \log{(L)}.
\]
Finally we plug in our $\alpha$ value:
\[
\alpha \log{(L)}
~ = ~ \frac{\eps'}{2\log{(L)}} \log{(L)}
~ = ~ \frac{\eps'}{2}.
\]
By Lemma~\ref{lem:natural_log_estimate}, we know $\ln(1 + \eps') \geq \frac{\eps'}{2}$ (recall that $\eps' < 1$). We therefore satisfy the inequality, as
\[
\log{(L)} \ln{(1 + \alpha)}
~ \leq ~ \frac{\eps'}{2}
~ \leq ~ \ln{(1 + \eps')}.
\]
\end{proof}
\subsection{Total Error} \label{subsec:total_error}
We know the perturbation affects our optimum makespan length by adding at most $\frac{\eps}{4} \OPT$. Therefore we have a bounded makespan of $\left(1 + \frac{\eps}{4}\right) \OPT$ after this step. Given that $\eps' = \eps/4$, then, we layer the rounding and portal errors on top of this length to conclude that the total makespan length is bounded by
\[
\left(1+\frac{\eps}{4}\right)^3 \OPT
~ \leq ~ \left(1 + \eps\right)\OPT \qquad\big(\hbox{given that $\eps \leq \frac{\sqrt{13}-3}{2}$}\big),
\]
as desired.
\section{Problem Variations} \label{sec:variations}
\begin{enumerate}
\item We require a subset of our $n$ points to be visited by a specific one of the tours. In this case, we would simply apply the same argument at the leaf nodes of these points as in the base case of the DP (Section~\ref{subsubsec:base_case}). This would just add a constant factor to the final runtime for each point in the subset.
\end{enumerate}
\section{Conclusion} \label{sec:conclusion}
We have demonstrated a PTAS for the min-max Euclidean multiple TSP. Our algorithm builds on top of Arora's PTAS for the single-tour TSP: that is, we restrict the search space of the solution by limiting tours to cross quadtree square edges a bounded number of times, and only at evenly-spaced portals. On top of this simplification, we require that the $k$ resulting tour lengths within a quadtree node are rounded on a logarithmic scale depending on the square's level. We then present a dynamic program that stores all possible restricted tour solutions within each level of the quadtree. This algorithm finds the optimal solution (within the portal and rounding simplifications) in time polynomial in $n$, with the degree growing as a function of $k$. Finally, we show that adjusting the optimal solution of an {m$^3$TSP} instance to satisfy the portal and rounding requirements within the quadtree increases the makespan by a factor of at most $(1 + \eps)$.
Extending our PTAS solution for the {m$^3$TSP} to higher dimensions would be a straightforward exercise. Moreover, generalizing the PTAS for the {m$^3$TSP} to any shortest-path metric of a weighted planar graph should be an approachable problem following the solution presented by Arora, Grigni, Karger, Klein and Woloszyn for the single TSP~\cite{AGK98}. An open question remains as to whether there exists a PTAS that runs in time polynomial in both $n$ and $k$. Extensions such as variable speed and capacity restrictions on the $k$ salesmen also warrant exploration. Finally, we wonder whether similar DP techniques could be applied to the original problem that led us to the {m$^3$TSP}: the Freeze Tag Problem~\cite{Bend06}, where robots in space aim to wake each other up in the least amount of time possible.
\bibliographystyle{alpha}
\section{Comments}
1. The potentially most interesting component is the DRL method, but insufficient detail is given to understand it. More fundamentally, it is unclear why the method is based on learning at all. The system appears to possess all information and sensors required to determine the vantage point by reasoning about geometry.
2. Although the introduction discusses what properties make a
good shot, such as lighting, framing or composition, there
is little discussion in the paper on how such criteria
should be optimised. The use of templates provide a simple
way to avoid those issues, but it would have been
interesting to try to optimise this important aspect of the
problem.
3. Although the application is interesting, the
experimental testing of the system is minimal. It would
have been interesting to provide more extensive testing and
analysis of the system's performance in varied conditions.
Also, some evaluation of the system's output (eg, ratings
by users) would have been welcome. Similarly, some
components, such as the template matching, could have been
analysed more rigorously.
4. The description of the tracker module takes too much space for
a subsystem which, if I understand it well, does not
contribute really to the state of the art in person
tracking but reuses other methods.
5. You describe in fact two different systems for the
tracking (figs. 5b and 5c). I think this does not
contribute to make the paper clearer. It would be much
better if you simply describe the approach that you have
finally chosen (which is, if I am correct, the Yolo one).
6. How do you determine the model of the person to track in
the first place? I think that the problem is not easy as
the quality of the tracking will probably depend a lot on
this first image of the person of interest.
7. You mention that the candidate person with the highest
similarity is chosen as the target; however, I wonder what
happens when Yolo, for some reason, could not detect the
target. Will you select a badly ranked candidate in that
case? For example, in Fig. 5c, there are 3 detections with
quite high scores (0.78, 0.8, 0.91). If the correct person
is not detected, could this trigger some critical failure
in your system?
8. The vector v
present in the Equation 1 is not clear at all: Is this the
same as the pose vector which is given as an output from
Openpose? If not, how do you define it and how do you use
the pose vector. Also, what value of alpha did you choose
and why?
9. The third part (final photo acquisition) is not
necessary at all. \hk{I do not think we should remove it.}
10. I would have liked to see a better motivation of the
use of a DRL technique here; intuitively, given the size,
the location, and the desired pose, there should be an
optimal pose of the robot/camera to do the shot; so given
the few degrees of freedom of the robot, a much simpler
scheme may be more efficient.
Another important point that is not answered in the paper
is why the geometric constraints are not used to find the
possible locations of the robot for taking a photo. Since
the human location is known and the pre-defined template is
available, the approximate distance (depth) to the user can
be found by using the RGB-D sensor, which is already used
for tracking.
11. \sout{The
robot should always be in close proximity to the human,
otherwise the human may move fast and the path to track the
human cannot be found by the robot. A method which is
computationally less expensive (e.g. [1]) can be considered
to compare.
Z Kalal, K. Mikolajczyk, and J. Matas,
"Tracking-Learning-Detection," Pattern Analysis and Machine
Intelligence 2011.}
12. \sout{It is not mentioned if the tracking runs
on the remote computer or on the robot.}
13. It is stated that cosine similarity is used to compare the
ground truth person image with the cropped candidate person
image. Which color space is used for comparison?
14. It is considered that the View Evaluation Net gives the
best candidate template for the dynamical view selection. A
small survey for this assumption for the given conditions
(indoors full body portrait photos) would be helpful to
validate this assumption. Also I could not find the number
of candidate templates generated.
15. A new DRL model is necessary for each
new pose template. Although this may be feasible for a
small number of pose templates, this would increase the
number of models significantly for pose templates with more
than one person. I think this point needs some discussion
for clarification.
16. The font in Fig.3 can be enlarged.
17. Should there be page numbers? -TJ
\section{Introduction}
Getting a good picture requires a good location, proper lighting, and the right timing.
While a great picture requires talent and artistic preparation, a well-composed image can be obtained automatically.
For example, a system could follow the rule of thirds, the empty-space rule, keeping the subject's eyes in view, and other commonly accepted conventions~\cite{Grill1990}.
Some previous work has been done in this direction. For example, Kim et al.~\cite{Kim2010} introduced a robot that can move and capture photographs according to composition lines of the human target. The robot photographer Luke~\cite{Zabarauskas2014} can randomly walk in an unstructured environment and take photographs of humans based on heuristic composition rules. To get a good picture, the robot needs to move to a certain location; at the same time, a good sense of composition needs to be taught to the robot. However, the robustness of geometry-based robot motion planning relies heavily on sensor accuracy. Moreover, traditional rule-based designs are usually rigid in handling various composition situations.
Therefore, we propose a flexible learning-based view adjustment framework for taking indoor portraits that we call LeRoP. The objective of our work is to create a framework that can train a robot to automatically move and capture the best view of a person. We implemented LeRoP using a photo evaluation model to propose good views and a Deep Reinforcement Learning (DRL) model to adjust the robot's position and orientation toward the best view to capture. Additionally, the framework is interactive: for example, our robot can be triggered to follow the target to a photograph location before searching for the best view. We utilize a $360^{\circ}$ camera as a supplement to the main (photography) camera. The omnidirectional view helps avoid repetitive rotations during tracking and supplies candidate views for the DRL template matching procedure. The modular design allows the aesthetic photo evaluation model to be swapped flexibly based on photo style preferences. The DRL model can adapt to different view adjustment methods by simply re-training the network with a suitable reward function, and it can be applied to different hardware settings.
An example in Figure~\ref{fig:teaser} shows our framework at work. The user first selects \textit{Tracking Mode} to lead the robot to a desired location. The user then selects \textit{Composing Mode} to start the autonomous composing process. Once \textit{Composing Mode} is activated, the robot observes the scene with its $360^{\circ}$ camera to hunt for the best view (template). Once the template is detected, it moves to a location that matches the template, which allows it to take a high-resolution portrait with the second camera.
We tested it on a robot system built on a \textit{Turtlebot}, shown in Figure~\ref{fig:sohw}; our robot can (a) interact with the user, (b) identify and follow the user, (c) propose a well-composed template dynamically or use a supplemental pre-defined template, and (d) adjust its position to match the template and capture the portrait.
\begin{figure}[!hbt]
\centering
\includegraphics[width=\linewidth]{images/teaser}
\caption{Our LeRoP robot at work. Photo (a) is the third-person view of the working scene. Photo (b) is the final capture (the photo has been rotated by $180^{\circ}$ for better visualization).}
\label{fig:teaser}
\end{figure}
We claim the following contributions:
\begin{enumerate}
\item A template matching based DRL solution with its synthetic virtual training environment for robot view adjustment.
\item A method to utilize an omnidirectional camera to support tracking and DRL view selection.
\item An interactive modular robot framework design that supports automatically capturing high-quality human portraits.
\end{enumerate}
\section{Related Work}
Autonomous cameras for both virtual and real photography have long been explored. We refer the reader to the review~\cite{Chen2014} that summarizes autonomous cameras from the viewpoint of camera planning, controlling, and target selecting. Galvane et al.~\cite{Galvane2013, Galvane2014, Galvane2015} also provided several studies on automatic cinematography and editing for virtual environments.
One of the earliest robotic cameras was introduced in~\cite{Pinhanez1995,Pinhanez1997}. This camera allows intelligent framing of subjects and objects in a TV studio upon verbal request and script information. Byers et al.~\cite{Byers2003} developed probably the first robot photographer that can navigate with collision avoidance and frame image using face detection and predefined rules. They expanded their work by discussing their observations, experiences, and further plans with the robot in follow up studies~\cite{Smart2003,Byers2004}. Four principles from photography perspective~\cite{Grill1990} were applied to their system and to many later robot photographer systems. In particular they implemented the rule of thirds, the empty-space rule, the no-middle rule, and the edge rule.
The follow-up work~\cite{Ahn2006} extended the work of Byers et al.~\cite{Byers2003,Smart2003,Byers2004} by making the robot photographer interactive with users. A framework introduced in~\cite{Zabarauskas2014} described RGB-D data-based solutions with a Microsoft Kinect, and the capability of detecting direction via human voice recognition was added in~\cite{Kim2010}. Campbell et al.~\cite{Campbell2005} introduced a mobile robot system that can automate group-picture-framing by applying optical flow and motion parallax techniques. The photo composition in most of these studies relies on heuristic composition rules or similar techniques. Such settings usually do not adequately account for the correlation between the foreground and the background, as well as the light and color effects of the entire image. They also lack a generalizable framework for varied application scenarios.
Recent studies use UAV technology for drone photography trajectory planning ~\cite{Gebhardt2016, Richter2016, Roberts2016} and drone photography system redesign~\cite{Kang2017, Lan2017}. Although these studies can make it easier for users to obtain high-quality photos, the methods usually only provide an auxiliary semi-automated photo taking process. Users still need to use subjective composition principles to get satisfactory photos.
One crucial part in autonomous
camera photography is view selection (\emph{i.e.}~framing). Besides the composition principles~\cite{Grill1990} that are widely used on robot photography ~\cite{Gadde2011,Gooch2001,Cavalcanti2006,Banerjee2007} for view selection, there are many recent efforts
in photo quality evaluation and aesthetics analysis using computational methods~\cite{Datta2006,Dhar2011,Ke2006,Luo2011,Nishiyama2011}. These methods use machine learning algorithms to learn aesthetic models for image aesthetic evaluation.
The recent advances in deep learning further elevate the research in this direction. Large datasets containing photo rankings based on aesthetics and attributes have been curated~\cite{Murray2012,Kong2016,Wei2018}, and they allow training deep neural networks to predict photo aesthetics levels and composition quality~\cite{Lu2015,Lu2014,Marchesotti2015,Mai2016,Kang2014,Wei2018}. Compared with aesthetic evaluation methods based on pre-set rules, these data-driven approaches can handle more general and complex scenes where such rules cannot simply be applied. Therefore, we embrace deep learning based aesthetic models for view selection in our system.
The success in Atari 2600 video games \cite{Mnih2015} and AlphaGo \cite{Silver2016} showed the power of DRL in solving decision-making problems. DRL also enables end-to-end learning of control policies for virtual agents \cite{Peng2017, Hodgins2017} and real robots \cite{Gu2017, Hwangbo2017}. Our study can be simplified as a visual servoing problem of robot spatial navigation. Several DRL-driven visual servoing \cite{Levine2016, Levine2018} and navigation \cite{Mirowski2016, Zhu2017} scenarios have been well studied to allow autonomous agents to interact with their environments. This previous research proved the capability and aptness of using DRL to learn optimal behaviors effectively and efficiently.
\section{System Overview}
We designed the framework so that the robot can interact with the user, track the user to the desired location, and adjust its position to take a well-composed portrait. The framework will be systematically discussed from both the hardware and software perspectives.
\subsection{Hardware}
The hardware of the entire framework consists of eight major components shown in Figure~\ref{fig:sohw} and the devices with corresponding models are listed in Table~\ref{tab:sohw}.
\begin{figure}[!hbt]
\centering
\includegraphics[width=\linewidth]{images/system_overview_hw}
\caption{Hardware system overview. }
\label{fig:sohw}
\end{figure}
\begin{table}[!hbt]
\caption{The hardware configuration.}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|c|c|}
\hline
\textbf{Device} & \textbf{Model}\\ \hline
360 Camera & Ricoh Theta S\\ \hline
Web Camera & Logitech Brio 4K\\ \hline
RGB-D Camera & Orbbec Astra\\ \hline
Tablet Computer & iPad\\ \hline
Distance Sensors & TeraRanger Tower\\ \hline
Mobile Base & Yujin Turtlebot 2\\ \hline
& CPU: Intel Core [email protected]\\
Onboard PC & RAM: 8GB\\
& GPU: Intel Iris Graphics 6100\\ \hline
& CPU: Intel Xeon [email protected]\\
Remote PC & RAM: 16GB \\
& GPU: Nvidia Geforce GTX 1080\\ \hline
\end{tabular}
}
\label{tab:sohw}
\end{table}
\subsection{Software}
The software framework is built on top of \textit{Robot Operating System} (ROS)~\cite{Quigley2009}, and the architecture is shown in Figure~\ref{fig:sosw}.
The \textit{Core Node} runs on the on-board PC, and it is in charge of communication between the nodes controlling the different hardware and software components.
The \textit{Kinematic Node} controls the linear and angular motion of the robot, and avoids collision with obstacles.
The \textit{Camera Node} provides the framework with vision abilities such as real-time video streaming and photo shooting. The Kinematic and Camera nodes both reside on the on-board PC. Vision is essential to the robot photographer and relates to two major modes: \textit{Following Mode} and \textit{Photographing Mode}. The two modes can be activated and switched through the Interaction Node, which is presented as an iPad application.
The application has a \textit{Graphical User Interface} (GUI) that takes touch gestures and human poses as input, and gives graphical results and voice prompts as output (Section~\ref{sec:interaction}).
The two modes are implemented with two separate nodes. The \textit{Tracker Node} (Section~\ref{sec:tracker}) is deployed to the remote PC, and analyzes the real-time video stream using Person Detection and Re-Identification neural network models to identify and follow the user. The tracking is made with the help of wide-view panorama images provided by the 360 camera and the depth information supported by the RGB-D camera. The \textit{Composer Node} also resides on the remote PC. The Composer Node (Section~\ref{sec:composer}) utilizes a Deep Neural Network (DNN) Composition model to determine the best target view (template), adjusts the robot towards the target view with a Deep Reinforcement Learning (DRL) Template Matching model, and finally shoots the target and selects the best photo from the candidates with a DNN Best Frame Selection model.
\begin{figure}[!hbt]
\centering
\includegraphics[width=\linewidth]{images/system_overview_sw}
\caption{Software framework overview.}
\label{fig:sosw}
\end{figure}
\section{Implementation}\label{sec:sw}
\subsection{The Tracker}
\label{sec:tracker}
The \textit{Tracker Module} is responsible for the spatial tracking of the user. It allows the robot to generate instructions for the \textit{Kinematic Node} to follow the user. The architecture of the \textit{Tracker Module} is presented in Figure~\ref{fig:tracker_block}.
\begin{figure}[hbt!]
\centering
\includegraphics[width=\linewidth]{images/tracker_block}
\caption{The tracker module architecture.}
\label{fig:tracker_block}
\end{figure}
Our tracker module utilizes the images from the 360 and RGB-D cameras. The panorama image provides a $360^{\circ}$ view for omni-directionally searching and identifying the user around the robot. When the user is located, the robot rotates until the user is centered in the RGB-D camera view. The depth information is used to retrieve the distance between the robot and the target. The distance value determines the linear tracking velocity of the robot. A velocity smoother~\cite{Yujin2018} is used to control the robot's acceleration. Tracking is not activated when the user is within the ``operating zone'', i.e., within $0.5$ meters of the robot. When obstacles closer than $0.5$ meters are detected by the obstacle-avoidance (OA) sensors, the linear velocity of the robot decreases to zero. When the tracking target is missing from the view, the robot stops and waits for the target to appear or the tracker to be reset.
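The tracking behaviour described above can be summarised as a simple control rule. The following sketch is purely illustrative: the $0.5$~m thresholds come from the description above and the velocity caps match the values reported in Section~\ref{sec:experiment}, whereas the gains, names, and interfaces are assumptions rather than the actual \textit{Tracker Node} implementation.
\begin{verbatim}
# Illustrative sketch of the tracking velocity logic; gains are assumptions.
OPERATING_ZONE_M = 0.5   # tracking is inactive inside this radius
OBSTACLE_STOP_M = 0.5    # obstacles closer than this force zero linear velocity
MAX_LINEAR_MPS = 0.15    # consistent with the test settings reported later
MAX_ANGULAR_RPS = 0.5

def tracking_command(target_bearing_rad, target_distance_m, min_obstacle_m):
    """Return (linear, angular) velocity commands for one control step.

    target_bearing_rad: angle of the user w.r.t. the camera axis (0 = centred)
    target_distance_m:  distance to the user from the depth image, None if lost
    min_obstacle_m:     closest reading from the distance sensors
    """
    if target_distance_m is None:
        return 0.0, 0.0  # target lost: stop and wait for it to reappear

    # Rotate until the user is centred in the RGB-D view (proportional control).
    angular = max(-MAX_ANGULAR_RPS, min(MAX_ANGULAR_RPS, 1.5 * target_bearing_rad))

    # Linear speed grows with distance, but only outside the operating zone
    # and only when no obstacle is too close.
    if target_distance_m <= OPERATING_ZONE_M or min_obstacle_m < OBSTACLE_STOP_M:
        linear = 0.0
    else:
        linear = min(MAX_LINEAR_MPS, 0.3 * (target_distance_m - OPERATING_ZONE_M))
    return linear, angular
\end{verbatim}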
The \textit{Tracker Node} uses YOLO~\cite{Redmon2016} to generate candidate person bounding boxes for the input panoramic images. The candidate person images are cropped out and broadcast to the \textit{ReId Node}, which uses the person ReId model from \cite{Hermans2017} to compare the reference person image with each candidate person image and predict cosine similarity scores. The candidate person most similar to the ground-truth person is considered to be the target, subject to a drop threshold ($0.80$). The reference person image is initially set when the tracker is activated by the user, and continuously updated when the score of a candidate is higher than the threshold ($0.95$). The processing time for the tracker node is about $0.04$ seconds for a $1440\times360$ input image. The velocity commands generated by the \textit{Tracker Node} are broadcast and received by the \textit{Kinematic Node} for robot motions.
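A minimal sketch of this candidate selection step is given below. It assumes that the ReId network exposes one embedding vector per person crop; the two thresholds are the ones quoted above, while the function names and tie handling are illustrative only.
\begin{verbatim}
import numpy as np

ACCEPT_THRESHOLD = 0.80   # minimum similarity to treat a candidate as the target
UPDATE_THRESHOLD = 0.95   # similarity above which the reference image is refreshed

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def select_target(reference_embedding, candidate_embeddings):
    """Pick the candidate most similar to the reference person.

    Returns (index, similarity, update_reference); index is None when no
    candidate clears the acceptance threshold.
    """
    if len(candidate_embeddings) == 0:
        return None, 0.0, False
    scores = [cosine_similarity(reference_embedding, c)
              for c in candidate_embeddings]
    best = int(np.argmax(scores))
    if scores[best] < ACCEPT_THRESHOLD:
        return None, scores[best], False
    return best, scores[best], scores[best] > UPDATE_THRESHOLD
\end{verbatim}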
\begin{figure}[!hbt]
\centering
\includegraphics[width=0.99\linewidth]{images/reid}
\caption{\textit{Person ReId} results. The ground truth person image in (a) is the query person to track. Part (b) demonstrates the proposed bounding boxes of candidates, and their similarity scores using the joint ReId model~\cite{Xiao2017}. The reference person images in (c) are generated with YOLO~\cite{Redmon2016}. The similarity scores in (c) are predicted with the triplet-loss ReId model~\cite{Hermans2017}. Both methods can provide correct results. Note that the scores in (b) and (c) are not normalized to the same scale.}
\label{fig:reid}
\end{figure}
\subsection{The Composer}\label{sec:composer}
After the robot tracks the user to a desired photography location, photo composing can be activated to automatically adjust the robot position and take photos for the user based on (a) the static pre-defined templates (Figure~\ref{fig:pre-template}a) or (b) dynamically proposed well-composed views (Figure~\ref{fig:composition}d). The pipeline for the photo composing process is described in Figure~\ref{fig:photographerpipline}.
\begin{figure}[!hbt]
\centering
\includegraphics[width=0.99\linewidth]{images/photographer_pipeline}
\caption{The pipeline of photo composing.}
\label{fig:photographerpipline}
\end{figure}
\subsubsection{Template Generation}
A template contains information that is used to guide the robot in composing a photo that satisfies the user. We use three pieces of information from a template: (a) the location of the person in the photo, (b) the size of the person in the photo, and (c) the pose of the person in the photo. The template can be chosen manually or generated automatically.
There is a set of pre-defined templates varying in location, size, and pose for the user to manually pick from the system. Figure~\ref{fig:pre-template}a demonstrates a pre-defined template with a cartoon avatar. With this template set, the robot moves around to compose the final photo that matches the template (Section~\ref{sec:matching}) for the user (Figure~\ref{fig:pre-template}b). The pre-defined template is not necessarily a cartoon image; the system supports any single person photo with proper aspect ratio.
The system also provides a dynamic template generation solution for photo composing. The novel modular solution enables autonomous photographing with the robot. The solution requires the panorama photos from the 360 camera and the final capture with the high-quality webcam. An example panorama photo is shown in Figure~\ref{fig:unwarp}a. The panorama photo is cropped and remapped with the method described in~\cite{Mo2018} to form a collection of candidate templates (Figure~\ref{fig:unwarp}b). The candidate templates are generated with different levels of distance and yaw angles. Each candidate template is guaranteed to contain the person target. The remapping de-warps the images with the camera parameters of the webcam to make sure the view is reachable from the webcam.
The panorama photo processing is performed on the remote PC, and the procedure of finding and using the best template is shown in Figure~\ref{fig:composition}. Candidate templates (Figure~\ref{fig:composition}a) are passed through an off-the-shelf photo evaluation model (Figure~\ref{fig:composition}b). The View Evaluation Net presented in~\cite{Wei2018} is used in the system on the remote PC, which evaluates and scores photos based on composition (Figure~\ref{fig:composition}c). The candidate template with the highest score is chosen (Figure~\ref{fig:composition}d). Once the robot finishes Template Matching (Section~\ref{sec:matching}), the final shot is taken with the webcam (Figure~\ref{fig:composition}e). The system is designed to be modular, so that the photo evaluation model can be swapped as needed.
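Since the evaluation model is a swappable module, selecting the best candidate template reduces to scoring every candidate and taking the maximum, as the schematic sketch below illustrates; \texttt{evaluate\_composition} is a placeholder for whichever model (here, the View Evaluation Net) is plugged into the pipeline.
\begin{verbatim}
def choose_best_template(candidate_templates, evaluate_composition):
    """Score every candidate view and return the best one with its score.

    evaluate_composition is assumed to map an image to a scalar composition
    score; candidate_templates is the list of cropped, de-warped views.
    """
    scored = [(evaluate_composition(img), img) for img in candidate_templates]
    best_score, best_template = max(scored, key=lambda pair: pair[0])
    return best_template, best_score
\end{verbatim}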
\begin{figure}[!hbt]
\centering
\includegraphics[width=0.6\linewidth]{images/predefined_temp}
\caption{Pre-defined templates can be used for photo composing. (a) demonstrates an example of a pre-defined template. (b) shows the final capture with the template from (a).}
\label{fig:pre-template}
\end{figure}
\begin{figure*}[hbt]
\centering
\includegraphics[width=\linewidth]{images/unwarp}
\caption{The panorama photo in (a) is cropped and de-warped to form a collection of candidate templates in (b). }
\label{fig:unwarp}
\end{figure*}
\begin{figure}[!hbt]
\centering
\includegraphics[width=0.99\linewidth]{images/composition}
\caption{Finding the best template. Part (a) presents a collection of candidate templates. The candidate templates are passed through a modular photo evaluation network in (b). Scores are retrieved from the network for each candidate template in (c). The template with the highest score is chosen to be the template in (d). The final photo is composed with the webcam in (e).}
\label{fig:composition}
\end{figure}
\subsubsection{Template Matching}\label{sec:matching}
The template matching is a process of making the position, size, and pose of the target person in the webcam view as similar as possible to the person in the template by moving the robot to the appropriate location. The similarity can be estimated by comparing the distance of human pose key-points between the current webcam view and the template. OpenPose~\cite{Wei2016} is used to extract the pose key-points and Figure~\ref{fig:matching} demonstrates a simple example of template matching procedures. The template is shown in the bottom right corner in each camera view. Figure~\ref{fig:matching}a is the initial camera view of the robot. The robot first turns right. The target is centered in the camera view as shown in Figure~\ref{fig:matching}b. The robot then moves forward. The target becomes bigger as shown in Figure~\ref{fig:matching}c. The robot moves forward again. The size of the target increases as Figure~\ref{fig:matching}d presents. The robot eventually turns right to reach the end state. The target matches the template with small error below the threshold as shown in Figure~\ref{fig:matching}e.
\begin{figure}[!hbt]
\centering
\includegraphics[width=\linewidth]{images/template_matching}
\caption{A step-by-step template matching example.}
\label{fig:matching}
\end{figure}
The goal of template matching is to get the robot to capture the desired view with the fixed webcam. The input is the visual information from the webcam video stream. The output is a sequence of robot motor actions. The template matching can be simplified as a problem of robot spatial navigation. Deep Reinforcement Learning (DRL) has been successfully applied to the robot navigation problem in recent studies~\cite{Zhang2017,Mo2018b}. The advantage of using DRL in our framework is that such settings are more adaptive to changes than rule-based and geometric-based solutions. New policies can be simply re-trained with a tuned reward function.
In typical reinforcement learning settings, the agent receives $(s_{t}, a_{t}, r_{t}, s_{t+1})$ at each time $t$ when interacting with the environment, where $s_{t}$ is the current state, $a_{t}$ is the action the agent takes according to its policy $\pi$, $r_{t}$ is the reward produced by the environment based on $s_{t}$ and $a_{t}$, and $s_{t+1}$ is the next state after transitioning through the environment. The goal is to maximize the expected cumulative return $\mathbb{E}_{\pi}\left[ \sum_{i \geq 0}\gamma^{i}r(s_{t+i}, a_{t+i}) \right]$ from each state $s_{t}$, where $\gamma \in (0,1]$ is the discount factor.
For the template matching, the actions are robot discrete linear and angular velocities. The reward function can be set as an exponential function in Equation~\ref{eqn:reward}.
\begin{eqnarray}
r &=& \mathrm{e}^{-\alpha||v - v^\prime||},\label{eqn:reward}
\end{eqnarray}
where $v$ and $v^\prime$ represent the current target keypoint vector and the goal template keypoint vector extracted with OpenPose~\cite{Wei2016}, respectively, and $\alpha$ is a constant factor ($2.5 \times 10^{-3}$) that adjusts the $L^{2}$ distance scale. The DRL training and experiments are discussed in Section~\ref{sec:experiment}.
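For illustration, the reward of Equation~\ref{eqn:reward} can be computed as in the sketch below; the key-point vectors are assumed to be flat arrays of the selected OpenPose coordinates, and any normalisation or handling of missing joints is omitted.
\begin{verbatim}
import numpy as np

ALPHA = 2.5e-3  # scale factor of the reward function

def template_matching_reward(current_keypoints, template_keypoints, alpha=ALPHA):
    """Reward in (0, 1]: equals 1 when the detected pose key-points
    coincide with those of the template."""
    v = np.asarray(current_keypoints, dtype=float)
    v_goal = np.asarray(template_keypoints, dtype=float)
    return float(np.exp(-alpha * np.linalg.norm(v - v_goal)))
\end{verbatim}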
The architecture of the composer module is demonstrated in Figure~\ref{fig:composer_block}. The \textit{Observer} receives a webcam image from the \textit{Camera Node} and extracts human pose key-points as an observation. The \textit{Composer Node} passes the observation to the \textit{Action Parser Node}, which uses a DRL model to predict the next robot action based on the current observation. When an action is decided, the \textit{Action Parser Node} sends it back to the \textit{Composer Node}. The \textit{Responder} parses the robot action into the corresponding velocity commands and broadcasts them for execution. The composer module runs on the remote PC.
\subsubsection{The Final Photo Acquisition}
After the current webcam view matches the template, the robot starts capturing photos. The webcam shutter can be triggered with a special pose if a pre-defined pose template is manually selected at the beginning of composing (Figure~\ref{fig:trigger}). The pose triggering is implemented by comparing the similarity of pose key-point~\cite{Wei2016} coordinates between the person in the template and the person in the webcam view. A set of candidate photos is taken during the photo acquisition. The user can manually select the best photo among the candidates, or a best frame selection model such as~\cite{Ren2018} can automatically decide the best photo for the user based on frame or face scores.
\begin{figure}[!hbt]
\centering
\includegraphics[width=0.6\linewidth]{images/pose_trigger}
\caption{Triggering camera shutter with a pose. The template in (a) shows a pre-defined pose template. The photo in (b) shows the target making the pose to trigger the webcam shutter. }
\label{fig:trigger}
\end{figure}
\begin{figure}[!hbt]
\centering
\includegraphics[width=0.99\linewidth]{images/composer_block}
\caption{The composer module architecture.}
\label{fig:composer_block}
\end{figure}
\subsection{Interaction Node}
\label{sec:interaction}
The \textit{Interaction Node} is deployed on an iPad as an iOS application written in Objective-C. It provides the GUI to the user for Human-Robot Interaction. The application has two modes: the \textit{Following Mode} and the \textit{Photographing Mode}. When the \textit{Following Mode} is activated, the \textit{Tracker Module} is executed. When the \textit{Photographing Mode} is activated, the \textit{Composer Module} is executed. The touch screen takes user input and gives the user on-screen graphical output or text-to-speech prompts. The details of the Interaction Node are shown in the supplementary video.
\section{Experiments}\label{sec:experiment}
The virtual training environment for Robot Template Matching View Adjustment was set up with synthetic images cropped and processed from panorama photos sampled in real scenes. We trained the view adjustment network with two implementations of DRL. The model was optimized by using action memory and adaptive velocity. We tested the robot photographer in 5 real scenes.
\subsection{Data Preparation}
A Ricoh Theta S 360 camera was used to take panorama photos on a $5\times5$ grid mat (Figure~\ref{fig:data_collection}). One photo was taken at each grid point. The distance between adjacent grid points was $20$ cm. In each scene, 1--5 printed QR codes were placed. A total of 25 photos were collected at each scene, and a total of 15 indoor scenes were selected.
Figure~\ref{fig:training_data} indicates the procedure for setting up the virtual training environment. Each $360^{\circ}$ photo (Figure~\ref{fig:training_data}a) was cropped every $15^{\circ}$ into 24 ``rotation images'' (Figure~\ref{fig:training_data}b), so there are $600$ images for each scene. A person in a short video (Figure~\ref{fig:training_data}c) was cropped out as a collection of frames with minor differences in pose; 30 person videos were processed. One random frame of the cropped person was attached to the location of the QR code in each rotation image (Figure~\ref{fig:training_data}d). Frames of the same person were used for the same QR code in a scene. The scale of the person frame was determined by the size of the QR code in the photo. An OpenAI Gym \cite{Brockman2016} virtual environment was set up with the $600$ synthesized images for each scene (Figure~\ref{fig:training_data}e). The robot rotation action was simulated with the 24 rotation images from the same $360^{\circ}$ photo. The robot translation action was simulated with crops of adjacent $360^{\circ}$ photos with the same yaw angle. Nearest-neighbour snapping was used in the translation simulation. Equation~\ref{eqn:translation} shows the location calculation for a robot translation action.
\begin{eqnarray}
\begin{bmatrix} x\\ y \end{bmatrix} &=& \nint{\begin{bmatrix} x^\prime\\y^\prime\end{bmatrix} + \delta \begin{bmatrix} \sin\theta\\\cos\theta\end{bmatrix}}
\label{eqn:translation}
\end{eqnarray}
where $[x^\prime,y^\prime]^T$ denotes the robot's initial location coordinates, and $[x, y]^T$ denotes its location coordinates after a translation action. $\theta$ is the yaw angle, and $\delta$ is the signed step scalar that represents the size and direction (forward or backward) of one translation action.
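A sketch of one simulated translation step is given below. The snapping follows Equation~\ref{eqn:translation}; the clipping to the $5\times5$ grid is our own assumption to keep the simulated agent on the mat and is not part of the equation.
\begin{verbatim}
import numpy as np

GRID_SIZE = 5  # panorama photos were taken on a 5x5 grid

def simulate_translation(x_prev, y_prev, yaw_rad, delta):
    """Snap a simulated translation action to the nearest grid point.

    delta is the signed step length in grid units (positive = forward,
    negative = backward); the result is clipped to the 5x5 grid.
    """
    x = round(x_prev + delta * np.sin(yaw_rad))
    y = round(y_prev + delta * np.cos(yaw_rad))
    x = int(min(max(x, 0), GRID_SIZE - 1))
    y = int(min(max(y, 0), GRID_SIZE - 1))
    return x, y
\end{verbatim}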
\subsection{Training}
The DRL model was trained with the Advantage Actor Critic (A2C) method, a synchronous deterministic variant of Asynchronous Advantage Actor Critic (A3C)~\cite{Mnih2016}, and with the Actor Critic using Kronecker-Factored Trust Region (ACKTR)~\cite{Wu2017} method, using a PyTorch implementation~\cite{Kostrikov2018}. The environment observation was set to be a vector that contains the selected pose key-point coordinates extracted with OpenPose~\cite{Wei2016}. The robot translation and rotation actions were expressed as a one-hot vector array.
In the training, we noticed that the average total rewards of A2C and ACKTR were always on the same level. However, ACKTR tended to converge at an earlier stage than A2C (Figure~\ref{fig:reward_vel} and Figure~\ref{fig:reward_mem}). We experimentally improved the average total rewards by using two methods: (a) adaptive velocity and (b) action memory. The robot actions were encoded as linear and angular velocities in a one-hot vector array. If we increase the array size by offering several velocity levels for the agent to sample from, a slightly higher average total reward can be reached for both A2C and ACKTR, as Figure~\ref{fig:reward_vel} shows. Also, if we add the previous actions made by the agent to the current observation as memory, both A2C and ACKTR achieve higher average total rewards, as Figure~\ref{fig:reward_mem} shows.
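The action-memory augmentation can be sketched as follows; $k=5$ matches the setting used in Figure~\ref{fig:reward_mem}, while the shapes and encoding details are assumptions for illustration.
\begin{verbatim}
from collections import deque
import numpy as np

class ActionMemoryObservation:
    """Append the last k one-hot actions to the pose key-point observation."""

    def __init__(self, num_actions, memory_length=5):
        self.num_actions = num_actions
        self.memory = deque([np.zeros(num_actions)] * memory_length,
                            maxlen=memory_length)

    def record(self, action_index):
        one_hot = np.zeros(self.num_actions)
        one_hot[action_index] = 1.0
        self.memory.append(one_hot)

    def build(self, keypoint_vector):
        # Observation = pose key-points followed by the k most recent actions.
        return np.concatenate([np.asarray(keypoint_vector, dtype=float),
                               *self.memory])
\end{verbatim}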
\subsection{Testing}
We tested the robot photographer at three indoor scenes, and used it to take 20 photos at each scene (ten using a pre-defined template and ten using a dynamically generated template). Two of the 60 tests failed to conduct the final capture within 30 actions (one using a pre-defined template, one using a dynamically generated template) and were thus removed from the test samples. The linear velocity of the robot was set to be no more than $0.15$ m/s, and the angular velocity was set to be no more than $0.5$ rad/s. For the dynamically generated template tests, the number of actions has mean $\bar{X}=12.76$ and standard deviation $SD=4.67$, and the composing time has $\bar{X}=24.10, SD=8.57$. For the pre-defined template tests, the number of actions has $\bar{X}=11.20, SD=4.10$, and the composing time has $\bar{X}=22.11, SD=7.47$. An aesthetic evaluation of the resulting photos was omitted; for a related user study, we refer to the off-the-shelf composition model study~\cite{Wei2018}.
\begin{figure}[!hbt]
\centering
\includegraphics[width=0.99\linewidth]{images/data_collection}
\caption{Training data collection with a 360 camera on a $5\times5$ grid mat.}
\label{fig:data_collection}
\end{figure}
\begin{figure}[!hbt]
\centering
\includegraphics[width=\linewidth]{images/training_data}
\caption{Virtual training environment setup.}
\label{fig:training_data}
\end{figure}
\begin{figure}[!hbt]
\centering
\includegraphics[width=0.99\linewidth]{images/reward_vel}
\caption{Improvements on average total rewards in training. The figure shows a comparison of average total rewards with constant velocity and with 3-level adaptive velocity using A2C and ACKTR. }
\label{fig:reward_vel}
\end{figure}
\begin{figure}[!hbt]
\centering
\includegraphics[width=0.99\linewidth]{images/reward_mem}
\caption{Improvements on average total rewards in training. The figure shows a comparison of average total rewards with and without previous 5 actions as memory using A2C and ACKTR. }
\label{fig:reward_mem}
\end{figure}
\section{Conclusions}
We have developed a novel learning-based modular framework for robot photography. The framework allows the robot to take well-composed photographs of a person. The robot photographer has a GUI displayed on an attached iPad that gives voice prompts to the user. Our robot can track the user to a desired location. Then, it adjusts its position until the webcam view matches the best template portrait to capture. The best template is found by applying a modular photo evaluation aesthetic model to cropped images of a panorama photo. The view adjustment is driven by a DRL model based on template matching. A synthetic virtual environment was provided as the navigation training solution.
The system has several \textit{limitations}. It has limited on-board computation power, so it relies on a powerful remote PC to run the DNN models. The Turtlebot that serves as the basis of our system is relatively simple, with few degrees of freedom.
\textit{Future work} includes testing the solution by using a more complex robot. Also, our system currently supports only a single person portrait. New policies would need to be re-trained to get better support on taking group photos. In future work, we also would like to test different photo evaluation aesthetic models, and extend the work to outdoor scenes.
\section*{Acknowledgment}
This work was supported by Adobe Research. The authors would like to thank Suren Deepak Rajasekaran for the help on video editing, and Booker Smith for useful discussions. The authors also would like to thank the anonymous reviewers for valuable feedback.
The UEFA Champions League is probably the most prestigious annual association football (henceforth football) club competition around the world. Since the 2003/04 season, the tournament contains a single group stage with 32 teams, divided into eight groups of four. This phase is played in a double round-robin format, that is, each team meets the other three teams in its group once home and once away. The top two clubs from each group progress to the Round of 16. The third-placed clubs go to the UEFA Europa League, the second-tier competition of European club football, while the fourth-placed clubs are eliminated.
One of the most important responsibilities of sports governing bodies is to set the right incentives for the contestants \citep{Szymanski2003}. At first sight, they are almost guaranteed in the group stage of the Champions League since---according to the rules described above---every team benefits from being ranked higher in its group. However, the situation is not so simple as the following illustrations reveal.
\begin{table}[ht!]
\begin{threeparttable}
\centering
\caption{Ranking in Group B of the 2021/22 UEFA Champions League after Matchday 4}
\label{Table1}
\rowcolors{3}{}{gray!20}
\begin{tabularx}{\linewidth}{Cl CCC CCC >{\bfseries}C} \toprule \hiderowcolors
Pos & Team & W & D & L & GF & GA & GD & Pts \\ \bottomrule \showrowcolors
1 & Liverpool FC & 4 & 0 & 0 & 13 & 5 & $+8$ & 12 \\
2 & FC Porto & 1 & 2 & 1 & 3 & 6 & $-3$ & 5 \\
3 & Club Atl\'etico de Madrid & 1 & 1 & 2 & 4 & 6 & $-2$ & 4 \\
4 & AC Milan & 0 & 1 & 4 & 4 & 7 & $-3$ & 1 \\ \bottomrule
\end{tabularx}
\begin{tablenotes} \footnotesize
\item
Pos = Position; W = Won; D = Drawn; L = Lost; GF = Goals for; GA = Goals against; GD = Goal difference; Pts = Points. All teams have played four matches.
\end{tablenotes}
\end{threeparttable}
\end{table}
\begin{example} \label{Examp1}
Table~\ref{Table1} presents the standing of Group B in the 2021/22 Champions League with two rounds still to be played. Liverpool leads Porto by seven points, therefore, it will certainly win the group. Thus, Liverpool will probably play with little enthusiasm against its opponents on the last two matchdays, Porto (home) and Milan (away).
\end{example}
\begin{table}[ht!]
\begin{threeparttable}
\centering
\caption{Ranking in Group C of the 2021/22 UEFA Champions League after Matchday 5}
\label{Table2}
\rowcolors{3}{}{gray!20}
\begin{tabularx}{\linewidth}{Cl CCC CCC >{\bfseries}C} \toprule \hiderowcolors
Pos & Team & W & D & L & GF & GA & GD & Pts \\ \bottomrule \showrowcolors
1 & AFC Ajax & 5 & 0 & 0 & 16 & 3 & $+13$ & 15 \\
2 & Sporting Clube de Portugal & 3 & 0 & 2 & 12 & 8 & $+4$ & 9 \\
3 & Borussia Dortmund & 2 & 0 & 3 & 5 & 11 & $-6$ & 6 \\
4 & Be{\c s}ikta{\c s} JK & 0 & 0 & 5 & 3 & 14 & $-11$ & 0 \\ \bottomrule
\end{tabularx}
\begin{tablenotes} \footnotesize
\item
Pos = Position; W = Won; D = Drawn; L = Lost; GF = Goals for; GA = Goals against; GD = Goal difference; Pts = Points. All teams have played five matches.
\end{tablenotes}
\end{threeparttable}
\end{table}
\begin{example} \label{Examp2}
Table~\ref{Table2} presents the standing of Group C in the 2021/22 Champions League with one round still to be played. If two or more teams are equal on points on completion of the group matches, their ranking is determined by higher number of points obtained in the matches played among the teams in question, followed by superior goal difference from the group matches played among the teams in question \citep[Article~17.01]{UEFA2021c}.
Since the result of Borussia Dortmund vs.\ Sporting CP (Sporting CP vs.\ Borussia Dortmund) has been 1-0 (3-1), and Sporting CP has an advantage of three points over Borussia Dortmund, Sporting CP is guaranteed to be the runner-up. Furthermore, Ajax is the group winner and Be{\c s}ikta{\c s} is the fourth-placed team. Consequently, the outcomes of the games played in the last round do not influence the group ranking at all.
\end{example}
According to Examples~\ref{Examp1} and \ref{Examp2}, a club might lose a powerful incentive to exert full effort in some matches towards the end of the competition, especially if it focuses mainly on qualification. Therefore, team $i$ may field weaker players and take into account other factors such as resting before the next match in its domestic championship. Since this would be unfair for the teams that played against team $i$ when it had stronger incentives to win, the organiser---the Union of European Football Associations (UEFA)---should avoid these games to the extent possible.
This study aims to find the schedule for the group stage of the UEFA Champions League that is optimal for competitiveness by minimising the probability of stakeless games where one or both clubs cannot achieve a higher rank.
Our main contributions can be summarised as follows:
\begin{itemize}
\item
A reasonable statistical method is chosen to simulate group matches in the Champions League (Section~\ref{Sec32});
\item
Games are classified into three categories based on their level of competitiveness (Section~\ref{Sec33});
\item
In the absence of further information, five candidate schedules are identified by ``reverse engineering'' the 2021/22 Champions League (Section~\ref{Sec34});
\item
The alternative schedules are compared with respect to the probability of each match type (Section~\ref{Sec4}).
\end{itemize}
The remainder of the paper is organised as follows.
Section~\ref{Sec2} gives a concise overview of literature. The theoretical background is detailed in Section~\ref{Sec3}. Section~\ref{Sec4} provides the results of the simulations, while Section~\ref{Sec5} contains conclusions and reflections.
\section{Related literature} \label{Sec2}
The Operational Research (OR) community devotes increasing attention to optimising the design of sports tournaments \citep{Csato2021a, KendallLenten2017, LentenKendall2021, Wright2014}. One of the challenges is choosing a schedule that is fair for all contestants both before and after the matches are played \citep{GoossensYiVanBulck2020}. The traditional issues of fairness in scheduling are the number of breaks (two consecutive home or away games), the carry-over effect (which is related to the previous game of the opponent), and the number of rest days between consecutive games. They are discussed in several survey articles \citep{GoossensSpieksma2012b, KendallKnustRibeiroUrrutia2010, RasmussenTrick2008, Ribeiro2012}.
The referred studies usually consider the teams as nodes in graphs. However, they are strategic actors and should allocate their limited effort throughout the contest, or even across several contests. Researchers have recently begun to take similar considerations into account. \citet{KrumerMegidishSela2017a} investigate round-robin tournaments with a single prize and either three or four symmetric players. In the subgame perfect equilibrium of the contest with three players, the probability of winning is maximised for the player who competes in the first and the last rounds. This result holds independently of whether the asymmetry is weak or strong, but the probability of winning is the highest for the player who competes in the second and the third rounds if there are two prizes \citep{KrumerMegidishSela2020a}. In the subgame perfect equilibrium of the contest with four players, the probability of winning is maximised for the player who competes in the first game of both rounds. These theoretical findings are reinforced by an empirical analysis, which includes the FIFA World Cups and the UEFA European Championships, as well as two Olympic wrestling events \citep{KrumerLechner2017}.
Some papers have attempted to determine the best schedule for the FIFA World Cup, a global sporting event that attracts one of the highest audiences. \citet{Stronka2020} focuses on the temptation to lose, resulting from the desire to play against a weaker opponent in the first round of the knockout stage. This danger is found to be the lowest if the strongest and the weakest competitors meet in the last (third) round.
Inspired by the format of the 2026 FIFA World Cup, \citet{Guyon2020a} quantifies the risk of collusion in groups of three teams, where the two teams playing the last game know exactly what results let them advance. The author identifies the match sequence that minimises the risk of collusion.
\citet{ChaterArrondelGayantLaslier2021} develop a general method to evaluate the probability of any situation in which the two opposing teams do not play competitively, and apply it to the current format of the FIFA World Cup (a single round-robin contest of four teams). The scheduling of matches, in particular, the choice of teams playing each other in the last round, turns out to be crucial for obtaining exciting and fair games.
Analogously, the design of the UEFA Champions League has been the subject of several academic studies. \citet{ScarfYusofBilbao2009} compare alternative formats with 32 teams via simulations. \citet{KlossnerBecker2013} assess the financial consequences of the distorted mechanism used for the Round of 16 draw such that the strengths of the teams are measured by the UEFA club coefficients. \citet{DagaevRudyak2019} estimate the competitiveness changes caused by the seeding reform in the Champions League from the 2015/16 season. \citet{CoronaForrestTenaWiper2019} follow a Bayesian approach to uncover how the new seeding regime has increased the uncertainty over progression to the knockout stage. \citet{Csato2022b} analyses the impact of changing the Champions League qualification system from the 2018/19 season.
Regarding the identification of unwanted games, \citet{FaellaSauro2021} introduce the concept of irrelevant match---which does not influence the ultimate ranking of the teams involved---and prove that a contest always contains an irrelevant match if the schedule is static and there are at least five contestants. This notion is somewhat akin to our classes of matches discussed in Section~\ref{Sec1}.
A more serious form of a match-fixing opportunity is incentive incompatibility when a team can be strictly better off by losing. The first example of such a rule has been provided in \citet{DagaevSonin2018}. The same misallocation of vacant slots has been present in the UEFA Champions League qualification between the seasons of 2015/16 and 2017/18 \citep{Csato2019c} and ruins the seeding policy of the Champions League group stage since the 2015/16 season \citep{Csato2020a}. Even though all these instances can be eliminated by minor modifications, the lack of strategy-proofness in the recent qualifications for the UEFA European Championship \citep{HaugenKrumer2021} and FIFA World Cup can be mitigated only by additional draw constraints \citep{Csato2022a}. The current paper joins this line of research by emphasising that a good schedule is also able to reduce the threat of tacit collusion.
\section{Methodology} \label{Sec3}
For any sports competition, historical data represent only a single realisation of several random factors. Therefore, the analysis of tournament designs usually starts by finding a simulation technique that can generate the required number of reasonable results \citep{ScarfYusofBilbao2009}.
To that end, it is necessary to connect the teams playing in the tournament studied to the teams whose performance is already known. It is achieved by rating the teams, namely, by assigning a value to each team to measure its strength \citep{VanEetveldeLey2019}. This approach allows the identification of the teams by their ratings instead of their names.
UEFA widely uses such a measure, the UEFA club coefficient, to determine seeding in its club competitions. First, we provide some information on this statistics and the draw of the UEFA Champions League group stage in Section~\ref{Sec31}. After that, Section~\ref{Sec32} introduces and evaluates some simulation models, and Section~\ref{Sec33} describes how the games suffering from potential incentive problems can be selected. Finally, Section~\ref{Sec34} outlines some valid schedules of the Champions League groups.
\subsection{Seeding in the UEFA Champions League} \label{Sec31}
The UEFA club coefficient depends on the results achieved in the previous five seasons of the UEFA Champions League, the UEFA Europa League, and the UEFA Europa Conference League, including their qualifying \citep{UEFA2018g}. In order to support emerging clubs, the coefficient equals the association coefficient over the same period if it is higher than the sum of all points won in the previous five years.
For the draw of the Champions League group stage, a seeding procedure is followed to ensure homogeneity across groups. The 32 clubs are divided into four pots and one team is assigned from each pot to a group, subject to some restrictions: two teams from the same national association cannot play against each other, certain clashes are prohibited due to political reasons, and some clubs from the same country play on separate days where possible \citep{UEFA2021c}.
Seeding is based primarily on the UEFA club coefficients prior to the tournament. Before the 2015/16 season, Pot 1 consisted of the eight strongest teams according to the coefficients, Pot 2 contained the next eight, and so on. The only exception was the titleholder, guaranteed to be in Pot 1. In the three seasons between 2015/16 and 2017/18, the reigning champion and the champions of the top seven associations were in Pot 1. Since the 2018/19 season, Pot 1 contains the titleholders of both the Champions League and the Europa League, together with the champions of the top six associations. The other three pots are composed in accordance with the club coefficient ranking. The seeding rules are discussed in \citet{Csato2020a} and \citet[Chapter~2.3]{Csato2021a}. \citet{EngistMerkusSchafmeister2021} estimate the effect of seeding on tournament outcomes in European club football.
\input{Figure1_UEFA_club_coefficients_pots}
Figure~\ref{Fig1} plots the club coefficients for the 32 teams that participated in the Champions League group stage in four different seasons. As it has already been mentioned, Pot 1 does not contain the teams with the highest ratings in the 2015/16 and 2021/22 seasons.
\subsection{The simulation of match outcomes} \label{Sec32}
In football, the number of goals scored is usually described by Poisson distribution \citep{Maher1982, VanEetveldeLey2019}.
\citet{DagaevRudyak2019} propose such a model to evaluate the effects of the seeding system reform in the Champions League, introduced in 2015. Consider a single match between two clubs, and denote by $\lambda_{H,A}$ and $\lambda_{A,H}$ the expected number of goals scored by the home team $H$ and the away team $A$, respectively. The probability of team $H$ scoring $n_{H,A}$ goals against team $A$ is given by
\[
P_H \left( n_{H,A} \right) = \frac{\lambda_{H,A}^{n_{H,A}}\exp{\left( -\lambda_{H,A} \right)}}{n_{H,A}!},
\]
whereas the probability of team $A$ scoring $n_{A,H}$ goals against team $H$ is
\[
P_A \left( n_{A,H} \right) = \frac{\lambda_{A,H}^{n_{A,H}}\exp{\left( -\lambda_{A,H} \right)}}{n_{A,H}!}.
\]
In order to determine the outcome of the match, parameters $\lambda_{H,A}$ and $\lambda_{A,H}$ need to be estimated. \citet{DagaevRudyak2019} use the following specification:
\[
\log \left( \lambda_{H,A} \right) = \alpha_H + \beta_H \cdot \left( R_H - \gamma_H R_A \right),
\]
\[
\log \left( \lambda_{A,H} \right) = \alpha_A + \beta_A \cdot \left( R_A - \gamma_A R_H \right),
\]
with $R_H$ and $R_A$ being the UEFA club coefficients of the corresponding teams, and $\alpha_i, \beta_i, \gamma_i$ ($i \in \{ A,H \}$) being parameters to be optimised on a historical sample.
A simpler version containing four parameters can be derived by setting $\gamma_H = \gamma_A = 1$.
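For illustration, a single group match can be simulated from this specification as in the sketch below; the parameter values are taken as arguments (the estimates are reported in Table~\ref{Table3}), and setting both $\gamma$ parameters to one recovers the 4-parameter variant.
\begin{verbatim}
import numpy as np

def simulate_match(r_home, r_away, params, rng=None):
    """Draw one (home goals, away goals) pair from the independent Poisson model.

    params = (alpha_H, alpha_A, beta_H, beta_A, gamma_H, gamma_A); the ratings
    r_home and r_away are UEFA club coefficients or pot numbers, depending on
    the model variant.
    """
    alpha_h, alpha_a, beta_h, beta_a, gamma_h, gamma_a = params
    rng = np.random.default_rng() if rng is None else rng
    lam_home = np.exp(alpha_h + beta_h * (r_home - gamma_h * r_away))
    lam_away = np.exp(alpha_a + beta_a * (r_away - gamma_a * r_home))
    return int(rng.poisson(lam_home)), int(rng.poisson(lam_away))
\end{verbatim}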
We have studied two options for quantifying the strength of a club: the UEFA club coefficient and the seeding pot from which the team is drawn in the Champions League group stage. The latter can take only four different values. Furthermore, each group is guaranteed to consist of one team from each pot, consequently, the dataset contains the same number of teams for each possible value, as well as the same number of matches for any pair of ratings.
Assuming the scores to be independent may be too restrictive because the two opposing teams compete against each other. Thus, if one team scores, then the other will exert more effort to score \citep{KarlisNtzoufras2003}. This correlation between the numbers of goals scored can be accounted for by a bivariate Poisson distribution, which introduces an additional covariance parameter $c$ that reflects the connection between the scores of teams $H$ and $A$ \citep{VanEetveldeLey2019}.
To sum up, five model variants are considered:
\begin{itemize}
\item
6-parameter Poisson model based on UEFA club coefficients (6p coeff);
\item
4-parameter Poisson model based on UEFA club coefficients (4p coeff);
\item
6-parameter Poisson model based on pot allocation (6p pot);
\item
4-parameter Poisson model based on pot allocation (4p pot);
\item
7-parameter bivariate Poisson model based on UEFA club coefficients (Bivariate).
\end{itemize}
\begin{table}[t!]
\centering
\caption{Model parameters estimated by the maximum likelihood method \\ on the basis of Champions League seasons between 2003/04 and 2019/20}
\label{Table3}
\rowcolors{3}{}{gray!20}
\begin{tabularx}{\textwidth}{l CCCCCC c} \toprule
Model & $\alpha_H$ & $\alpha_A$ & $\beta_H$ & $\beta_A$ & $\gamma_H$ & $\gamma_A$ & $c$ \\ \bottomrule
6p coeff & 0.335 & 0.087 & 0.006 & 0.006 & 0.833 & 0.963 & --- \\
4p coeff & 0.409 & 0.102 & 0.006 & 0.006 & --- & --- & --- \\
6p pot & 0.464 & 0.143 & $-0.177$ & $-0.182$ & 0.91 & 0.922 & --- \\
4p pot & 0.424 & 0.108 & $-0.169$ & $-0.175$ & --- & --- & --- \\
Bivariate & 0.335 & 0.087 & 0.006 & 0.006 & 0.833 & 0.963 & $\exp \left( -12.458 \right)$ \\ \toprule
\end{tabularx}
\end{table}
All parameters have been estimated by the maximum likelihood approach on the set of $8 \times 12 \times 17 = 1632$ matches played in the 17 seasons from 2003/04 to 2019/20. They are presented in Table~\ref{Table3}.
The optimal value of $c$, the correlation parameter of the bivariate model, is positive but almost zero, hence, the bivariate Poisson model does not improve accuracy. This is in accordance with the finding of \citet{ChaterArrondelGayantLaslier2021} for the group stage of the FIFA World Cup. The reason is that the bivariate Poisson model is not able to capture a negative correlation between its components; however, the goals scored by the home and away teams are slightly negatively correlated in our dataset.
The performance of the models has been evaluated on two disjoint test sets, the seasons of 2020/21 and 2021/22. They are treated separately because most games in the 2020/21 edition were played behind closed doors owing to the COVID-19 pandemic, which might significantly affect home advantage \citep{BenzLopez2021, BrysonDoltonReadeSchreyerSingleton2021, FischerHaucap2021}.
Two metrics have been calculated to compare the statistical models. \emph{Average hit probability} measures how accurately a model can determine the exact score of a match: we compute the probability of the actual outcome, sum up these probabilities across all matches in the investigated dataset, and normalise this value by the number of seasons. A simple baseline model serves as a benchmark, where the chances are determined by the relative frequencies in the training set, the seasons from 2003/04 to 2019/20.
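A sketch of this metric is given below; the data structures are placeholders for however the matches are stored, and the probability of an observed score follows the independent Poisson specification above.
\begin{verbatim}
from math import exp, factorial

def poisson_pmf(k, lam):
    return lam ** k * exp(-lam) / factorial(k)

def average_hit_probability(matches, predict_lambdas, n_seasons):
    """Sum the model probability of every observed final score,
    normalised by the number of seasons.

    matches: iterable of (home_goals, away_goals, match_info) tuples;
    predict_lambdas: maps match_info to (lambda_home, lambda_away).
    """
    total = 0.0
    for home_goals, away_goals, info in matches:
        lam_h, lam_a = predict_lambdas(info)
        total += poisson_pmf(home_goals, lam_h) * poisson_pmf(away_goals, lam_a)
    return total / n_seasons
\end{verbatim}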
\begin{table}[t!]
\centering
\caption{Average hit probability for the statistical models (\%)}
\label{Table4}
\begin{threeparttable}
\rowcolors{3}{gray!20}{}
\begin{tabularx}{0.8\textwidth}{L c CC} \toprule \hiderowcolors
\multirow{2}[0]{*}{Model} & \multicolumn{3}{c}{Season(s)} \\
& 2003/04--2019/20 & 2020/21 & 2021/22 \\ \bottomrule \showrowcolors
6p coeff & 7.016 (1) & 6.682 (2) & 6.148 (3) \\
6p pot & 6.869 (4) & 6.443 (5) & 6.297 (1) \\
4p coeff & 7.012 (3) & 6.683 (1) & 6.146 (5) \\
4p pot & 6.868 (5) & 6.456 (4) & 6.297 (1) \\
Bivariate & 7.016 (1) & 6.682 (2) & 6.148 (3) \\ \toprule
Baseline & 6.123 (6) & 5.482 (6) & 5.485 (6) \\ \bottomrule
\end{tabularx}
\begin{tablenotes} \footnotesize
\item
\emph{Baseline model}: The probability of any match outcome is determined by the relative frequency of this result in the training set (all seasons between 2003/04 and 2019/20).
\item
The ranks of the models are indicated in brackets.
\end{tablenotes}
\end{threeparttable}
\end{table}
The results are provided in Table~\ref{Table4}. The baseline model shows the worst performance, which is a basic criterion for the validity of the proposed methods. The bivariate Poisson variant does not outperform the 6-parameter Poisson based on UEFA club coefficients. Even though the club coefficient provides a finer measure of strength than the pot allocation, it does not result in a substantial improvement with respect to average hit probability.
The average hit probability does not account for whether the prediction fails by a small margin (the forecast is 2-2 and the actual result is 1-1) or is completely wrong (the forecast is 4-0 and the actual result is 1-3). However, there exists no straightforward ``distance'' among the possible outcomes. If the differences between the predicted and actual goals scored by the home and away teams are simply added, then the result of 2-2 will be farther from 1-1 than 2-1. But 1-1 and 2-2 are more similar than 1-1 and 2-1 from a sporting perspective since both 1-1 and 2-2 represent a draw. To resolve this issue, we have devised a distance metric on match outcomes, generated by the scalar product with a specific matrix and inspired by the concept of Mahalanobis distance \citep{deMaesschalckJouan-RimbaudMassart2000}.
\input{Table5_match_result_distance}
Let the final score of the game be $R_1 = (h_1, a_1)$, where $h_1$ is the number of goals for the home team, and $a_1$ is the number of goals for the away team. Analogously, denote by $R_2 = (h_2, a_2)$ the predicted result of this game. The distance between the two outcomes equals
\[
\Delta(R_1, R_2) = \sqrt{
\begin{bmatrix}h_1-h_2 & a_1-a_2\end{bmatrix}
\begin{bmatrix}1 & -9/10 \\ -9/10 & 1\end{bmatrix}
\begin{bmatrix}h_1-h_2\\a_1-a_2\end{bmatrix}
}.\footnote{~The value of $-9/10$ controls the relative cost of adding one goal for both teams. If it were equal to $-1$, then the prediction of 1-1 (2-1) instead of 0-0 (1-0) would not be penalised.}
\]
For instance, with the final score of 2-0 and the forecast of 1-2, $h_1 - h_2 = 1$ and $a_1 - a_2 = -2$, which leads to
\[
\Delta(R_1, R_2) = \sqrt{
\begin{bmatrix} 1 & -2 \end{bmatrix}
\begin{bmatrix} 1 + 2 \times 0.9 \\ -0.9 - 2 \times 1 \end{bmatrix}
} = \sqrt{2.8 - 2 \times (-2.9)} = \sqrt{8.6} \approx 2.933.
\]
The distances between the outcomes defined by this metric can be seen in Table \ref{Table5}. The measure is called \emph{distance of match scores} in the following.
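A short sketch of this metric is provided below; evaluating \texttt{score\_distance((2, 0), (1, 2))} reproduces the value $\sqrt{8.6} \approx 2.933$ computed above.
\begin{verbatim}
import numpy as np

M = np.array([[1.0, -0.9],
              [-0.9, 1.0]])  # the matrix used in the definition above

def score_distance(result, forecast):
    """Distance of match scores between an observed result and a forecast,
    both given as (home goals, away goals) pairs."""
    d = np.array(result, dtype=float) - np.array(forecast, dtype=float)
    return float(np.sqrt(d @ M @ d))
\end{verbatim}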
\begin{table}[t!]
\centering
\caption{Average distance of match scores for the statistical models}
\label{Table6}
\begin{threeparttable}
\rowcolors{3}{gray!20}{}
\begin{tabularx}{0.8\textwidth}{L c CC} \toprule \hiderowcolors
\multirow{2}[0]{*}{Model} & \multicolumn{3}{c}{Season(s)} \\
& 2003/04--2019/20 & 2020/21 & 2021/22 \\ \bottomrule \showrowcolors
6p coeff & 2.041 (3) & 2.199 (3) & 2.163 (3) \\
6p pot & 1.958 (2) & 2.068 (2) & 2.013 (2) \\
4p coeff & 2.054 (5) & 2.218 (5) & 2.175 (5) \\
4p pot & 1.957 (1) & 2.066 (1) & 2.012 (1) \\
Bivariate & 2.041 (3) & 2.199 (3) & 2.163 (3) \\ \toprule
Baseline & 2.095 (6) & 2.247 (6) & 2.21 (6) \\ \bottomrule
\end{tabularx}
\begin{tablenotes} \footnotesize
\item
\emph{Baseline model}: The probability of any match outcome is determined by the relative frequency of this result in the training set (all seasons between 2003/04 and 2019/20).
\item
The ranks of the models are indicated in brackets.
\end{tablenotes}
\end{threeparttable}
\end{table}
Table~\ref{Table6} evaluates the six statistical models (including the baseline) according to the average distances of match scores over three sets of games. In contrast to the average hit probability, now a lower value is preferred. There is only a minimal difference between the performance of the variants based on UEFA club coefficients and on pot allocation. Using six parameters instead of four does not improve accuracy. Since the schedule of group matches will depend on the pots of the teams and UEFA club coefficients are not able to increase the predictive power, we have opted for the 4-parameter Poisson model based on pot allocation to simulate the group matches played in the Champions League.
\input{Figure2_distribution_of_goals}
\begin{table}[t!]
\caption{Number of matches with a given outcome in the sample}
\label{Table7}
\begin{subtable}{\linewidth}
\centering
\caption{Seasons between 2003/04 and 2019/20}
\label{Table7a}
\rowcolors{1}{}{gray!20}
\begin{tabularx}{0.9\linewidth}{l CCCCC} \toprule
Final score & 0 & 1 & 2 & 3 & 4 \\ \bottomrule
0 & 115 & 109 & 83 & 48 & 20 \\
1 & 157 & 175 & 96 & 38 & 19 \\
2 & 138 & 139 & 76 & 25 & 5 \\
3 & 84 & 72 & 36 & 14 & 2 \\
4 & 44 & 22 & 18 & 5 & 2 \\ \toprule
\end{tabularx}
\end{subtable}
\vspace{0.5cm}
\begin{subtable}{\linewidth}
\centering
\caption{Season 2020/21}
\label{Table7b}
\rowcolors{1}{}{gray!20}
\begin{tabularx}{0.9\linewidth}{l CCCCC} \toprule
Final score & 0 & 1 & 2 & 3 & 4 \\ \bottomrule
0 & 5 & 3 & 8 & 4 & 4 \\
1 & 6 & 9 & 7 & 3 & 1 \\
2 & 7 & 5 & 6 & 2 & 0 \\
3 & 7 & 5 & 4 & 0 & 1 \\
4 & 2 & 1 & 0 & 0 & 0 \\ \toprule
\end{tabularx}
\end{subtable}
\vspace{0.5cm}
\begin{subtable}{\linewidth}
\centering
\caption{Season 2021/22}
\label{Table7c}
\begin{threeparttable}
\rowcolors{1}{}{gray!20}
\begin{tabularx}{0.9\linewidth}{l CCCCC} \toprule
Final score & 0 & 1 & 2 & 3 & 4 \\ \bottomrule
0 & 6 & 5 & 1 & 3 & 1 \\
1 & 9 & 7 & 8 & 4 & 2 \\
2 & 10 & 7 & 3 & 2 & 0 \\
3 & 2 & 3 & 3 & 2 & 0 \\
4 & 5 & 2 & 2 & 0 & 0 \\ \toprule
\end{tabularx}
\begin{tablenotes} \footnotesize
\item
Goals scored by the home team are in the rows, goals scored by the away team are in the columns.
\item
Games where one team scored at least five goals are not presented.
\end{tablenotes}
\end{threeparttable}
\end{subtable}
\end{table}
\begin{table}[t!]
\caption{Expected number of matches from the simulation model chosen}
\label{Table8}
\begin{subtable}{\linewidth}
\centering
\caption{Seasons between 2003/04 and 2019/20}
\label{Table8a}
\rowcolors{1}{}{gray!20}
\begin{tabularx}{0.9\linewidth}{l CCCCC} \toprule
Final score & 0 & 1 & 2 & 3 & 4 \\ \bottomrule
0 & $103.0 \pm 9.8$ & $124.6 \pm 10.7$ & $81.2 \pm 8.7$ & $38.1 \pm 6.0$ & $14.0 \pm 3.7$ \\
1 & $159.4 \pm 12.0$ & $175.8 \pm 12.5$ & $106.4 \pm 9.9$ & $46.8 \pm 6.7$ & $16.6 \pm 4.0$ \\
2 & $133.8 \pm 11.0$ & $135.2 \pm 11.1$ & $74.7 \pm 8.4$ & $30.3 \pm 5.4$ & $9.9 \pm 3.1$ \\
3 & $80.5 \pm 8.6$ & $75.6 \pm 8.5$ & $38.5 \pm 6.1$ & $14.3 \pm 3.8$ & $4.1 \pm 2.0$ \\
4 & $38.9 \pm 6.1$ & $34.0 \pm 5.7$ & $16.0 \pm 4.0$ & $5.4 \pm 2.3$ & $1.6 \pm 1.3$ \\ \toprule
\end{tabularx}
\end{subtable}
\vspace{0.5cm}
\begin{subtable}{\linewidth}
\centering
\caption{Seasons 2020/21 and 2021/22}
\label{Table8b}
\begin{threeparttable}
\rowcolors{1}{}{gray!20}
\begin{tabularx}{0.9\linewidth}{l CCCCC} \toprule
Final score & 0 & 1 & 2 & 3 & 4 \\ \bottomrule
0 & $6.1 \pm 2.4$ & $7.3 \pm 2.6$ & $4.8 \pm 2.1$ & $2.2 \pm 1.5$ & $0.8 \pm 0.9$ \\
1 & $9.4 \pm 2.9$ & $10.3 \pm 3.0$ & $6.3 \pm 2.4$ & $2.8 \pm 1.6$ & $1.0 \pm 1.0$ \\
2 & $7.9 \pm 2.7$ & $8.0 \pm 2.7$ & $4.4 \pm 2.0$ & $1.8 \pm 1.3$ & $0.6 \pm 0.8$ \\
3 & $4.7 \pm 2.1$ & $4.4 \pm 2.1$ & $2.3 \pm 1.5$ & $0.8 \pm 0.9$ & $0.2 \pm 0.5$ \\
4 & $2.3 \pm 1.5$ & $2.0 \pm 1.4$ & $0.9 \pm 1.0$ & $0.3 \pm 0.6$ & $0.1 \pm 0.3$ \\ \toprule
\end{tabularx}
\begin{tablenotes} \footnotesize
\item
The numbers indicate the average number of occurrences based on simulations $\pm$ standard deviations.
\item
Goals scored by the home team are in the rows, goals scored by the away team are in the columns.
\item
Games where one team scored at least five goals are not presented.
\end{tablenotes}
\end{threeparttable}
\end{subtable}
\end{table}
Finally, the chosen specification is demonstrated to describe well the unknown score-generating process.
First, Figure~\ref{Fig2} shows the real goal distributions and the one implied by our Poisson model, which gives the same forecast for each season since the teams are identified by the pot from which they are drawn.
Second, the final scores of the games are analysed: Table~\ref{Table7} presents the number of matches with the given outcome in the corresponding season(s), while the number of occurrences for these events according to the chosen model is provided in Table~\ref{Table8}. Again, the forecast is the same for any season since the groups cannot be distinguished by the strengths of the clubs. The 4-parameter Poisson model based on pot allocation provides a good approximation to the empirical data.
\subsection{Classification of games} \label{Sec33}
As has been presented in Section~\ref{Sec1}, a team might be indifferent with respect to the outcome of the match(es) played on the last matchday(s) since its position in the final ranking is already secured.
In the group stage of the UEFA Champions League, there are six matchdays. The position of a team in the group ranking can be fixed at the earliest after Matchday~4. In particular, a club is guaranteed to win its group if it has at least seven points more than the runner-up, or if
\begin{itemize}
\item
it leads by six points over the runner-up; and
\item
it leads by at least seven points over the third-placed team; and
\item
it has played two matches against the runner-up.
\end{itemize}
The second- and third-placed clubs cannot be fixed after Matchday 4.
A club will certainly be fourth in the final ranking if it has at least seven points less than the third-placed team or if
\begin{itemize}
\item
it has six points less than the third-placed team; and
\item
it has at least seven points less than the runner-up; and
\item
it has played two matches against the third-placed team.
\end{itemize}
Note that these definitions are more complicated than the ones appearing in the previous works \citep{ChaterArrondelGayantLaslier2021, Guyon2020a} because of two reasons:
(a) there are more matches due to organising the groups in a double round-robin format; and
(b) tie-breaking is based on head-to-head results instead of goal difference.
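The Matchday~4 criteria listed above can be encoded directly, as the sketch below illustrates. The functions take the current points of the four clubs sorted in decreasing order, together with the relevant head-to-head information; resolving ties on points (which requires the head-to-head rules) is assumed to have been done beforehand.
\begin{verbatim}
def group_winner_secured(points, leader_played_runner_up_twice):
    """True if the current leader is guaranteed to win the group after Matchday 4.

    points: the four current point totals sorted in decreasing order;
    leader_played_runner_up_twice: whether the leader has already met the
    runner-up on both scheduled matchdays.
    """
    leader, runner_up, third, _ = points
    if leader - runner_up >= 7:
        return True
    return (leader - runner_up == 6
            and leader - third >= 7
            and leader_played_runner_up_twice)

def last_place_secured(points, last_played_third_twice):
    """Symmetric check for the fourth-placed club after Matchday 4."""
    _, runner_up, third, last = points
    if third - last >= 7:
        return True
    return (third - last == 6
            and runner_up - last >= 7
            and last_played_third_twice)
\end{verbatim}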
It would be difficult to determine all possible cases by similar criteria after Matchday 5. Hence, we consider only the extreme cases as follows. The results of the two games played on Matchday 6 are assumed to be: (a) M-0, M-0; (b) M-0, 0-M; (c) 0-M, M-0; and (d) 0-M, 0-M, where M is a high number.
The position of a team is known if it is the same in all scenarios (a) to (d).
Depending on whether the final position of a team is already secured or not, three categories of matches can be distinguished:
\begin{itemize}[label=$\bullet$]
\item
\emph{Competitive game}:
Neither team is indifferent because they can achieve a higher rank through a better performance on the field with a positive probability.
\item
\emph{Weakly stakeless game}:
One of the teams is completely indifferent as its position in the final group ranking is independent of the outcomes of the matches still to be played. However, it has a positive probability that the other team can obtain a higher rank through a better performance on the field.
\item
\emph{Strongly stakeless game}:
Both teams are completely indifferent since their positions in the final group ranking are not influenced by the results of the remaining matches.
\end{itemize}
In the situation outlined in Example~\ref{Examp1}, Liverpool is indifferent in its last two matches, thus at least two weakly stakeless games will be played in the group.
In Example~\ref{Examp2}, all teams are indifferent before Matchday 6, hence, there will be two strongly stakeless games.
Our classification differs from the definitions given in the existing literature. For example, \citet{ChaterArrondelGayantLaslier2021} call a match stakeless if at least one team becomes indifferent, with respect to qualification, between winning, drawing, or even losing by a margin of five goals. This notion does not consider the incentives of the opponent, which is an important factor for the competitiveness of the game.
\subsection{Candidate schedules} \label{Sec34}
The regulation of the UEFA Champions League provides surprisingly little information on how the group matches are scheduled \citep[Article~16.02]{UEFA2021c}: ``\emph{A club does not play more than two home or two away matches in a row and each club plays one home match and one away match on the first and last two matchdays.}''\footnote{~At first sight, one might conclude that UEFA has fixed the schedule of group matches until the 2020/21 season. For instance, the regulation of the competition for 2020/21 \citep[Article~16.02]{UEFA2020a} says that: \\
``\emph{The following match sequence applies: \\
Matchday 1: 2 v 3, 4 v 1; \\
Matchday 2: 1 v 2, 3 v 4; \\
Matchday 3: 3 v 1, 2 v 4; \\
Matchday 4: 1 v 3, 4 v 2; \\
Matchday 5: 3 v 2, 1 v 4; \\
Matchday 6: 2 v 1, 4 v 3}.'' \\
However, the meaning of the numbers remains unknown. They certainly do \emph{not} correspond to the pots from which the teams have been drawn since, on Matchday 1 in the 2020/21 season, FC Bayern M\"unchen (Pot 1) hosted Club Atl\'etico Madrid (Pot 2) in Group A and FC Dynamo Kyiv (Pot 3) hosted Juventus (Pot 1) in Group G. Furthermore, the match sequence cannot be determined before the identities of the clubs are known due to the other---obvious but unannounced---constraints.}
Therefore, the eight schedules of group matches used in the 2021/22 Champions League are regarded as valid solutions and options available for the tournament organiser.
Since each group consists of one team from each of the four pots, the clubs are identified by their pot in the following, that is, team $i$ represents the team drawn from Pot $i$.
\begin{table}[t!]
\begin{threeparttable}
\centering
\caption{Group schedules in the 2021/22 UEFA Champions League}
\label{Table9}
\rowcolors{3}{gray!20}{}
\begin{tabularx}{\textwidth}{l CC CC CC CC CC CC CC CC} \toprule \hiderowcolors
& \multicolumn{2}{c}{\textbf{Gr.\ A}} & \multicolumn{2}{c}{\textbf{Gr.\ B}} & \multicolumn{2}{c}{\textbf{Gr.\ C}} & \multicolumn{2}{c}{\textbf{Gr.\ D}} & \multicolumn{2}{c}{\textbf{Gr.\ E}} & \multicolumn{2}{c}{\textbf{Gr.\ F}} & \multicolumn{2}{c}{\textbf{Gr.\ G}} & \multicolumn{2}{c}{\textbf{Gr.\ H}} \\
& H & A & H & A & H & A & H & A & H & A & H & A & H & A & H & A \\ \bottomrule \showrowcolors
Matchday 1 & 1 & 3 & 1 & 3 & 1 & 3 & 1 & 2 & 2 & 1 & 1 & 3 & 1 & 4 & 1 & 3 \\
& 4 & 2 & 2 & 4 & 4 & 2 & 4 & 3 & 4 & 3 & 4 & 2 & 2 & 3 & 4 & 2 \\ \hline
Matchday 2 & 2 & 1 & 4 & 1 & 2 & 1 & 3 & 1 & 1 & 4 & 2 & 1 & 3 & 1 & 2 & 1 \\
& 3 & 4 & 3 & 2 & 3 & 4 & 2 & 4 & 3 & 2 & 3 & 4 & 4 & 2 & 3 & 4 \\ \hline
Matchday 3 & 4 & 1 & 1 & 2 & 4 & 1 & 1 & 4 & 3 & 1 & 4 & 1 & 1 & 2 & 1 & 4 \\
& 2 & 3 & 3 & 4 & 3 & 2 & 3 & 2 & 2 & 4 & 2 & 3 & 3 & 4 & 3 & 2 \\ \hline
Matchday 4 & 1 & 4 & 2 & 1 & 1 & 4 & 4 & 1 & 1 & 3 & 1 & 4 & 2 & 1 & 4 & 1 \\
& 3 & 2 & 4 & 3 & 2 & 3 & 2 & 3 & 4 & 2 & 3 & 2 & 4 & 3 & 2 & 3 \\ \hline
Matchday 5 & 1 & 2 & 1 & 4 & 1 & 2 & 1 & 3 & 4 & 1 & 1 & 2 & 1 & 3 & 1 & 2 \\
& 4 & 3 & 2 & 3 & 4 & 3 & 4 & 2 & 2 & 3 & 4 & 3 & 2 & 4 & 4 & 3 \\ \hline
Matchday 6 & 3 & 1 & 3 & 1 & 3 & 1 & 2 & 1 & 1 & 2 & 3 & 1 & 4 & 1 & 3 & 1 \\
& 2 & 4 & 4 & 2 & 2 & 4 & 3 & 4 & 3 & 4 & 2 & 4 & 3 & 2 & 2 & 4 \\ \bottomrule
\end{tabularx}
\begin{tablenotes} \footnotesize
\item
The numbers indicate the pots from which the teams are drawn.
\end{tablenotes}
\end{threeparttable}
\end{table}
Table~\ref{Table9} outlines these alternatives.\footnote{~In the Champions League seasons from 2003/04 to 2020/21, Matchday 4/5/6 was the mirror image of Matchday 3/1/2, respectively. Consequently, the same two teams played at home on the first and last matchday in the previous seasons. This arrangement has been changed in the 2021/22 season such that Matchday 4/5/6 is the mirror image of Matchday 3/2/1, see Table~\ref{Table9}.}
From our perspective, only five different patterns exist because the prediction of match outcomes does not depend on the schedule and the classification of games starts only after Matchday 4:
\begin{itemize}
\item
The schedules of Groups A and C differ only in one game played on Matchdays 3 and (consequently) 4;
\item
The schedules of Groups A and F coincide;
\item
The schedules of Groups A and H differ in the two games played on Matchdays 3 and (consequently) 4.
\end{itemize}
Consequently, only the schedules of Groups A, B, D, E, and G need to be assessed for the frequency of weakly and strongly stakeless games.
\begin{table}[t!]
\centering
\caption{Candidate schedules from the 2021/22 UEFA Champions League}
\label{Table10}
\rowcolors{3}{gray!20}{}
\begin{tabularx}{\textwidth}{l CC CC} \toprule \hiderowcolors
& \multicolumn{2}{c}{\textbf{Matchday 5}} & \multicolumn{2}{c}{\textbf{Matchday 6}} \\
& Home & Away & Home & Away \\ \bottomrule \showrowcolors
Schedule A & Pot 1 & Pot 2 & Pot 3 & Pot 1 \\
& Pot 4 & Pot 3 & Pot 2 & Pot 4 \\ \hline
Schedule B & Pot 1 & Pot 4 & Pot 3 & Pot 1 \\
& Pot 2 & Pot 3 & Pot 4 & Pot 2 \\ \hline
Schedule D & Pot 1 & Pot 3 & Pot 2 & Pot 1 \\
& Pot 4 & Pot 2 & Pot 3 & Pot 4 \\ \hline
Schedule E & Pot 4 & Pot 1 & Pot 1 & Pot 2 \\
& Pot 2 & Pot 3 & Pot 3 & Pot 4 \\ \hline
Schedule G & Pot 1 & Pot 3 & Pot 4 & Pot 1 \\
& Pot 2 & Pot 4 & Pot 3 & Pot 2 \\ \bottomrule
\end{tabularx}
\end{table}
The five scheduling options are summarised in Table~\ref{Table10}. Note that only the last two matchdays count from our perspective.
\section{Results} \label{Sec4}
We focus on the probability of a stakeless game as a function of the group schedule. For sample size $N$, the error of a simulated probability $P$ is $\sqrt{P(1-P) / N}$. Since even the smallest $P$ exceeds 2.5\% and 1 million simulation runs are implemented, the error always remains below 0.016\%. Therefore, confidence intervals are not provided because the averages differ reliably between the candidate schedules.
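For reference, the error bound above is simply the binomial standard error; the two-line check below merely restates the formula and is not tied to our simulation code.
\begin{lstlisting}[language=Python,caption={Standard error of a simulated probability.},captionpos=b]
import math

def simulation_error(p, n=1_000_000):
    """Standard error of an estimated probability p based on n runs."""
    return math.sqrt(p * (1 - p) / n)

print(f"{simulation_error(0.025):.5%}")  # about 0.0156%, i.e., below 0.016%
\end{lstlisting}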
\input{Figure3_WSM_R5}
Figure~\ref{Fig3} plots the likelihood of a weakly stakeless game at the first point where it might occur, on Matchday 5. The probability varies between 2.5\% and 4\%; it is the lowest for schedules A and D, while schedules B and G are poor choices for avoiding these matches.
\input{Figure4_WSM_R6}
The probability of a weakly stakeless game in the last round is given in Figure~\ref{Fig4}. The solutions differ to a high degree: the worst schedules (B and G) increase the danger of a weakly stakeless match by 35\% (more than 10 percentage points). The most widely used option, schedule A---followed in four groups of the 2021/22 Champions League---becomes unfavourable from this point of view.
\input{Figure5_SSM_R6}
However, according to Figure~\ref{Fig5}, schedule A is the best alternative to minimise the chance of strongly stakeless games, which are totally unimportant with respect to the group ranking. The scheduling options now vary less in absolute terms: the probability of such a situation remains between 8\% and 10.5\%.
Consequently, there are three objectives to be optimised, depicted in Figures~\ref{Fig3}--\ref{Fig5}. While schedule A dominates both schedules B and G, any of the remaining three alternatives can be optimal depending on the preferences of the decision-maker. In order to evaluate them, it is worth considering a weighting scheme. The cost of a weakly stakeless game played on Matchday 6 can be fixed at 1 without loss of generality. It is reasonable to assume that the cost of a weakly stakeless game played on Matchday 5 is not lower than 1. Analogously, a strongly stakeless game is certainly more threatening than a weakly stakeless game, perhaps even by an order of magnitude.
\input{Figure6_stakeless_game_cost}
Figure~\ref{Fig6} calculates the price of the candidate schedules as a function of the cost ratio between a strongly and a weakly stakeless game played in the last round. The price of a weakly stakeless game on Matchday 5 is 1 in the first chart and 2 in the second chart. Schedule E is the best alternative if the relative cost of a strongly stakeless game is moderate, namely, at most 5, but schedule A should be chosen if this ratio is higher. If the goal is to avoid strongly stakeless games, schedules D and E are unfavourable options.
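The price of a candidate schedule is a simple weighted sum; the sketch below restates this weighting scheme with placeholder probabilities that should be replaced by the simulated values behind Figures~\ref{Fig3}--\ref{Fig5}.
\begin{lstlisting}[language=Python,caption={The weighting scheme behind the price of a schedule.},captionpos=b]
def schedule_price(p_weak_md5, p_weak_md6, p_strong_md6,
                   cost_weak_md5=1.0, cost_strong_md6=5.0):
    """The cost of a weakly stakeless game on Matchday 6 is normalised to 1;
    cost_weak_md5 >= 1 is the cost of such a game on Matchday 5, and
    cost_strong_md6 is the cost ratio of a strongly stakeless game."""
    return (cost_weak_md5 * p_weak_md5
            + 1.0 * p_weak_md6
            + cost_strong_md6 * p_strong_md6)
\end{lstlisting}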
The key findings can be summed up as follows:
\begin{itemize}
\item
Schedule A dominates schedules B and G with respect to stakeless games;
\item
Schedules B and G have approximately the same cost, which can be explained by their similarity on the last two matchdays, where only the positions of the teams drawn from Pots 3 and 4 are interchanged (see Table~\ref{Table10});
\item
Schedule E is preferred to schedule D except for the case when an unlikely high weight is assigned to weakly stakeless games played on Matchday 5;
\item
Schedule E (and, to some extent, D) should be followed if strongly stakeless games are not judged to be substantially more harmful than weakly stakeless games;
\item
Schedule A can decrease the probability of a strongly stakeless game by at least 10\%; hence, it should be implemented to reduce the number of matches that do not affect the final group ranking.
\end{itemize}
In the 2021/22 Champions League, UEFA has essentially used schedule A in four groups and each of schedules B, D, E, and G in one group. Therefore, it is likely that minimising the number of strongly stakeless games has been a crucial goal in the computer draw of the fixture, which remains unknown to the public.
\section{Discussion} \label{Sec5}
Motivated by recent examples from the most prestigious European club football competition, this paper has proposed a novel classification method for games played in a round-robin tournament. The selection criterion is whether the position of a team in the final ranking is already known, independently of the outcomes of matches still to be played. A game is called (1) competitive if neither opposing team is indifferent; (2) weakly stakeless if exactly one of the opposing teams is indifferent; or (3) strongly stakeless if both teams are indifferent. Avoiding stakeless games should be an imperative aim of the organiser because a team might play with little enthusiasm if the outcome of the match cannot affect its final position.
We have built a simulation model to compare some sequences for the group matches of the UEFA Champions League according to the probability of games where one or both clubs cannot achieve a higher rank. Five candidate schedules have been identified based on the 2021/22 season. The prevailing schedule---applied in half of the eight groups---is optimal under a wide set of costs assigned to different stakeless games. Reducing the number of strongly stakeless games has probably been a crucial goal in the computer draw of the fixture.
Our study has some limitations. The simulation model may be refined. There are other aspects of scheduling, for example, selecting the kick-off time, which might influence the performance of teams \citep{Krumer2020a}. The suggested classification scheme does not deal with the sequence of matches played in the first four rounds. Furthermore, stakeless games are identified in a deterministic framework, but a team may still exert lower effort if its position is known with high probability.
Finally, it has not been taken into account that the group fixtures cannot be determined independently of each other. For example, FC Internazionale Milano (drawn from Pot 1 to Group D) and AC Milan (drawn from Pot 4 to Group B) share the same stadium, but both of them would have had to play at home on Matchday 5 if their groups had been organised according to schedule A, which is clearly impossible. Similar constraints might prevent choosing the optimal schedule in all groups, and can (partially) explain the variance of schedules used in the 2021/22 season.
UEFA is strongly encouraged to increase the transparency of how its competitions are scheduled by announcing these restrictions. Then some of them can probably be addressed by the draw procedure: \citet{Csato2022a} has recently demonstrated the role of draw constraints in avoiding unfair situations that might include stakeless games.
Despite these caveats, the current work has hopefully managed to uncover some important aspects of tournament design and can inspire further research by scheduling experts to optimise various aspects of competitiveness beyond the classical criteria of fairness.
\section*{Acknowledgements}
\addcontentsline{toc}{section}{Acknowledgements}
\noindent
This paper could not have been written without the \emph{father} of the first author (also called \emph{L\'aszl\'o Csat\'o}), who has helped to code the simulations in Python. \\
We are grateful to \emph{Dries Goossens} and \emph{Stephan Westphal} for useful advice. \\
We are indebted to the \href{https://en.wikipedia.org/wiki/Wikipedia_community}{Wikipedia community} for summarising important details of the sports competition discussed in the paper. \\
The research was supported by the MTA Premium Postdoctoral Research Program grant PPD2019-9/2019.
The research reported in this paper is part of project no.~BME-NVA-02, implemented with the support provided by the Ministry of Innovation and Technology of Hungary from the National Research, Development and Innovation Fund, financed under the TKP2021 funding scheme. The work of \emph{Roland Molontay} is supported by the NKFIH K123782 research grant.
\bibliographystyle{apalike}
\section{Introduction} \label{sec:introduction}
Sports are an exciting domain for machine learning due to their global appeal, well-defined rules, and multitude of machine learning tasks, such as win probability estimation, trajectory prediction, and tactic identification. Especially in the last decade, sports analytics has garnered much interest from teams, leagues, and fans~\cite{DBLP:journals/bigdata/AssuncaoP19}. However, while conventional sports, like soccer, basketball, and baseball, produce large and varied data, such as videos or player locations, the data suffer from various drawbacks that hamper its usability in research~\cite{DBLP:journals/csur/GudmundssonH17}.
Most conventional sports tracking data is acquired using player-embedded sensors or computer vision-based approaches~\cite{DBLP:journals/csur/GudmundssonH17}. While the former often produces clean data, it is difficult to scale sensor-based tracking to acquire large amounts of data, due to factors such as privacy concerns from players or financial constraints in acquiring and calibrating a sufficient number of sensors~\cite{rana2020wearable}. Computer vision-based techniques often require significant cleaning and complex computational workflows~\cite{DBLP:journals/tomccap/StenslandGTHNAMLLLGHSJ14, DBLP:conf/ism/TennoeHNASGJGH13}. Furthermore, one must collect the video oneself, which imposes further time and cost constraints.
Sports data is also often private or hard to access. Oftentimes, this is due to business factors, such as data exclusivity agreements in professional leagues which prohibit data sharing with organizations outside the league~\cite{longenhagen_mcdaniel_2019, streeter_2019}. Additionally, access to professional sports organizations and players is seldom granted to the public, which prohibits researchers from sharing data~\cite{socolow2017game}. While some tracking data has been released for some sports, it is neither well-maintained nor well-documented. Furthermore, the data is oftentimes small and suitable mostly for demo use, which limits its usability for machine learning research.
Esports offer a unique opportunity to produce clean and granular sports data at scale~\cite{DBLP:journals/intr/HamariS17}. In conventional sports, optics and sensors may be directed towards a playing surface, which capture player locations and in-game events. In esports, players connect to a game server to play a video game. While there is no physical world to monitor, such as through peripheral sensors or optics required by conventional sports, the game server generates a server log. Server-side logs provide the ability to reconstruct a game, and the game server typically writes to the log at a rate of dozens of times per second. Thus, esports data is not only of high quality but can also be collected at a high frequency. Furthermore, esports game logs are often publicly available, since players, leagues, and tournaments routinely upload server logs for verification and playback purposes.
Some popular esports, like StarCraft II, Defense of the Ancients 2, and League of Legends, are played from an isometric perspective. Counter-Strike: Global Offensive (CSGO), however, is played in a first-person capacity. Thus, a player navigates through the map from the first-person view of their in-game agent, as opposed to navigation from a top-down view. In this sense, first-person esports like CSGO are quite similar to conventional sports, particularly in the data that they generate. Thus, a large spatiotemporal dataset derived from esports would not only be useful for the esports analytics community, but also for the general sports analytics community, given that prediction tasks, like win prediction, are shared across sports~\cite{DBLP:conf/bigdataconf/XenopoulosDS20, DBLP:conf/bigdataconf/XenopoulosS21, DBLP:conf/www/XenopoulosFS22, DBLP:conf/aist/MakarovSLI17, yurko2020going, DBLP:conf/kdd/RobberechtsHD21}. In this work, we make the following contributions:
\begin{enumerate}
\item \textbf{\texttt{awpy} Python library.} We introduce \texttt{awpy}, a Python library to parse, analyze, and visualize Counter-Strike game replay files. The parsed JSON output contains game replay, server, and parser metadata, along with a list of ``game round'' objects that contain player actions and locations. Awpy is able to accept user-specified parsing arguments to control the parsed data. We make awpy available on PyPI through \texttt{pip install awpy}.
\item \textbf{Trajectories and Actions dataset.} Using awpy, we create the \textbf{ES}ports \textbf{T}rajectories and \textbf{A}ctions (ESTA) dataset. ESTA contains parsed demo and match information from 1,558 professional Counter-Strike games. ESTA contains 8.6m player actions, 7.9m total frames, and 417k player trajectories. ESTA represents one of the largest publicly available sports datasets where tracking and event data are coupled. ESTA is made available through Github.
\item \textbf{Benchmark tasks.} Using the ESTA data, we provide benchmarks for sports outcome prediction, and in particular, win probability prediction~\cite{DBLP:conf/kdd/RobberechtsHD21, DBLP:conf/bigdataconf/XenopoulosS21}. Win probability prediction is a fundamental prediction task in sports and has multiple applications, such as player valuation~\cite{DBLP:conf/bigdataconf/XenopoulosDS20, DBLP:conf/kdd/SiciliaPG19, DBLP:conf/kdd/DecroosBHD19}, and game understanding~\cite{DBLP:conf/www/XenopoulosFS22, DBLP:conf/kdd/PowerRWL17}.
\end{enumerate}
\section{Related Work} \label{sec:related-work}
Spatiotemporal data in sports, particularly in the form of player actions or player trajectories, has garnered much interest. A common task is to predict an outcome in some period of a given game. To do so, the prediction model typically takes a ``game state'' as input, which contains the context of the game at prediction time. For example, Decroos~et~al. represent a soccer game as a series of on-the-ball actions~\cite{DBLP:conf/kdd/DecroosBHD19}. They define a game state as the previous three actions, and predict if a goal will occur in the next ten actions. For esports, Xenopoulos~et~al. perform a similar procedure, but rather define a game state as a snapshot of global and team-specific information at prediction time~\cite{DBLP:conf/bigdataconf/XenopoulosDS20}. In each of the aforementioned works, event or trajectory data was processed to create spatial features used in prediction, and the game state was represented by a vector.
Game states are increasingly being defined in structures which require techniques such as recurrent or graph neural networks. For example, Yurko~et~al. use a sequence of game states as input to a recurrent model, to predict the total yards gained in an American football play~\cite{yurko2020going}. Sicilia~et~al. train a multiclass sequence prediction model to predict the outcomes of a basketball possession~\cite{DBLP:conf/kdd/SiciliaPG19}. Their unit of concern is a basketball possession, which they define as a sequence of $n$ moments, each described by player locations. Xenopoulos~et~al. define a game's context using a graph representation, where players are nodes in a fully connected graph~\cite{DBLP:conf/bigdataconf/XenopoulosS21}.
Predicting player movement, rather than a specific sports outcome, is also a common task in sports. Yeh~et~al. proposed a graph variational recurrent neural network approach to predict player movement in basketball and soccer~\cite{DBLP:conf/cvpr/YehSH019}. Falsen~et~al. use a conditional variance autoencoder to create a generative model which predicts basketball player movement~\cite{DBLP:conf/eccv/FelsenLG18}. Omidshafiei~et~al. use graph networks and variational autoencoders to impute missing trajectories~\cite{deepmind_2022}. To do so, they use a proprietary tracking dataset from the English Premier League.
Although sports data can oftentimes be large, it is rarely publicly available or well-documented, especially since many sports data acquisition systems are proprietary. A popular large and public sports dataset is a collection of around 600 games from the 2015-2016 NBA season~\cite{nba-data}. Each game contains tracking data for both the players and the ball, collected at 20Hz. The NBA dataset is often downsampled in practice. For example, Yeh~et~al., using the NBA dataset, create trajectories of 50 frames parsed at 6 Hz, which corresponds to roughly 121k trajectories in total~\cite{DBLP:conf/cvpr/YehSH019}. One downside to the NBA dataset is that it lacks a dedicated maintainer and reflects older tracking technology. Pettersen~et~al. introduce a collection of both tracking and video data for about 200 minutes of professional soccer~\cite{DBLP:conf/mmsys/PettersenJJBGMLGSH14}. The data is well-documented and contains roughly 2.5 million samples, not only including player locations but also kinetic information such as speed and acceleration. However, the data is geared towards computer vision-based tasks, rather than outcome prediction, and lacks player actions, such as passes, shots, or tackles.
There also exist a limited number of esports-specific datasets, given that some game data is easily accessible at scale. Lin~et~al. introduce \textit{STARDATA}, a dataset containing millions of game states and player actions from StarCraft: Brood War~\cite{DBLP:conf/aiide/LinGKS17}. Smerdov~et~al. propose a dataset containing not only in-game data but also physiological measurements from professional and amateur League of Legends players for around two dozen matches~\cite{smerdov-esports}. For Counter-Strike: Global Offensive (CSGO), there exists the PureSkill.gg dataset of parsed amateur matches~\cite{pureskill}. While the aforementioned dataset is large, it is hosted on AWS Data Exchange, and thus requires financial resources if the data is to be used outside of AWS resources. Furthermore, the parser used to generate the datasets is not public. Finally, one may also find esports datasets on Kaggle, however, these datasets are oftentimes undocumented, deprecated, or old, which hampers their use for reproducible research. Some video games, like Defense of the Ancients 2 (Dota 2), have a range of fan sites which also host data~\cite{opendota}.
Many esports are played from an isometric perspective. In that sense, the trajectory and action data attained in them is somewhat different than the data found in conventional sports. First-person shooter (FPS) is a popular video game genre in which a player controls their character in a first-person capacity, as opposed to the top-down view found in games like StarCraft 2 or Dota 2. Thus, the data generated by FPS-style games is more similar to conventional sports than data from StarCraft 2 or Dota 2. In this work, we focus on CSGO, a popular FPS esport. At the time of writing, CSGO achieves roughly one million daily peak players compared to Dota 2, which attains roughly 700,000 daily peak players~\cite{steamcharts}. In particular, our proposed dataset aims to address issues regarding accessibility, documentation, and size in existing esports datasets.
\section{The \texttt{awpy} package} \label{sec:awpy}
One of the main objectives of this work is to expand the public tools by which people can parse widely available esports data into a format conducive for analysis. To that end, we introduce the awpy Python library, which we use to create the ESTA dataset. In this section, we detail CSGO, its data, and the awpy library. Our library can be installed via \texttt{pip install awpy} and is available at \url{https://github.com/pnxenopoulos/awpy}. We provide example Jupyter Notebooks and detailed documentation both for awpy's functionality, as well as for its JSON output, in the linked Github repository.
\subsection{Counter-Strike Background} \label{sec:csgo-background}
Counter-Strike is a long-running video game series, with the most recent rendition, CSGO, attracting roughly one million peak users at the time of writing. CSGO has a robust competitive scene, with organized tournaments occurring year-round both in-person (denoted LAN for local area network) and online. CSGO is a round-based game, whereby two teams of five players each attempt to win each round by reaching a win condition. Teams play as one of two ``sides'', denoted CT and T, and then switch sides after fifteen rounds. The T side can win a round by eliminating members of the opposing side, or by planting a bomb at one of two bombsites. The CT side can win a round by eliminating members of the opposing side, defusing the bomb, or by preventing the T side from planting the bomb in the allotted round time. Rounds last around one to two minutes each.
Broadly, there are four different phases of play. When a new round starts, a game phase called ``freeze time'' begins. In this phase, which lasts 20 seconds, players are frozen in place and may not move, but they may buy equipment, such as guns, grenades, and armor using virtual money earned from doing well in previous rounds. The next phase of the game is the ``default'' game phase where players can move around and work towards one of their side's win conditions. This game phase, which may last up to one minute and 55 seconds, constitutes the majority of a round. The default game phase ends when either team is eliminated, the clock time runs out, or the bomb is planted. When the bomb is planted, the ``bomb'' phase begins and lasts 40 seconds. During this time, the T side (which planted the bomb) can still win the game by eliminating all CT players. Once a win condition has been met, regardless of the previous phase (default or bomb planted), the ``round end'' phase begins, in which players can still move around and interact for five seconds until a new round begins.
Players start each round with 100 health points (HP) and are eliminated from a round when they reach 0 HP. Players lose HP when they are damaged -- typically from gunfire and grenades (also called utility) from the opposing side. Each CSGO game takes place in a virtual world called a ``map''. Competitive CSGO events typically have seven standard maps on which players may play. At the beginning of a competitive match, the two competing teams undergo a process to determine which map(s) they will play. Typically, competitive CSGO matches are structured as a best-of-one, best-of-three, or rarely, best-of-five maps.
\subsection{Counter-Strike Data} \label{sec:csgo-data}
In multiplayer video games, players (clients) connect to a game server. When players use peripherals, like a mouse or keyboard, they change their local ``game state'', which is what the player sees on their screen. The clients send these changes to the server, which reconciles client inputs from connected players and returns a unified, global game state to clients. These updates between the clients and game server happen at a predefined \textit{tick rate}, which is usually 128 in competitive play, meaning the clients and server update 128 times a second. In fact, the game server dispatches ``events'', which are game state deltas that the client uses to construct the game state.
The game server records the aforementioned updates and saves a file called a demo file (colloquially known as a demo), which is effectively a serialization of the client-server network communication. Demo files allow one to recreate the game as it happened from the game server's point-of-view, and can be loaded in the CSGO application itself to view past matches. A demo file is restricted to the performance on a single map. Thus, a competitive CSGO match may produce multiple demos. Bednarek~et~al. describe CSGO demo files in further detail~\cite{DBLP:conf/data/BednarekKYZ17}. CSGO demo files are easy to acquire, and can be found in the game itself, or on many third-party matchmaking, tournament, and fan sites. Thus, CSGO demo files constitute an abundant source of multi-agent interaction data. We show an example of demo file generation and parsing made possible by awpy in Figure~\ref{fig:parsing-csgo-file}.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{figures/parsing-csgo-demofiles.png}
\caption{Players (clients) send updates to the game server, which returns a unified global game state back to clients through game event dispatches. This process occurs over 100 times a second. As the game server sends updates to clients, it records these event dispatches to a log called a demo file. The awpy Python library allows a user to parse CSGO demo files into a JSON structure which contains player actions and locations.}
\label{fig:parsing-csgo-file}
\end{figure}
\subsection{Using awpy to parse CSGO demos}
While there exist a few CSGO demo file parsers, they are often written in languages that are uncommon to many data science or machine learning workflows, such as Go~\cite{parser-golang}, Node.js~\cite{parser-node}, or C\#~\cite{parser-csharp}. Furthermore, one must often extend upon the aforementioned libraries in order to output the data into a commonly used format, such as CSV or JSON. Thus, we created awpy to provide simple CSGO parsing functionality in Python. We show an example of awpy in practice in Listing~\ref{listing:awpy_example}. There are four main pieces of information in the parsed JSON output. First, many top-level keys indicate match and demo metadata, such as the map, tick rate, or total number of frames. Second, the ``parserParameters'' key contains the parameters used to parse the demo file, such as the user-specified rate at which game snapshots are parsed. Third, ``serverVars'' contains server-side information on game rules, such as the round length or bomb timer. Lastly, ``gameRounds'' is a list containing information on each round of the game. While each element in ``gameRounds'' contains round metadata, like the starting equipment values or the round end reason, it also contains lists of player events and lists of game frames, which we discuss in detail in Section~\ref{sec:dataset}.
In addition to parsing, awpy also contains functions to calculate player statistics or visualize player trajectories and actions. The awpy library hosts four modules: (1) analytics, (2) data, (3) parser, and (4) visualization. The analytics module contains functions to generate summary statistics for a single demo. The data module contains useful data on each competitive map's geometry and navigation meshes, which are used by in-game computer-controlled bots to move in-game. The parser module contains the functions to both parse CSGO demo files and clean the output. Although CSGO demos may contain errant rounds or incorrect scores, usually caused by third-party server plugins, they are relatively straightforward to identify and remedy by following CSGO game logic, which awpy's parse module addresses. Finally, the visualization module contains functions to plot frame and round data in both static and dynamic fashion.
\begin{lstlisting}[language=Python,caption={awpy can be used to parse, analyze, and visualize CSGO demo files.},captionpos=b,label={listing:awpy_example}]
# pip install awpy
from awpy.parser import DemoParser
from awpy.analytics.stats import player_stats
from awpy.visualization.plot import plot_round
# Parse the demo
p = DemoParser(demofile="faze-vs-cloud9.dem", parse_rate=128)
demo_data = p.parse()
# Analyze and aggregate player statistics
player_data = player_stats(demo_data["gameRounds"])
# Visualize the frames of the first round as an animation
first_round = demo_data["gameRounds"][0]
plot_round("first_round.gif", first_round["frames"], map_name=demo_data["mapName"])
\end{lstlisting}
\section{The ESTA Dataset} \label{sec:dataset}
The \textbf{ES}ports \textbf{T}rajectories and \textbf{A}ctions (ESTA) dataset contains parsed CSGO game demos. Each parsed demo is a compressed JSON file and contains granular player actions (Section~\ref{sec:actions}) and game frames (Section~\ref{sec:frames}). Game frames, which are effectively game ``snapshots'', contain all game information at a given time, and are parsed at a rate of 2 frames per second. Each JSON file also contains details on six player action types: damages, kills, flashes, bomb plants, grenades, and weapon fires. ESTA is released under a CC BY-SA 4.0 license\footnote{ESTA is available at \url{https://github.com/pnxenopoulos/esta}}.
The ESTA dataset includes games from important CSGO tournaments. Each of these tournaments was held in a local area network (LAN) environment between January 2021 and May 2022. We collated the list of tournaments using \url{https://www.hltv.org/}, a popular CSGO fan site. From this list of tournaments and their associated matches, we obtained the corresponding demo files and parsed them using awpy. The players involved in each parsed demo, as well as the demo files themselves, are already public. Therefore, there are no privacy concerns with regards to player data. We disregarded parsed demos which had an incorrect number of parsed rounds. In total, ESTA contains 1,558 parsed JSONs which contain 8.6m actions, 7.9m frames, and 417k trajectories. We have the following counts of demos for each map: Dust 2 (197), Mirage (278), Inferno (289), Nuke (260), Overpass (181), Vertigo (169), Ancient (132), Train (52). While Train is not part of the \textit{current} competitive map pool, it was part of the map pool for a portion of the time frame which ESTA covers, until Ancient replaced it.
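As a minimal sketch, a single parsed demo can be loaded as follows; the directory layout, file name, and compression format shown here are assumptions for illustration, and the ESTA Github repository documents the actual organization of the files.
\begin{lstlisting}[language=Python,caption={Loading one compressed ESTA demo (illustrative path).},captionpos=b]
import gzip
import json

# Hypothetical path; see the ESTA repository for the actual layout.
with gzip.open("esta/lan/example-match.json.gz", "rt", encoding="utf-8") as f:
    demo = json.load(f)

print(demo["mapName"])          # map on which the game was played
print(len(demo["gameRounds"]))  # number of parsed rounds
\end{lstlisting}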
\subsection{Player actions} \label{sec:actions}
Player actions are crucial because they fundamentally change the game's state. Many of the event dispatches that the server sends are due to player actions. Broadly, player actions can be categorized as \textit{local}, if the action involves some engagement between two players, or \textit{global}, if the action has an effect on the global attributes of the game state. Examples of local actions include player damages, kills, or flashes, which involve some interaction between players. Global actions include bomb events, like plants or defuses, grenade throws, or weapon fires. These events typically change the global state by changing the bomb status (e.g. bomb plant) or by imposing temporary constraints on the map through fires or smokes that reduce visibility. On average, about 210 actions across the six parsed action types occur per CSGO round. Weapon fire events make up 68\% of the total actions, damages 13\%, grenade throws 10\%, flashes 6\%, kills 3\%, and bomb events less than 1\%. Each event contains locations of each player involved in the action (e.g., damages actions involve a player doing the damage, and a player receiving the damage). In Figure~\ref{fig:actions-summarized}, we show an example of actions from Dust 2 demos summarized by location. It is evident that there are areas where actions often occur (light-shaded areas), which is indicative of the common fight locations. These patterns may be leveraged by users to identify tactics or create advanced features. For example, certain grenades or fights may be indicative of a team's future intentions.
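As a small illustration of how the per-round action lists can be aggregated, consider the sketch below; the exact key names for the six action types are assumptions based on the description above, and the awpy documentation should be treated as authoritative.
\begin{lstlisting}[language=Python,caption={Counting parsed actions by type (key names are illustrative).},captionpos=b]
ACTION_KEYS = ["kills", "damages", "grenades", "flashes",
               "weaponFires", "bombEvents"]  # assumed key names

def action_counts(demo):
    """Total number of parsed actions of each type in one parsed demo."""
    counts = {key: 0 for key in ACTION_KEYS}
    for game_round in demo["gameRounds"]:
        for key in ACTION_KEYS:
            counts[key] += len(game_round.get(key) or [])
    return counts
\end{lstlisting}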
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/dust2_actions.png}
\caption{Heatmaps showing the shooter location for weapon fire events (left), the landing position for grenades (center), and the victim coordinates for damage events (right) on Dust 2 demo files. Bright color indicates higher action density. We can see distinct regions of high density for each action. For example, for damages, there are clear ``conflict zones'', indicating regions of the map where fights often occur.}
\label{fig:actions-summarized}
\end{figure}
\subsection{Game frames} \label{sec:frames}
Game frames record the state of the game at a specific tick. Within each frame, awpy parses a combination of global, team, and player-specific information. Globally, a game frame contains temporal information, such as the tick and clock time. Additionally, each snapshot contains a list of active ``fires'' and ``smokes'', which are temporary changes to the map incurred by the players using their equipment, namely, their grenades. These may temporarily block locations or lower visibility in certain parts of the map. Lastly, information on the bomb, such as if and where it is planted, is also provided. On average, 188 game frames occur per round when parsed at a rate of 2 frames per second. Thus, the average CSGO round in ESTA lasts 94 seconds. About 30\% of rounds contain a bomb plant, and 37\% end in a CT elimination win, 12\% in bomb defusal, 5\% in target saved, 19\% in bomb exploded, and 27\% in T elimination.
In addition to global information, game frames contain information on both the T and CT side. Namely, each team consists of a list of players, along with other team-specific attributes which are typically aggregated from the list of players, such as the number of alive players for a team. In each frame, each player is represented by over 40 different attributes, such as their location, view direction, inventory, and ping, a measure of network latency to the game server. We show a visual representation of three game frames from three different maps in Figure~\ref{fig:game_frame}. Within awpy, one can create both static (for a single frame) and animated (for a sequence of frames) visualizations.
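The sketch below shows how per-player trajectories can be assembled from the parsed frames; the frame-level key names (``frames'', the two side objects, and the coordinate fields) are assumptions based on the description above rather than a verbatim schema.
\begin{lstlisting}[language=Python,caption={Extracting per-player trajectories from parsed frames (key names are illustrative).},captionpos=b]
def player_trajectories(game_round):
    """Map each player name to the list of (x, y, z) positions observed
    in the frames of one round."""
    trajectories = {}
    for frame in game_round.get("frames") or []:
        for side in ("t", "ct"):
            team = frame.get(side) or {}
            for player in team.get("players") or []:
                position = (player.get("x"), player.get("y"), player.get("z"))
                trajectories.setdefault(player.get("name"), []).append(position)
    return trajectories
\end{lstlisting}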
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/csgo_game_frames.png}
\caption{Game frames on demos on three maps: ``Inferno'' (left), ``Mirage'' (center), and ``Dust 2'' (right). CT players are represented by cyan and T players by orange. Above each player is a bar showing their HP. Each player also has a black line indicating the direction they are facing. The bomb is identified by a white triangle, active fires by a red circle, and active smokes by a gray circle.}
\label{fig:game_frame}
\end{figure}
\section{Benchmarks} \label{sec:experiments}
\subsection{Problem Formulation}
Predicting the outcome of a game, also known as win probability prediction, is an important task for a variety of applications, such as player valuation, sports betting, fan engagement, and game understanding. As such, it is a commonly researched prediction task in the sports analytics community in both conventional sports and esports~\cite{DBLP:conf/bigdataconf/XenopoulosDS20, DBLP:conf/bigdataconf/XenopoulosS21, DBLP:conf/www/XenopoulosFS22, DBLP:conf/aist/MakarovSLI17, yurko2020going, DBLP:conf/kdd/RobberechtsHD21, DBLP:conf/kdd/DecroosBHD19}. Broadly, the goal is to predict $Y_i$, the outcome of game period $i$. To do so, past literature typically uses the state of the game at time $t$, which occurs in period $i$, as input~\cite{DBLP:conf/bigdataconf/XenopoulosDS20, DBLP:conf/bigdataconf/XenopoulosS21, DBLP:conf/www/XenopoulosFS22, yurko2020going, DBLP:conf/aist/MakarovSLI17, DBLP:conf/kdd/RobberechtsHD21}. We refer to this input as $g_t$. In CSGO, we are most concerned with the outcome of a round, so we set $Y_i = 1$ when the CT side wins round $i$ and 0 otherwise. Thus, we want to train a model to predict $\mathbb{P}(Y_i = 1 \mid g_t)$, where $g_t$ is an object that describes the state of a game at time $t$. The aforementioned formulation, using a game state as input, is common when predicting other game outcomes, such as the end yardline of a play in American football or the probability of scoring in soccer~\cite{yurko2020going, DBLP:conf/kdd/DecroosBHD19}.
\subsection{Representing Game States}
There are many ways to represent $g_t$, such as through a single vector containing aggregated team information~\cite{DBLP:conf/bigdataconf/XenopoulosDS20, DBLP:conf/www/XenopoulosFS22} or graphs~\cite{DBLP:conf/bigdataconf/XenopoulosS21}. We provide benchmarks for two game state representations. First, we provide benchmarks for vector representations of $g_t$. In this setup, we define $g_{t}$ as a collection of global and team-specific information, based on past literature~\cite{DBLP:conf/bigdataconf/XenopoulosDS20, DBLP:conf/www/XenopoulosFS22}, namely: the time since the last game phase change, total number of active fires and smokes, where the bomb is planted (A, B, or None), the total number of defuse kits on alive players, the starting equipment values of each side and each side's total number of alive players, current equipment value, starting equipment value, HP, armor, helmets, grenades remaining, and count of players in a bomb zone. Additionally, we also include a flag indicating if the bomb is in a T-side player's inventory. Notably, these vector representations lack player position information, since player-specific information would need to be presented in a permutation invariant manner, such as through an aggregation (e.g., mean or sum). While one could consider more complex representations of a game state, using a single vector with both global and aggregated team information is a common approach in many sports. Thus, it is an important game state representation to benchmark against.
We also provide benchmarks for models that consider $g_t$ as a set, where each $p_{i,t} \in g_t$ represents the information of player $i$ at time $t$. Each $p_{i,t}$ contains global features like the time since the last game phase change, total number of active fires and smokes, where the bomb is planted (A, B, or None), as well as features broadly derived from the player information (location, velocity, view direction, HP, equipment), and flags indicating player-states, such as if the player is alive, blinded, or in the bomb zone. A set representation of $g_t$ has yet to be used for win probability prediction in any sport.
\subsection{Models and Training Setup}
Boosted tree ensembles are commonly used for win probability prediction due to their ease-of-use, strong performance, and built-in feature importance calculations. For vector representations of $g_t$, we consider XGBoost~\cite{DBLP:conf/kdd/ChenG16}, LightGBM~\cite{DBLP:conf/nips/KeMFWCMYL17}, and a multilayer perceptron (MLP) as candidate models. For set representations of $g_t$, we consider Deep Sets~\cite{DBLP:conf/nips/ZaheerKRPSS17} and Set Transformers~\cite{DBLP:conf/icml/LeeLKKCT19}. For each parsed demo, we randomly sample one game state from each round. Then, we randomly select 70\% of these game states for the train set, 10\% for the validation set, and the remaining 20\% for the test set. The label for each game state is determined by the outcome of the round in which the game state took place (i.e., $Y_i = 1$ if the CT side won, 0 otherwise). All benchmarks are available in a Google Colab notebook and require a high-RAM instance with a GPU.
We separate our prediction tasks by map due to the unique geometry imposed by each map. From prior work, we know that CSGO maps often carry unique characteristics which influence win probability~\cite{DBLP:conf/www/XenopoulosFS22}. Furthermore, CSGO maps are not played at equal rates -- some maps, such as Mirage, Inferno, or Dust 2, are selected and played more often than maps like Vertigo or Ancient. Thus, we do not consider demos that occurred on the map Train due to its small sample size (52 games). In total, we have seven maps $\times$ five candidate models for a total of 35 benchmark models.
For LightGBM and XGBoost, we use the default parameters provided by their respective packages, as well as 10 early stopping rounds. For the MLP, we use the default sklearn parameters and scale each feature to be between 0 and 1. Our Deep Sets model uses one fully connected layer in the encoder and one fully connected layer in the decoder. We use the mean as the final aggregation. Our set transformer model uses one induced set attention block with one attention head for the encoder. For its decoder, we use a pooling-by-multihead-attention block~\cite{DBLP:conf/icml/LeeLKKCT19}. When training our Deep Sets and Set Transformer models, we use a batch size of 32 and a hidden vector size of 128 for all layers. We maximize the log likelihood of both models using Adam~\cite{DBLP:journals/corr/KingmaB14} with a learning rate of 0.001, and we train over 100 epochs with 10 early stopping rounds.
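For concreteness, the following is a minimal PyTorch sketch consistent with the Deep Sets configuration described above (one fully connected encoder layer, mean pooling, one fully connected decoder layer, and a hidden size of 128); it is an illustration rather than the exact benchmark implementation, and the per-player input dimensionality is a placeholder.
\begin{lstlisting}[language=Python,caption={A minimal Deep Sets sketch for win probability prediction.},captionpos=b]
import torch
import torch.nn as nn

class DeepSetsWinProb(nn.Module):
    def __init__(self, n_player_features, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_player_features, hidden),
                                     nn.ReLU())
        self.decoder = nn.Linear(hidden, 1)

    def forward(self, players):            # players: (batch, 10, n_features)
        embedded = self.encoder(players)   # per-player embedding
        pooled = embedded.mean(dim=1)      # permutation-invariant aggregation
        return torch.sigmoid(self.decoder(pooled)).squeeze(-1)

# model = DeepSetsWinProb(n_player_features=32)  # placeholder feature count
# optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
# loss_fn = nn.BCELoss()  # maximizing the log likelihood
\end{lstlisting}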
\begin{table}[]
\centering
\caption{Benchmark results by map, measured by log loss (LL) and calibration error (ECE). Results are averaged from 10 runs and reported with their standard error. The best log loss and calibration error for each map are in \textbf{bold}.}
\begin{tabular}{@{}ccccccc@{}}
\toprule
\multicolumn{2}{c}{\multirow{2}{*}{\textbf{Map}}} & \multicolumn{3}{c}{\textit{Vector-based}} & \multicolumn{2}{c}{\textit{Set-based}} \\ \cmidrule(l){3-7}
\multicolumn{2}{c}{} &
\textbf{LightGBM} &
\textbf{XGBoost} &
\textbf{MLP} &
\textbf{\begin{tabular}[c]{@{}c@{}}Deep\\ Sets\end{tabular}} &
\textbf{\begin{tabular}[c]{@{}c@{}}Set\\ Trans.\end{tabular}} \\ \midrule
\multirow{2}{*}{Dust2} & \multicolumn{1}{c|}{LL} & 0.433±0.005 & 0.437±0.005 & \textbf{0.415±0.003} & 0.435±0.005 & 0.458±0.007 \\
& \multicolumn{1}{c|}{ECE} & 0.042±0.002 & 0.044±0.002 & \textbf{0.034±0.003} & 0.047±0.004 & 0.043±0.003 \\ \midrule
\multirow{2}{*}{Mirage} & \multicolumn{1}{c|}{LL} & 0.439±0.004 & 0.440±0.004 & \textbf{0.430±0.004} & 0.440±0.003 & 0.447±0.003 \\
& \multicolumn{1}{c|}{ECE} & 0.041±0.001 & 0.039±0.002 & 0.038±0.004 & 0.040±0.002 & \textbf{0.035±0.004} \\ \midrule
\multirow{2}{*}{Inferno} & \multicolumn{1}{c|}{LL} & 0.453±0.004 & 0.453±0.005 & \textbf{0.442±0.005} & 0.460±0.005 & 0.472±0.005 \\
& \multicolumn{1}{c|}{ECE} & 0.040±0.002 & \textbf{0.036±0.003} & 0.038±0.002 & 0.043±0.002 & 0.041±0.003 \\ \midrule
\multirow{2}{*}{Nuke} & \multicolumn{1}{c|}{LL} & 0.429±0.004 & 0.432±0.004 & \textbf{0.416±0.004} & 0.427±0.004 & 0.447±0.006 \\
& \multicolumn{1}{c|}{ECE} & 0.037±0.002 & \textbf{0.033±0.003} & 0.038±0.001 & 0.036±0.001 & 0.038±0.003 \\ \midrule
\multirow{2}{*}{Overpass} & \multicolumn{1}{c|}{LL} & 0.457±0.004 & 0.460±0.004 & \textbf{0.438±0.005} & 0.452±0.004 & 0.470±0.007 \\
& \multicolumn{1}{c|}{ECE} & \textbf{0.040±0.002} & 0.043±0.004 & 0.040±0.004 & 0.040±0.003 & 0.043±0.004 \\ \midrule
\multirow{2}{*}{Vertigo} & \multicolumn{1}{c|}{LL} & 0.442±0.006 & 0.441±0.005 & \textbf{0.424±0.005} & 0.441±0.005 & 0.456±0.005 \\
& \multicolumn{1}{c|}{ECE} & 0.048±0.002 & 0.045±0.002 & 0.046±0.002 & 0.046±0.002 & \textbf{0.040±0.003} \\ \midrule
\multirow{2}{*}{Ancient} & \multicolumn{1}{c|}{LL} & 0.456±0.009 & 0.457±0.009 & \textbf{0.431±0.008} & 0.458±0.007 & 0.470±0.006 \\
& \multicolumn{1}{c|}{ECE} & 0.061±0.005 & 0.055±0.002 & 0.052±0.003 & 0.051±0.003 & \textbf{0.047±0.003} \\ \bottomrule
\end{tabular}
\label{tab:benchmark-results}
\end{table}
\subsection{Assessing Win Probability Models}
To assess our benchmarked models, we use log loss (LL) and expected calibration error (ECE). ECE is a quantitative measurement of model calibration, and attempts to measure how closely a model's predicted probabilities track true outcomes. ECE is a common metric to assess win probability models since for many applications, stakeholders use the probabilities directly~\cite{DBLP:conf/bigdataconf/XenopoulosDS20, DBLP:conf/www/XenopoulosFS22}. Formally, ECE is defined as
\begin{equation}
ECE = \sum_{w=1}^W \frac{|B_w|}{N} | \textrm{acc}(B_w) - \textrm{conf}(B_w) |
\end{equation}
\noindent where $W$ is the number of equal width bins between 0 and 1, $B_w$ is the set of points in bin $w$, $\textrm{acc}(B_w)$ is the true proportion of the positive class in $B_w$, $\textrm{conf}(B_w)$ is the average prediction in $B_w$, and $N$ is the total number of samples. When assessing our candidate models, we set $W = 10$.
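A direct NumPy implementation of this definition (with $W = 10$ equal-width bins) is sketched below.
\begin{lstlisting}[language=Python,caption={Expected calibration error with equal-width bins.},captionpos=b]
import numpy as np

def expected_calibration_error(y_true, y_prob, n_bins=10):
    """ECE with equal-width bins, following the definition above."""
    y_true, y_prob = np.asarray(y_true, float), np.asarray(y_prob, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for i in range(n_bins):
        lo, hi = edges[i], edges[i + 1]
        last_bin = i == n_bins - 1
        in_bin = (y_prob >= lo) & ((y_prob <= hi) if last_bin else (y_prob < hi))
        if in_bin.any():
            acc = y_true[in_bin].mean()   # acc(B_w): observed frequency
            conf = y_prob[in_bin].mean()  # conf(B_w): mean prediction
            ece += in_bin.mean() * abs(acc - conf)  # weight |B_w| / N
    return ece
\end{lstlisting}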
\subsection{Results}
We report the log loss and expected calibration error by CSGO map and model in Table~\ref{tab:benchmark-results}. No model performed the best across all maps and metrics. We note that, in general, the models using vector-based input---LightGBM, XGBoost, and in particular the MLP---achieved strong performance relative to our set-based approaches. Among the set-based models, Deep Sets slightly outperformed the Set Transformer.
In Figure~\ref{fig:win-prob-example}, we show an example of each model's predicted win probability for a CSGO round on the Mirage map. We see that, although our game state formulation is simple, each of the models is able to capture the impact of important game events, such as damages. These events have clear impacts on the round, and especially so for the features with the highest importance in the gradient boosting models, such as each team's HP or if the bomb is planted. Otherwise, we also notice that the gradient boosted tree models have relatively ``flat'' predictions in certain intervals, which is sensible given that the gradient boosted models should return non-smooth decision surfaces.
During the round shown in Figure~\ref{fig:win-prob-example}, there is a large fight around the 60\textsuperscript{th} game state which causes the game situation to change from 5 CT versus 5 T to 2 CT versus 4 T. Although the bomb is planted around the 80\textsuperscript{th} frame, the effect is small given the already low prior predicted win probability. When the bomb is planted in CSGO, the CT side must decide whether to attempt to defuse the bomb or to ``save'', meaning the team elects to purposely lose the round by avoiding a bomb defusal (thus, forcing their ``true'' win probability to 0). The reason for the latter strategy is so that the saving team keeps their equipment for the next round, so they do not have to spend money. In the situation posed in Figure~\ref{fig:win-prob-example}, the bomb is planted when there are 2 CT players and 4 T players remaining. However, the CT players decide to save in this instance. In the process of saving, they eliminate 3 T players. We can see that the models diverge past the 130\textsuperscript{th} game state, and that, due to the perceived 2-on-1 scenario, the vector-based models produce high win probability predictions. These predictions run counter to conventional game knowledge. Conversely, the set-based models, which account for player locations, are less sensitive to such an increase, which aligns with conventional game knowledge.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/round_win_disagree.png}
\caption{Win probability predictions for a CSGO round, colored by model. We find that in general, the models capture the effects of important game events, such as player damage events or the bomb plant. Although the set-based models perform slightly worse in terms of log loss, they are able to accurately account for player location-based game phenomena, such as the ``save'', whereby the CT side elects to avoid defusing a planted bomb.}
\label{fig:win-prob-example}
\end{figure}
\section{Concluding Remarks} \label{sec:discussion}
We present ESTA, a large-scale and granular dataset containing millions of player actions and trajectories from professional CSGO matches. ESTA is made available via Github and contains roughly 1.5k compressed JSONs, where each JSON represents one game replay. We create ESTA using \texttt{awpy}, an open-source Python library which can parse, analyze, and visualize Counter-Strike game replays. Additionally, we provide benchmarks for win probability prediction, a foundational sports-specific prediction task. As part of these benchmarks, we apply set learning techniques, such as Set Transformers, to win probability prediction. Although the set-based techniques do not outperform the vector-based models, they are potentially able to elucidate some high-level game phenomena. We do not foresee any negative societal impact of our work.
\textbf{Limitations}. While we have made an effort to address limitations during the creation of ESTA, there may still be some constraints in using ESTA. First, while CSGO demos are often recorded at 64 or 128Hz, we record game states at 2Hz. While this may impose difficulties in understanding fine-grained movements, CSGO players generally do not move much within the space of one second. Additionally, the demos were recorded on servers with third-party plugins, which are common for servers used in tournaments. Generally, these plugins help facilitate the CSGO match. For example, in the rare event that a player disconnects during a round, the round will need to be restarted, and these server plugins can help reset the server to a prior state. While the awpy package does contain functionality to clean out such occurrences, some situations may not be fully cleaned by the parser. However, the number of rounds in the final parsed demo has been verified with the true number of rounds, so it is unlikely that such a problem is pervasive in ESTA.
\textbf{Uses of ESTA}. ESTA has multiple uses beyond those benchmarked and portrayed in this work, not only in sports-specific machine learning, but also to the general machine learning community. For example, ESTA provides a large amount of trajectories which one can use for trajectory prediction. Furthermore, unlike traditional sports trajectory data, the trajectory data in ESTA occurs on various maps, each with different geometry, which can provide multiple unique prediction tasks. ESTA can also be used for tasks outside of sports outcome prediction or trajectory prediction. For example, the use of reinforcement learning in combination with video games is well-documented~\cite{jaderberg2019human}. The ESTA dataset may be useful in priming the model for online learning. ESTA may also be of use to the visualization community, which works extensively with sports applications due to the multivariate, temporal, and spatial aspects of sports data which necessitate visual analytics solutions~\cite{DBLP:journals/cga/BasoleS16, ggviz, DBLP:conf/iwec/HorstZD21, DBLP:journals/csur/GudmundssonH17}. Thus, the same sports data challenges that affect the machine learning community, which the ESTA dataset addresses, also affect the visualization community. Finally, the ESTA dataset may also prove useful in education, as it is representative of real-world data and can be used for a variety of tasks.
\section{Introduction}
People love watching competitions. Who will snatch victory? Who will be vanquished? Thus, the businesses of sports\footnote{FIFA generated more than 4.6 billion USD of revenue in 2018, mainly from the 2018 FIFA World Cup, according to https://www.investopedia.com/articles/investing/070915/how-does-fifa-make-money.asp}, e-sports\footnote{Douyu, one of the major e-sports live streaming platforms in China, generated about 6.6 billion CNY of revenue (around 1 billion USD) in 2019, according to https://www.statista.com/statistics/1222790/china-douyu-live-streaming-revenue/}, and live streaming platforms\footnote{The worldwide video streaming market size reached 50.11 billion USD in 2020, according to https://www.grandviewresearch.com/industry-analysis/video-streaming-market} generate a huge amount of revenue.
When people watch a game, their belief about who will win changes over the duration of the game. Intuitively, the \emph{surprise} measures how much the audience's beliefs change over time \cite{ely2015suspense}. An important question for both theorists and practitioners is how to design winner selection schemes that maximize the amount of surprise in a competition and, consequently, improve the entertainment utility of a competition and increase its potential revenue.
In practice, many games use point systems to determine the winner. In some games, such as soccer, cricket, and tennis, the point value remains constant throughout the game. In other games, however, the point value of the final round is higher than that of the earlier rounds.
Mind King, a very popular two-player quiz game on WeChat,\footnote{over 1 million daily users according to an author interview with Tencent staff in July 2021.} has 5 rounds. In each round, each player receives points depending on the correctness and speed of their answer, and the final round doubles the points. Since 2015, IndyCar racing has doubled the points for the final race of the season.\footnote{According to http://www.champcarstats.com/points.htm} The Diamond League,\footnote{According to Wikipedia: https://en.wikipedia.org/wiki/Diamond\_League\#Scoring\_system} a track and field league, determined the Diamond Race winner from 2010 to 2016 by a point system over a season of 7 meets, with the points for the final tournament doubled.\footnote{The scoring system was substantially changed in 2017 to, in particular, include a final restricted to the top-scoring athletes.}
Another very popular example is the game of quidditch in Harry Potter\footnote{Harry Potter is a very popular fantasy novel series that has sold more than 500 million copies according to https://www.wizardingworld.com/news/500-million-harry-potter-books-have-now-been-sold-worldwide}. The game concludes when the golden snitch, which is worth 15 times a normal goal, is captured by one team. While this point structure makes sense for the plot of the books (the title character often catches the snitch), most of the action is ancillary to the games' outcome. Tellingly, ``muggle'' quidditch is now a real-life game; however, the golden snitch is only worth 3 times as much as a normal goal.
\paragraph{Key Question}
In a simple model, we study the effect of the final round's point value on the overall surprise and what point value maximizes the surprise. In our setting, there are two teams and a fixed number of rounds. The winner in each of the first $n-1$ rounds is awarded 1 point and the winner in the final round can possibly receive more points, e.g., double or triple the points earned in previous rounds. The team which accumulates the most points wins.
\paragraph{Types of Prior Beliefs} To measure surprise, we need to model the audience's beliefs. The audience's belief about who will win (and thus the surprise) depends on the audience's prior belief about the two contestants' chances of winning each round. Note that this belief may be updated as the match progresses. Intuitively, in a game where the audience believes that the two players' abilities are highly asymmetric, e.g., a strong vs. a weak player, we should set a large bonus; otherwise, the weaker player will likely be mathematically eliminated before the game's conclusion. In contrast, if the two players are perfectly evenly matched and the total number of rounds is odd, we will show that the optimal bonus value is 1, the same point value as in the previous rounds. Below, we consider three special cases for prior beliefs and then introduce the general case.
The first case is that the audience has a fixed and unchanging belief about the chance that each competitor wins each round. We call this the \emph{certain} case. The size of the optimal bonus then depends on the believed difference between the two contestants' ability levels.
The second case is where the audience has no prior knowledge about the two players' abilities.\footnote{It is worthwhile to note that this is different from the case where the prior belief is that both players have an equal chance to win: no belief updating occurs when the winning probability is known to equal $\frac12$, while belief updating does occur in the uniform case. } We call this the \emph{uniform} case. Specifically, we model the audience's prior over the two players as a uniform distribution and derive the optimal bonus size accordingly. A slight generalization of the uniform case is the \emph{symmetric} case, where the audience has a certain amount of prior knowledge but none favoring one contestant over the other. In this case, we model the audience's prior as a symmetric Beta distribution. In the general case, we model the prior as a general Beta distribution, $\mathcal{B}e(\alpha,\beta)$\footnote{When Alice and Bob have played before and Alice has won $n_A$ rounds and Bob has won $n_B$, we can use $\mathcal{B}e(n_A+1,n_B+1)$ to model the prior belief. In general, we allow $\alpha,\beta$ to be non-integers. }, a family of continuous probability distributions on the interval $[0, 1]$ parameterized by two positive shape parameters, $\alpha,\beta\geq 1$. The figures above Table~\ref{table:basic} illustrate the above cases.
\paragraph{Techniques} Our analysis is built on three insights. First, for Beta prior beliefs, we show that for $i \leq n - 1$, the ratio of the expected surprise in rounds $i$ and $i-1$ is a constant that only depends on $i$ and the prior. This allows us to reduce the entire analysis to the trade-off between the final round's surprise amount and the penultimate round's surprise amount. Second, we find that the final round's surprise amount increases when the bonus increases, while the penultimate round's surprise consists of two parts, one increasing and one decreasing with the bonus size. Third, we show how to optimize this trade-off, which yields our results.
\begin{table}[!ht]
\center
\begin{minipage}{\linewidth}
\begin{tabular}{cccc}\centering
&\begin{minipage}{.25\linewidth}\includegraphics[width=1\linewidth]{figures/distribution_symmetric.pdf}\end{minipage} & \begin{minipage}{.25\linewidth}\includegraphics[width=1\linewidth]{figures/distribution_certain.pdf}\end{minipage} & \begin{minipage}{.25\linewidth}\includegraphics[width=1\linewidth]{figures/distribution_general.pdf}\end{minipage}\\
\hline
& Symmetric & Certain & General\\
& $\alpha=\beta$ & $\alpha=\lambda p,\beta=\lambda (1-p),\lambda\rightarrow\infty$ & \\
\hline
Finite & \begin{minipage}{.25\linewidth}\[\textsc{rd}(\frac{n-1}{2\alpha\mathbb{H}-\frac{n-1}{n+2\alpha-1}})\]\end{minipage} & \begin{minipage}{.25\linewidth}\[\textsc{rd}(\text{Solution of $F(x)=0$})\]\end{minipage} & \begin{minipage}{.25\linewidth}\[O(n)\text{ algorithm}\]\end{minipage}\\
& \begin{minipage}{.25\linewidth}\end{minipage} & \begin{minipage}{.25\linewidth}\[\approx n \frac{\alpha-\beta}{\alpha+\beta}\]\end{minipage} & \begin{minipage}{.25\linewidth}\end{minipage}\\
\specialrule{0em}{2pt}{2pt}
Asymptotic & \begin{minipage}{.15\linewidth}\[n\frac{1}{2\alpha \mathbb{H}-1} \approx \frac{\frac{n}{2\alpha}}{\ln (\frac{n}{2\alpha})}\]\end{minipage} & \begin{minipage}{.25\linewidth}\[n\frac{(\alpha-\beta)\mathbb{H}+1}{(\alpha+\beta)\mathbb{H}-1}\approx n \frac{\alpha-\beta}{\alpha+\beta}\]\end{minipage} & \begin{minipage}{.25\linewidth}\[n*\text{Solution of $G(\mu)=0$}\]\end{minipage} \\
\specialrule{0em}{2pt}{2pt}
\hline
\end{tabular}
\caption{\textbf{Optimal bonus}}
\begin{minipage}{\linewidth}
\footnotesize
\[\text{Without loss of generality, we assume $\alpha\geq\beta$ which implies that $p\geq \frac12$.}\]
\[\textsc{rd}(x):=\text{the nearest integer to $x$ that has the same parity as the number of rounds $n$.}\footnote{When there is a tie, we pick the smaller one.}\]
\[\mathbb{H}=\mathbb{H}_{\alpha+\beta}(n-1):=\sum_{i=1}^{n-1}\frac{1}{i+\alpha+\beta-1},F(x):=(2np-n-(x-1))p^{x-1}+(n-2np-(x-1))(1-p)^{x-1}, x\in [1,n+1)\footnote{When $p=\frac12$ or $n\leq \frac{1}{(\frac{1}{2}-p)\ln(\frac{1-p}{p})}$, $F(x)$ only has a trivial solution $x=1$ and the optimal bonus is $\textsc{rd}(1)$, otherwise, the optimal bonus is $\textsc{rd}(\Tilde{x})$ where $\Tilde{x}$ is the unique non-trivial solution.}\]
\[G(\mu):=(1+\mu)^{\alpha-\beta}\left(\frac{(\alpha-\beta)\mathbb{H}+1}{(\alpha+\beta)\mathbb{H}-1}-\mu\right)+(1-\mu)^{\alpha-\beta}\left(\frac{(-\alpha+\beta)\mathbb{H}+1}{(\alpha+\beta)\mathbb{H}-1}-\mu\right),\mu\in(0,1)\]
\end{minipage}
\label{table:basic}
\end{minipage}
\end{table}
\begin{figure}[!ht]\centering
\includegraphics[width=.95\linewidth]{figures/contourf_infinity_10000_1024_large.pdf}
\caption{\textbf{Optimal bonus in asymptotic case}: for all $\alpha,\beta\geq 1$, we use $\frac{\alpha-\beta}{\alpha+\beta}$ to measure the skewness of the prior and $\frac{1}{\alpha+\beta}$ to measure the uncertainty of the prior. The figure shows, for sufficiently large $n$ (we use $n=10000$ here), the relationship between the optimal bonus size and the skewness/uncertainty of the prior.}
\label{fig:infinite}
\end{figure}
\subsection{Results}
Table~\ref{table:basic} shows the optimal bonus size in each case of the prior---symmetric, certain, and general---both for any finite number of rounds and asymptotically as the number of rounds grows. Note that we have closed-form formulae for the symmetric case and for the asymptotic certain case.
We obtain the following interesting insights:
\paragraph{Insight 1. In the certain case, more uneven match-ups lead to a larger optimal bonus} When the match is more uneven, i.e., when the prior has a higher skewness $\frac{\alpha-\beta}{\alpha+\beta}$, the optimal bonus is larger. Interestingly, in the certain case, we find that the optimal bonus is around $\frac{\alpha-\beta}{\alpha+\beta}n$, which is the expected number of points the weaker player needs to make up; we call this the ``expected lead''. For example, suppose Marvel superheroes compete in a multi-round match. When Thor fights Hawkeye, there should be a larger bonus than when Thor fights Iron Man.
\paragraph{Insight 2. In the symmetric case, more uncertainty leads to a higher optimal bonus.} In the symmetric case, when the prior has more uncertainty, i.e., when $\frac{1}{\alpha+\beta}=\frac{1}{2\alpha}$ is larger, the optimal bonus is larger. In particular, the uniform case has a higher bonus than the certain case. For example, if Thor fights Superman, no one really knows the heroes' relative abilities, since they come from different worlds; there should be a larger bonus than if Thor fights Iron Man, since Thor has fought Iron Man before and they are known to be a good match-up. Note that, in general, the optimal bonus size is not monotone in the uncertainty while holding the skewness fixed.
Figure~\ref{fig:infinite} shows the optimal bonus for the asymptotic case. For small $\alpha-\beta$, it is similar to the symmetric case (the $y$-axis); analogously, for large $\alpha-\beta$, it is similar to the certain case (the $x$-axis). Thus, we can use the special cases' results to approximate the general case.
\subsection{Related Work}
\citet{ely2015suspense} provide a clear definition of the amount of surprise. Starting from \citet{ely2015suspense}, a growing literature examines the relationship between surprise and perceived quality in games such as tennis~\cite{bizzozero2016importance}, soccer~\cite{buraimo2020unscripted,LUCAS201758}, and rugby~\cite{scarf2019outcome}, presenting empirical support that the audience's perceived quality of a game is partly determined by the amount of surprise it generates. \citet{ely2015suspense} additionally study what number of rounds maximizes surprise in our certain case. In contrast, we study how to derive the optimal number of points for the final round and consider a more general prior model.
In addition to the surprise-related work, there has been a growing number of theoretical and empirical results studying how game rules affect different properties of the game, primarily fairness. For example, \citet{brams2018making} study how to make the penalty shootout rule fairer. \citet{braverman2008mafia} study how to make a popular party game, mafia, fair by tuning the number of different characters in the game. \citet{percy2015mathematical} shows that the change of the badminton scoring system in 2006, made in an attempt to make the game faster paced, did not adversely affect fairness.
One conception of fairness corresponds to the ``better'' team winning. However, in some cases, like the certain case, this makes for a very dull game, as there will be no surprise possible. Thus, the goals of optimizing surprise and of making the ``better'' team consistently win are sometimes in tension.
In addition to fairness, \citet{percy2015mathematical} also studies the influence of the 2006 changes to the badminton scoring system on the entertainment value, which is related to the number of rallies. However, in contrast to our work, \citet{percy2015mathematical} focuses on comparing two scoring rules rather than on optimization and does not formally model the entertainment value. \citet{kovacs2009effect} studies the effect of changes in the volleyball scoring system and empirically shows that the change may make the length of matches more predictable.
\citet{mossel2014majority}, using Fourier analysis, show that if one team wins each round with probability $p > 1/2$, then it wins the match with probability at least $p$. Moreover, the only way for the winning probability to be exactly $p$ is for the match to be decided by the outcome of a single round. (They were studying opinion aggregation on social networks, and this result applies to all deterministic, symmetric, and monotone Boolean functions with i.i.d.\ inputs.) This directly relates to the present work because, ideally, each team would start with a prior probability of winning close to 1/2. This result says that the dictator function (which can be enacted in our setting with a bonus of size $n$) yields a prior as close to 1/2 as possible. Our results in the certain case can be seen as trading off the two goals of having a prior probability close to 1/2 while revealing information more slowly over time than a dictator function.
\section{Problem Statement}
We consider an $n$-round competition between two players, Alice and Bob. In round $i$, a task is assigned to the players and the winner receives points. After the $n$th round, the player with the higher accumulated score wins the competition. In our setting, we assume that each of the first $n-1$ rounds is worth one point. However, the point value $x$ for the final round might be different, and we call $x$ the \emph{bonus}. Setting only the final round's bonus is a special but interesting case to study. Without loss of generality, we consider $x$ to be an integer with $0\leq x\leq n$.\footnote{Mathematically, $x>n$ is equivalent to $x=n$ where only the final round matters. Any non-integer $x$ is equivalent to an integer. For example, when $n=4$, $x=1.3$ is equivalent to $x=2$.} To ensure there is no tie, we additionally require that $x$ have the same parity as $n$. Let $\mathcal{X}(n)$ denote the set of values of $x$ we consider, that is
\[
\mathcal{X}(n) = \begin{cases}
\{1,3,5, \ldots, n\}\text{ , when n is odd}\\
\{0,2,4, \ldots, n\}\text{ , when n is even}
\end{cases} .
\]
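For illustration, $\mathcal{X}(n)$ can be enumerated directly (a minimal Python sketch; the function name is ours and is not part of the formal model):
\begin{verbatim}
# Illustration only: feasible bonus values (same parity as n, between 0 and n).
def feasible_bonuses(n):
    return list(range(n % 2, n + 1, 2))

# feasible_bonuses(5) -> [1, 3, 5];  feasible_bonuses(4) -> [0, 2, 4]
\end{verbatim}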
\subsection{Optimization Goal}
To introduce our optimization goal formally, we first introduce the concept of belief curve.
\paragraph{Belief Curve}The audience's belief curve is a sequence of random variables \[\mathcal{\bel}:=(B_0,B_1,\ldots,B_n)\] where $B_i$, $i\in[n]$, is the audience's belief, after round $i$, in the probability that Alice wins the whole competition. $B_0$ is the initial belief. $B_n$ is either zero or one since the final outcome is revealed at the end.
We define $O\in\{0,1\}$ as the outcome of the whole competition, that is \[O=\begin{cases}0 & \text{Alice loses the whole competition} \\ 1 & \text{Alice wins the whole competition}\end{cases},\] then we define random variables
\begin{align*}
H_i&=\begin{cases}- & \text{Alice loses i-th round} \\ + & \text{Alice wins i-th round}\end{cases}\\
\mathcal{\his}^{(i)}&=(H_1,H_2,\ldots,H_{i})
\end{align*}
i.e., $\mathcal{\his}^{(i)}$ is the history of the first $i$ rounds; we call $\mathcal{\his}^{(i)}$ an \emph{$i$-history}.
Next, we define random variables
\begin{align*}
B_i&=\Pr[O=1|\mathcal{\his}^{(i)}]\\
\mathcal{\bel}^{(i)}&=(B_0,B_1,\ldots,B_i)
\end{align*}
i.e., $B_i$ is the conditional probability that Alice wins the whole competition. Additionally, we use $\mathcal{\bel}$ as a shorthand for $\mathcal{\bel}^{(n)}$. Note that $\mathcal{\bel}$ is a Doob martingale~\cite{1940Regularity}; thus $\forall i$, $\mathrm{E}[B_{i+1}|\mathcal{\his}^{(i)}]=B_i$.
\begin{definition}[Surprise] \cite{ely2015suspense}
Given the belief curve $\mathcal{\bel}$, we define $\Delta_\mathcal{\bel}^i:=|B_i-B_{i-1}|$ as the \emph{amount of surprise generated by round $i$}.
We define the \emph{overall surprise} for a given belief curve to be \[\Delta_\mathcal{\bel}:= \sum_i \Delta_\mathcal{\bel}^i. \]
\end{definition}
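For illustration, the overall surprise of one realized belief curve can be computed directly from this definition (a minimal Python sketch, ours):
\begin{verbatim}
# Illustration only: overall surprise of a realized belief curve (B_0,...,B_n),
# i.e. the sum of absolute belief changes.
def overall_surprise(beliefs):
    return sum(abs(b1 - b0) for b0, b1 in zip(beliefs, beliefs[1:]))

# a flat curve scores low, a swingy curve scores high, e.g.
# overall_surprise([0.5, 0.5, 0.5, 1.0]) -> 0.5
# overall_surprise([0.5, 0.9, 0.1, 1.0]) -> 2.1  (0.4 + 0.8 + 0.9)
\end{verbatim}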
\paragraph{Maximizing the Overall Expected Surprise}
We aim to compute the bonus size $x$ which, in expectation, maximizes the audience's overall surprise. That is, we aim to find $x$ to maximize the overall surprise \[ \mathop{\arg\max}_{x\in \mathcal{X}(n)} \mathrm{E}[\Delta_\mathcal{\bel}(x)] \] where $\Delta_\mathcal{\bel}(x)$ is the overall surprise when the bonus round's point value is equal to $x$.
\begin{figure}[!ht]\centering
\includegraphics[width=.8\linewidth]{figures/two_example_curves.pdf}
\caption{\textbf{Belief curves with low/high overall surprise}}
\label{fig:two_example_curves}
\end{figure}
\subsection{Model of Prior Belief}
We introduce a natural model for the audience's prior. We will maximize the overall expected surprise in this model.
We assume that each player's winning probability across rounds is constant, i.e., Alice wins with the \emph{same} probability $p$ in each round. Moreover, we assume the outcomes of these tasks are independent.
\paragraph{Prior over $p$} We use Beta distribution $\mathcal{B}e(\alpha,\beta)$ to model the audience's prior about Alice's winning probability $p$ in each round. The family of Beta distributions is sufficiently rich to cover a variety of important scenarios including the uniform case ($\alpha=\beta=1$), the symmetric case ($\alpha=\beta$) and the certain cases ($\alpha=\lambda p,\beta = \lambda(1-p), \lambda\rightarrow \infty$). A key property of the Beta distribution is that, if $p$ is drawn from a Beta distribution and then we see the outcome of a coin which lands heads with probability $p$, the posterior of $p$ after observing a coin flip is still a Beta distribution.
\begin{claim}[Beta's posterior is Beta]
If the prior $p$ follows $\mathcal{B}e(\alpha ,\beta)$, then conditioning on Alice winning the first round, the posterior distribution over p is $\mathcal{B}e(\alpha+1,\beta)$, and conditioning on Alice losing the first round, the posterior distribution over p is $\mathcal{B}e(\alpha,\beta+1)$.
\label{cla:betaproperty}
\end{claim} Let $p|\mathcal{\his}^{(i)}$ be a random variable which follows the posterior distribution of $p$ conditioning on i-history $\mathcal{\his}^{(i)}$. For all $i\leq n-1$, we define an induced random variable $S_i:=\mathrm{COUNT}(\mathcal{\his}^{(i)})$ as the number of rounds Alice wins in the first $i$ rounds, called the \emph{state} after $i$ rounds. $p|S_i$ is a random variable which follows the posterior distribution of $p$ conditioning on state $S_i$. The property of Beta distribution (Claim~\ref{cla:betaproperty}) directly induces the following claim.
\begin{claim}[Order Independence]\label{cla:beta}
For all $i\leq n-1$, for all $h\in\{+,-\}^i$, $p|(\mathcal{\his}^{(i)}=h)$ follows distribution $\mathcal{B}e(\alpha+\mathrm{COUNT}(h), \beta+i-\mathrm{COUNT}(h))$.
\end{claim}
This immediately implies the following corollary.
\begin{corollary}
[State Dependence]\label{cor:beta}
For all $i\leq n-1$, for all $h\in\{+,-\}^i$, $p|(\mathcal{\his}^{(i)}=h)$ follows the same distribution as $p|(S_i = \mathrm{COUNT}(h))$ and $\Pr[O=1|\mathcal{\his}^{(i)}=h]=\Pr[O=1|S_i=\mathrm{COUNT}(h)]$.
\end{corollary}
That is, the posterior distribution of $p$ or $O$ only depends on the state, i.e., the number of rounds Alice wins; the order does not matter. For example, the histories $++-$ and $-++$ induce the same posterior.
The above properties make our prior model tractable. In this model, the optimal bonus size $x^*$ depends on $\alpha,\beta,n$ and is thus denoted by $x^*(\alpha,\beta,n)$.
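For illustration, the posterior update of Claim~\ref{cla:betaproperty} and the state dependence of Corollary~\ref{cor:beta} can be sketched as follows (a minimal Python sketch; the function names are ours):
\begin{verbatim}
# Illustration only: Beta posterior after a history string such as '++-'.
def posterior_params(alpha, beta, history):
    wins = history.count('+')
    return alpha + wins, beta + (len(history) - wins)

def next_round_win_prob(alpha, beta, history):
    a, b = posterior_params(alpha, beta, history)
    return a / (a + b)   # posterior mean of p, the next round's winning probability

# '++-' and '-++' both give Be(alpha+2, beta+1), hence the same prediction.
\end{verbatim}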
\subsection{Method Overview}
\paragraph{Technical Challenge} The key technical challenge is that we do not have a closed-form expression for the belief values across all rounds (especially the early rounds). For example, in the asymmetric case, it is difficult even to express the initial belief $B_0$ for different $x$. A naive way to compute all belief values is backward induction; we can then enumerate all possible $x$ to find the optimal one. However, this method has $O(n^3)$ time complexity.
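For concreteness, the following minimal Python sketch (our illustration, not part of the formal analysis) implements this naive approach: it computes the beliefs by backward induction over states, using the posterior mean of $p$ given the current state as the next round's winning probability, accumulates the per-round expected surprise with a forward pass, and brute-forces the bonus over $\mathcal{X}(n)$:
\begin{verbatim}
# Illustration only, unoptimized. States are (i, j): j = number of rounds
# Alice has won after i rounds; the Beta prior makes beliefs depend only
# on the state, with posterior mean (j + alpha)/(i + alpha + beta).
def per_round_surprise(n, x, alpha=1.0, beta=1.0):
    q = lambda i, j: (j + alpha) / (i + alpha + beta)

    def alice_wins(j, wins_final):
        # first n-1 rounds are worth 1 point each, the final round is worth x
        a = j + (x if wins_final else 0)
        b = (n - 1 - j) + (0 if wins_final else x)
        return 1.0 if a > b else 0.0

    # backward induction: belief[i][j] = Pr[Alice wins overall | S_i = j]
    belief = [[0.0] * (i + 1) for i in range(n)]
    for j in range(n):
        p = q(n - 1, j)
        belief[n - 1][j] = p * alice_wins(j, True) + (1 - p) * alice_wins(j, False)
    for i in range(n - 2, -1, -1):
        for j in range(i + 1):
            p = q(i, j)
            belief[i][j] = p * belief[i + 1][j + 1] + (1 - p) * belief[i + 1][j]

    # forward pass: state probabilities and per-round expected surprise
    surprise, reach = [], [1.0]
    for i in range(n - 1):
        nxt, s = [0.0] * (i + 2), 0.0
        for j in range(i + 1):
            p = q(i, j)
            s += reach[j] * (p * abs(belief[i + 1][j + 1] - belief[i][j])
                             + (1 - p) * abs(belief[i + 1][j] - belief[i][j]))
            nxt[j + 1] += reach[j] * p
            nxt[j] += reach[j] * (1 - p)
        surprise.append(s)
        reach = nxt
    last = sum(reach[j] * (q(n - 1, j) * abs(alice_wins(j, True) - belief[n - 1][j])
                           + (1 - q(n - 1, j)) * abs(alice_wins(j, False) - belief[n - 1][j]))
               for j in range(n))
    surprise.append(last)      # [E[Delta^1], ..., E[Delta^n]]
    return surprise

def brute_force_optimal_bonus(n, alpha=1.0, beta=1.0):
    # enumerate all bonuses with the same parity as n (O(n^3) in total)
    return max(range(n % 2, n + 1, 2),
               key=lambda x: sum(per_round_surprise(n, x, alpha, beta)))
\end{verbatim}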
To overcome this challenge, we utilize the properties of the Beta distributions. First, we show that we only need to analyze the belief values in the final two rounds and choosing $x$ becomes a trade-off between the final and penultimate rounds\footnote{When $x$ increases, the final round generates more surprise while the penultimate round generates less surprise.}; Second, we study a few important special cases (asymptotic, symmetric, certain) which can further simplify the final two rounds' analysis significantly. Third, instead of actually computing $\mathrm{E}[\Delta_\mathcal{\bel}(x)]$, we only analyze how $\mathrm{E}[\Delta_\mathcal{\bel}(x)]$ changes with $x$.
\paragraph{Method Overview} Our method has three steps.
\begin{figure}[!ht]\centering
\includegraphics[width=.9\linewidth]{figures/last2example.PNG}
\caption{\textbf{Overview of our method} }
\label{fig:last2example}
\end{figure}
\begin{description}
\item [Step 1 (Main Technical Lemma)] We show that, fixing $n,\alpha, \beta$, there exists a constant $C$, independent of $x$, such that for all $x$,
\[\mathrm{E}[\Delta_\mathcal{\bel}(x)]=\mathrm{E}[\Delta_\mathcal{\bel}^{n-1}(x)]*C+\mathrm{E}[\Delta_\mathcal{\bel}^{n}(x)]\]
Thus, the choice of $x$ only depends on the trade-off between the penultimate and the final round's surprise. This significantly simplifies our analysis since we have closed-form expressions for the beliefs in the final two rounds.
\item [Step 2 (Final \& Penultimate Rounds)]
We define $\Delta_{\mathcal{\his}^{(i-1)}}^i$ as the expected amount of surprise generated by round $i$ given the history $\mathcal{\his}^{(i-1)}$. $\Delta_{S_{i-1}}^i$ is the amount of surprise generated by round $i$ given that Alice wins $S_{i-1}$ rounds in the first $i-1$ rounds. Corollary~\ref{cor:beta} shows that $\Delta_{\mathcal{\his}^{(i-1)}}^i$ only depends on the state of the history, that is, for any history $h\in \{+,-\}^{i-1}$, $\Delta_{S_{i-1}=\mathrm{COUNT}(h)}^i=\Delta_{\mathcal{\his}^{(i-1)}=h}^i$.
\begin{description}
\item [The final round:] In the final round, we notice that when the score difference between Alice and Bob is strictly greater than $x$, the outcome of the whole competition does not change regardless of who wins the final round, and no surprise is generated in the final round. Formally, we define $L_{n-1}:=\frac{n-x}2$ and $U_{n-1}:=\frac{n+x-2}2$. Only states $S_{n-1}\in[L_{n-1},U_{n-1}]$ generate surprise. Thus, the expected surprise generated in round $n$ is
\begin{align} \label{eq:lastround}
\mathrm{E}[\Delta_\mathcal{\bel}^{n}(x)]=\overbrace{\sum_{j=L_{n-1}}^{U_{n-1}}\Pr[S_{n-1}=j]*\Delta_{S_{n-1}=j}^n}^\text{final round}
\end{align}
In any state $S_{n-1}=j\in[L_{n-1},U_{n-1}]$, whoever wins the final round wins the whole competition and the analysis for all $\Delta_{S_{n-1}=j}^n$ is identical.
\item [The penultimate round:] In the penultimate round, similarly, we define $L_{n-2}:=\frac{n-x-2}2$ and $U_{n-2}:=\frac{n+x-2}2$. Similarly, only states $S_{n-2}\in[L_{n-2},U_{n-2}]$ generate surprise. The states in $(L_{n-2},U_{n-2})$ are similar. However, unlike the final round, here the states at the endpoints require different analysis. Therefore, we divide the analysis into three parts \begin{align}
\mathrm{E}[\Delta_\mathcal{\bel}^{n-1}(x)]=&\overbrace{\Pr[S_{n-2}=L_{n-2}]*\Delta^{n-1}_{S_{n-2}=L_{n-2}}}^\text{penultimate round (at point $L_{n-2}$)}+\overbrace{\Pr[S_{n-2}=U_{n-2}]*\Delta^{n-1}_{S_{n-2}=U_{n-2}}}^\text{penultimate round (at point $U_{n-2}$)}+ \notag\\
&\underbrace{\sum_{j=L_{n-2}+1}^{U_{n-2}-1}\Pr[S_{n-2}=j]*\Delta^{n-1}_{S_{n-2}=j}}_\text{penultimate round (between $L_{n-2}$ and $U_{n-2}$)} \label{eq:2tolast}
\end{align}
\end{description}
\item [Step 3 (Local Maximum): ] To calculate the optimal bonus $x$, we need to analyze how $\mathrm{E}[\Delta_\mathcal{\bel}(x)]$ changes with $x$.
\begin{description}
\item [Finite Case] Since we require the bonus $x$ to be an integer with the same parity as $n$, we calculate the change of the function $\mathrm{E}[\Delta_\mathcal{\bel}(x)]$ when $x$ is increased/decreased by a step size $2$. We find that there is only one local (and thus global) maximum.
\item [Asymptotic Case] We calculate the derivative of $\mathrm{E}[\Delta_\mathcal{\bel}(x)]$ with respect to $x$, and find that it only has one zero solution which is a local (and also global) maximum.
\end{description}
\end{description}
\section{Main Technical Lemma}
In this section, we introduce our main technical lemma: regardless of the final round's point value $x$, the expected surprises of consecutive rounds among the first $n-1$ rounds stand in a fixed ratio. Thus, the overall surprise can be rewritten as a linear combination of the final and the penultimate rounds' expected surprise, where the coefficients are independent of $x$. This simplifies our analysis significantly since the choice of $x$ only depends on the trade-off between the final and the penultimate rounds' expected surprise.
\begin{lemma}[Main Technical Lemma]\label{lem:ratio}
When the prior is $\mathcal{B}e(\alpha,\beta)$, the ratio of the expected surprise of round $i$ to that of round $i+1$ (where $i+1<n$) is independent of the final round's point value $x$:
\[
\frac{\mathrm{E}[\Delta_\mathcal{\bel}^{i}]}{\mathrm{E}[\Delta_\mathcal{\bel}^{i+1}]}=\frac{i+\alpha+\beta}{i+\alpha+\beta-1},
\]
thus the overall surprise is a linear combination of the final and the penultimate round's surprise,
\[
\mathrm{E}[\Delta_\mathcal{\bel}]=\sum_{i=1}^{n}\mathrm{E}[\Delta_\mathcal{\bel}^i]=\mathrm{E}[\Delta_\mathcal{\bel}^{n-1}]*(n+\alpha+\beta-2)*\mathbb{H}_{\alpha+\beta}(n-1)+\mathrm{E}[\Delta_\mathcal{\bel}^{n}]
\]
where $\mathbb{H}_{\alpha+\beta}(n-1):=\sum_{i=1}^{n-1}\frac{1}{i+\alpha+\beta-1}$. We use $\mathbb{H}$ as shorthand for $\mathbb{H}_{\alpha+\beta}(n-1)$.
\end{lemma}
\paragraph{Proof Sketch} To prove this lemma, we first introduce Claim~\ref{cla:roundsurp} which gives a simple format of the expected surprise generated by a single round initialized from any history $h$. We then apply the claim to analyze two consecutive rounds initialized from any history and show that the expected surprise produced by these two rounds has a fixed relative ratio which only depends on the round number, $\alpha$, and $\beta$. We then extend the results to the expectation over all possible histories.
To simplify the notation in the proof, we introduce shorthand notations for expectation of $p$, the belief values, and the difference between belief values here:
\[
\begin{cases}
q:=\mathrm{E}[p|\mathcal{\his}^{(i-1)}=h]\\
q^+:=\mathrm{E}[p|\mathcal{\his}^{(i)}=h+]\\
q^-:=\mathrm{E}[p|\mathcal{\his}^{(i)}=h-]\\
\end{cases} \begin{cases}
b:=\Pr[O=1|\mathcal{\his}^{(i-1)}=h]\\
b^+:=\Pr[O=1|\mathcal{\his}^{(i)}=h+]\\
b^-:=\Pr[O=1|\mathcal{\his}^{(i)}=h-]\\
\end{cases} \begin{cases}
d:=b^+-b^-\\
d^+:=b^{++}-b^{+-}\\
d^-:=b^{-+}-b^{--}\\
\end{cases}
\]
where $ b^{++},b^{+-},b^{-+},b^{--}$ are defined analogously. These notations are also illustrated in Figure~\ref{fig:surpcalc} and Figure~\ref{fig:surprise_example_2}.
\begin{figure}[!ht]\centering
\includegraphics[width=.65\linewidth]{figures/surpcalc.PNG}
\caption{\textbf{A single round} }
\label{fig:surpcalc}
\end{figure}
\begin{claim}\label{cla:roundsurp}
Given any history $\mathcal{\his}^{(i-1)}=h$, we have \[\Delta_{\mathcal{\his}^{(i-1)}=h}^i=d*2q(1-q).\]
\end{claim}
\begin{proof}[Proof of Claim~\ref{cla:roundsurp}]
The expected amount of surprise generated in round $i$ is
\begin{align} \label{eq:delta-i}
\Delta_{\mathcal{\his}^{(i-1)}=h}^i=&q * (b^+ - b)+(1-q)*(b -b^-)
\end{align}
Since $\mathcal{\bel}$ is a martingale, we have
\begin{align} \label{eq:b}
B_{i-1}=&\mathrm{E}[B_{i}|\mathcal{\his}^{(i-1)}=h] \nonumber\\
b =& q*b^++(1-q)*b^-
\end{align}
By substituting $b$ from \eqref{eq:b} into \eqref{eq:delta-i} and simplifying the equation, we have
\[\Delta_{\mathcal{\his}^{(i-1)}=h}^i=d*2q(1-q)\]
\end{proof}
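A quick numerical check of this identity with arbitrary values (illustration only, not part of the proof):
\begin{verbatim}
# Illustration only: verify Delta = d * 2q(1-q) on arbitrary numbers.
q, b_plus, b_minus = 0.6, 0.9, 0.2
b = q * b_plus + (1 - q) * b_minus                 # martingale property
lhs = q * (b_plus - b) + (1 - q) * (b - b_minus)   # expected one-round surprise
rhs = (b_plus - b_minus) * 2 * q * (1 - q)         # d * 2q(1-q)
assert abs(lhs - rhs) < 1e-12
\end{verbatim}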
\begin{figure}[!ht]\centering
\includegraphics[width=.9\linewidth]{figures/2roundsurp_d.PNG}
\caption{\textbf{Two rounds} }
\label{fig:surprise_example_2}
\end{figure}
\begin{proof}[Proof of Lemma~\ref{lem:ratio}]
Figure~\ref{fig:surprise_example_2} shows two consecutive rounds starting from history $\mathcal{\his}^{(i-1)}=h$. We show that the ratio between these two rounds' surprise is fixed.
By Claim~\ref{cla:roundsurp}, we have \[\Delta_{\mathcal{\his}^{(i-1)}=h}^i=d*2q(1-q)\]
and the expected amount of surprise generated in round $i+1$ is
\begin{align*}
\mathrm{E}[\Delta_{\mathcal{\his}^{(i)}|\mathcal{\his}^{(i-1)}=h}^{i+1}] &= q*\Delta^{i+1}_{\mathcal{\his}^{(i)}=h+}+(1-q)*\Delta^{i+1}_{\mathcal{\his}^{(i)}=h-}\\
&=q*d^+*2q^+(1-q^+)+(1-q)*d^-*2q^-(1-q^-)
\end{align*}
Properties of Beta distribution imply that the belief given any history only depends on the state of the history (Claim~\ref{cla:beta}), thus $b^{+-}=b^{-+}$. Then we have $d=d^+*q^++d^-*(1-q^-)$ (see Figure~\ref{fig:surprise_example_2}) and
\begin{align*}
\frac{\Delta_{\mathcal{\his}^{(i-1)}=h}^i}{\mathrm{E}[\Delta_{\mathcal{\his}^{(i)}|\mathcal{\his}^{(i-1)}=h}^{i+1}]} &= \frac{d*2q(1-q)}{q*d^+*2q^+(1-q^+)+(1-q)*d^-*2q^-(1-q^-)}\\ &= \frac{(d^+*q^++d^-*(1-q^-))*2q(1-q)}{q*d^+*2q^+(1-q^+)+(1-q)*d^-*2q^-(1-q^-)}\\
&= \frac{d^+q^+q*(1-q)+d^-(1-q^-)(1-q)*q}{d^+q^+q*(1-q^+)+d^-(1-q^-)(1-q)*q^-}
\end{align*}
We observe that if $q(1-q^+)=\Pr[h+-|h]=\Pr[h-+|h]=(1-q)q^-$, then $\frac{1-q}{1-q^+}=\frac{q}{q^-}$ and the ratio becomes $\frac{q}{q^-}$.
The posterior of a Beta distribution is still a Beta distribution. We denote the distribution of $p|(\mathcal{\his}^{(i-1)}=h)$ by $\mathcal{B}e(\alpha',\beta')$. Then we have
\[
\begin{cases}
q=\frac{\alpha'}{\alpha'+\beta'}\\
q^-=\frac{\alpha'}{\alpha'+\beta'+1}\\
q^+=\frac{\alpha'+1}{\alpha'+\beta'+1}
\end{cases}
\]
Thus, we have $\Pr[h+-|h]=\Pr[h-+|h]$, and the ratio becomes $\frac{q}{q^-}=\frac{\alpha'+\beta'+1}{\alpha'+\beta'}$, i.e.,
\[
\frac{\Delta_{\mathcal{\his}^{(i-1)}=h}^{i}}{\mathrm{E}[\Delta_{\mathcal{\his}^{(i)}|\mathcal{\his}^{(i-1)}=h}^{i+1}]}=\frac{\alpha'+\beta'+1}{\alpha'+\beta'}
\]
Since the prior $p$ follows $\mathcal{B}e(\alpha,\beta)$, for any history $h$ of first $i-1$ rounds, the distribution $\mathcal{B}e(\alpha',\beta')$ over $p|(\mathcal{\his}^{(i-1)}=h)$ satisfies that $\alpha'+\beta'=i+\alpha+\beta-1$. So starting from any history of first $i-1$ rounds,
\begin{align}
\frac{\Delta_{\mathcal{\his}^{(i-1)}=h}^{i}}{\mathrm{E}[\Delta_{\mathcal{\his}^{(i)}|\mathcal{\his}^{(i-1)}=h}^{i+1}]} = \frac{i+\alpha+\beta}{i+\alpha+\beta-1} \label{eq:singleratio}
\end{align}
which only depends on $i$, given fixed $\alpha,\beta$.
Therefore
\begin{align*}
\frac{\mathrm{E}[\Delta_\mathcal{\bel}^{i}]}{\mathrm{E}[\Delta_\mathcal{\bel}^{i+1}]}=&\frac{\mathrm{E}[\Delta_{\mathcal{\his}^{(i-1)}}^{i}]}{\mathrm{E}[\Delta_{\mathcal{\his}^{(i)}}^{i+1}]}\\
=&\frac{\mathrm{E}[\Delta_{\mathcal{\his}^{(i-1)}}^{i}]}{\mathrm{E}_{\mathcal{\his}^{(i-1)}}[\mathrm{E}_{\mathcal{\his}^{(i)}}[\Delta_{\mathcal{\his}^{(i)}|\mathcal{\his}^{(i-1)}}^{i+1}|\mathcal{\his}^{(i-1)}]]}\tag{chain rule}\\
=&\frac{\mathrm{E}_{\mathcal{\his}^{(i-1)}}[\frac{i+\alpha+\beta}{i+\alpha+\beta-1}*\mathrm{E}_{\mathcal{\his}^{(i)}}[\Delta_{\mathcal{\his}^{(i)}|\mathcal{\his}^{(i-1)}}^{i+1}]]}{\mathrm{E}_{\mathcal{\his}^{(i-1)}}[\mathrm{E}_{\mathcal{\his}^{(i)}}[\Delta_{\mathcal{\his}^{(i)}|\mathcal{\his}^{(i-1)}}^{i+1}|\mathcal{\his}^{(i-1)}]]}\tag{due to formula~\eqref{eq:singleratio} }\\
=&\frac{i+\alpha+\beta}{i+\alpha+\beta-1}\\
\end{align*}
Then we can use the expected surprise in round $n-1$ to represent the amount of surprise in any round $i\leq n-1$:
\begin{align*}
\frac{\mathrm{E}[\Delta_\mathcal{\bel}^{i}]}{\mathrm{E}[\Delta_\mathcal{\bel}^{n-1}]} &=\prod_{j=i}^{n-2}\frac{\mathrm{E}[\Delta_\mathcal{\bel}^{j}]}{\mathrm{E}[\Delta_\mathcal{\bel}^{j+1}]}\\
\mathrm{E}[\Delta_\mathcal{\bel}^{i}] &=\frac{n+\alpha+\beta-2}{i+\alpha+\beta-1}*\mathrm{E}[\Delta_\mathcal{\bel}^{n-1}]\\
\end{align*}
Therefore, the sum of the expected surprise in the first $n-1$ rounds is
\begin{align*}
\sum_{i=1}^{n-1}\mathrm{E}[\Delta_\mathcal{\bel}^i] &=\mathrm{E}[\Delta_\mathcal{\bel}^{n-1}]*(\sum_{i=1}^{n-1}\frac{n+\alpha+\beta-2}{i+\alpha+\beta-1})\\
&=\mathrm{E}[\Delta_\mathcal{\bel}^{n-1}]*(n+\alpha+\beta-2)*\mathbb{H}_{\alpha+\beta}(n-1)\\
\end{align*}
\end{proof}
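As an illustrative numerical check (not part of the proof), one can reuse the function \texttt{per\_round\_surprise} from the sketch in the method-overview subsection: the ratios of consecutive rounds' expected surprise should match $\frac{i+\alpha+\beta}{i+\alpha+\beta-1}$ and should not depend on the bonus $x$.
\begin{verbatim}
# Illustrative check of the lemma above, reusing per_round_surprise from
# the earlier sketch; every printed row should equal
# [(i + alpha + beta)/(i + alpha + beta - 1) for i = 1, ..., n-2].
n, alpha, beta = 9, 2.0, 1.0
for x in (1, 5, 9):
    s = per_round_surprise(n, x, alpha, beta)
    print(x, [round(s[i - 1] / s[i], 6) for i in range(1, n - 1)])
\end{verbatim}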
\section{Finite Case}
In this section, we follow our method overview to study the finite case. We first present our results in Section~\ref{sec:finiteresults}. We then give a general analysis in Section~\ref{sec:analyzegeneral} and apply its results to study two special cases: 1) the symmetric case $\alpha=\beta$, including the uniform case $\alpha=\beta=1$; 2) the certain case $\alpha=\lambda p,\beta = \lambda(1-p), \lambda\rightarrow \infty$. Finally, we apply the results of Section~\ref{sec:analyzegeneral} to give a linear-time algorithm for the general Beta prior setting.
\subsection{Results in Finite Case}\label{sec:finiteresults}
Recall that $\textsc{rd}(x):=$the nearest\footnote{When there is a tie, we pick the smaller one.} integer to $x$ that has the same parity as $n$, and $\mathbb{H}:=\mathbb{H}_{\alpha+\beta}(n-1)=\sum_{i=1}^{n-1}\frac{1}{i+\alpha+\beta-1}$.
\begin{theorem}
\label{thm:spc}
For all $\alpha\geq\beta\geq 1$ \footnote{Note that assuming $\alpha\geq\beta$ does not lose generality since we can exchange Alice and Bob.}, $n>1$,
\begin{itemize}
\item \textbf{Symmetric $\alpha=\beta$}
\[
x^*(\alpha,\alpha,n)=\textsc{rd}(\frac{n-1}{2\alpha\mathbb{H}-\frac{n-1}{n+2\alpha-1}})
\]
\begin{itemize}
\item \textbf{Uniform $\alpha=\beta=1$}
\[
x^*(1,1,n)=\textsc{rd}(\frac{n-1}{2\mathbb{H}-\frac{n-1}{n+1}})
\]
\end{itemize}
\item \textbf{Certain $\alpha=\lambda p,\beta = \lambda(1-p), \lambda\rightarrow \infty$}
Let $F(x):=(2np-n-(x-1))p^{x-1}+(n-2np-(x-1))(1-p)^{x-1}$ for $x\in [1,n+1)$. The equation $F(x)=0$ has a trivial solution at $x=1$ and a unique non-trivial solution $\Tilde{x}$ when $p>\frac12$ and $n>\frac{1}{(\frac{1}{2}-p)\ln(\frac{1-p}{p})}$. The optimal bonus is
\[
x^*(\alpha,\beta,n) = \begin{cases}
\textsc{rd}(\Tilde{x}) & \text{if $p>\frac12$ and $n>\frac{1}{(\frac{1}{2}-p)\ln(\frac{1-p}{p})}$}\\
\textsc{rd}(1) & \text{otherwise}\\
\end{cases}
\]
Moreover, if
$
p>\frac{1}{1+(a+1)^{-\frac{1}{a}}} \text{ where } a=2np-n-2>0 \footnote{ $\frac{1}{1+(a+1)^{-\frac{1}{a}}}<\frac{1}{1+e^{-1}}$ and when $a\rightarrow+\infty$, $\frac{1}{1+(a+1)^{-\frac{1}{a}}}\rightarrow\frac{1}{2}$}
$,
then \[x^*(\alpha,\beta,n)\in[\textsc{rd}(2np-n)-2,\textsc{rd}(2np-n)+2],\] that is, the optimal bonus is around the ``expected lead''.
\item \textbf{General} There exists an $O(n)$ algorithm to compute the optimal bonus $x^*(\alpha,\beta,n)$.
\end{itemize}
\end{theorem}
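For concreteness, the symmetric-case formula above (with the uniform case as $\alpha=1$) can be evaluated directly; the following is a minimal Python sketch (our illustration), where $\textsc{rd}(\cdot)$ breaks ties toward the smaller value as in the footnote:
\begin{verbatim}
# Illustration only: the symmetric-case closed form from the theorem.
def rd(v, n):
    # nearest integer with the same parity as n; ties go to the smaller value
    cands = [k for k in range(int(v) - 2, int(v) + 4) if (k - n) % 2 == 0]
    return min(cands, key=lambda k: (abs(k - v), k))

def optimal_bonus_symmetric(n, alpha):
    H = sum(1.0 / (i + 2 * alpha - 1) for i in range(1, n))   # H_{2 alpha}(n-1)
    return rd((n - 1) / (2 * alpha * H - (n - 1) / (n + 2 * alpha - 1)), n)

# the uniform case is alpha = 1, e.g. optimal_bonus_symmetric(15, 1)
\end{verbatim}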
We now present the corresponding numerical results. Based on Theorem~\ref{thm:spc}, we draw the contours of $\Tilde{x}$ for various cases; the optimal bonus is $x^*=\textsc{rd}(\Tilde{x})$. Though $n$ can only be a positive integer, we also smooth the contours over non-integer $n$. In the symmetric case, the optimal bonus size increases as the number of rounds $n$ increases and as the amount of uncertainty $\frac{1}{2\alpha}$ increases. In the certain case, as predicted by the theory, the optimal bonus size is close to the ``expected lead''. In the general case, as $n$ gets larger, the result becomes closer to the asymptotic case (see Figure~\ref{fig:infinite}).
Finally, in Figure~\ref{fig:numericalv} we provide additional numerical results illustrating how the overall surprise depends on the bonus size. For each setting and each $x$, we directly compute $\mathrm{E}[\Delta_\mathcal{\bel}(x)]$ by using backward induction to compute all belief curves. We also annotate our theoretical optimal bonus $x^*=\textsc{rd}(\Tilde{x})$ based on Theorem~\ref{thm:spc}. The overall surprise varies with the bonus size, and in some cases (e.g., certain, $n=20$, $p=0.7$) the optimal bonus creates surprise that doubles the amount created by the trivial settings ($x=\textsc{rd}(0)$ or $x=\textsc{rd}(n)$). Moreover, the optimal bonus depends on the properties of the setting: the number of rounds and whether the prior is uniform, symmetric, or skewed. Additionally, in the figures, we see that the curves all have a single peak, so the local and global optima coincide.
\begin{figure}[!ht]\centering
\includegraphics[width=.48\linewidth]{figures/contourf_beta_prior_symmetric.pdf}
\caption{\textbf{Symmetric case}: Optimal bonus $x^*=\textsc{rd}(\Tilde{x})$}
\label{fig:symmetric}
\end{figure}
\begin{figure}[!ht]\centering
\subfigure[Optimal bonus $x^*=\textsc{rd}(\Tilde{x})$] {\includegraphics[width=.48\linewidth]{figures/contourf_certain_prior_100_1024.pdf}\label{fig:certain_opt}}
\subfigure[$2np-n$]{\includegraphics[width=.48\linewidth]{figures/contourf_certain_prior_100_1024_near.pdf}\label{fig:certain_2np}}
\caption{\textbf{Certain case}}
\label{fig:certain}
\end{figure}
\begin{figure}[!ht]\centering
\subfigure[$n=5$]{\includegraphics[width=.48\linewidth]{figures/contourf_beta_prior_5_1024.pdf}\label{fig:finite_5}}
\subfigure[$n=10$]{\includegraphics[width=.48\linewidth]{figures/contourf_beta_prior_10_1024.pdf}\label{fig:finite_10}}
\subfigure[$n=20$]{\includegraphics[width=.48\linewidth]{figures/contourf_beta_prior_20_1024.pdf}\label{fig:finite_20}}
\subfigure[$n=40$]{\includegraphics[width=.48\linewidth]{figures/contourf_beta_prior_40_1024.pdf}\label{fig:finite_40}}
\caption{\textbf{General case}: Optimal bonus $x^*=\textsc{rd}(\Tilde{x})$. Here each area between two contour lines has the same optimal bonus $x^*$. For example, in $n=5$, the red area's optimal bonus size is $5$, yellow is $3$, cyan is $1$.}
\label{fig:contourf_finite}
\end{figure}
\begin{figure}[!ht]\centering
\subfigure[Uniform,n=10]{\includegraphics[width=.32\linewidth]{figures/curve_surprise_uniform_10.pdf}\label{fig:curve_uniform_10}}
\subfigure[Symmetric,n=10]{\includegraphics[width=.32\linewidth]{figures/curve_surprise_symmetric_10.pdf}\label{fig:curve_symmetric_10}}
\subfigure[Certain,n=10]{\includegraphics[width=.32\linewidth]{figures/curve_surprise_certain_10.pdf}\label{fig:curve_certain_10}}
\subfigure[Uniform,n=15]{\includegraphics[width=.32\linewidth]{figures/curve_surprise_uniform_15.pdf}\label{fig:curve_uniform_15}}
\subfigure[Symmetric,n=15]{\includegraphics[width=.32\linewidth]{figures/curve_surprise_symmetric_15.pdf}\label{fig:curve_symmetric_15}}
\subfigure[Certain,n=15]{\includegraphics[width=.32\linewidth]{figures/curve_surprise_certain_15.pdf}\label{fig:curve_certain_15}}
\subfigure[Uniform,n=20]{\includegraphics[width=.32\linewidth]{figures/curve_surprise_uniform_20.pdf}\label{fig:curve_uniform_20}}
\subfigure[Symmetric,n=20]{\includegraphics[width=.32\linewidth]{figures/curve_surprise_symmetric_20.pdf}\label{fig:curve_symmetric_20}}
\subfigure[Certain,n=20]{\includegraphics[width=.32\linewidth]{figures/curve_surprise_certain_20.pdf}\label{fig:curve_certain_20}}
\caption{\textbf{Relation between bonus size and overall surprise}}
\label{fig:numericalv}
\end{figure}
\subsection{General Analysis}\label{sec:analyzegeneral}
In this subsection, we derive a formula which can be applied to all settings in the later sections.
\begin{align}
\mathrm{E}[\Delta_\mathcal{\bel}(x)] =& \mathrm{E}[\Delta_\mathcal{\bel}^{n-1}(x)]*(n+\alpha+\beta-2)*\mathbb{H}+\mathrm{E}[\Delta_\mathcal{\bel}^n(x)]\tag{Lemma~\ref{lem:ratio}}\\
=& \bigg(\overbrace{\Pr[S_{n-2}=L_{n-2}]*\Delta^{n-1}_{S_{n-2}=L_{n-2}}}^\text{penultimate round (at point $L_{n-2}$)}+\overbrace{\Pr[S_{n-2}=U_{n-2}]*\Delta^{n-1}_{S_{n-2}=U_{n-2}}}^\text{penultimate round (at point $U_{n-2}$)}\label{eq:mainali} \notag\\
&+\underbrace{\sum_{j=L_{n-2}+1}^{U_{n-2}-1}\Pr[S_{n-2}=j]*\Delta^{n-1}_{S_{n-2}=j}}_\text{penultimate round (between $L_{n-2}$ and $U_{n-2}$)}\bigg)*(n+\alpha+\beta-2)*\mathbb{H}\notag\\
&+\underbrace{\sum_{j=L_{n-1}}^{U_{n-1}}\Pr[S_{n-1}=j]*\Delta_{S_{n-1}=j}^n}_\text{final round} \tag{recall \eqref{eq:lastround} and \eqref{eq:2tolast} in method overview}
\end{align}
\begin{figure}[!ht]\centering
\includegraphics[width=.55\linewidth]{figures/general2rounds.PNG}
\caption{\textbf{Illustration for the shorthand}}
\label{fig:shorthand}
\end{figure}
Here are shorthand notations:
\[\begin{cases}
q^i_j := \mathrm{E}[p|(S_i=j)], &0\leq i\leq n-1\\
b^i_j := \Pr[O=1|(S_i=j)], &0\leq i\leq n-1\\
d^i_j := b^{i+1}_{j+1} - b^{i+1}_{j}, &0\leq i\leq n-2\\
\end{cases}\]
The above definition for $d^i_j$ is only for $0\leq i\leq n-2$ since $d^{n-1}_j$ involves $b^n_j$ but the definition for $b^i_j $ is for $0\leq i\leq n-1$. The final round's belief value is either 0 or 1 and depends on both the number of rounds Alice wins among the first $n-1$ rounds ($S_{n-1}$) and whether Alice wins the final round ($H_n=+$ or $H_n=-$). Thus, we define the belief change in the final round directly as follows. \[
d^{n-1}_j := \Pr[O=1|(H_n=+)\wedge (S_{n-1}=j)]-\Pr[O=1|(H_n=-)\wedge (S_{n-1}=j)]\\
\]
In fact, for the no-surprise red/blue points in the final round (Figure~\ref{fig:last2example}), the belief change is 0; for the other (grey) points, the belief change is 1.
By substituting the above shorthand, we have
\begin{align}
\mathrm{E}[\Delta_\mathcal{\bel}(x)] =& \bigg(\overbrace{\Pr[S_{n-2}=L_{n-2}]*2 q^{n-2}_{L_{n-2}}*(1-q^{n-2}_{L_{n-2}})*d_{L_{n-2}}^{n-2}}^\text{penultimate round (at point $L_{n-2}$)}\notag\\
&+\overbrace{\Pr[S_{n-2}=U_{n-2}]*2q^{n-2}_{U_{n-2}}*(1-q^{n-2}_{U_{n-2}})*d_{U_{n-2}}^{n-2}}^\text{penultimate round (at point $U_{n-2}$)}\notag\\
&+\overbrace{\sum_{j=L_{n-2}+1}^{U_{n-2}-1}\Pr[S_{n-2}=j]*2q^{n-2}_{j}*(1-q^{n-2}_{j})*d_{j}^{n-2}}^\text{penultimate round (between $L_{n-2}$ and $U_{n-2}$)}\bigg)*(n+\alpha+\beta-2)*\mathbb{H}\notag\\
&+\overbrace{\sum_{j=L_{n-1}}^{U_{n-1}}\Pr[S_{n-1}=j]*2q^{n-1}_{j}*(1-q^{n-1}_{j})*d_{j}^{n-1}}^\text{final round} \label{eq:general}
\end{align}
We further introduce a shorthand $Q^i_j :=\Pr[S_i=j]*2q^i_j*(1-q^i_j), 0\leq i\leq n-1$ and by substituting this shorthand, we have
\begin{align}
\mathrm{E}[\Delta_\mathcal{\bel}(x)] =& \bigg(\overbrace{Q_{L_{n-2}}^{n-2}*d_{L_{n-2}}^{n-2}}^\text{penultimate round (at point $L_{n-2}$)}+\overbrace{Q_{U_{n-2}}^{n-2}*d_{U_{n-2}}^{n-2}}^\text{penultimate round (at point $U_{n-2}$)}\label{eq:symmainali}\\ &+\underbrace{\sum_{j=L_{n-2}+1}^{U_{n-2}-1}Q_{j}^{n-2}*d_{j}^{n-2}}_\text{penultimate round (between $L_{n-2}$ and $U_{n-2}$)}\bigg)*(n+\alpha+\beta-2)*\mathbb{H}+\underbrace{\sum_{j=L_{n-1}}^{U_{n-1}}Q_{j}^{n-1}*d_{j}^{n-1}}_\text{final round} \notag
\end{align}
As we mentioned in the overview, we pick the final round and the penultimate round to represent the overall expected surprise since the belief change, in these two rounds, has a simple representation, as stated in the following lemma.
\begin{lemma}
We have \[
q^{i}_j=\frac{j+\alpha}{i+\alpha+\beta}
\]
\[
\Pr[S_i=j]=\frac{(\alpha+\beta-1)\binom{i}{j}\binom{\alpha+\beta-2}{\alpha-1}}{(\alpha+\beta+i-1)\binom{i+\alpha+\beta-2}{j+\alpha-1}}\footnote{When $\alpha+\beta$ is not an integer, we use $\binom{n}{k}:=\frac{\Gamma(n+1)}{\Gamma(k+1)\Gamma(n-k+1)}$ as the continuous generalization.}
\]
\[
Q^i_j=\frac{2(\alpha+\beta-1)\binom{i}{j}\binom{\alpha+\beta-2}{\alpha-1}}{(\alpha+\beta+i)\binom{i+\alpha+\beta}{j+\alpha}}
\]
and
\[
d^{n-2}_{j}=
\begin{cases}
0, & j<L_{n-2}\\
\frac{L_{n-1}+\alpha}{n+\alpha+\beta-1}, & j=L_{n-2}\\
\frac{1}{n+\alpha+\beta-1}, & L_{n-2}<j<U_{n-2}\\
\frac{n-1-U_{n-1}+\beta}{n+\alpha+\beta-1}, & j=U_{n-2}\\
0, & j>U_{n-2}
\end{cases}
\qquad
d^{n-1}_{j}=
\begin{cases}
0, & j<L_{n-1}\\
1, & L_{n-1}\leq j\leq U_{n-1}\\
0, & j>U_{n-1}
\end{cases}
\]
\label{lem:generald}
\end{lemma}
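As a sanity check of these closed forms (an illustrative sketch, restricted to integer $\alpha,\beta\geq 1$ so that ordinary binomial coefficients suffice), the state probabilities sum to one and $Q^i_j$ equals $\Pr[S_i=j]\cdot 2q^i_j(1-q^i_j)$:
\begin{verbatim}
# Illustration only: closed forms from the lemma, for integer alpha, beta >= 1.
import math

def q_mean(i, j, a, b):
    return (j + a) / (i + a + b)

def prob_state(i, j, a, b):
    return ((a + b - 1) * math.comb(i, j) * math.comb(a + b - 2, a - 1)
            / ((a + b + i - 1) * math.comb(i + a + b - 2, j + a - 1)))

def Q(i, j, a, b):
    return (2 * (a + b - 1) * math.comb(i, j) * math.comb(a + b - 2, a - 1)
            / ((a + b + i) * math.comb(i + a + b, j + a)))

i, a, b = 7, 2, 3
assert abs(sum(prob_state(i, j, a, b) for j in range(i + 1)) - 1) < 1e-12
for j in range(i + 1):
    expected = prob_state(i, j, a, b) * 2 * q_mean(i, j, a, b) * (1 - q_mean(i, j, a, b))
    assert abs(Q(i, j, a, b) - expected) < 1e-12
\end{verbatim}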
We prove the lemma by a careful analysis based on the properties of the Beta distribution; the proof is deferred to the appendix. Based on Lemma~\ref{lem:generald}, we substitute the final two rounds' belief changes $d^{n-1}_j,d^{n-2}_j$ into formula~\eqref{eq:symmainali}:
\begin{align}
\mathrm{E}[\Delta_\mathcal{\bel}(x)]=&\bigg(\overbrace{Q^{n-2}_{L_{n-2}}*\frac{L_{n-1}+\alpha}{n+\alpha+\beta-1}}^\text{penultimate round (at point $L_{n-2}$)}+\overbrace{Q^{n-2}_{U_{n-2}}*\frac{n-1-U_{n-1}+\beta}{n+\alpha+\beta-1}}^\text{penultimate round (at point $U_{n-2}$)}\notag\\
&+\underbrace{\sum_{j=L_{n-2}+1}^{U_{n-2}-1}Q^{n-2}_j*\frac{1}{n+\alpha+\beta-1}}_\text{penultimate round (between $L_{n-2}$ and $U_{n-2}$)}\bigg)*(n+\alpha+\beta-2)*\mathbb{H}+\underbrace{\sum_{j=L_{n-1}}^{U_{n-1}}Q^{n-1}_j}_\text{final round}\notag\\
=&\bigg(Q^{n-2}_{\frac{n-x-2}2}*\frac{\frac{n-x}2+\alpha}{n+\alpha+\beta-1}+Q^{n-2}_{\frac{n+x-2}2}*\frac{\frac{n-x}2+\beta}{n+\alpha+\beta-1}\notag\\
&+\sum_{j=\frac{n-x}2}^{\frac{n+x-4}2}Q^{n-2}_j*\frac{1}{n+\alpha+\beta-1}\bigg)*(n+\alpha+\beta-2)*\mathbb{H}+\sum_{j=\frac{n-x}2}^{\frac{n+x-2}2}Q^{n-1}_j \label{eq:gensurp}
\end{align}
\footnote{For $x<2$, we have $L_{n-2}+1>U_{n-2}-1$, and for $x=0$, we have $L_{n-1}>U_{n-1}$. We define a summation whose lower limit exceeds its upper limit to be zero. This convention is valid since, in those cases, no surprise is generated.}
In order to find the optimal $x$, we calculate
\begin{align}
&\mathrm{E}[\Delta_\mathcal{\bel}(x+1)]-\mathrm{E}[\Delta_\mathcal{\bel}(x-1)] \notag\\
=&\bigg(Q^{n-2}_{\frac{n-x-3}2}*(\frac{n-x-1}2+\alpha)+Q^{n-2}_{\frac{n+x-1}2}*(\frac{n-x-1}2+\beta)-Q^{n-2}_{\frac{n-x-1}2}*(\frac{n-x+1}2+\alpha)\notag\\
&-Q^{n-2}_{\frac{n+x-3}2}*(\frac{n-x+1}2+\beta)+Q^{n-2}_{\frac{n-x-1}2}+Q^{n-2}_{\frac{n+x-3}2}\bigg)*\frac{(n+\alpha+\beta-2)\mathbb{H}}{n+\alpha+\beta-1}+Q^{n-1}_{\frac{n+x-1}2}+Q^{n-1}_{\frac{n-x-1}2}\notag\\
=&\bigg(\left(Q^{n-2}_{\frac{n-x-3}2}-Q^{n-2}_{\frac{n-x-1}2}\right)*(\frac{n-x-1}2+\alpha)+\left(Q^{n-2}_{\frac{n+x-1}2}-Q^{n-2}_{\frac{n+x-3}2}\right)*(\frac{n-x-1}2+\beta)\bigg)\notag\\
&*\frac{(n+\alpha+\beta-2)\mathbb{H}}{n+\alpha+\beta-1}+Q^{n-1}_{\frac{n+x-1}2}+Q^{n-1}_{\frac{n-x-1}2}
\label{eq:gendif}
\end{align}
Then the following claim shows that we can obtain the optimal bonus by finding a ``local maximum'' $\Tilde{x}$. We defer the proof to the appendix.
\begin{claim}[Local Maximum $\rightarrow$ Optimal Bonus]
\label{cla:roundopt}
If there exists $\Tilde{x}\in(0,n+1)$ such that for all $1\leq x<\Tilde{x}$, $\mathrm{E}[\Delta_\mathcal{\bel}(x + 1)]\geq \mathrm{E}[\Delta_\mathcal{\bel}(x - 1)]$ and when $\Tilde{x}\leq n-1$, for all $\Tilde{x}\leq x\leq n-1$, $\mathrm{E}[\Delta_\mathcal{\bel}(x + 1)]\leq \mathrm{E}[\Delta_\mathcal{\bel}(x - 1)]$, then $\textsc{rd}(\Tilde{x})$ is the optimal bonus.
\end{claim}
Later we show that formula~\eqref{eq:gendif} induces a linear-time algorithm for finding the optimal bonus size in general, and that it can be significantly simplified in the symmetric and certain cases.
\subsection{Symmetric Case}
\label{subsec:sym}
We start to analyze the symmetric case.
\begin{observation}\label{ob:symmetric}
In the symmetric case, i.e. $\alpha=\beta$, $Q^i_j$ is also symmetric, that is $Q^i_j=Q^i_{i-j}$.
\end{observation}
We defer the proof to the appendix. Based on the above observation, we can further simplify formula~\eqref{eq:gendif}:
\begin{align}
&\mathrm{E}[\Delta_\mathcal{\bel}(x+1)]-\mathrm{E}[\Delta_\mathcal{\bel}(x-1)] \notag\\
=&\bigg(\left(Q^{n-2}_{\frac{n-x-3}2}-Q^{n-2}_{\frac{n-x-1}2}\right)*(\frac{n-x-1}2+\alpha)+\left(Q^{n-2}_{\frac{n+x-1}2}-Q^{n-2}_{\frac{n+x-3}2}\right)*(\frac{n-x-1}2+\alpha)\bigg)\notag\\ \tag{based on formula~\eqref{eq:gendif}}
&*\frac{(n+2\alpha-2)\mathbb{H}}{n+2\alpha-1}+Q^{n-1}_{\frac{n+x-1}2}+Q^{n-1}_{\frac{n-x-1}2}\notag \\ \tag{$Q^{n-2}_{\frac{n+x-1}2}=Q^{n-2}_{\frac{n-x-3}2},Q^{n-2}_{\frac{n+x-3}2}=Q^{n-2}_{\frac{n-x-1}2}$ according to Observation~\ref{ob:symmetric}}
=&\bigg(\left(Q^{n-2}_{\frac{n-x-3}2}-Q^{n-2}_{\frac{n-x-1}2}\right)*(\frac{n-x-1}2+\alpha)+\left(Q^{n-2}_{\frac{n-x-3}2}-Q^{n-2}_{\frac{n-x-1}2}\right)*(\frac{n-x-1}2+\alpha)\bigg)\notag\\ \tag{$Q^{n-1}_{\frac{n+x-1}2}=Q^{n-1}_{\frac{n-x-1}2}$ according to Observation~\ref{ob:symmetric}}
&*\frac{(n+2\alpha-2)\mathbb{H}}{n+2\alpha-1}+Q^{n-1}_{\frac{n-x-1}2}+Q^{n-1}_{\frac{n-x-1}2}\notag\\
=&\left(Q^{n-2}_{\frac{n-x-3}2}-Q^{n-2}_{\frac{n-x-1}2}\right)*(n-x-1+2\alpha)*\frac{(n+2\alpha-2)\mathbb{H}}{n+2\alpha-1}+2Q^{n-1}_{\frac{n-x-1}2}\label{eq:symdif}
\end{align}
The remaining task is to analyze $Q^i_j$s in the above formula. They share some components which can be used for further simplification. We start from the uniform case and analyze it step by step.
\paragraph{Uniform Case} In this case $\alpha=\beta=1$; substituting $\alpha, \beta$ into Lemma~\ref{lem:generald}, we obtain that Alice is equally likely to win any number of rounds in the first $i$ rounds, that is
\[
\Pr[S_i=j]=\frac{1}{i+1}
\]
and $Q^i_j$ is
\[
Q^i_j=\frac{2(j+1)(i+1-j)}{(i+1)(i+2)^2}
\]
We substitute $Q^i_j$ into formula~\eqref{eq:symdif}
\begin{align*}
&\mathrm{E}[\Delta_\mathcal{\bel}(x+1)]-\mathrm{E}[\Delta_\mathcal{\bel}(x-1)]\\
=&\left(Q^{n-2}_{\frac{n-x-3}2}-Q^{n-2}_{\frac{n-x-1}2}\right)*(n-x-1+2\alpha)*\frac{(n+2\alpha-2)\mathbb{H}}{n+2\alpha-1}+2Q^{n-1}_{\frac{n-x-1}2}\tag{based on formula~\eqref{eq:symdif}}\\
=&\left(\frac{(n-x-1)(n+x+1)}{2(n-1)n^2}-\frac{(n-x+1)(n+x-1)}{2(n-1)n^2}\right)*(n-x+1)*\frac{n\mathbb{H}}{n+1}\\
&+\frac{(n-x+1)(n+x-1)}{n(n+1)^2}\\
=&\frac{(-4x)*(n-x+1)}{2(n-1)n^2}*\frac{n\mathbb{H}}{n+1}+\frac{(n-x+1)(n+x-1)}{n(n+1)^2}\\
=&\frac{(n-x+1)((n-1-2(1+n)\mathbb{H})x+n^2-1)}{(n-1)n(n+1)^2}
\end{align*}
To find $\Tilde{x}$ that satisfies conditions in Claim~\ref{cla:roundopt}, we solve the equation \[\frac{(n-x+1)((n-1-2(1+n)\mathbb{H})x+n^2-1)}{(n-1)n(n+1)^2}=0,\] and get \[x=\begin{cases}\frac{n^2-1}{2 (1 + n)\mathbb{H}-n+1}\\ n+1\end{cases}.\]
Recall that $x\leq n$; we discard the solution $x=n+1$ and pick $\Tilde{x}:=\frac{n^2-1}{2 (1 + n)\mathbb{H}-n+1}=\frac{n-1}{2 \mathbb{H}-\frac{n-1}{n+1}}$. When $x<\Tilde{x}$, we have $(n-1-2(1+n)\mathbb{H})x+n^2-1>0$, and thus $\mathrm{E}[\Delta_\mathcal{\bel}(x+1)]-\mathrm{E}[\Delta_\mathcal{\bel}(x-1)]>0$. Otherwise, $\mathrm{E}[\Delta_\mathcal{\bel}(x+1)]-\mathrm{E}[\Delta_\mathcal{\bel}(x-1)]\leq 0$. Based on Claim~\ref{cla:roundopt}, the optimal bonus is \[x^*(1,1,n)=\textsc{rd}(\frac{n-1}{2 \mathbb{H}-\frac{n-1}{n+1}}).\]
\paragraph{Symmetric Case}
Lemma~\ref{lem:generald} shows that if the prior $p$ follows $\mathcal{B}e(\alpha,\alpha)$, then the probability that Alice wins $j$ rounds in the first $i$ rounds is
\[
\Pr[S_i=j]=\frac{(2\alpha-1)\binom{i}{j}\binom{2\alpha-2}{\alpha-1}}{(2\alpha+i-1)\binom{i+2\alpha-2}{j+\alpha-1}}
\]
and $Q^i_j$ is
\[
Q^i_j=\frac{2(2\alpha-1)\binom{i}{j}\binom{2\alpha-2}{\alpha-1}}{(2\alpha+i)\binom{i+2\alpha}{j+\alpha}}
\]
Then we substitute $Q^i_j$ into formula~\eqref{eq:symdif}.
\begin{align*}
&\mathrm{E}[\Delta_\mathcal{\bel}(x + 1)] - \mathrm{E}[\Delta_\mathcal{\bel}(x - 1)]\\
=&\left(Q^{n-2}_{\frac{n-x-3}2}-Q^{n-2}_{\frac{n-x-1}2}\right)*(n-x-1+2\alpha)*\frac{(n+2\alpha-2)\mathbb{H}}{n+2\alpha-1}+2Q^{n-1}_{\frac{n-x-1}2}\tag{based on formula~\eqref{eq:symdif}}\\
=&\left(\frac{2(2\alpha-1)\binom{n-2}{\frac{n-x-3}2}\binom{2\alpha-2}{\alpha-1}}{(n+2\alpha-2)\binom{n+2\alpha-2}{\frac{n-x-3}2+\alpha}}-\frac{2(2\alpha-1)\binom{n-2}{\frac{n-x-1}2}\binom{2\alpha-2}{\alpha-1}}{(n+2\alpha-2)\binom{n+2\alpha-2}{\frac{n-x-1}2+\alpha}}\right)*(n-x-1+2\alpha)*\frac{(n+2\alpha-2)\mathbb{H}}{n+2\alpha-1}\\
&+\frac{4(2\alpha-1)\binom{n-1}{\frac{n-x-1}2}\binom{2\alpha-2}{\alpha-1}}{(n+2\alpha-1)\binom{n+2\alpha-1}{\frac{n-x-1}2+\alpha}}\tag{substitute $Q^i_j$}\\
=&\frac{2(2\alpha-1)\binom{n-1}{\frac{n-x-1}2}\binom{2\alpha-2}{\alpha-1}}{\binom{n+2\alpha-1}{\frac{n-x-1}2+\alpha}}\bigg(\frac{(n-x-1+2\alpha)\mathbb{H}}{n-1}*\left(\frac{n-x-1}{n-x-1+2\alpha}-\frac{n+x-1}{n+x-1+2\alpha}\right)+\frac{2}{n+2\alpha-1}\bigg)\\
\propto&\frac{(n-x-1+2\alpha)\mathbb{H}}{n-1}*\left(\frac{n-x-1}{n-x-1+2\alpha}-\frac{n+x-1}{n+x-1+2\alpha}\right)+\frac{2}{n+2\alpha-1}\tag{Since $\alpha\geq 1$, $2\alpha-1>0$ thus the coefficient is positive}\\
\propto& \mathbb{H}(n+2\alpha-1)\left((n-x-1)(n+x-1+2\alpha)-(n-x-1+2\alpha)(n+x-1)\right)+2(n-1)\tag{Since the denominators $n-1$, $n-x-1+2\alpha$, $n+x-1+2\alpha$ and $n+2\alpha-1$ are positive}\\
=& \mathbb{H}(n+2\alpha-1)(-4\alpha x)+2(n-1)(n+x-1+2\alpha)\\
=& (2(n-1)-4\alpha(n+2\alpha-1)\mathbb{H})x+2(n-1)(n+2\alpha-1)=-bx + c
\end{align*}
We define $\Tilde{x}$ as the solution of $-bx+c=0$:
\[\Tilde{x}:=\frac{c}{b}=\frac{(n+2\alpha-1)(n-1)}{2\alpha(n+2\alpha-1)\mathbb{H}-n+1}=\frac{n-1}{2\alpha\mathbb{H}-\frac{n-1}{n+2\alpha-1}}.\]
Moreover, for all $\alpha\geq 1$, \begin{align*}
b =& 4\alpha(n+2\alpha-1)\mathbb{H} -2(n-1)\\
=& 4\alpha(n+2\alpha-1)*(\sum_{i=1}^{n-1}\frac{1}{i+2\alpha-1})-2(n-1)\\
\geq& 4\alpha(n+2\alpha-1)*\frac{n-1}{n+2\alpha-2}-2(n-1)\\
>&4(n-1)-2(n-1)\\
=&2(n-1)>0
\end{align*}
Therefore, based on Claim~\ref{cla:roundopt}, the optimal bonus $x^*(\alpha,\alpha,n)$ is
\[
x^*(\alpha,\alpha,n)=\textsc{rd}(\Tilde{x})=\textsc{rd}(\frac{n-1}{2\alpha\mathbb{H}-\frac{n-1}{n+2\alpha-1}})
\]
\subsection{Certain Case}
In the certain case, $\alpha=\lambda p,\beta = \lambda(1-p), \lambda\rightarrow \infty$. Thus, when $n$ is finite, the winning probability of Alice is fixed to $p$ across all rounds. Note that we only consider $\alpha\geq\beta$ without loss of generality; therefore, $p\geq \frac12$. The number of rounds Alice wins follows a binomial distribution. Formally, the probability that Alice wins $j$ rounds in the first $i$ rounds is
\[
\Pr[S_i=j]=\binom{i}{j} p^j (1-p)^{i-j}
\]
Then we calculate $Q^i_j$
\begin{align}
Q^i_j=2\binom{i}{j}p^{j+1} (1-p)^{i-j+1} \label{eq:certainq}
\end{align}
Moreover, we can apply the Main Technical Lemma~\ref{lem:ratio} to show that the first $n-1$ rounds have the same expected surprise.
\begin{corollary}
When $\alpha=\lambda p,\beta = \lambda(1-p), \lambda\rightarrow \infty$, given $n$ that is finite, \[
\mathrm{E}[\Delta_\mathcal{\bel}]=\sum_{i=1}^{n}\mathrm{E}[\Delta_\mathcal{\bel}^i]=\mathrm{E}[\Delta_\mathcal{\bel}^{n-1}]*(n-1)+\mathrm{E}[\Delta_\mathcal{\bel}^{n}]
\]
\label{cor:certain}
\end{corollary}
\begin{proof}[Proof of Corollary~\ref{cor:certain}]
\[
\frac{\mathrm{E}[\Delta_\mathcal{\bel}^{i}]}{\mathrm{E}[\Delta_\mathcal{\bel}^{i+1}]}=\frac{i+\alpha+\beta}{i+\alpha+\beta-1}\rightarrow 1\] as $\lambda\rightarrow \infty$.
\end{proof}
By formula~\eqref{eq:gensurp}, we have
\begin{align}
\mathrm{E}[\Delta_\mathcal{\bel}(x)]
=&\bigg(Q^{n-2}_{\frac{n-x-2}2}*\frac{\frac{n-x}2+\alpha}{n+\alpha+\beta-1}+Q^{n-2}_{\frac{n+x-2}2}*\frac{\frac{n-x}2+\beta}{n+\alpha+\beta-1}\notag\\
&+\sum_{j=\frac{n-x}2}^{\frac{n+x-4}2}Q^{n-2}_j*\frac{1}{n+\alpha+\beta-1}\bigg)*(n-1)+\sum_{j=\frac{n-x}2}^{\frac{n+x-2}2}Q^{n-1}_j \tag{Formula~\eqref{eq:gensurp} and Corollary~\ref{cor:certain}}\\
=&\bigg(Q^{n-2}_{\frac{n-x-2}2}*p+Q^{n-2}_{\frac{n+x-2}2}*(1-p)\bigg)*(n-1)+\sum_{j=\frac{n-x}2}^{\frac{n+x-2}2}Q^{n-1}_j\tag{$\alpha=\lambda p,\beta = \lambda(1-p), \lambda\rightarrow \infty$}\\
= & \left(2\binom{n-2}{\frac{n-x-2}2}p^{\frac{n-x+2}2}(1-p)^{\frac{n+x}2}+2\binom{n-2}{\frac{n+x-2}2}p^{\frac{n+x}2}(1-p)^{\frac{n-x+2}2}\right)*(n-1)\notag\\
&+\sum_{j=\frac{n-x}2}^{\frac{n+x-2}2}2\binom{n-1}{j}p^{j+1}(1-p)^{n-j}\tag{Apply formula~\eqref{eq:certainq} to substitute $Q^i_j$}
\end{align}
Then we calculate
\begin{align*}
& \mathrm{E}[\Delta_\mathcal{\bel}(x + 1)] - \mathrm{E}[\Delta_\mathcal{\bel}(x - 1)] \\
= & \overbrace{2(n-1)\bigg(\binom{n-2}{\frac{n-x-3}2}p^{\frac{n-x+1}2}(1-p)^{\frac{n+x+1}2}+\binom{n-2}{\frac{n+x-1}2}p^{\frac{n+x+1}2}(1-p)^{\frac{n-x+1}2}}^\text{difference of the first $n-1$ rounds' expected surprise} \\
&\quad -\binom{n-2}{\frac{n-x-1}2}p^{\frac{n-x+3}2}(1-p)^{\frac{n+x-1}2}-\binom{n-2}{\frac{n+x-3}2}p^{\frac{n+x-1}2}(1-p)^{\frac{n-x+3}2}\bigg) \\
&+\overbrace{\sum_{i=\frac{n-x-1}2}^{\frac{n+x-1}2}2\binom{n-1}{i}p^{i+1}(1-p)^{n-i}-\sum_{i=\frac{n-x+1}2}^{\frac{n+x-3}2}2\binom{n-1}{i}p^{i+1}(1-p)^{n-i}}^\text{difference of final round's expected surprise}\notag\\
= & (n-x-1)\left(\binom{n-1}{\frac{n-x-1}2}p^{\frac{n-x+1}2}(1-p)^{\frac{n+x+1}2} +
\binom{n-1}{\frac{n+x-1}2}p^{\frac{n+x+1}2}(1-p)^{\frac{n-x+1}2}\right)\\
& - (n+x-1)\left(\binom{n-1}{\frac{n-x-1}2}p^{\frac{n-x+3}2}(1-p)^{\frac{n+x-1}2}+
\binom{n-1}{\frac{n+x-1}2}p^{\frac{n+x-1}2}(1-p)^{\frac{n-x+3}2}\right)\\
& + 2\binom{n-1}{\frac{n+x-1}2}p^{\frac{n+x+1}2}(1-p)^{\frac{n-x+1}2}+2\binom{n-1}{\frac{n-x-1}2}p^{\frac{n-x+1}2}(1-p)^{\frac{n+x+1}2}\\
\tag{Apply $n\binom{n-1}{k}=(n-k)\binom{n}{k}$}\\
= & \binom{n-1}{\frac{n+x-1}2}p^{\frac{n-x+1}2}(1-p)^{\frac{n-x+1}2}\left((n-x+1)((1-p)^x+p^x)-(n+x-1)(p(1-p)^{x-1}+p^{x-1}(1-p))\right)\\
= & \binom{n-1}{\frac{n+x-1}2}p^{\frac{n-x+1}2}(1-p)^{\frac{n-x+1}2}\left((2np-n-(x-1))p^{x-1}+(n-2np-(x-1))(1-p)^{x-1}\right) \\
\propto & (2np-n-(x-1))p^{x-1}+(n-2np-(x-1))(1-p)^{x-1} \tag{Since $\binom{n-1}{\frac{n+x-1}2}p^{\frac{n-x+1}2}(1-p)^{\frac{n-x+1}2}>0$}
\end{align*}
We define $F(x):=(2np-n-(x-1))p^{x-1}+(n-2np-(x-1))(1-p)^{x-1}$ for $x\in[1,n+1)$ and analyze the solutions of $F(x)=0$. We show several examples of $F(x)$ with different $n$ in Figure~\ref{fig:F_example}. Intuitively, when $n$ is large enough, $F(x)=0$ has two solutions: one is $x=1$, and the other is close to $2np-n$.
\begin{figure}[!ht]\centering
\subfigure[$n=5$]{\includegraphics[width=.24\linewidth]{figures/F_example_5.pdf}}
\subfigure[$n=6$]{\includegraphics[width=.24\linewidth]{figures/F_example_6.pdf}}
\subfigure[$n=7$]{\includegraphics[width=.24\linewidth]{figures/F_example_7.pdf}}
\subfigure[$n=8$]{\includegraphics[width=.24\linewidth]{figures/F_example_8.pdf}}
\caption{\textbf{Examples for function F}}
\label{fig:F_example}
\end{figure}
\begin{lemma}\label{lem:fx}
When $p\geq \frac12$, $F(x)=0$ has a trivial solution at $x=1$, and has a non-trivial solution $\Tilde{x}\in (1,2np-n+1)$ if and only if $p>\frac12$ and $n>\frac{1}{(\frac{1}{2}-p)\ln (\frac{1-p}{p})}$. There is no other solution. Moreover, when $x\in (1,\Tilde{x})$, $F(x)>0$, when $\Tilde{x}<n-1$ and $x\in (\Tilde{x},n-1]$, $F(x)<0$.
In addition, letting $a=2np-n-2$, when $p>\frac{1}{1+(a+1)^{-\frac{1}{a}}}$, the non-trivial solution of $F(x)=0$ lies in $(2np-n-1,2np-n+1)$.
\end{lemma}
We defer the proof to the appendix. $F(x)$ has a solution $\Tilde{x}$ in $(1,2np-n+1)$ when $p>\frac12$ and $n>\frac{1}{(\frac{1}{2}-p)\ln (\frac{1-p}{p})}$. When $x\in [1,\Tilde{x})$, $F(x)\geq 0$, i.e. $\mathrm{E}[\Delta_\mathcal{\bel}(x + 1)] - \mathrm{E}[\Delta_\mathcal{\bel}(x - 1)] \geq 0$.
When $\Tilde{x}\leq n-1$ and $x\in[\Tilde{x},n-1]$, $\mathrm{E}[\Delta_\mathcal{\bel}(x + 1)] - \mathrm{E}[\Delta_\mathcal{\bel}(x - 1)]\leq 0$. Therefore, the conditions of Claim \ref{cla:roundopt} are satisfied. We apply Claim \ref{cla:roundopt} to obtain:
\[
x^*(\alpha,\beta,n) = \begin{cases}
\textsc{rd}(\Tilde{x}) & \text{if $p>\frac12$ and $n>\frac{1}{(\frac{1}{2}-p)\ln(\frac{1-p}{p})}$}\\
\textsc{rd}(1) & \text{otherwise}\\
\end{cases}
\]
where $\Tilde{x}$ is the non-trivial solution of the equation $F(x)=0$.
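Numerically, $\Tilde{x}$ can be located by scanning for the sign change of $F$. A small illustrative check (Python; we assume the conditions of Lemma~\ref{lem:fx} hold, and the optimal bonus is then obtained by applying $\textsc{rd}$ to the returned root, as in the symmetric-case sketch above) is:
\begin{verbatim}
def F(x, n, p):
    return ((2 * n * p - n - (x - 1)) * p ** (x - 1)
            + (n - 2 * n * p - (x - 1)) * (1 - p) ** (x - 1))

def nontrivial_root(n, p, grid=100000):
    # assumes p > 1/2 and n > 1 / ((1/2 - p) * ln((1-p)/p)), so that F > 0 on
    # (1, x_tilde) and F < 0 afterwards; return the first sign change after x = 1
    xs = (1 + n * t / grid for t in range(1, grid))
    return next(x for x in xs if F(x, n, p) < 0)
\end{verbatim}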
Finally, we study the approximation of $x^*(\alpha,\beta,n)$ in the certain case and show that, under certain conditions, it is around the ``expected lead'' $\textsc{rd}(2np-n)$, the number of points the weaker player needs to come back in expectation.
Lemma~\ref{lem:fx} shows that when $p>\frac{1}{1+(a+1)^{-\frac{1}{a}}}$, the non-trivial solution of $F(x)=0$ is in $(2np-n-1,2np-n+1)$. Recall that $\textsc{rd}(x)\in [x-1,x+1)$. Then we have $|\textsc{rd}(\Tilde{x})-\textsc{rd}(2np-n)|<3$. Moreover, $\textsc{rd}(x)$ is an integer that has the same parity as $n$. Therefore, $|\textsc{rd}(\Tilde{x})-\textsc{rd}(2np-n)|\leq 2$, i.e. the difference between the approximation $\textsc{rd}(2np-n)$ and the optimal bonus $\textsc{rd}(\Tilde{x})$ is $\leq 2$.
\subsection{General Beta Prior Setting}
We provide an $O(n)$ algorithm for the general Beta prior setting. A natural idea is to enumerate all possible bonuses $x$ and calculate the corresponding expected total surprise value $\mathrm{E}[\Delta_\mathcal{\bel}(x)]$. However, $\mathrm{E}[\Delta_\mathcal{\bel}(x)]$ contains an $O(n)$ summation, which leads to an $O(n^3)$ running time.
Recall formula~\eqref{eq:gendif},
\begin{equation}
\label{eq:difference}
\begin{aligned}
&\mathrm{E}[\Delta_\mathcal{\bel}(x+1)]-\mathrm{E}[\Delta_\mathcal{\bel}(x-1)]\\
=&\bigg(\left(Q^{n-2}_{\frac{n-x-3}2}-Q^{n-2}_{\frac{n-x-1}2}\right)*(\frac{n-x-1}2+\alpha)+\left(Q^{n-2}_{\frac{n+x-1}2}-Q^{n-2}_{\frac{n+x-3}2}\right)*(\frac{n-x-1}2+\beta)\bigg)\notag\\
&*\frac{(n+2\alpha-2)\mathbb{H}}{n+2\alpha-1}+Q^{n-1}_{\frac{n+x-1}2}+Q^{n-1}_{\frac{n-x-1}2}
\end{aligned}
\end{equation}
Note that in this formula, only $\mathbb{H}$ and $Q^i_j$ cannot be calculated in $O(1)$ time. We can preprocess $\mathbb{H}$ in $O(n)$ time. Then notice that if we can calculate all the $Q^i_j$ in the formula in $O(1)$ time, we can compute the difference between $\mathrm{E}[\Delta_\mathcal{\bel}(x+1)]$ and $\mathrm{E}[\Delta_\mathcal{\bel}(x-1)]$ in $O(1)$ time.
Recall Lemma \ref{lem:generald}
\begin{align*}
Q^i_j&=\frac{2(\alpha+\beta-1)\binom{i}{j}\binom{\alpha+\beta-2}{\alpha-1}}{(\alpha+\beta+i)\binom{i+\alpha+\beta}{j+\alpha}}\\
&=\frac{\Gamma(i+1)\Gamma(\alpha+\beta)\Gamma(j+\alpha+1)\Gamma(i-j+\beta+1)}{(\alpha+\beta+i) \Gamma(i+\alpha+\beta+1)\Gamma(j+1)\Gamma(i-j+1)\Gamma(\alpha)\Gamma(\beta)}\\
&=\frac{1}{\alpha+\beta+i}*\frac{\Gamma(i+1)}{\Gamma(j+1)\Gamma(i-j+1)}*\frac{\Gamma(j+\alpha+1)}{\Gamma(\alpha)}*\frac{\Gamma(i-j+\beta+1)}{\Gamma(\beta)}*\frac{\Gamma(\alpha+\beta)}{\Gamma(i+\alpha+\beta+1)}
\end{align*}
We consider the following four parts separately
\[
\begin{cases}
\Gamma(1+y)\\
\frac{\Gamma(\alpha+y)}{\Gamma(\alpha)}\\
\frac{\Gamma(\beta+y)}{\Gamma(\beta)}\\
\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha+\beta+y)}\\
\end{cases}
y\in \{0, 1, 2, \ldots, n\}
\]
Due to the property of the Gamma function that $\Gamma(z+1)=z\Gamma(z)$ for all $z>0$, we can preprocess the above four parts recursively in $O(n)$ time. Then for any $i,j\in\{0, 1, 2, \ldots, n\}$ with $j\leq i$, we can calculate the value of $Q_j^i$ in $O(1)$ time.
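A minimal sketch of this preprocessing (in Python, working in log-space to avoid overflow for large $n$; all names are ours) could look as follows:
\begin{verbatim}
import math

def precompute_tables(n, alpha, beta):
    # logarithms of the four parts above, for y = 0, 1, ..., n
    log_fact = [0.0] * (n + 1)   # log Gamma(1 + y)
    log_ra   = [0.0] * (n + 1)   # log ( Gamma(alpha + y) / Gamma(alpha) )
    log_rb   = [0.0] * (n + 1)   # log ( Gamma(beta + y) / Gamma(beta) )
    log_rab  = [0.0] * (n + 1)   # log ( Gamma(alpha+beta) / Gamma(alpha+beta+y) )
    for y in range(1, n + 1):
        log_fact[y] = log_fact[y - 1] + math.log(y)
        log_ra[y]   = log_ra[y - 1] + math.log(alpha + y - 1)
        log_rb[y]   = log_rb[y - 1] + math.log(beta + y - 1)
        log_rab[y]  = log_rab[y - 1] - math.log(alpha + beta + y - 1)
    return log_fact, log_ra, log_rb, log_rab

def Q(i, j, alpha, beta, tables):
    # evaluates Q^i_j in O(1) time via the decomposition above
    log_fact, log_ra, log_rb, log_rab = tables
    log_q = (log_fact[i] - log_fact[j] - log_fact[i - j]  # binomial coefficient
             + log_ra[j + 1]          # Gamma(j+alpha+1) / Gamma(alpha)
             + log_rb[i - j + 1]      # Gamma(i-j+beta+1) / Gamma(beta)
             + log_rab[i + 1])        # Gamma(alpha+beta) / Gamma(i+alpha+beta+1)
    return math.exp(log_q) / (alpha + beta + i)
\end{verbatim}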
Based on this, when we enumerate all possible bonus $x$ in ascending order, we can calculate the value of $\mathrm{E}[\Delta_\mathcal{\bel}(x+1)]$ based on $\mathrm{E}[\Delta_\mathcal{\bel}(x-1)]$ in $O(1)$ time. We then find the optimal bonus $x$. We present the pseudo code in Algorithm~\ref{algo:optimal_x}. The total time and space complexity of the algorithm is $O(n)$.
\begin{algorithm}[!ht]
\SetAlgoNoLine
\caption{Calculate optimal bonus $x^*$}
\label{algo:optimal_x}
\KwIn{Number of rounds $n$, the parameters of the prior Beta distribution $\alpha,\beta$}
\KwOut{Optimal bonus $x^*$}
\For{$i$ in $\{0, 1,\ldots,n\}$}{
Initialize $\Gamma(i+1),\frac{\Gamma(\alpha+i)}{\Gamma(\alpha)},\frac{\Gamma(\beta+i)}{\Gamma(\beta)},\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha+\beta+i)}$}
$surp\_max:=0$\\
$surp\_sum:=0$\\
$x:= n\%2$\\
\For{$i$ in $\{n\%2+1,\ldots,n-1\}$} {
$surp\_sum += \mathrm{E}[\Delta_\mathcal{\bel}(i+1)]-\mathrm{E}[\Delta_\mathcal{\bel}(i-1)]$\tcc*{Formula~\eqref{eq:gendif}}
\If{$surp\_sum>surp\_max$}{
$surp\_max:=surp\_sum$\\
$x:= i + 1$
}
}
\Return $x^*=x$
\end{algorithm}
We implement the algorithm to conduct numerical experiments for $n=5, n=10, n=20, n=40$ (Figure~\ref{fig:contourf_finite}), and the figures become increasingly similar to the asymptotic case (Figure~\ref{fig:infinite}) as $n$ increases.
\section{Asymptotic Case}
In the asymptotic case, we can use a continuous integral to approximate the discrete summation. Here we define the bonus ratio $\mu:=\frac{x}{n}$ and use an integral over $\mu$ to approximate the overall surprise. Formally,
\begin{theorem}
For all $\alpha\geq\beta\geq 1$, there exists a function $Z_{\alpha,\beta,n}(\mu)$ such that $\forall \mu\in (0,1)$, $\mathrm{E}[\Delta_{\mathcal{B}}(\mu*n)]=Z_{\alpha,\beta,n}(\mu)*(1+O(\frac1n))$. When we define $\mu^*:=\arg\max_{\mu} Z_{\alpha,\beta,n}(\mu)$,
\begin{itemize}
\item \textbf{Symmetric $\alpha=\beta$}
\[\mu^*= \frac{1}{2\alpha \mathbb{H}_{2\alpha}(n-1)-1}\]
\item \textbf{Near-certain $\alpha=\lambda p, \beta=\lambda (1-p)$} fixing $p$, for all sufficiently small $\epsilon>0$, when $\lambda>O(\log\frac{1}{\epsilon})$\footnote{See detailed conditions in Lemma~\ref{lem:gu}}, the optimal $\mu^*$ is around the ``expected lead'',
\[\mu^*\in (\frac{(\alpha-\beta)\mathbb{H}+1}{(\alpha+\beta)\mathbb{H}-1}-\epsilon,\frac{(\alpha-\beta)\mathbb{H}+1}{(\alpha+\beta)\mathbb{H}-1})\approx (\frac{\alpha-\beta}{\alpha+\beta}-\epsilon,\frac{\alpha-\beta}{\alpha+\beta})=(2p-1-\epsilon, 2p-1)\]
\item \textbf{General} $\mu^*$ is the unique solution of $G(\mu)=0$ and $\mu^*<\frac{(\alpha-\beta)\mathbb{H}+1}{(\alpha+\beta)\mathbb{H}-1}$.
\end{itemize}
\[G(\mu):=(1+\mu)^{\alpha-\beta}\left(\frac{(\alpha-\beta)\mathbb{H}+1}{(\alpha+\beta)\mathbb{H}-1}-\mu\right)+(1-\mu)^{\alpha-\beta}\left(\frac{(-\alpha+\beta)\mathbb{H}+1}{(\alpha+\beta)\mathbb{H}-1}-\mu\right)\]
\label{the:asymptotic}
\end{theorem}
\begin{figure}[!ht]\centering
\subfigure[$\alpha-\beta=0$]{\includegraphics[width=.24\linewidth]{figures/G_example_5_5.pdf}}
\subfigure[$\alpha-\beta=2$]{\includegraphics[width=.24\linewidth]{figures/G_example_6_4.pdf}}
\subfigure[$\alpha-\beta=4$]{\includegraphics[width=.24\linewidth]{figures/G_example_7_3.pdf}}
\subfigure[$\alpha-\beta=6$]{\includegraphics[width=.24\linewidth]{figures/G_example_8_2.pdf}}
\caption{\textbf{Examples for function G}}
\label{fig:G_example}
\end{figure}
\begin{figure}[!ht]\centering
\includegraphics[width=1\linewidth]{figures/asymptotic3.PNG}
\caption{\textbf{Asymptotic case}}
\label{fig:asymptotic}
\end{figure}
We define $\betadens{\theta}$ as the density function of the Beta distribution, i.e., $\betadens{\theta}=\frac1{B(\alpha,\beta)}\theta^{\alpha-1}(1-\theta)^{\beta-1}$.\footnote{$B(\alpha,\beta)$ is the Beta function, i.e., $B(\alpha,\beta)=\frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)}$.} In the asymptotic case when $n$ is sufficiently large, we can simplify Lemma~\ref{lem:generald} to obtain Lemma~\ref{lem:asymtotic}, illustrated in Figure~\ref{fig:asymptotic}. The simplification for $d,L,U$ is straightforward. For $\Pr[S_{n-1}=j]$, note that, informally, by the law of large numbers, when $n$ is sufficiently large the state in round $n-1$ or $n-2$ concentrates around $pn$. Thus, given that $p$ follows the distribution $\betadens{\theta}$, we have $\Pr[S_{n-1}=j]\approx \Pr[S_{n-2}=j]$, which are approximately proportional to $\betadens{\theta_j}$, where $\theta_j=\frac{j}n$.
\begin{lemma}[Informal]
When $n$ is sufficiently large, fixed $\alpha,\beta,\mu\in(0,1)$, we have
\[
\begin{cases}
\frac{L_{n-2}}{n}\approx \frac{L_{n-1}}{n}\approx \frac{1-\mu}2\\
\frac{U_{n-2}}{n}\approx \frac{U_{n-1}}{n}\approx \frac{1+\mu}2\\
\end{cases}
\]
For any $L_{n-1}\leq j\leq U_{n-1}$, let $\theta_j=\frac{j}{n}$, we have
\[
\Pr[S_{n-1}=j]\approx \Pr[S_{n-2}=j]\approx \frac{\betadens{\theta_j}}n
\]
\[
q^{n-1}_j\approx q^{n-2}_j\approx \theta_j
\]
and
\[
d^{n-2}_j\approx
\begin{cases}
0,&\theta_j<\frac{L_{n-2}}{n}\\
\frac{1-\mu}2,&\theta_j=\frac{L_{n-2}}{n}\\
\frac{1}{n},&\frac{L_{n-2}}{n}<\theta_j<\frac{U_{n-2}}{n}\\
\frac{1-\mu}2,&\theta_j=\frac{U_{n-2}}{n}\\
0,&\theta_j>\frac{U_{n-2}}{n}
\end{cases}
\qquad
d^{n-1}_j=
\begin{cases}
0,&\theta_j<\frac{L_{n-1}}n\\
1,&\frac{L_{n-1}}n\leq \theta_j\leq\frac{U_{n-1}}n\\
0,&\theta_j>\frac{U_{n-1}}n
\end{cases}
\]
\label{lem:asymtotic}
\end{lemma}
Recall the general formula~\eqref{eq:general} here:
\begin{align*}
\mathrm{E}[\Delta_\mathcal{\bel}(x)] =& \bigg(\overbrace{\Pr[S_{n-2}=L_{n-2}]*2 q^{n-2}_{L_{n-2}}*(1-q^{n-2}_{L_{n-2}})*d_{L_{n-2}}^{n-2}}^\text{penultimate round (at point $L_{n-2}$)}\notag\\
&+\overbrace{\Pr[S_{n-2}=U_{n-2}]*2q^{n-2}_{U_{n-2}}*(1-q^{n-2}_{U_{n-2}})*d_{U_{n-2}}^{n-2}}^\text{penultimate round (at point $U_{n-2}$)}\notag\\
&+\overbrace{\sum_{j=L_{n-2}+1}^{U_{n-2}-1}\Pr[S_{n-2}=j]*2q^{n-2}_{j}*(1-q^{n-2}_{j})*d_{j}^{n-2}}^\text{penultimate round (between $L_{n-2}$ and $U_{n-2}$)}\bigg)*(n+\alpha+\beta-2)*\mathbb{H}\notag\\
&+\overbrace{\sum_{j=L_{n-1}}^{U_{n-1}}\Pr[S_{n-1}=j]*2q^{n-1}_{j}*(1-q^{n-1}_{j})*d_{j}^{n-1}}^\text{final round}
\end{align*}
By substituting Lemma~\ref{lem:asymtotic} into formula~\eqref{eq:general}, we can obtain an approximation for the overall expected surprise $\mathrm{E}[\Delta_{\mathcal{B}}(\mu*n)]$:
\begin{align}
&Z_{\alpha,\beta,n}(\mu)\notag\\
:=&\bigg(\overbrace{\frac{\betadens{\frac{1-\mu}{2}}}n*2*\frac{1+\mu}{2}*\frac{1-\mu}{2}*\frac{1-\mu}{2}}^\text{penultimate round (at point $L_{n-2}$)}+\overbrace{\int_{\frac{1-\mu}2}^{\frac{1+\mu}2}\betadens{\theta}*2*\theta*(1-\theta)*\frac{1}{n}d\theta}^\text{penultimate round (between $L_{n-2}$ and $U_{n-2}$)}\notag\\
&+\underbrace{\frac{\betadens{\frac{1+\mu}{2}}}n*2*\frac{1+\mu}{2}*\frac{1-\mu}{2}*\frac{1-\mu}{2}}_\text{penultimate round (at point $U_{n-2}$)}\bigg)*n\mathbb{H}+\underbrace{\int_{\frac{1-\mu}2}^{\frac{1+\mu}2}\betadens{\theta}*2*\theta*(1-\theta)d\theta}_\text{final round}\notag\\
=&\left(\int_{\frac{1-\mu}2}^{\frac{1+\mu}2}2\betadens{\theta}\theta(1-\theta)d\theta\right)*(\mathbb{H}+1)\\
&+\left(\betadens{\frac{1+\mu}{2}}+\betadens{\frac{1-\mu}{2}}\right)\frac{(1-\mu)^2(1+\mu)\mathbb{H}}{4}\notag
\end{align}
To make this rigorous, we need a formal version of Lemma~\ref{lem:asymtotic} to carefully analyze the relationship between $Z_{\alpha,\beta,n}(\mu)$ and $\mathrm{E}[\Delta_{\mathcal{B}}(\mu*n)]$ and prove the main result of Theorem~\ref{the:asymptotic}. We defer the formal version of Lemma~\ref{lem:asymtotic} and the proof to the appendix.
We then analyze the properties of $Z_{\alpha,\beta,n}(\mu)$.
In order to find the optimal $\mu$, we calculate the derivative of $Z_{\alpha,\beta,n}(\mu)$:
\begin{align*}
&\frac{dZ_{\alpha,\beta,n}(\mu)}{d\mu}\\ =&\left(\betadens{\frac{1+\mu}{2}}+\betadens{\frac{1-\mu}{2}}\right)\frac{1-\mu^2}{4}*(\mathbb{H}+1)\\
&+\frac{d\left(\betadens{\frac{1+\mu}{2}}+\betadens{\frac{1-\mu}{2}}\right)}{d\mu}*\frac{(1-\mu)^2(1+\mu)\mathbb{H}}{4}\\
&+\left(\betadens{\frac{1+\mu}{2}}+\betadens{\frac{1-\mu}{2}}\right)*\frac{d((1-\mu)^2(1+\mu))}{d\mu}*\frac{\mathbb{H}}{4}\\
=&\left(\betadens{\frac{1+\mu}{2}}+\betadens{\frac{1-\mu}{2}}\right)\frac{1-\mu^2}{4}*(\mathbb{H}+1)\\
&+\frac{\left((\alpha-\beta-(\alpha+\beta-2)\mu)\betadens{\frac{1+\mu}{2}}+(-\alpha+\beta-(\alpha+\beta-2) \mu )\betadens{\frac{1-\mu}{2}}\right)(1-\mu)\mathbb{H}}{4}\\
&+\left(\betadens{\frac{1+\mu}{2}}+\betadens{\frac{1-\mu}{2}}\right)\frac{(1-\mu)(-3\mu-1)\mathbb{H}}{4}\\
=&\frac{(1-\mu)\betadens{\frac{1+\mu}{2}}}{4}\left(-((\alpha+\beta)\mathbb{H}-1)\mu+(\alpha-\beta)\mathbb{H}+1\right)\tag{Combining like terms}\\
&+\frac{(1-\mu)\betadens{\frac{1-\mu}{2}}}{4}\left(-((\alpha+\beta)\mathbb{H}-1)\mu+(-\alpha+\beta)\mathbb{H}+1\right)\\
\propto & (1+\mu)^{\alpha-1}*(1-\mu)^{\beta-1}\left(-((\alpha+\beta)\mathbb{H}-1)\mu+(\alpha-\beta)\mathbb{H}+1\right)\tag{Substituting density of Beta distribution}\\
&+(1-\mu)^{\alpha-1}*(1+\mu)^{\beta-1}\left(-((\alpha+\beta)\mathbb{H}-1)\mu+(-\alpha+\beta)\mathbb{H}+1\right)\\ \tag{$\mu<1$}
\propto & (1+\mu)^{\alpha-\beta}\left(-((\alpha+\beta)\mathbb{H}-1)\mu+(\alpha-\beta)\mathbb{H}+1\right)\\
&+(1-\mu)^{\alpha-\beta}\left(-((\alpha+\beta)\mathbb{H}-1)\mu+(-\alpha+\beta)\mathbb{H}+1\right)\\
\propto & (1+\mu)^{\alpha-\beta}\left(\frac{(\alpha-\beta)\mathbb{H}+1}{(\alpha+\beta)\mathbb{H}-1}-\mu\right)+(1-\mu)^{\alpha-\beta}\left(\frac{(-\alpha+\beta)\mathbb{H}+1}{(\alpha+\beta)\mathbb{H}-1}-\mu\right)\\
\end{align*}
Let \[G(\mu):=(1+\mu)^{\alpha-\beta}\left(\frac{(\alpha-\beta)\mathbb{H}+1}{(\alpha+\beta)\mathbb{H}-1}-\mu\right)+(1-\mu)^{\alpha-\beta}\left(\frac{(-\alpha+\beta)\mathbb{H}+1}{(\alpha+\beta)\mathbb{H}-1}-\mu\right)\]
The examples of $G(\mu)$ are illustrated in Figure~\ref{fig:G_example}.
\begin{lemma}[Property of $G(\mu)$]\label{lem:gu}
For all $\alpha\geq \beta$, when $n$ is sufficiently large, $G(0)>0$ and $G(1)<0$. $G(\mu)=0,\mu\in[0,1]$ has a unique solution and the solution is in $(0,\frac{(\alpha-\beta)\mathbb{H}+1}{(\alpha+\beta)\mathbb{H}-1})$.
Moreover, for all $0<\epsilon< \frac{(\alpha-\beta)\mathbb{H}+1}{(\alpha+\beta)\mathbb{H}-1}$, when $\alpha-\beta>\frac{\log (\frac{2(\alpha-\beta)\mathbb{H}}{(\alpha+\beta)\mathbb{H}-1}-\epsilon)-\log \epsilon}{\log (1+\frac{(\alpha-\beta)\mathbb{H}+1}{(\alpha+\beta)\mathbb{H}-1}-\epsilon)-\log (1-(\frac{(\alpha-\beta)\mathbb{H}+1}{(\alpha+\beta)\mathbb{H}-1}-\epsilon)) } \footnote {This is approximately $\frac{\log\frac{\alpha-\beta}{\alpha+\beta}+\log\frac{1}{\epsilon}}{\log \alpha-\log \beta }<\frac{1}{\log \alpha-\log \beta } \log\frac{1}{\epsilon} $}$, the solution is within $(\frac{(\alpha-\beta)\mathbb{H}+1}{(\alpha+\beta)\mathbb{H}-1}-\epsilon,\frac{(\alpha-\beta)\mathbb{H}+1}{(\alpha+\beta)\mathbb{H}-1})\approx (\frac{\alpha-\beta}{\alpha+\beta}-\epsilon, \frac{\alpha-\beta}{\alpha+\beta})$.
\end{lemma}
We defer the proof to the appendix. The above lemma implies that $Z_{\alpha,\beta,n}(\mu)$ first increases and then decreases. Thus, the global optimum $\mu^*$ is also the unique local maximum, namely the unique solution of $G(\mu)=0$, and the results for the general case in Theorem~\ref{the:asymptotic} follow from the lemma. In the symmetric case, $\alpha=\beta$, \begin{align*}
G(\mu)&=\frac{2}{(\alpha+\beta)\mathbb{H}-1}-2\mu=0\\
\Rightarrow \mu^*&=\frac{1}{2\alpha \mathbb{H}-1}.
\end{align*} The results for the near-certain case also follow directly from the above lemma. Therefore, we finish the proof.
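In practice, the unique root of $G(\mu)=0$ guaranteed by Lemma~\ref{lem:gu} can be located numerically, for instance by bisection. A minimal sketch (Python; $\mathbb{H}$ is passed in as a precomputed constant, and we assume the conditions of Lemma~\ref{lem:gu}, i.e., $G(0)>0$ and $G(1)<0$) is:
\begin{verbatim}
def G(mu, alpha, beta, H):
    r = ((alpha - beta) * H + 1) / ((alpha + beta) * H - 1)
    s = ((beta - alpha) * H + 1) / ((alpha + beta) * H - 1)
    return (1 + mu) ** (alpha - beta) * (r - mu) + (1 - mu) ** (alpha - beta) * (s - mu)

def optimal_mu(alpha, beta, H, iters=100):
    lo, hi = 0.0, 1.0            # G(0) > 0 and G(1) < 0 under the lemma's conditions
    for _ in range(iters):
        mid = (lo + hi) / 2
        if G(mid, alpha, beta, H) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
\end{verbatim}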
\section{Conclusion and Discussion}
In a multi-round competition, we show that we can increase the audience's overall surprise by setting a proper bonus in the final round. We further show that the optimal bonus size depends on the audience's prior and in the following settings, we obtain solutions of various forms for both the case of a finite number of rounds and the asymptotic case:
\begin{description}
\item [Symmetric] the audience's prior belief does not lean towards any player; here we obtain a clean closed-form solution in both the finite and the asymptotic case;
\item [Certain] the audience is a priori certain about the two players' relative abilities; here the optimal bonus is the root of a specific function, and it is approximately and asymptotically equal to the ``expected lead'', the number of points the weaker player needs to come back in expectation;
\item [General] the optimal bonus can be obtained by a linear-time algorithm and, in the asymptotic case, is the root of a specific function.
\end{description}
One natural extension is to validate our theoretical predictions using field experiments. We can potentially conduct online field experiments, e.g., AB test, to examine the effectiveness of the scoring rules. Moreover, the results from field experiments could potentially capture features that might be neglected in the existing models and consequently inform the development of new theories.
Regarding the theoretical work, one future direction is to incorporate the time factor into the model, since a line of psychology literature \cite{kahneman1993more,baddeley1993recency} shows that the audience may judge their experience largely based on their feelings in the later part of the game (end-effect). We can generalize our analysis to account for the fact that surprise may be valued more in some time periods (e.g., the last round) than in others. Another future direction is to extend our results to settings where the scores increase from the first round to the final round (e.g., $1,2,3,4,\ldots$), where the bonus size depends on the results of previous rounds, where the game may not end at a pre-determined fixed time (e.g., Quidditch), or to even more complicated game settings (e.g., board games).
\newpage
\bibliographystyle{plainnat}
\section{Preprocessing}
\label{preprocessing}
\begin{figure*}[ht]
\centering
\tiny
\begin{minipage}{5 cm}
\vspace{1em}
\textcolor{purple}{\textbf{USER:}I'm looking for the Lensfield Hotel. Can you help me find it ?}\par
\textcolor{purple}{\textbf{ASSISTANT:}Yes of course, it is located in 53-57 lensfield road. Would you like me to book you a reservation ?}\par
\textcolor{purple}{\textbf{USER:} No thank you. Could you give me the postcode and tell me if they have free parking ?}\par
\textcolor{purple}{\textbf{ASSISTANT:} Yes, they have free parking. The postcode is cb21en. Is there anything else you'd like to know about this hotel ?}\par
\textcolor{purple}{\textbf{USER:}No, that's all I needed. Thanks! Bye!}\par
\textcolor{purple}{\textbf{ASSISTANT:} You are welcome. Goddbye!}\par
\end{minipage}
\hspace{0.2cm}
\begin{minipage}{5 cm}
\vspace{1em}
\textcolor{teal}{\textbf{USER:}I'd like to find a train that leaves after 17:00 on Saturday.}\par
\textcolor{teal}{\textbf{ASSISTANT:} Absolutely, I need departure and destination information and a time you need to arrive by.}\par
\textcolor{teal}{\textbf{USER:} I will be departing from Broxbourne and I'd like to go to Cambridge.}\par
\textcolor{teal}{\textbf{ASSISTANT:} I have 7 available. The first leaves at 17:32, and arrives at 18:32. Would you like me to book that for you ?}\par
\textcolor{teal}{\textbf{USER:}I am not looking to book at the moment, can I just get the train ID? Thanks.}\par
\textcolor{teal}{\textbf{ASSISTANT:} Of course! TR0811 leaves at 17:32, arrives at 18:32, with a duration of 60 minutes. The price is 14.32 pounds. May I assist you with anything else ?}\par
\textcolor{teal}{\textbf{USER:} That is all I need for today. Thank you. Good bye.}\par
\textcolor{teal}{\textbf{ASSISTANT:} Okay glad I could help you. Enjoy your trip.}\par
\end{minipage}
\hspace{0.2cm}
\begin{minipage}{5 cm}
\vspace{1em}
\textcolor{purple}{\textbf{USER:}I'm looking for the Lensfield Hotel. Can you help me find it ?}\par
\textcolor{purple}{\textbf{ASSISTANT:}Yes of course, it is located in 53-57 lensfield road. Would you like me to book you a reservation ?}\par
\textcolor{purple}{\textbf{USER:} No thank you. Coul you give me the postcode and tell me if they have free parking ?}\par
\textcolor{purple}{\textbf{ASSISTANT:} Yes, they have free parking. The postcode is cb21en. Is there anything else you'd like to know about this hotel ?}\par
\textcolor{teal}{\textbf{USER:}I'd like to find a train that leaves after 17:00 on Saturday.}\par
\textcolor{teal}{\textbf{ASSISTANT:} Absolutely, I need departure and destination information and a time you need to arrive by.}\par
\textcolor{teal}{\textbf{USER:} I will be departing from Broxbourne and I'd like to go to Cambridge.}\par
\textcolor{teal}{\textbf{ASSISTANT:} I have 7 available. The first leaves at 17:32, and arrives at 18:32. Would you like me to book that for you ?}\par
\textcolor{teal}{\textbf{USER:}I am not looking to book at the moment, can I just get the train ID? Thanks.}\par
\textcolor{teal}{\textbf{ASSISTANT:} Of course! TR0811 leaves at 17:32, arrives at 18:32, with a duration of 60 minutes. The price is 14.32 pounds. May I assist you with anything else ?}\par
\textcolor{teal}{\textbf{USER:} That is all I need for today. Thank you. Good bye.}\par
\textcolor{teal}{\textbf{ASSISTANT:} Okay glad I could help you. Enjoy your trip.}\par
\end{minipage}
\caption{An example of combining two single-task dialogues, shown in \textcolor{purple}{purple} and \textcolor{teal}{teal}, to form a single multi-task dialogue.}
\label{example1}
\end{figure*}
The MultiWoZ 2.0 dataset provides JSON metadata that maintains a dictionary of the slot-value pairs provided by the user to the agent in every utterance. We use this metadata to construct local and global knowledge of the slot-values shared by the user and to split and relabel the dataset into single domain and multidomain dialogues. The preprocessing step removed the noise in the labeling of dialogues. We used this approach to keep a test set of multi-domain dialogues to evaluate the model performance on compositional tasks. On the clean split of single domain dialogues, we generate synthetic multidomain dialogues using two different approaches:
\subsection{Random Synthetic (RS)}
In this approach, we pick a single task dialogue $^iD^{SNG}$ and randomly select a set of $K$ single task dialogues, $\left(^iD^{SNG}_{noise}\right)_{k=1}^K$, to inject noise into $^iD^{SNG}$. With a hyperparameter, \emph{percentCopy}, we select the number of utterances to be copied from every dialogue in this noise set and add them as a prefix to $^iD^{SNG}$. This results in $K$ negative samples of synthetic multidomain dialogues, $\left(^iD^{MUL}_{RS}\right)_{k=1}^K$, for every single domain dialogue in the dataset.
\subsection{Targetted Synthetic (TS)}
We bucket the single domain dialogues based on the conversation domain (\emph{taxi, hotel, attraction}, etc.). Similarly, we bucket the multi-task dialogues in the training set to measure the topic distributions in multi-task dialogues. Using the computed distribution of composite tasks in \emph{true} multidomain dialogues and the domain label of every $^iD^{SNG}$, we constrain the selection of random dialogues to conform to the training distribution of \emph{true} composite tasks in the training set. The hyperparameters and the remainder of the procedure are similar to RS, except that when combining single domain dialogues from two different domains $\left(^iDom,^jDom\right)$, we inject topic change exchanges randomly sampled from $TC^{\left(^jDom1,^iDom2\right)}$.
For training the proposed Domain Invariant Transformer model, we create the labels for the auxiliary tasks using the preprocessing steps used to split the dataset into single and multi-domain dialogues.
\subsection{Experiments varying $\alpha$}
\begin{table}[H]
\centering
\small
\begin{tabular}{ccc}
\toprule
\textbf{$\alpha$} & \textbf{BLEU (MUL)}&\textbf{BLEU}(BOTH)\\
\midrule
0.0 & 14.07 & 13.94\\
\midrule
0.00001 & 13.74 & 13.31\\
\midrule
0.0001 & 14.13 & 14.11\\
\midrule
0.001 & 15.06 & {\bf 14.81}\\
\midrule
0.01 & 14.61 & 14.40\\
\midrule
0.1 & 14.70 & 14.41\\
\bottomrule
\end{tabular}
\caption{Varying the $\alpha$ to understand the effect of the discriminator on decoder performance}
\label{tab:alpha}
\end{table}
We experimented with different values of $\alpha$ to understand the influence of the discriminator loss. The results in Table \ref{tab:alpha} show that the Domain Invariant Transformer performed best when $\alpha$ is $0.001$. The experiments also show consistent performance improvements across the different values of $\alpha$, highlighting the usefulness of training an auxiliary network to learn domain invariant encoder representations.
\section{Token distribution}
\label{sec:token-distri}
\begin{table}[h]
\centering
\small
\subfigure[Table 1]{
\begin{tabular}{c|c}
\midrule
\textbf{MUL Train} & 492688\\
\textbf{SNG Valid} & 16907\\
\textbf{Intersection} & 9238 \\
\textbf{\% Unseen} & 45.36\%\\
\end{tabular}}
%
\subfigure[Table 2]{
\begin{tabular}{c|c}
\midrule
\textbf{MUL Train} & 492688 \\
\textbf{MUL Valid} & 104261\\
\textbf{Intersection} & 48076 \\
\textbf{\% Unseen} & 53.89\%\\
\end{tabular}}
\\
\subfigure[Table 3]{
\begin{tabular}{c|c}
\midrule
\textbf{SNG Train} & 124038 \\
\textbf{MUL Valid} & 104261\\
\textbf{Intersection} & 22254 \\
\textbf{\% Unseen} & 78.66\%\\
\end{tabular}}
%
\subfigure[Table 4]{
\begin{tabular}{c|c}
\midrule
\textbf{SNG Train} & 124038 \\
\textbf{SNG Valid} & 16907\\
\textbf{Intersection} & 6562 \\
\textbf{\% Unseen} & 61.19\%\\
\end{tabular}}
\\
\subfigure[Table 5]{
\begin{tabular}{c|c}
\midrule
\textbf{SNG+MUL Train} & 568674\\
\textbf{SNG Valid} & 104261\\
\textbf{Intersection} & 49999 \\
\textbf{\% Unseen} & 52.04\%\\
\end{tabular}}
%
\subfigure[Table 6]{
\begin{tabular}{c|c}
\midrule
\textbf{SNG+MUL Train} & 568674\\
\textbf{SNG Valid} & 16907\\
\textbf{Intersection} & 9746 \\
\textbf{\% Unseen} & 42.36\%\\
\end{tabular}}
\caption{Analysis of 4-gram overlap across different combinations of train and validation splits that were used in the experiments. The analysis shows that the \%Unseen in the validation set is higher when training with SNG (single domain dialogues) but considerably lower when training with MUL. The composition task requires models to understand the underlying task structure, but the data distribution and the performance of the transformer strongly correlate, suggesting that the transformer model at best mimics the surface-level token distribution rather than understanding the nature of the task.}
\label{tab:token-distibution}
\end{table}
We analyze the token distribution in the dataset to understand the negative result further. We observed that although the task distributions are matched, the underlying token distributions in the different setups are not (Table \ref{tab:token-distibution}). We looked at the overlap of the distribution of 4-grams in conversations on the different splits we used for training. We observed that the multi-task dialogue (MUL) training set has as much 4-gram overlap with the MUL Valid and SNG (single task dialogues) Valid sets as the combined (SNG + MUL) training data.
The analysis raises doubts about the performance gains of the transformer model with increased MUL train dialogues: the improvement may not be only due to the model's ability to decompose multiple tasks, but may also be due to the fact that MUL train has a higher 4-gram overlap with SNG Valid and MUL Valid. This suggests that, despite task oriented dialogues carrying rich information, the model at best only mimics the surface-level token distribution. Hence, it is not clear if the Transformer model can generalize to multi-task dialogues with an understanding of the underlying task structure.
\section{Introduction}
Recent years have seen a tremendous surge in the application of deep learning methods for dialogue in general \cite{conv-seq2seq,pipeline,multiwoz,deal} and task-oriented dialogue \cite{sclstm, fb, neural_assistant} specifically. Task-oriented dialogue systems help users accomplish tasks such as booking a movie ticket and ordering food via conversation. Generative models are a popular choice for next turn response generation in such systems \cite{pipeline,latent,copy-dialog}. These models are typically learned using large amounts of dialogue data for every task \cite{multiwoz,taskmaster}. It is natural for users of the task-oriented dialogue system to want to accomplish multiple tasks within the same conversation, e.g. booking a movie ticket and ordering a taxi to the movie theater within the same conversation. The brute-force solution would require collecting dialogue data for every task combination which might be practically infeasible given the combinatorially many possibilities.
While the ability of generative dialogue models to compose multiple tasks has not yet been studied in the literature, there has been some investigation on the compositionality skills of deep neural networks. \citet{lake} propose a suite of tasks to evaluate a method's compositionality skills and find that deep neural networks generalize to unseen compositions only in a limited way. \citet{kottur} analyze whether the language emerged when multiple generative models interact with each other is compositional and conclude that compositionality arises only with strong regularization.
Motivated by the practical infeasibility of collecting data for combinatorially many task compositions, we focus on task-level compositionality of text response generation models. We begin by studying the effect of training data size of human-human multiple task dialogues on the performance of Transformer \cite{transformer} generative models. Next, we explore two solutions to improve task-level compositionality. First, we propose a data augmentation approach \cite{aug_1, aug_2,alex,aug,back} where we create synthetic multiple task dialogues for training from human-human single task dialogue; we add a portion of one dialogue as a prefix to another to simulate multiple task dialogues during training. As a second solution, we draw inspiration from the domain adaptation literature \cite{domain_1,domain_2,domain_nlp_1,domain_nlp,video,speech_domain} and encourage the model to learn domain invariant representations with an auxiliary loss to learn representations that are invariant to single and multiple task dialogues.
We conduct our experiments on the Multiwoz dataset \cite{multiwoz}. The dataset contains both single and multiple task dialogues for training and evaluation. In Multiwoz, the tasks in multiple task dialogues are only the combinations of tasks in single task dialogues. This allows the dataset to be an appropriate benchmark for our experiments.
To summarize, our key findings are:
\begin{itemize}
\item{We study task-level compositionality of text response generation models and find that they are heavily reliant on multiple task conversations at train time to do well on such conversations at test time.}
\item{We explore two novel unsupervised solutions to improve task-level compositionality: (1) creating synthetic multiple task dialogue data from human-human single task dialogue and (2) forcing the encoder representation to be invariant to single and multiple task dialogues using an auxiliary loss.}
\item{We highlight the difficulty of composing tasks in generative dialogue with experiments on the Multiwoz dataset, where both methods combined result only in an 8.5\% BLEU \cite{papineni2002bleu} score improvement when zero-shot evaluated on multiple task dialogues.}
\end{itemize}
\section{Background}
Let $d_1, d_2, \ldots, d_M$ be the dialogues in the training set and every dialogue $d_m = ((u^1_m, a^1_m), (u^2_m, a^2_m), \ldots, (u^{n_m}_m, a^{n_m}_m))$ ($\forall m \in \{1,2,\ldots,M\}$) consists of $n_m$ turns each of user and assistant. Further, each user and assistant turn consists of a sequence of word tokens. The individual dialogue could be either single task or multiple task depending on the number of tasks being accomplished in the dialogue.
The response generation model is trained to generate each turn of the assistant response given the conversation history. The generative model learns a probability distribution given by $P(a^{i} \mid (u^1, a^1), \ldots, (u^{i-1}, a^{i-1}), u^i)$. We drop the symbol $m$ that denotes a particular training example for simplicity. The assistant turn $a^i$ consists of a sequence of word tokens, $a^{i} = (w_1^i, w_2^i, \ldots, w_{l^i}^i)$. The response generation model factorizes the joint distribution left-to-right given by,
$P(a^{i} \mid x^i) = \prod\limits_{j=1}^{l^i} P(w_j^i \mid x^i, w_1^i, \ldots, w_{j-1}^i)$ \\
where $x^i=((u^1, a^1), \ldots, (u^{i-1}, a^{i-1}), u^i)$ refers to the conversation history till the $i^{th}$ turn.
We use a Transformer \cite{transformer} sequence-to-sequence model to parameterize the above distribution. Given a training set of dialogues, the parameters of the Transformer model are learned to optimize the conditional language modelling objective given by,
\begin{equation}
\label{eq:lm}
L_{LM} = \sum_{m=1}^{M} \sum _{i=1}^{n_m} \log P(a^{i} \mid x^i, \Theta)
\end{equation}
where $\Theta$ refers to the parameters of the Transformer model.
\section{Data Augmentation}
The first solution we explore for task compositionality generates synthetic multiple task dialogues for training from human-human single task dialogues \footnote{\href{https://github.com/ppartha03/Dialogue-Compositionality-of-Generative-Transformer}{Code repository}}. Here, we sample two dialogues from the training set, and add a portion of one dialogue as a prefix to another. While this procedure might not create dialogues of the quality equivalent to human-human multiple task dialogue, it is an unsupervised way to create approximate multiple task dialogues that the model could theoretically benefit from.
Concretely, we randomly sample two single task dialogues $d_i$ and $d_j$ from the training set and create a noisy multiple task dialogue by adding a fraction of the dialogue $d_j$ as a prefix to dialogue $d_i$. The fraction of dialogue taken from dialogue $d_j$ is given by the hyperparameter $augment\_fraction$. The number of times dialogue $d_i$ is augmented by a randomly sampled dialogue is given by the hyper-parameter $augment\_fold$.
We consider two strategies for sampling the dialogue $d_j$. In $Random\_Augment$, the dialogue is uniformly randomly sampled from the remainder of the training set. A potential issue with the random strategy is that it might create spurious task combinations and the model might fit to this noise. Motivated by the spurious task combination phenomenon, we consider another sampling strategy $Targeted\_Augment$ where we create synthetic multiple task dialogues only for task combinations that exist in the development set. Here, $d_j$ is sampled from a set of dialogues whose task is compatible with the task of dialogue $d_i$. The Transformer model is now trained on the augmented training set using the objective function given in Equation \ref{eq:lm}. The effect of the sampling strategy and the hyperparameters on the model performance is discussed in the experiments section (Section \ref{sec:experiments}).
\section{Domain Invariant Transformer}
\label{sec:trans_disc}
We propose the Domain Invariant Transformer model (Figure \ref{fig:trans_disc_diagram}) to maintain a domain invariant representation of the encoder by training the encoder representation for an auxiliary task. Here, the auxiliary task for the network is to predict the label, $^i\hat{l}$, denoting the type of task (single or multi-task) in the encoded conversation history. The model takes as input the sequence of byte pair encoded tokens, which are represented at the encoder hidden state as a set of attention weights from the multi-head, multi-layer attention mechanism of the transformer. The conditional language model (Equation \ref{eq:lm}) is learnt by a transformer decoder on top that attends over the encoder states.
The discriminator task network is trained with average pooling of the encoder summary over the attention heads ($h_j$), as shown in Equation \ref{averagepool}.
\begin{equation}
^ie^{s} = \sum_{j=1}^{k} \frac{\left(h_j\right)}{k}
\label{averagepool}
\end{equation}
The average pooled encoder summary is passed as input to a two-layer feed forward discriminator. The discriminator network has a dropout \cite{srivastava14dropout} layer in-between the two fully connected layers ($f_1$ and $f_2$) (Equation \ref{discriminator}).
\begin{equation}
\hat{y}_i = f_2\left(f_1\left(^ie^{s}\right)\right)
\label{discriminator}
\end{equation}
The binary cross-entropy loss, $L_{disc}$, for the predicted label, $\hat{y}_i$, an input context \emph{i} is computed as in Equation \ref{disc_loss}.
\begin{equation}
L_{disc} = - \left(y_i\log\left(\hat{y}_i\right) + \left(1 - y_i\right)\log\left(1 - \hat{y}_i\right)\right)
\label{disc_loss}
\end{equation}
The Domain Invariant Transformer model optimizes a convex combination of the two losses as shown in Equation \ref{final_loss}.
\begin{figure}
\centering
\includegraphics[width=0.7\columnwidth,height=8cm]{images/trans_disc.png}
\caption{Domain Invariant Transformer Architecture.}
\label{fig:trans_disc_diagram}
\end{figure}
\begin{equation}
L_{train} = \alpha * L_{disc} + \left(1-\alpha\right) * L_{LM}
\label{final_loss}
\end{equation}
The language model loss makes sure that the model learns to generate the next utterance, while the discriminator loss makes sure the model is aware of the nature of the task. To understand the effect of the auxiliary loss, we experiment with different values of $\alpha$ (see the Appendix).
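For illustration, a sketch of the discriminator head and the combined objective of Equations \ref{averagepool}--\ref{final_loss} could look as follows (PyTorch-style Python; the original experiments use Tensor2Tensor, so this sketch is ours and only illustrates the loss structure; the tensor shapes, the sigmoid output activation, and the use of the negative log-likelihood for $L_{LM}$ are our assumptions):
\begin{verbatim}
import torch
import torch.nn as nn

class DomainDiscriminator(nn.Module):
    # two-layer feed-forward network with dropout, applied to the
    # average-pooled encoder summary
    def __init__(self, hidden_dim, dropout=0.1):
        super().__init__()
        self.f1 = nn.Linear(hidden_dim, hidden_dim)
        self.drop = nn.Dropout(dropout)
        self.f2 = nn.Linear(hidden_dim, 1)

    def forward(self, head_summaries):
        # head_summaries: (batch, num_heads, hidden_dim)
        e_s = head_summaries.mean(dim=1)          # average pooling over heads
        return torch.sigmoid(self.f2(self.drop(self.f1(e_s)))).squeeze(-1)

def training_loss(lm_nll, disc_pred, is_multi_task, alpha):
    # convex combination of the discriminator loss and the LM loss;
    # lm_nll is the negative conditional log-likelihood of the assistant response
    disc_loss = nn.functional.binary_cross_entropy(disc_pred, is_multi_task.float())
    return alpha * disc_loss + (1 - alpha) * lm_nll
\end{verbatim}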
\section{Experiments}
\label{sec:experiments}
\subsection{Importance of multiple task dialogues}
We measure the importance of multiple task dialogues for the overall performance of the transformer by training the model with varying amounts of multiple task dialogues while keeping the task distribution between multiple and single domain dialogues almost the same across experiments. We keep increasing the number of multiple task dialogues while reducing the single task dialogues to keep the total number of dialogues constant at $2,150$. The model should be able to learn to generalize to multiple tasks, as the set of tasks is the same between the train and test sets; only the way in which the tasks are posed by the user differs. We use the Tensor2Tensor \cite{tensor2tensor} framework to run our experiments with the \textit{tiny} hyper-parameter setting in the framework.
\begin{table}[h!t]
\centering
\small
\begin{tabular}{cc|cc}
\toprule
\multicolumn{2}{c|}{\textbf{Training Data}} & \multicolumn{2}{c}{\textbf{BLEU}}\\
Single & Multiple & Multiple Only & Overall\\
\midrule
$2150$ & $0$ & $7.17$ & $6.81$ \\
$1836$ & $314$ & $7.25$ & $6.87$\\
$1522$ & $628$ & $7.94$ & $7.84$ \\
$1208$ & $942$ & $8.68$ & $8.68$\\
$894$ & $1256$ & $8.83$ & $8.27$ \\
$580$ & $1570$ & $9.33$ & $8.84$ \\
$266$ & $1884$ & $9.10$ & $9.25$ \\
\bottomrule
\end{tabular}
\caption{Ablation study to understand the usefulness of Multiple task dialogues.}
\label{tab:multiple_task_dialog}
\end{table}
As shown in Table \ref{tab:multiple_task_dialog}, the quality of the model improves significantly as the number of multiple task dialogues increases. Interestingly, even though the total number of dialogues is kept fixed, the overall validation BLEU score also improves as the number of multiple task dialogues in the training set increases. The results show that the models may be better at decomposing than at composing in the domain of goal oriented dialogues, or that the model at best can only mimic the surface-level token distribution (Appendix \ref{sec:token-distri}). Though training with more multi-task dialogues can potentially improve the performance, it is not a scalable solution. We test two off-the-shelf techniques to improve task level compositionality in the following section.
\subsection{Zero-shot Compositionality Experiments}
We evaluate the Transformer's performance on handling zero-shot compositional tasks by training the baseline model only on single task dialogues, and with the proposed data augmentation techniques. The results in Table \ref{tab:zero_shot} show that the \emph{Targeted\_Augment} technique increased the BLEU score on multiple-task dialogues by 8.5\%, while the model's score on all dialogues dropped slightly.
\begin{table}[!h]
\small
\centering
\setlength\tabcolsep{4pt}
\begin{tabular}{ccc}
\toprule
\textbf{Data} & \multicolumn{2}{c}{\textbf{BLEU}}\\
& Multiple & Overall \\
\midrule
SNG & 7.17 & 6.81\\
\midrule
SNG + RS & 7.46 & 7.14 \\
\midrule
SNG +TS & 7.78 & 7.09 \\
\bottomrule
\end{tabular}
\caption{SNG: Single task dialogues, RS: Random\_Augment Synthetic, and TS: Targeted\_Augment Synthetic. }
\label{tab:zero_shot}
\end{table}
The reason for only a minor BLEU improvement could be the noise in the generation process. Although the task distributions are matched, the token level distributions appear to be significantly different between single and multiple-task dialogues. The results suggest that the method may inject additional noise into the token level distribution, thereby not improving the model performance significantly.
\subsection{Domain Invariant Transformer}
We compared the proposed architecture and the baseline Transformer model to understand the effect of a domain invariant encoder representation on language generation in multi-task dialogues. We observed from our experiments in Table \ref{tab:multiple-task-final-table} that neither the Domain Invariant Transformer nor the Transformer model generalizes well with few-shot multi-task dialogues. The data augmentation techniques also do not appear to contribute towards improving the performance. However, the Domain Invariant Transformer model improved the performance to a BLEU score of $15.06$ on multiple task dialogues when trained on all of the training data without synthetic dialogues, which, though, was not the intended objective. Although that seems good, the model is still heavily reliant on human-human multiple domain dialogues, and zero-shot or few-shot generalization in compositional dialogues seems quite difficult to achieve.
\begin{table}[h!t]
\centering
\small
\begin{tabular}{p{1.2cm}cc|cc}
\toprule
\textbf{Model} & \multicolumn{2}{c}{\textbf{Training Data}} & \multicolumn{2}{c}{\textbf{BLEU}}\\
& Multiple & Synthetic & Multiple & Overall\\
\midrule
\multirow{1}{*}{\parbox{2cm}{Transformer}} & $1.00$ & No & $14.06$ & $14.00$ \\
\midrule
\multirow{2}{*}{\parbox{2cm}{Transformer}} & $0.50$ & Yes & $11.4$ & $12.43$ \\
& $1.00$ & Yes & $11.89$ & $12.32$ \\
\midrule
\multirow{2}{*}{\parbox{2cm}{Transformer \\ Discriminator}} & $0.50$ & No & $12.24$ & $12.13$ \\
& $1.00$ & No & $15.06$ & $14.81$ \\
\midrule
\multirow{2}{*}{\parbox{2cm}{Transformer \\ Discriminator}} & $0.50$ & Yes & $11.05$ & $11.60$ \\
& $1.00$ & Yes & $11.29$ & $12.13$ \\
\bottomrule
\end{tabular}
\caption{0.5 and 1.0 correspond to half and all of multitask samples respectively during training. Synthetic refers to \emph{Targeted\_Augment} dialogues.}
\label{tab:multiple-task-final-table}
\end{table}
The poor performance of the data augmentation techniques can be due to the overwhelming noise in the token distribution of the input contexts, which skews the language model that the model learns.
\section{Conclusion}
We studied the problem of composing multiple dialogue tasks to predict the next utterance in a single multiple-task dialogue. We found that even powerful transformer models do not naturally compose multiple tasks and that their performance relies heavily on multiple task dialogues. In this paper, we explored two solutions that only further showed the difficulty of composing multiple dialogue tasks. The challenge in generalizing to zero-shot composition, as observed in the experiments, hints at the possibility that the transformer model merely mimics surface-level tokens without understanding the underlying task. The token overlap distribution in Appendix \ref{sec:token-distri} supports this possibility.
\section{Introduction}
Swiss-system tournaments received a highly increasing consideration in the last years and are implemented in various professional and amateur tournaments in, e.g., badminton, bridge, chess, e-sports and card games. A Swiss-system tournament is a non-eliminating tournament format that features a predetermined number of rounds of competition. Assuming an even number of participants, each player plays exactly one other player in each round and two players play each other at most once in a tournament. The number of rounds is predetermined and publicly announced. The actual planning of a round usually depends on the results of the previous rounds to generate as attractive matches as possible and highly depends on the considered sport. Typically, players with the same numbers of wins in the matches played so far are paired, if possible.
Tournament designers usually agree on the fact that one should have at least $\log (n)$ rounds in a tournament with $n$ participants to ensure that there cannot be multiple players without a loss in the final rankings. \citet{appleton1995may} even mentions playing $\log (n) + 2$ rounds, so that a player may lose once and still win the tournament.
In this work, we examine a bound on the number of rounds that can be \emph{guaranteed} by tournament designers. Since the schedule of a round depends on the results of previous rounds, it might happen that at some point in the tournament, there is no next round that fulfills the constraint that two players play each other at most once in a tournament. This raises the question of how many rounds a tournament organizer can announce before the tournament starts while being sure that this number of rounds can always be scheduled. We provide bounds that are \emph{independent} of the results of the matches and the detailed rules for the setup of the rounds.
We model the feasible matches of a tournament with $n$ participants as an undirected graph with $n$ vertices. A match that is feasible in the tournament corresponds to an edge in the graph. Assuming an even number of participants, one round of the tournament corresponds to a perfect matching in the graph. After playing one round we can delete the corresponding perfect matching from the set of edges to keep track of the matches that are still feasible. We can guarantee the existence of a next round in a Swiss-system tournament if there is a perfect matching in the graph. The largest number of rounds that a tournament planner can guarantee is equal to the largest number of perfect matchings that a greedy algorithm is guaranteed to delete from the complete graph. Greedily deleting perfect matchings models the fact that rounds cannot be preplanned or adjusted later in time.
Interestingly, the results imply that infeasibility issues can arise in some state-of-the-art rules for table-tennis tournaments in Germany. There is a predefined amateur tournament series with more than 1000 tournaments per year that \emph{guarantees} the 9 to 16 participants 6 rounds in a Swiss-system tournament~\citep{httvCup}. We can show that a tournament with 10 participants might become infeasible after round 5, even if these rounds are scheduled according to the official tournament rules. Infeasible means that no matching of players who have not yet played each other is possible anymore, and thus no rule-consistent next round exists. For more details, see \citet{Kuntz:Thesis:2020}. Remark~\ref{rem:extend} shows that tournament designers could \emph{extend} the lower bound from 5 to 6 by choosing the fifth round carefully.
We generalize the problem to the famous social golfer problem in which not $2$, but $k\geq 3$ players compete in each match of the tournament, see~\citet{csplib:prob010}. We still assume that each pair of players can meet at most once during the tournament. A famous example of this setting is Kirkman's schoolgirl problem \citep{kirkman1850note}, in which fifteen girls walk to school in rows of three for seven consecutive days such that no two girls walk in the same row twice.
In addition to the theoretical interest in this question, designing golf tournaments with a fixed size of the golf groups that share a hole is a common problem in the state of the art design of golfing tournaments, see e.g.,~\citet{golf}. Another application of the social golfer problem is the Volleyball Nations league. Here 16 teams play a round-robin tournament. To simplify the organisation, they repeatedly meet in groups of four at a single location and play all matches within the group. Planning which teams to group together and invite to a single location is an example of the social golfer problem. See \citet{volleyball}.
In graph-theoretic terms, a round in the social golfer problem corresponds to a set of vertex-disjoint cliques of size $k$ that contains every vertex of the graph exactly once. In graph theory, a feasible round of the social golfer problem is called a clique-factor.
We address the question of how many rounds can be guaranteed if clique-factors, where each clique has a size of $k$, are greedily deleted from the complete graph, i.e., without any preplanning.
A closely related problem is the Oberwolfach problem. In the Oberwolfach problem, we seek to find seating assignments for multiple diners at round tables in such a way that two participants sit next to each other exactly once. Half-jokingly, we use the fact that seatings at Oberwolfach seminars are assigned greedily, and study the greedy algorithm for this problem. Instead of deleting clique-factors, the algorithm now iteratively deletes a set of vertex-disjoint cycles that contains every vertex of the graph exactly once. Such a set is called a cycle-factor. We restrict attention to the special case of the Oberwolfach problem in which all cycles have the same length $k$. We analyze how many rounds can be guaranteed if cycle-factors, in which each cycle has length $k$, are greedily deleted from the complete graph.
\subsection*{Our Contribution} Motivated by applications in sports, the social golfer problem, and the Oberwolfach problem, we study the greedy algorithm that iteratively deletes a clique, respectively cycle, factor in which all cliques/cycles have a fixed size $k$, from the complete graph. We prove the following main results for complete graphs with $n$ vertices for $n$ divisible by $k$.
\begin{itemize}
\item We can always delete $\lfloor n/(k(k-1))\rfloor$ clique-factors in which all cliques have a fixed size $k$ from the complete graph. In other words, the greedy procedure guarantees a schedule of $\lfloor n/(k(k-1))\rfloor$ rounds for the social golfer problem.
This provides a simple polynomial time $\frac{k-1}{2k^2-3k-1}$-approximation algorithm.
\item The bound of $\lfloor n/(k(k-1))\rfloor$ is tight, in the sense that it is the best possible bound we can guarantee for our greedy algorithm. To be more precise, we show that a tournament exists in which we can choose the first $\lfloor n/(k(k-1))\rfloor$ rounds in such a way that no additional feasible round exists. If a well-known conjecture by \citet{chen1994equitable} in graph theory is true (the conjecture is proven to be true for $k\leq 4$), then this is the unique example (up to symmetries) for which no additional round exists after $\lfloor n/(k(k-1))\rfloor$ rounds. In this case, we observe that for $n>k(k-1)$ we can always pick a different clique-factor in the last round such that an additional round can be scheduled.
\item We can always delete $\lfloor (n+4)/6\rfloor$ cycle-factors in which all cycles have a fixed size $k$, where $k\geq 3$, from the complete graph. This implies that our greedy approach guarantees to schedule $\lfloor (n+4)/6\rfloor$ rounds for the Oberwolfach problem. Moreover, the greedy algorithm can be implemented so that it is a polynomial time $\frac{1}{3+\epsilon}$-approximation algorithm for the Oberwolfach problem for any fixed $\epsilon>0$.
\item If El-Zahar's conjecture \citep{el1984circuits} is true (the conjecture is proven to be true for $k\leq 5$), we can increase the number of cycle-factors that can be deleted to $\lfloor (n+2)/4\rfloor$ for $k$ even and $\lfloor (n+2)/4-n/4k\rfloor$ for $k$ odd. Additionally, we show that this bound is essentially tight by distinguishing three different cases. In the first two cases, the bound is tight, i.e., an improvement would immediately disprove El-Zahar's conjecture. In the last case, a gap of one round remains.
\end{itemize}
\section{Preliminaries}
We follow standard notation in graph theory and for two graphs $G$ and $H$ we define an $H$-factor of $G$ as a union of vertex-disjoint copies of $H$ that contains every vertex of the graph $G$.
For some graph $H$ and $n \in\mathbb{N}_{\geq 2}$, a \emph{tournament} with $r$ rounds is defined as a tuple $T=(H_1,\ldots, H_r)$ of $H$-factors of the complete graph $K_n$ such that each edge of $K_n$ is in at most one $H$-factor. The \emph{feasibility graph} of a tournament $T=(H_1,\ldots, H_r)$ is a graph $G = K_n \backslash \bigcup_{i \leq r} H_i$ that contains all edges that are in none of the $H$-factors.
If the feasibility graph of a tournament $T$ is empty, we call $T$ a \emph{complete tournament}.
Motivated by Swiss-system tournaments and the importance of greedy algorithms in real-life optimization problems, we study the greedy algorithm that starts with an empty tournament and iteratively extends the current tournament by an arbitrary $H$-factor in every round until no $H$-factor remains in the feasibility graph. We refer to Algorithm \ref{algo:greedy} for a formal description.
\vspace{\baselineskip}
\begin{algorithm}[H]
\SetAlgoLined
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input{number of vertices $n$ and a graph $H$}
\Output{tournament $T$}
$G \leftarrow K_n$\\
$i \leftarrow 1$\\
\While{there is an $H$-factor $H_i$ in $G$}{
delete $H_i$ from $G$\\
$i \leftarrow i+1$\\
}
\Return $T=(H_1,\dots,H_{i-1})$
\caption{Greedy tournament scheduling}
\label{algo:greedy}
\end{algorithm}
\vspace{\baselineskip}
\subsection*{Greedy Social Golfer Problem}
In the greedy social golfer problem we consider tournaments with $H=K_k$, for $k\geq 2$, where $K_k$ is the complete graph with $k$ vertices and all $\frac{k(k-1)}{2}$ edges. The greedy social golfer problem asks for the minimum number of rounds of a tournament computed by Algorithm \ref{algo:greedy}, as a function of $n$ and $k$. The solution of the greedy social golfer problem is a guarantee on the number of $K_k$-factors that can be iteratively deleted from the complete graph without any preplanning.
For sports tournaments this corresponds to $n$ players being assigned to rounds with matches of size $k$ such that each player is in exactly one match per round and each pair of players meets at most once in the tournament.
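For illustration, in the special case $k=2$ a round is simply a perfect matching, and one run of Algorithm~\ref{algo:greedy} can be simulated directly. A small sketch in Python (using the external networkx package; since the greedy algorithm picks an \emph{arbitrary} factor in every round, this shows one possible run rather than the worst case analyzed below) is:
\begin{verbatim}
import networkx as nx

def greedy_rounds(n):
    # greedily delete perfect matchings (rounds with matches of size k = 2) from K_n
    G = nx.complete_graph(n)
    rounds = []
    while True:
        matching = nx.max_weight_matching(G, maxcardinality=True)
        if 2 * len(matching) < n:          # no perfect matching remains
            break
        rounds.append(sorted(matching))
        G.remove_edges_from(matching)      # every pair of players meets at most once
    return rounds
\end{verbatim}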
\subsection*{Greedy Oberwolfach Problem}
In the greedy Oberwolfach problem we consider tournaments with $H=C_k$, for $k\geq 3$, where $C_k$ is the cycle graph with $k$ vertices and $k$ edges. The greedy Oberwolfach problem asks for the minimum number of rounds calculated by Algorithm~\ref{algo:greedy}, given $n$ and $k$. This corresponds to a guarantee on the number of $C_k$-factors that can always be iteratively deleted from the complete graph without any preplanning.\\
Observe that for $k= 3$, both problems are equivalent.
To avoid trivial cases, we assume throughout the paper that $n$ is divisible by $k$. This is a necessary condition for the existence of a \emph{single} round. Usually, in real-life sports tournaments, additional dummy players are added to the tournament if $n$ is not divisible by $k$. The influence of dummy players on the tournament planning strongly depends on the sport. There are sports, e.g.,\ golf or karting, where matches can still be played with fewer than $k$ players, and others, such as beach volleyball or tennis doubles, where the match needs to be cancelled if one player is missing. Thus, the definition of a best possible round if $n$ is not divisible by $k$ depends on the application. We exclude the analysis of this situation from this work to ensure a broad applicability of our results and focus on the case $n \equiv 0 \mod k$.
\subsection{Related Literature}
For matches with $k=2$ players, \cite{rosa1982premature} studied the question of whether a given tournament can be extended to a round-robin tournament. This question was later answered by \cite{Csaba2016} for all sufficiently large $n$. They showed that even if we apply the greedy algorithm for the first $n/2-1$ rounds, the tournament can be extended to a complete tournament by choosing all subsequent rounds carefully.
\cite{cousins1975maximal} asked the question of how many rounds can be guaranteed to be played in a Swiss-system tournament for the special case $k=2$. They showed that $\frac{n}{2}$ rounds can be guaranteed. Our result of $\left\lfloor\frac{n}{k(k-1)}\right\rfloor$ rounds for the social golfer problem is a natural generalization of this result. \cite{rees1991spectrum} investigated in more detail after how many rounds a Swiss-system tournament can get stuck.
For a match size of $k\geq 2$ players, the original \emph{social golfer problem} with $n\geq 2$ players asks whether a complete tournament with $H=K_k$ exists. For $H=K_2$, such a complete tournament coincides with a round-robin tournament. Round-robin tournaments are known to exist for every even number of players. Algorithms to calculate such schedules have been known for more than a century, going back to \citet{schurig1886}. For a more recent survey on round-robin tournaments, we refer to \cite{rasmussen2008round}.
For $H=K_k$ and $k\geq 2$, complete tournaments are also known as resolvable balanced incomplete block designs (resolvable-BIBDs). To be precise, a \emph{resolvable-BIBD} with parameters $(n,k,1)$ is a collection of subsets (blocks) of a finite set $V$ with $|V|=n$ elements with the following properties:
\begin{enumerate}
\item Every pair of distinct elements $u,v$ from $V$ is contained in exactly one block.
\item Every block contains exactly $k$ elements.
\item The blocks can be partitioned into rounds $R_1, R_2, \ldots , R_r$ such that each element of $V$ is contained in exactly one block of each round.
\end{enumerate}
Notice that a round in a resolvable-BIBD corresponds to an $H$-factor in the social golfer problem.
Similar to the original social golfer problem, a resolvable-BIBD consists of $(n-1)/(k-1)$ rounds. For the existence of a resolvable-BIBD, the conditions $n \equiv 0 \mod{k}$ and $n-1 \equiv 0 \mod{k-1}$ are clearly necessary. For $k=3$, \citet{ray1971solution} proved that these two conditions are also sufficient. Later, \citet{hanani1972resolvable} proved the same result for $k=4$. In general, these two conditions are not sufficient (one of the smallest exceptions being $n=45$ and $k=5$), but \citet{ray1973existence} showed that they are asymptotically sufficient, i.e., for every $k$ there exists a constant $c(k)$ such that the two conditions are sufficient for every $n$ larger than $c(k)$. These results immediately carry over to the existence of a \emph{complete} tournament with $n$ players and $H=K_k$.
Closely related to our problem is the question of the existence of graph factorizations. An $H$-factorization of a graph $G$ is a collection of $H$-factors that exactly covers the whole graph $G$. For an overview of graph-theoretic results, we refer to \cite{yuster2007combinatorial}. \cite{condon2019bandwidth} looked at the problem of maximizing the number of $H$-factors when choosing rounds carefully. For our setting, their results imply that in a sufficiently large graph one can always schedule rounds such that the number of edges remaining in the feasibility graph is an arbitrarily small fraction of all edges. Notice that the result above assumes that we are able to preplan the whole tournament. In contrast, we plan rounds of a tournament in an online fashion depending on the results in previous rounds.
In greedy tournament scheduling, Algorithm~\ref{algo:greedy} greedily adds one round after another to the tournament, and thus \emph{extends} a given tournament step by step. The study of the existence of another feasible round in a given tournament with $H=K_k$ is related to the existence of an equitable graph-coloring. Given some graph $G=(V,E)$, an $\ell$-coloring
is a function $f: V \rightarrow \{1, \ldots, \ell \}$, such that $f(u) \neq f(v)$ for all edges $(u,v) \in E$. An \emph{equitable $\ell$-coloring} is an $\ell$-coloring, where the number of vertices in any two color classes differs by at most one, i.e., $|\{ v | f(v)=i \}| \in \{ \left\lfloor \frac{n}{\ell} \right\rfloor , \left\lceil \frac{n}{\ell} \right\rceil\}$ for every color $i \in \{1, \ldots , \ell \}$.
To relate equitable colorings of graphs to the study of the extendability of tournaments, we consider the complement graph $\bar{G}$ of the feasibility graph $G=(V,E)$, as defined by
$\bar{G}=K_n \backslash E$. Notice that a color class in an equitable coloring of the vertices of $\bar{G}$ corresponds to a clique in $G$. In an equitable coloring of $\bar{G}$ with $\frac{n}{k}$ colors, each color class has the same size, which is $k$. Thus, finding an equitable $\frac{n}{k}$-coloring in $\bar{G}$ is equivalent to finding a $K_k$-factor in $G$ and thus an extension of the tournament. Questions on the existence of an equitable coloring depending on the vertex degrees of a graph were already considered by \citet{erdos1964problem}, who posed a conjecture on the existence of equitable colorings in low-degree graphs, which was later proven by \citet{hajnal1970proof}. Their proof was simplified by \citet{kierstead2010fast}, who also gave a polynomial-time algorithm to find an equitable coloring. In general graphs, deciding the existence of clique-factors with clique size equal to $3$ \citep[][Sec.~3.1.2]{garey1979computers} or at least $3$ \citep{kirkpatrick1978completeness,kirkpatrick1983complexity,hell1984packings} is known to be NP-hard.
The maximization variant of the social golfer problem for $n$ players and $H=K_k$ asks for a schedule which lasts as many rounds as possible. It is mainly studied in the constraint programming community using heuristic approaches \citep{dotu2005scheduling, triska2012effective,triska2012improved, liu2019social}. Our results give lower bounds for the maximization variant using a very simple greedy algorithm.
For $n$ players and table sizes $k_1, \ldots, k_{\ell}$ with $n=k_1 + \ldots +k_{\ell}$, the (classical) \emph{Oberwolfach problem} can be stated as follows. Defining $\tilde{H} = \bigcup_{i\leq \ell} C_{k_i}$, the problem asks for the existence of a tournament of $n$ players with $H=\tilde{H}$ that has $(n-1)/2$ rounds. Note that the Oberwolfach problem does not ask for an explicit schedule but only for existence. While the general problem is still open, several special cases have been solved. Assuming $k=k_1=\ldots=k_{\ell}$, \citet{alspach1989oberwolfach} showed existence for all odd $k$ and all odd $n$ with $n \equiv 0 \mod{k}$. For $k$ even, \citet{alspach1989oberwolfach} and \citet{hoffman1991existence} analyzed a slight modification of the Oberwolfach problem and showed that, for all even $n$ with $n \equiv 0 \mod{k}$, there is a tournament such that the corresponding feasibility graph $G$ is not empty but equal to a perfect matching.
Recently, the Oberwolfach problem was solved for large $n$, see \cite{glock2021resolution}, and for small $n$, see \cite{salassa}.
\citet{liu2003equipartite} studied a variant of the Oberwolfach problem in bipartite graphs and gave conditions under which the existence of a complete tournament is guaranteed.
A different optimization problem inspired by finding feasible seating arrangements subject to some constraints is given by \cite{estanislaomeunier}.
The question of extendability of a given tournament with $H=C_k$ corresponds to the covering of the feasibility graph with cycles of length $k$. Covering graphs with cycles has been studied since \citet{petersen1891theorie}. The problem of finding a set of cycles of arbitrary lengths covering a graph (if one exists) is polynomially solvable \citep{edmonds1970matching}. However, if certain cycle lengths are forbidden, the problem is NP-complete \citep{hell1988restricted}.
\subsection{Example}
Consider the example of a tournament with $n=6$ and $H=K_2$ depicted in Figure \ref{fig:exa}. The coloring of the edges in the graph on the left represents three rounds $H_1, H_2, H_3$. The first round $H_1$ is depicted by the set of red edges. Each edge corresponds to a match. In the second round, all blue edges are played. The third round $H_3$ consists of all green edges. After these three rounds, the feasibility graph $G$ of the tournament is depicted on the right side of the figure. We cannot feasibly schedule a next round as there is no perfect matching in $G$. Equivalently, we can observe that the tournament with $3$ rounds cannot be extended, since there is no equitable $3$-coloring in $\bar{G}$, which is depicted on the left of Figure~\ref{fig:exa}.
\begin{figure}[t]
\begin{minipage}{0.55\textwidth}
\centering
\begin{tikzpicture}
\draw[thick,red] (-2,1) -- (-2,-1);
\draw[thick,green!50!black] (-2,1) -- (0,-2);
\draw[thick,blue] (-2,1) -- (2,-1);
\draw[thick,blue] (0,2) -- (-2,-1);
\draw[thick,red] (0,2) -- (0,-2);
\draw[thick,green!50!black] (0,2) -- (2,-1);
\draw[thick,green!50!black](2,1) -- (-2,-1);
\draw[thick,blue] (2,1) -- (0,-2);
\draw[thick,red] (2,1) -- (2,-1);
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (-2,1){};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (0,2){};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (2,1){};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (-2,-1){};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (0,-2){};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (2,-1){};
\node at (0,-2.5) {};
\end{tikzpicture}
\end{minipage}
\begin{minipage}{0.4\textwidth}
\centering
\begin{tikzpicture}
\draw[thick] (-2,1) -- (0,2);
\draw[thick] (-2,1) -- (2,1);
\draw[thick] (0,2) -- (2,1);
\draw[thick] (-2,-1) -- (0,-2);
\draw[thick] (-2,-1) -- (2,-1);
\draw[thick] (0,-2) -- (2,-1);
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (-2,1){};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (0,2){};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (2,1){};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (-2,-1){};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (0,-2){};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (2,-1){};
\node at (0,-2.5){};
\end{tikzpicture}
\end{minipage}
\caption{Consider a tournament with 6 participants and $H=K_2$. The left figure corresponds to three rounds, where each color denotes the matches of one round. The right figure depicts the feasibility graph after these three rounds.}\label{fig:exa}
\end{figure}
On the other hand there is a tournament with $n=6$ and $H=K_2$ that consists of $5$ rounds. The corresponding graph is depicted in Figure~\ref{fig:com}. Since this is a complete tournament, the example is a resolvable-BIBD with parameters $(6,2,1)$. The vertices of the graph correspond to the finite set $V$ of the BIBD and the colors in the figure correspond to the rounds in the BIBD. Note that these examples show that there is a complete tournament with $n=6$ and $H=K_2$, where $5$ rounds are played while the greedy algorithm can get stuck after $3$ rounds. In the remainder of the paper, we aim for best possible bounds on the number of rounds that can be guaranteed by using the greedy algorithm.
\begin{figure}[t]
\centering
\begin{tikzpicture}
\draw[thick,red] (-2,1) -- (0,2);
\draw[thick] (-2,1) -- (2,1);
\draw[very thick,yellow!90!black] (0,2) -- (2,1);
\draw[thick,red] (-2,-1) -- (0,-2);
\draw[thick] (-2,-1) -- (2,-1);
\draw[very thick,yellow!90!black] (0,-2) -- (2,-1);
\draw[very thick,yellow!90!black] (-2,1) -- (-2,-1);
\draw[thick,green!50!black] (-2,1) -- (0,-2);
\draw[thick,blue] (-2,1) -- (2,-1);
\draw[thick,blue] (0,2) -- (-2,-1);
\draw[thick] (0,2) -- (0,-2);
\draw[thick,green!50!black] (0,2) -- (2,-1);
\draw[thick,green!50!black](2,1) -- (-2,-1);
\draw[thick,blue] (2,1) -- (0,-2);
\draw[thick,red] (2,1) -- (2,-1);
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (-2,1){};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (0,2){};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (2,1){};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (-2,-1){};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (0,-2){};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (2,-1){};
\end{tikzpicture}
\caption{A complete tournament with 6 players and 5 rounds, in which each color represents the matches of a round.}\label{fig:com}
\end{figure}
\subsection{Outline}
The paper is structured as follows. We start with the analysis of Swiss-system tournaments to demonstrate our main ideas. To be more precise, Section \ref{sec:war} considers the setting of greedy tournament scheduling with $H=K_2$. Section \ref{sec:gsgp} then generalizes the main results for the greedy social golfer problem. Lastly, in Section \ref{sec:gop}, we obtain lower and upper bounds on the number of rounds for the greedy Oberwolfach problem.
\section{Warmup: Perfect Matchings}\label{sec:war}
Most sports tournaments consist of matches between two competing players. We therefore first consider the special case
of a tournament with $H=K_2$.
In this setting, the greedy social golfer problem boils down to iteratively deleting perfect matchings from the complete graph. Recall that Propositions \ref{prop:k=2} and \ref{prop:k=2l}, and Corollary \ref{corr:ndivbyfour} were already shown by \cite{cousins1975maximal}. For completeness, we have added the proofs.
First, we use Dirac's theorem to show that we can always greedily delete at least $\frac{n}{2}$ perfect matchings from the complete graph. Recall that we assume $n$ to be even to guarantee the existence of a single perfect matching.
\begin{proposition}\label{prop:k=2}
For each even $n\in\mathbb{N}$ and $H=K_2$, Algorithm~\ref{algo:greedy} outputs a tournament with at least $\frac{n}{2}$ rounds.
\end{proposition}
\begin{proof}
Algorithm~\ref{algo:greedy} starts with an empty tournament and extends it by one round in every iteration.
To show that Algorithm~\ref{algo:greedy} runs for at least $\frac{n}{2}$ iterations we consider the feasibility graph of the corresponding tournament. Recall that the degree of each vertex in a complete graph with $n$ vertices is $n-1$. In each round, the algorithm deletes a perfect matching and thus the degree of a vertex is decreased by $1$. After at most $\frac{n}{2}-1$ rounds, the degree of every vertex is at least $\frac{n}{2}$. By Dirac's theorem \citep{dirac1952some}, a Hamiltonian cycle exists. The existence of a Hamiltonian cycle implies the existence of a perfect matching by taking every second edge of the Hamiltonian cycle. So after at most $\frac{n}{2}-1$ rounds, the tournament can be extended and the algorithm does not terminate.
\end{proof}
Second, we prove that the bound of Proposition \ref{prop:k=2} is tight by showing that there are tournaments that cannot be extended after $\frac{n}{2}$ rounds.
\begin{proposition}\label{prop:k=2l}
There are infinitely many $n \in \mathbb{N}$ for which there exists a tournament that cannot be extended after $\frac{n}{2}$ rounds.
\end{proposition}
\begin{proof}
Choose $n$ such that $\frac{n}{2}$ is odd. We describe the chosen tournament by perfect matchings in the feasibility graph $G$. Given a complete graph with $n$ vertices, we partition the vertices into a set $A$ with $|A|=\frac{n}{2}$ and $V\setminus A$ with $|V\setminus A|=\frac{n}{2}$. We denote the players in $A$ by $1,\ldots, \frac{n}{2}$ and the players in $V\setminus A$ by $\frac{n}{2}+1,\ldots,n$.
In each round $r=1,\ldots,\frac{n}{2}$, player $i+\frac{n}{2}$ is scheduled in a match with player $i+r-1$ (modulo $\frac{n}{2}$) for all $i=1,\ldots,\frac{n}{2}$. After deleting these $\frac{n}{2}$ perfect matchings, the feasibility graph $G$ consists of two disjoint complete graphs of size $\frac{n}{2}$, as every player in $A$ has played against every player in $V\setminus A$. Given that $\frac{n}{2}$ is odd, no perfect matching exists and hence the tournament cannot be extended.
\end{proof}
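The construction in the proof above is easy to state programmatically. The following Python sketch (an illustration only, not part of the formal argument) builds the $\frac{n}{2}$ rounds and the remaining edges of the feasibility graph, which form two disjoint cliques of odd size $\frac{n}{2}$.
\begin{verbatim}
def stuck_rounds(n):
    # Rounds 0,...,m-1 of the construction above: the players of A are
    # numbered 0,...,m-1 and those of V\A are numbered m,...,n-1.
    # Requires m = n/2 to be odd.
    m = n // 2
    assert m % 2 == 1
    return [[frozenset({m + i, (i + r) % m}) for i in range(m)]
            for r in range(m)]

def remaining_edges(n, rounds):
    played = {e for rnd in rounds for e in rnd}
    return [frozenset({u, v}) for u in range(n) for v in range(u + 1, n)
            if frozenset({u, v}) not in played]

# For n = 6 the remaining edges are the two triangles on {0,1,2} and
# {3,4,5}; no perfect matching and hence no further round exists.
\end{verbatim}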
A natural follow-up question is to characterize those feasibility graphs that can be extended after $\frac{n}{2}$ rounds. Proposition \ref{prop:cha} answers this question and we essentially show that the provided example is the only graph structure that cannot be extended after $\frac{n}{2}$ rounds.
\begin{proposition}\label{prop:cha}
Let $T$ be a tournament of $\frac{n}{2}$ rounds with feasibility graph $G$ and its complement $\bar{G}$. Then $T$ cannot be extended if and only if $\bar{G} = K_{\frac{n}{2},\frac{n}{2}}$ and $\frac{n}{2}$ is odd.
\end{proposition}
Before we prove the proposition we present a result by \citet{chen1994equitable}, which the proof makes use of.
\subsubsection*{Chen-Lih-Wu theorem \citep{chen1994equitable}.}
Let $G$ be a connected graph with maximum degree $\Delta(G) \geq \frac{n}{2}$. If $G$ is different from $K_m$ and $K_{2m+1,2m+1}$ for all $m\geq 1$, then $G$ is equitable $\Delta(G)$-colorable.
\begin{proof}[Proof of Proposition \ref{prop:cha}.]
If the complement of the feasibility graph is $\bar{G} = K_{\frac{n}{2},\frac{n}{2}}$ with $\frac{n}{2}$ odd, we are exactly in the situation of the proof of Proposition~\ref{prop:k=2l}. To show equivalence, assume that either $\bar{G} \neq K_{\frac{n}{2},\frac{n}{2}}$ or $\frac{n}{2}$ is even.
By using the Chen-Lih-Wu Theorem, we show that in this case $\bar{G}$ is equitable $\frac{n}{2}$-colorable.
After $\frac{n}{2}$ rounds, we have $\Delta(\bar{G})=\frac{n}{2}$. We observe that $\bar{G}= K_n$ if and only if $n=2$ and in this case $\bar{G} = K_{1,1}$, a contradiction. Thus all conditions of the Chen-Lih-Wu theorem are fulfilled, and $\bar{G}$ is equitable $\frac{n}{2}$-colorable. An equitable $\frac{n}{2}$-coloring in $\bar{G}$ corresponds to a perfect matching in $G$ and hence implies that the tournament is extendable.
\end{proof}
\begin{corollary}
\label{corr:ndivbyfour}
For each $n \in \mathbb{N}$ divisible by four and $H=K_2$, Algorithm~\ref{algo:greedy} outputs a tournament with at least $\frac{n}{2}+1$ rounds.
\end{corollary}
\begin{remark}
\label{rem:extend}
By selecting the perfect matching in round $\frac{n}{2}$ carefully, there always exists a tournament with $\frac{n}{2}+1$ rounds.
\end{remark}
\begin{proof}
After $\frac{n}{2}-1$ rounds of a tournament $T$, the degree of every vertex in $G$ is at least $\frac{n}{2}$. By Dirac's theorem \citep{dirac1952some}, there is a Hamiltonian cycle in $G$. This implies that two edge-disjoint perfect matchings exist: one that takes every even edge of the Hamiltonian cycle and one that takes every odd edge of the Hamiltonian cycle. If we first extend $T$ by taking every even edge of the Hamiltonian cycle and then extend $T$ by taking every odd edge of the Hamiltonian cycle, we have a tournament of $\frac{n}{2}+1$ rounds.
\end{proof}
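The split of a Hamiltonian cycle into two edge-disjoint perfect matchings used above can be written down directly; the following minimal sketch takes the cycle as a list of vertices.
\begin{verbatim}
def matchings_from_cycle(cycle):
    # Split a Hamiltonian cycle (list of vertices, even length) into
    # the matching of 'even' edges and the matching of 'odd' edges.
    n = len(cycle)
    assert n % 2 == 0
    edges = [frozenset({cycle[i], cycle[(i + 1) % n]}) for i in range(n)]
    return edges[0::2], edges[1::2]

# matchings_from_cycle([0, 1, 2, 3, 4, 5]) yields
# [{0,1}, {2,3}, {4,5}] and [{1,2}, {3,4}, {0,5}]
\end{verbatim}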
\section{The Greedy Social Golfer Problem}\label{sec:gsgp}
We generalize the previous results to $k\geq 3$. This means we analyze tournaments with $n$ participants and $H=K_k$. Depending on $n$ and $k$, we provide tight bounds on the number of rounds that can be scheduled greedily, i.e., by using Algorithm~\ref{algo:greedy}.
Remember that we assume that $n$ is divisible by $k$.
\begin{theorem}\label{thm:k>2}
For each $n \in \mathbb{N}$ and $H= K_k$, Algorithm~\ref{algo:greedy} outputs a tournament with at least $\lfloor\frac{n}{k(k-1)}\rfloor$ rounds.
\end{theorem}
Before we continue with the proof, we first state a result from graph theory. In our proof, we will use the Hajnal-Szemeredi theorem and adapt it such that it applies to our setting.
\subsubsection*{Hajnal-Szemeredi Theorem \citep{hajnal1970proof}.}
Let $G$ be a graph with $n\in\mathbb{N}$ vertices and maximum vertex degree $\Delta(G)\leq \ell-1$. Then $G$ is equitable $\ell$-colorable.
\begin{proof}[Proof of Theorem \ref{thm:k>2}.]
We start by proving the lower bound on the number of rounds. Assume for the sake of contradiction that there are $n \in \mathbb{N}$ and $k \in \mathbb{N}$ such that the greedy algorithm for $H=K_k$ terminates with a tournament $T$ with $r\leq \lfloor\frac{n}{k(k-1)}\rfloor -1$ rounds. We will use the feasibility graph $G$ corresponding to $T$. Recall that the degree of a vertex in a complete graph with $n$ vertices is $n-1$. For each of the $K_k$-factors $H_1, \dots, H_r$, every vertex loses $k-1$ edges. Thus, every vertex in $G$ has degree
\[n-1 - r(k-1) \geq n-1-\left(\Big\lfloor\frac{n}{k(k-1)}\Big\rfloor -1\right)(k-1) \geq n-1-\frac{n}{k}+k-1\;.\]
We observe that each vertex in the complement graph $\bar{G}$ has degree at most $\frac{n}{k} - k +1$. Using the Hajnal-Szemeredi theorem with $\ell = \frac{n}{k}$, we obtain the existence of an equitable $\frac{n}{k}$-coloring of $\bar{G}$ in which all color classes have size $k$. Since there are no edges between vertices of the same color class in $\bar{G}$, they form a clique in $G$. Thus, there exists a $K_k$-factor in $G$, which is a contradiction to the assumption that Algorithm \ref{algo:greedy} terminated.
This implies that $r>\lfloor\frac{n}{k(k-1)}\rfloor -1$, i.e., the total number of rounds is at least $\lfloor\frac{n}{k(k-1)}\rfloor$.
\end{proof}
\begin{remark}
\citet{kierstead2010fast} showed that finding a clique-factor can be done in polynomial time if the minimum vertex degree is at least $\frac{n(k-1)}{k}$.
\end{remark}
Let OPT be the maximum possible number of rounds of a tournament. We conclude that the greedy algorithm is a constant-factor approximation algorithm for the social golfer problem.
\begin{corollary}
Algorithm~\ref{algo:greedy} outputs at least $\frac{1}{k}\text{OPT}-1$ rounds for the social golfer problem. Thus it is a $\frac{k-1}{2k^2-3k-1}$-approximation algorithm for the social golfer problem.
\end{corollary}
\begin{proof}
The first statement follows directly from Theorem \ref{thm:k>2} and the fact that $\text{OPT} \leq \frac{n-1}{k-1}$.
For proving the second statement, we first consider the case $n\leq 2k(k-1)-k$. Note that the algorithm always outputs at least one round. OPT is upper bounded by $\frac{n-1}{k-1} \leq \frac{2k (k-1)- k-1}{k-1}=\frac{2k^2-3k-1}{k-1}$, which implies the approximation factor.
For $n\geq 2k(k-1)-k$, observe that the greedy algorithm guarantees to output $\lfloor \frac{n}{k(k-1)} \rfloor$ rounds in polynomial time by Theorem \ref{thm:k>2}. This yields
\begin{align*}
\frac{\left \lfloor \frac{n}{k(k-1)}\right\rfloor}{\frac{n-1}{k-1}} &\geq \frac{\frac{n - \left(k (k-1)-k\right)}{k(k-1)}}{\frac{n-1}{k-1}}\geq \frac{\frac{2k(k-1)-k - k (k-1)+k}{k(k-1)}}{\frac{2k(k-1)-k-1}{k-1}}=\frac{k-1}{2k^2-3k-1},
\end{align*}
where the first inequality follows since we round down by at most $\frac{k(k-1)-k}{k(k-1)}$ and the second inequality follows since the second term is increasing in $n$ for $k \geq 3$.
\end{proof}
Our second main result on greedy tournament scheduling with $H=K_k$ shows that the bound of Theorem \ref{thm:k>2} is tight.
\begin{theorem}
There are infinitely many $n \in \mathbb{N}$ for which there exists a tournament that cannot be extended after $\lfloor\frac{n}{k(k-1)}\rfloor$ rounds.
\label{lowerboundexample_k>2}
\end{theorem}
\begin{proof}
We construct a tournament with $n=j(k(k-1))$ participants for some $j$ large enough to be chosen later. We will define necessary properties of $j$ throughout the proof and argue in the end that there are infinitely many possible integral choices for $j$. The tournament we will construct has $\lfloor\frac{n}{k(k-1)}\rfloor$ rounds and we will show that it cannot be extended. Note that $\lfloor\frac{n}{k(k-1)}\rfloor = \frac{n}{k(k-1)}$.
The proof is based on a step-by-step modification of the feasibility graph $G$. We will start with the complete graph $K_n$ and describe how to delete $\frac{n}{k(k-1)}$ $K_k$-factors such that the resulting graph does not contain a $K_k$-factor. This is equivalent to constructing a tournament with $\lfloor\frac{n}{k(k-1)}\rfloor$ rounds that cannot be extended.
Given a complete graph with $n$ vertices, we partition the vertex set $V$ into two sets, a set $A$ with $\ell=\frac{n}{k}+1$ vertices and a set $V \backslash A$ with $n-\ell$ vertices. We will choose all $\frac{n}{k(k-1)}$ $K_k$-factors in such a way that no edge $\{a,b\}$ with $a\in A$ and $b\notin A$ is deleted, i.e., each $K_k$ is either entirely in $A$ or entirely in $V\setminus A$. We will explain below that this is possible. Since a vertex in $A$ has $\frac{n}{k}$ neighbours in $A$ and $k-1$ of them are deleted in every $K_k$-factor, all edges within $A$ are deleted after deleting $\frac{n}{k(k-1)}$ $K_k$-factors.
We now first argue that after deleting these $\frac{n}{k(k-1)}$ $K_k$-factors, no $K_k$-factor exists. Assume that there exists another $K_k$-factor. In this case, each vertex in $A$ forms a clique with $k-1$ vertices of $V \backslash A$. However, since $(k-1)\cdot(\frac{n}{k}+1)>\frac{(k-1)n}{k}-1=|V \backslash A|$ there are not enough vertices in $V \backslash A$, a contradiction to the existence of the $K_k$-factor.
It remains to show that there are $\frac{n}{k(k-1)}$ $K_k$-factors that do not contain an edge $\{a,b\}$ with $a\in A$ and $b \notin A$. We start by showing that $\frac{n}{k(k-1)}$ $K_k$-factors can be found within $A$. \citet{ray1973existence} showed that given $k'\geq 2$ there exists a constant $c(k')$ such that if $n'\geq c(k')$ and $n' \equiv k' \mod k'(k'-1)$, then a resolvable-BIBD with parameters $(n',k',1)$ exists.
By choosing $k'=k$ and $n' = \ell$ with $j= \lambda \cdot k +1$ for some $\lambda \in \mathbb{N}$ large enough, we establish $\ell\geq c(k)$, where $c(k)$ is defined by \citet{ray1973existence}, and we get
\[|A| = \ell = \frac{n}{k}+1 = j(k-1)+1 = (\lambda k +1)(k-1) + 1 = k+ \lambda k (k-1)\;.\]
Thus, a resolvable-BIBD with parameters $(\ell,k,1)$ exists, and there is a complete tournament for $\ell$ players with $H=K_k$, i.e., we can find $\frac{n}{k(k-1)}$ $K_k$-factors in $A$.
It remains to show that we also find $\frac{n}{k(k-1)}$ $K_k$-factors in $V \setminus A$. We define a tournament, which we call the \emph{shifting tournament}, as follows. We arbitrarily write the names of the players in $V \setminus A$ into a table of size $k\times (n-\ell)/k$. Each column of the table corresponds to a $K_k$ and the table to a $K_k$-factor in $V \setminus A$. By rearranging the players we get a sequence of tables, each corresponding to a $K_k$-factor. To construct the next table from the preceding one, for each row $i$, all players move $i-1$ steps to the right (modulo $(n-\ell)/k$).
We claim that this procedure results in $\frac{n}{k(k-1)}$ $K_k$-factors that do not share an edge. First, notice that the step difference between any two players in two rows $i \neq i'$ is at most $k-1$, where we have equality for rows $1$ and $k$. However, we observe that $(n-\ell)/k$ is not divisible by $(k-1)$ since $n/k$ is divisible by $k-1$ by definition, whereas $\ell/k$ is not divisible by $k-1$ since $\ell/(k(k-1))=1/(k-1)+\lambda$ and this expression is not integral. Thus, a player in row $1$ can only meet a player in row $k$ again after at least $2\frac{n-\ell}{k(k-1)}$ rounds.
Since $2\frac{n-\ell}{k(k-1)}\geq\frac{n}{k(k-1)}$ if $n\geq \frac{2k}{k-2}$, the condition is satisfied for $n$ sufficiently large.
Similarly, we have to check that two players in two rows with a relative distance of at most $k-2$ do not meet more than once. Since $\frac{n-\ell}{k(k-2)}\geq\frac{n}{k(k-1)}$ if $n\geq k^2-k$, the condition is also satisfied for $n$ sufficiently large.
Observe that there are infinitely many $n$ and $\ell$ such that $\ell=\frac{n}{k}+1$, $n$ is divisible by $k(k-1)$ and $\ell \equiv k \mod{k(k-1)}$ and thus the result follows for sufficiently large $n$.
\end{proof}
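The shifting tournament used in the proof can also be generated explicitly. The following Python sketch is only an illustration of the construction under the assumptions made in the proof; the number of conflict-free rounds it yields depends on $n$, $\ell$ and $k$ as analyzed above, and the second function is a simple check that no pair of players meets twice.
\begin{verbatim}
from itertools import combinations

def shifting_rounds(players, k, num_rounds):
    # Arrange the players in a k x (len(players)/k) table; in every
    # round, row i (0-indexed) is shifted i further steps to the right,
    # and the columns of the table form the cliques of the K_k-factor.
    cols = len(players) // k
    table = [players[i * cols:(i + 1) * cols] for i in range(k)]
    return [[frozenset(table[i][(c - r * i) % cols] for i in range(k))
             for c in range(cols)]
            for r in range(num_rounds)]

def some_pair_meets_twice(rounds):
    seen = set()
    for factor in rounds:
        for clique in factor:
            for pair in combinations(sorted(clique), 2):
                if pair in seen:
                    return True
                seen.add(pair)
    return False
\end{verbatim}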
We turn our attention to the problem of characterizing tournaments that are not extendable after $\lfloor\frac{n}{k(k-1)}\rfloor$ rounds. Assuming the Equitable $\Delta$-Coloring Conjecture (E$\Delta$CC) is true, we give an exact characterization of the feasibility graphs of tournaments that cannot be extended after $\lfloor\frac{n}{k(k-1)}\rfloor$ rounds. The existence of an instance not fulfilling these conditions would immediately disprove the E$\Delta$CC.
Furthermore, this characterization allows us to guarantee $\lfloor\frac{n}{k(k-1)}\rfloor+1$ rounds in every tournament when the last two rounds are chosen carefully.
\subsubsection*{Equitable $\Delta$-Coloring Conjecture \citep{chen1994equitable}.}
Let $G$ be a connected graph with maximum degree $\Delta(G) \leq \ell$. Then $G$ is not equitable $\ell$-colorable if and only if one of the following three cases occurs:
\begin{enumerate}
\item[(i)] $G=K_{\ell+1}$.
\item[(ii)] $\ell=2$ and $G$ is an odd cycle.
\item[(iii)] $\ell$ odd and $G=K_{\ell,\ell}$.
\end{enumerate}
\subsubsection*{}The conjecture was first stated by \citet{chen1994equitable} and is proven for $|V|=k\cdot\ell$ and $k=2,3,4$. See the Chen-Lih-Wu theorem for $k=2$ and \citet{kierstead2015refinement} for $k=3,4$. Both results make use of Brooks' theorem \citep{brooks1941coloring}. For $k>4$, the conjecture is still open.
\begin{proposition}
If E$\Delta$CC{} is true, a tournament with $\lfloor\frac{n}{k(k-1)} \rfloor$ rounds cannot be extended if and only if $K_{\frac{n}{k}+1}$ is a subgraph of the complement graph $\bar{G}$.
\label{prop:charOneRoundMore}
\end{proposition}
Before we start the proof we state the following claim, which we will need in the proof.
\begin{claim}
\label{claim:connectedcomponents}
Let $G$ be a graph with $|G|$ vertices and let $m$ be such that $|G| \equiv 0 \mod{m}$. Given an equitable $m$-coloring for every connected component $G_i$ of $G$, there is an equitable $m$-coloring for $G$.
\end{claim}
\begin{proof}
Let $G$ consist of connected components $G_1, \dots, G_c$. In every connected component $G_i$, $i \in \{1, \dots, c\}$, there are $\ell_i \equiv |G_i| \mod{m}$ \emph{large color classes}, i.e., color classes with $\lfloor\frac{|G_i|}{m}\rfloor +1$ vertices and $m-\ell_i$ \emph{small color classes}, i.e., color classes with $\lfloor\frac{|G_i|}{m}\rfloor$ vertices. First note that from $\ell_i \equiv|G_i| \mod{m}$, it follows that $(\sum \ell_i) \equiv (\sum |G_i|) \equiv |G| \equiv 0 \mod{m}$, i.e., $\sum \ell_i$ is divisible by $m$.
In the remainder of the proof, we will argue that we can recolor the color classes in the connected components to new color classes $\{1,\ldots, m\}$ that form an equitable coloring. The proof is inspired by McNaughton's wrap-around rule \cite{mcnaughton1959scheduling} from scheduling. Pick some connected component $G_i$ and assign the $\ell_i$ large color classes to the new colors $\{1, \dots, \ell_i\}$. Choose some next connected component $G_j$ with $j\neq i$ and assign the $\ell_j$ large color classes to the new colors $\{(\ell_{i} + 1) \mod m, \dots, (\ell_{i} + \ell_j) \mod{m} \}$. Proceed analogously with the remaining connected components. Note that $\ell_i<m$ for all $i \in \{1, \dots, c\}$, thus we assign at most one large color class from each component to every new color. Finally, for each connected component, we assign a small color class to every new color that did not receive a large color class from this connected component in the described procedure.
Each new color class contains exactly $\frac{\sum \ell_i}{m}$ large color classes and $c - \frac{\sum \ell_i}{m}$ small color classes and has thus the same number of vertices.
This gives us an $m$-equitable coloring of $G$.
\end{proof}
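The wrap-around recoloring in the proof of \Cref{claim:connectedcomponents} can be phrased as a short procedure. The sketch below only illustrates the bookkeeping; it assumes that each component coloring is given as a list of $m$ color classes and that $|G|$ is divisible by $m$.
\begin{verbatim}
def merge_equitable_colorings(component_colorings, m):
    # component_colorings: one list of m color classes (lists of
    # vertices) per connected component.  The large classes of each
    # component are placed in a wrap-around fashion, so every new color
    # receives exactly one class per component and, over all
    # components, the same number of large classes.
    new_classes = [[] for _ in range(m)]
    offset = 0  # next new color that should receive a large class
    for classes in component_colorings:
        num_large = sum(len(c) for c in classes) % m
        classes = sorted(classes, key=len, reverse=True)  # large first
        for j, cls in enumerate(classes):
            new_classes[(offset + j) % m].extend(cls)
        offset = (offset + num_large) % m
    return new_classes
\end{verbatim}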
\begin{proof}[Proof of Proposition~\ref{prop:charOneRoundMore}.]
First, assume that $K_{\frac{n}{k}+1}$ is a subgraph of $\bar{G}$, the complement of the feasibility graph. We will show that the tournament is not extendable. To construct an additional round, on the one hand at most one vertex from the complete graph $K_{\frac{n}{k}+1}$ can be in each clique. On the other hand, there are only $\frac{n}{k}$ cliques in a round which directly implies that the tournament is not extendable.
Second, assume that the tournament cannot be extended after $\lfloor \frac{n}{k(k-1)} \rfloor$ rounds. We will show that, assuming the E$\Delta$CC{}, $K_{\frac{n}{k}+1}$ is a subgraph of $\bar{G}$. If there is an equitable $\frac{n}{k}$-coloring for every connected component, then by \Cref{claim:connectedcomponents} there is an equitable $\frac{n}{k}$-coloring of $\bar{G}$ and thus also a next round of the tournament. This is a contradiction to the assumption. Thus, there is a connected component $\bar{G}_i$ that is not equitable $\frac{n}{k}$-colorable.
After $\lfloor \frac{n}{k(k-1)} \rfloor$ rounds, the degree of all vertices $v$ in $\bar{G}_i$ is $\Delta(\bar{G})=(k-1)\lfloor\frac{n}{k(k-1)}\rfloor \leq \frac{n}{k}$. By E$\Delta$CC for $\ell =\frac{n}{k}$, one of the following three cases occur: (i) $\bar{G}_i=K_{\frac{n}{k} + 1}$, or (ii) $\frac{n}{k}=2$ and $\bar{G}_i$ is an odd cycle, or (iii) $\frac{n}{k}$ is odd and $\bar{G}_i=K_{\frac{n}{k}, \frac{n}{k}}$. We will show that (ii) and (iii) cannot occur, thus $\bar{G}_i=K_{\frac{n}{k} + 1}$, which will finish the proof.
Assume that (ii) occurs, i.e., $\frac{n}{k}=2$ and $\bar{G}_i$ is an odd cycle. Since we assume $k\geq 3$ in this section, an odd cycle can only be formed from a union of complete graphs $K_k$ if there is only one round with $k=3$. Thus, we have that $n=6$. In this case, (ii) reduces to (i) because $\bar{G}_i = K_{3} = K_{\frac{n}{k}+1}$.
Next, assume that (iii) occurs, i.e., $\frac{n}{k}$ is odd and $\bar{G}_i=K_{\frac{n}{k}, \frac{n}{k}}$. Given that $k \geq 3$, we will derive a contradiction. Since $k\geq 3$, every clique of size $k$ contains an odd cycle. This implies $\bar{G}_i$ contains an odd cycle, contradicting that $\bar{G}_i$ is bipartite.
\end{proof}
Note that any tournament with $H=K_k$ and $\lfloor \frac{n}{k(k-1)}\rfloor$ rounds which does not satisfy the condition in \Cref{prop:charOneRoundMore} would disprove the E$\Delta$CC{}.
\begin{proposition}
Let $n>k(k-1)$. If E$\Delta$CC{} is true, then by choosing round $\lfloor \frac{n}{k(k-1)}\rfloor$ carefully, there always exists a tournament with $\lfloor \frac{n}{k(k-1)}\rfloor + 1$ rounds.
\end{proposition}
\begin{proof}
A tournament with $\lfloor \frac{n}{k(k-1)}\rfloor$ rounds is either extendable or, by \Cref{prop:charOneRoundMore}, at least one connected component of the complement of the feasibility graph is equal to $K_{\frac{n}{k}+1}$. In the former case, we are done. So assume the latter case. Denote the connected components that are equal to $K_{\frac{n}{k}+1}$
by $\bar{G}_1, \ldots , \bar{G}_c$. First, we shorten the tournament by eliminating the last round and then extend it by two other rounds.
First of all, notice that to end up with $\bar{G}_i=K_{\frac{n}{k}+1}$ after $\lfloor \frac{n}{k(k-1)}\rfloor$ rounds, all matches of the players in $\bar{G}_i$ need to be scheduled entirely inside $\bar{G}_i$. The reason for this is that $\bar{G}_i$ has $\frac{n^2}{2k^2}+ \frac{n}{2k}$ edges, which is the maximum number of edges that can arise from $\lfloor \frac{n}{k(k-1)}\rfloor$ rounds with $\frac{n}{k}+1$ players.
Clearly, the last round of the original tournament corresponds to a $K_k$-factor in the feasibility graph of the shortened tournament. By the assumed structure of the feasibility graph, all cliques $K_k$ are either completely within $\bar{G}_i$, $i \in \{1,\dots,c\}$ or completely within $V \setminus \bigcup_{i \in \{ 1, \ldots , c\}} \bar{G}_i$. Thus, for each $i \in \{1,\dots,c\}$, all edges between $\bar{G}_i$ and $V \setminus \bar{G}_i$ are not present in the complement of the feasibility graph.
If $c=1$, select a vertex $v_1 \in \bar{G}_1$ and $v_2 \in V \setminus \bar{G}_1$. Exchange these vertices to get a $K_k$-factor with which the shortened tournament is extended. More precisely, $v_1$ is paired with the former clique of $v_2$ and vice versa, while all remaining cliques stay the same.
Since $k<\frac{n}{k}+1$ by assumption, this ensures that there is no set of $\frac{n}{k}+1$ vertices for which we have only scheduled matches within this group. Thus, after extending the tournament, no connected component in the complement of the feasibility graph corresponds to $K_{\frac{n}{k}+1}$.
By \Cref{prop:charOneRoundMore}, the tournament can be extended to have $\lfloor \frac{n}{k(k-1)}\rfloor + 1$ rounds.
If $c>1$, we select a vertex $v_i$ from each $\bar{G}_i$ for $i \in \{ 1,\ldots , c\}$.
We exchange the vertices in a cycle to form new cliques, i.e., $v_i$ is now paired with the vertices in the old clique of $v_{i+1}$ for all $i \in \{1, \dots, c\}$, where $v_{c+1}=v_1$. By adding this new $K_k$-factor, we again ensure that there is no set of $\frac{n}{k}+1$ vertices for which we have only scheduled matches within this group. By applying \Cref{prop:charOneRoundMore} we can extend the tournament for another round, which finishes the proof.
\end{proof}
\section{The Greedy Oberwolfach Problem} \label{sec:gop}
In this section we consider tournaments with $H=C_k$ for $k \geq 3$. Depending on the number of participants $n$ and the cycle length $k$, we derive bounds on the number of rounds that can be scheduled greedily in such a tournament.
Before we continue with the theorem, we first state two graph-theoretic results and a conjecture.
\subsubsection*{Aigner-Brandt Theorem \citep{aigner1993embedding}.}
Let $G$ be a graph with $n$ vertices and minimum degree $\delta(G) \geq \frac{2n-1}{3}$. Then $G$ contains any graph $H$ with at most $n$ vertices and maximum degree $\Delta(H)= 2$ as a subgraph.
\subsubsection*{Alon-Yuster Theorem \citep{alon1996h}.}
For every $\epsilon>0$ and for every $k\in\mathbb{N}$, there exists an $n_0=n_0(\epsilon,k)$ such that for every graph $H$ with $k$ vertices and for every $n>n_0$, any graph $G$ with $nk$ vertices and minimum degree $\delta(G)\geq \left(\frac{\chi(H)-1}{\chi(H)}+\epsilon\right)nk$ has an $H$-factor that can be computed in polynomial time. Here, $\chi(H)$ denotes the chromatic number of $H$, i.e., the smallest possible number of colors for a vertex coloring of $H$.
\subsubsection*{El-Zahar's Conjecture \citep{el1984circuits}.}
Let $G$ be a graph with $n=k_1+\ldots+k_{\ell}$ vertices. If $\delta(G)\geq \lceil \frac{1}{2} k_1 \rceil + \ldots + \lceil \frac{1}{2} k_\ell \rceil$, then $G$ contains $\ell$ vertex-disjoint cycles of lengths $k_1, \ldots, k_\ell$.
El-Zahar's Conjecture is proven to be true for $k_1=\ldots=k_{\ell}=3$ \citep{corradi1963maximal}, $k_1=\ldots=k_{\ell}=4$ \citep{wang2010proof}, and $k_1=\ldots=k_{\ell}=5$ \citep{wang2012disjoint}.
\begin{theorem}\label{thm:obe1}
Let $H=C_k$. Then Algorithm~\ref{algo:greedy} outputs a tournament with at least
\begin{enumerate}
\item $\lfloor\frac{n+4}{6}\rfloor$ rounds for all $n \in \mathbb{N}$\;,
\item $\lfloor\frac{n+2}{4}-\epsilon\cdot n \rfloor$ rounds for $n$ large enough, $k$ even and for fixed $\epsilon>0$\;.
\end{enumerate}
If El-Zahar's conjecture is true, the number of rounds improves to $\lfloor\frac{n+2}{4}\rfloor$ for $k$ even and $\lfloor\frac{n+2}{4}-\frac{n}{4k}\rfloor$ for $k$ odd and all $n \in \mathbb{N}$.
\end{theorem}
\begin{proof}
\textbf{Statement 1.} Recall that Algorithm \ref{algo:greedy} starts with the empty tournament and the corresponding feasibility graph is the complete graph, where the degree of every vertex is $n-1$. In each iteration of the algorithm, a $C_k$-factor is deleted from the feasibility graph and thus every vertex loses $2$ edges.
We observe that as long as the constructed tournament has at most $\lfloor\frac{n-2}{6}\rfloor$ rounds, the degree of every vertex in the feasibility graph is at least $n-1-\lfloor\frac{n-2}{3}\rfloor \geq \frac{2n-1}{3}$.
Since a $C_k$-factor with $n$ vertices has maximum degree $2$, by the Aigner-Brandt theorem $G$ contains a $C_k$-factor. It follows that the algorithm runs for another iteration. In total, the number of rounds of the tournament is at least $\lfloor\frac{n-2}{6}\rfloor+1 = \lfloor\frac{n+4}{6}\rfloor$.
\textbf{Statement 2.} Assume $k$ is even. We have that the chromatic number $\chi(C_k)=2$. As long as Algorithm~\ref{algo:greedy} runs for at most $\lfloor\frac{n-2}{4}-\epsilon\cdot n\rfloor$ iterations, the degree of every vertex in the feasibility graph is at least $n-1-2 \cdot \lfloor\frac{n-2}{4}-\epsilon\cdot n\rfloor\geq n-1-\frac{n-2}{2}+2\epsilon\cdot n = \frac{n}{2}+2\epsilon\cdot n$. Hence by the Alon-Yuster theorem with $k'=k$, $n'=\frac{n}{k}$ and $\epsilon'=2\epsilon$, a $C_k$-factor exists for $n$ large enough and thus another iteration is possible. This implies that Algorithm~\ref{algo:greedy} is guaranteed to construct a tournament with $\lfloor\frac{n-2}{4}-\epsilon\cdot n\rfloor + 1$ rounds.
\textbf{Statement El-Zahar, $k$ even.} As long as Algorithm~\ref{algo:greedy} runs for at most $\lfloor\frac{n-2}{4}\rfloor$ iterations, the degree of every vertex in the feasibility graph is at least $n-1-2 \cdot \lfloor\frac{n-2}{4}\rfloor\geq n-1-\frac{n-2}{2} = \frac{n}{2}$. Hence from El-Zahar's conjecture with $k_1 = k_2 = \dots = k_\ell = k$ and $\ell=\frac{n}{k}$, we can deduce that a $C_k$-factor exists as $\frac{k}{2}\cdot \frac{n}{k}=\frac{n}{2}$, and thus another iteration is possible. This implies that Algorithm~\ref{algo:greedy} is guaranteed to construct a tournament with $\lfloor\frac{n-2}{4}\rfloor + 1$ rounds.
\textbf{Statement El-Zahar, $k$ odd.} As long as Algorithm~\ref{algo:greedy} runs for at most $\lfloor\frac{n-2}{4}-\frac{n}{4k}\rfloor$ iterations, the degree of every vertex in the feasibility graph is at least $n-1-\frac{n-2}{2}+\frac{n}{2k} = \frac{n}{2}+ \frac{n}{2k}$. Hence from El-Zahar's conjecture with $k_1 = k_2 = \dots = k_\ell = k$ and $\ell=\frac{n}{k}$, we can deduce that a $C_k$-factor exists as $\frac{k+1}{2}\cdot \frac{n}{k}=\frac{n}{2}+ \frac{n}{2k}$, and thus the constructed tournament can be extended by one more round. This implies that the algorithm outputs a tournament with at least $\lfloor\frac{n-2}{4}-\frac{n}{4k}\rfloor +1$ rounds.
\end{proof}
\begin{proposition}\label{prop:obe2}
Let $H=C_k$ for fixed $k$. Algorithm~\ref{algo:greedy} can be implemented such that it runs in polynomial time for at least
\begin{enumerate}
\item $\lfloor\frac{n+2}{4}-\epsilon\cdot n \rfloor$ rounds, for $k$ even and fixed $\epsilon>0$, or, in the case of small $n$, stops earlier if no additional round is possible\;,
\item $\lfloor\frac{n+3}{6}-\epsilon \cdot n\rfloor$ rounds, for $k$ odd, and fixed $\epsilon>0$\;.
\end{enumerate}
\end{proposition}
\begin{proof}
\textbf{Case 1.} Assume $k$ is even. By the Alon-Yuster theorem, analogously to Case 2 of Theorem~\ref{thm:obe1}, the first $\lfloor\frac{n+2}{4}-\epsilon\cdot n \rfloor$ rounds exist and can be computed in polynomial time, given that $n>n_0$ for some $n_0$ that depends on $\epsilon$ and $k$. Since $\epsilon$ and $k$ are assumed to be constant, $n_0$ is constant. By enumerating all possibilities in the case $n\leq n_0$, we can bound the running time for all $n \in \mathbb{N}$ by a polynomial function in $n$. Note that the Alon-Yuster theorem only implies existence of $\lfloor\frac{n+2}{4}-\epsilon \cdot n\rfloor$ rounds if $n>n_0$, so it might be that the algorithm stops earlier, but in polynomial time, for $n\leq n_0$.
\textbf{Case 2.} Assume $k$ is odd. First note that the existence of the first $\lfloor\frac{n-3}{6}-\epsilon\cdot n\rfloor \leq \lfloor \frac{n+4}{6} \rfloor$ rounds follows from Theorem~\ref{thm:obe1}. Observe that for odd cycles $C_k$ the chromatic number is $\chi(C_k)=3$. As long as Algorithm~\ref{algo:greedy} runs for at most $\lfloor\frac{n-3}{6}-\epsilon\cdot n\rfloor$ iterations, the degree of every vertex in the feasibility graph is at least $n-1-2 \cdot \lfloor\frac{n-3}{6}-\epsilon\cdot n\rfloor\geq n-1-\frac{n-3}{3}+2\epsilon\cdot n = \frac{2n}{3}+2\epsilon\cdot n$. Hence by the Alon-Yuster theorem with $\epsilon'=2\epsilon$, there is an $n_0$ dependent on $k$ and $\epsilon$ such that a $C_k$-factor can be computed in polynomial time for all $n>n_0$. Since $\epsilon$ and $k$ are assumed to be constant, $n_0$ is constant. By enumerating all possibilities for $n\leq n_0$ we can bound the running time of the algorithm by a polynomial function in $n$ for all $n \in \mathbb{N}$.
\end{proof}
\begin{corollary}
For any fixed $\epsilon>0$, Algorithm~\ref{algo:greedy} is a $\frac{1}{3+\epsilon} $-approximation algorithm for the Oberwolfach problem.
\end{corollary}
\begin{proof}
Fix $\epsilon > 0$.
\paragraph{Case 1} If $n \geq \frac{12}{\epsilon} +6$, we choose $\epsilon '= \frac{1}{(3+ \epsilon) \frac{12}{\epsilon}}$ and use Proposition~\ref{prop:obe2} with $\epsilon'$. We observe
\begin{align*}&\left\lfloor\frac{n+3}{6}-\frac{1}{(3+ \epsilon) \frac{12}{\epsilon}}\cdot n\right\rfloor \cdot (3 + \epsilon) \geq \left(\frac{n-3}{6}-\frac{1}{(3+ \epsilon) \frac{12}{\epsilon}}\cdot n\right) \cdot (3 + \epsilon) \\
= &\left(\frac{(n-3)(3 + \epsilon)}{6}-\frac{\epsilon}{12}\cdot n\right) = \left(\frac{n-3}{2} + \frac{(n-3) \epsilon}{6} -\frac{\epsilon}{12}\cdot n\right)\\ = &\left(\frac{n-3}{2} + \frac{(2 n \epsilon -6 \epsilon)}{12} -\frac{\epsilon n}{12}\right) = \frac{n-1}{2} + \frac{\epsilon( n -6)-12}{12} \geq \frac{n-1}{2} \geq \text{OPT}\;.
\end{align*}
\paragraph{Case 2} If $n < \frac{12}{\epsilon} +6$,
$n$ is a constant and we can find a cycle-factor in each round by enumeration. By the Aigner-Brandt theorem, the algorithm outputs $\lfloor \frac{n+4}{6}\rfloor \geq \frac{n-1}{6}$ rounds. Since $\text{OPT}\leq \frac{n-1}{2}$, this implies an approximation factor of $\frac{1}{3}> \frac{1}{3 + \epsilon}$.
\end{proof}
In the rest of the section, we show that the bound corresponding to El-Zahar's conjecture presented in Theorem \ref{thm:obe1} is essentially tight. Through a case distinction, we provide matching examples that show that the bounds obtained from El-Zahar's conjecture are tight in two of the three cases; for $k$ even but not divisible by $4$, an additive gap of one round remains. Note that this implies that any improvement of the lower bound via an example by just one round (or by two for $k$ even but not divisible by $4$) would disprove El-Zahar's conjecture.
\begin{theorem}
There are infinitely many $n \in \mathbb{N}$ for which there exists a tournament with $H=C_k$ that is not extendable after
\begin{enumerate}
\item $\lfloor\frac{n+2}{4}-\frac{n}{4k}\rfloor$ rounds if $k$ is odd\;,
\item $\lfloor\frac{n+2}{4}\rfloor$ rounds if $k \equiv 0 \mod{4}$\;,
\item $\lfloor\frac{n+2}{4}\rfloor+ 1$ rounds if $k \equiv 2 \mod{4}$\;.
\end{enumerate}
\end{theorem}
\begin{proof}
\textbf{Case 1.} Assume $k$ is odd. Let $n =2k\sum_{j=0}^i k^j$ for some integer $i\in\mathbb{N}$.
We construct a tournament with $n$ participants and $H=C_k$. To do so, we start with the empty tournament and partition the set of vertices of the feasibility graph into two disjoint sets $A$ and $B$. The sets are chosen such that $A \cup B = V$, $|A| = \frac{n}{2}-\frac{n}{2k}+1= (k-1)\sum_{j=0}^i k^j+1=k^{i+1}$, and $|B|= \frac{n}{2}+\frac{n}{2k}-1$. We observe that $|A|\leq|B|$, since $\frac{n}{2k}\geq 1$.
We construct a tournament such that in the feasibility graph all edges between vertices in $A$ are deleted.
To do so, we use a result of \citet{alspach1989oberwolfach}, who showed that the Oberwolfach problem with $n'$ participants and cycle length $k'$ has a solution whenever $k'$ is odd, $n'$ is odd, and $n' \equiv 0 \mod{k'}$.
Observe that $|A| \equiv 0 \mod{k}$ and thus also $|B| \equiv 0 \mod{k}$. Furthermore, $|A|=k^{i+1}$ is odd, and since $n$ is even, $|B|=n-|A|$ is odd as well. By using the equivalence of the Oberwolfach problem to complete tournaments, there exists a complete tournament within $A$ and within $B$.
We combine these complete tournaments into a tournament for the whole graph with $\min\{|A|-1, |B|-1\}/2 = \frac{|A|-1}{2} = \frac{n}{4}-\frac{n}{4k}$ rounds. Since $|A|$ is odd, the number of rounds is integral.
Considering the feasibility graph of this tournament, there are no edges between vertices in $A$. Thus, every cycle of length $k$ can cover at most $\frac{k-1}{2}$ vertices of $A$. We conclude that there is no $C_k$-factor for the feasibility graph, since $\frac{n}{k}\cdot \frac{k-1}{2}=\frac{n}{2}-\frac{n}{2k}$, so we cannot cover all vertices of $A$. Thus, we constructed a tournament with $\frac{n}{4}-\frac{n}{4k}=\lfloor\frac{n+2}{4}-\frac{n}{4k}\rfloor$ rounds that cannot be extended.
\textbf{Case 2.} Assume $k$ is divisible by $4$. Let $n=i\cdot k$ for some odd integer $i\in\mathbb{N}$. We construct a tournament with $n$ participants by dividing the vertices of the feasibility graph into two disjoint sets $A$ and $B$ such that $|A| = |B|= \frac{n}{2} = i \cdot \frac{k}{2}$. \citet{liu2003equipartite} showed that there exist $n/4$ disjoint $C_k$-factors in a complete bipartite graph with $n/2$ vertices on each side of the bipartition, if $n/2$ is even. That is, every edge of the complete bipartite graph is in exactly one $C_k$-factor.
Since $n/2$ is even by the case assumption, there is a tournament with $n/4 = \lfloor \frac{n+2}{4} \rfloor$ rounds such that in the feasibility graph only edges within $A$ and within $B$ are left. Hence, every cycle of an additional round would lie entirely inside $A$ or entirely inside $B$. Since $i$ is odd, $|A| = i \cdot \frac{k}{2}$ is not divisible by $k$, so the vertices of $A$ cannot be covered by such cycles and no further round can be scheduled.
\textbf{Case 3.} Assume $k$ is even, but not divisible by 4. Let $n=i\cdot k$ for some odd integer $i\in\mathbb{N}_{\geq 9}$.
We construct a tournament with $n$ participants that is not extendable after $\frac{n+2}{4} + 1$ rounds in two phases. First, we partition the vertices into two disjoint sets $A$ and $B$, each of size $\frac{n}{2}$, and we construct a base tournament with $\frac{n-2}{4}$ rounds such that in the feasibility graph only edges between sets $A$ and $B$ are deleted. Second, we extend the tournament by two additional carefully chosen rounds.
After the base tournament, the feasibility graph consists of two complete graphs $A$ and $B$ connected by a perfect matching between all vertices from $A$ and all vertices from $B$. We use the additional two rounds to delete all of the matching-edges except for one. Using this, we show that the tournament cannot be extended.
In order to construct the base tournament, we first use a result of \citet{alspach1989oberwolfach}. It states that there always exists a solution for the Oberwolfach problem with $n'$ participants and cycle length $k'$ if $n'$ and $k'$ are odd and $n' \equiv 0 \mod{k'}$.
We choose $n'=n/2$ and $k' = k/2$ (observe that by assumption $k\geq 6$ and thus $k'\geq 3$) and then apply the result by \citet{alspach1989oberwolfach} to obtain a solution for the Oberwolfach problem with $n'$ and $k'$. Next we use a construction relying on an idea by \citet{archdeacon2004cycle} to connect two copies of the Oberwolfach solution. Fix the solution for the Oberwolfach problem with $n/2$ participants and cycle length $\frac{k}{2}$, and apply this solution to $A$ and $B$ separately.
Consider one round of the tournament and denote the $C_{\frac{k}{2}}$-factor in $A$ by $(a_{1+j}, a_{2+j}, \dots, a_{\frac{k}{2}+j})$ for $j=0, \frac{k}{2}, k, \dots, \frac{n}{2}-\frac{k}{2}$. By symmetry, the $C_{\frac{k}{2}}$-factor in $B$ can be denoted by $(b_{1+j}, b_{2+j}, \dots, b_{\frac{k}{2}+j})$ for $j=0, \frac{k}{2}, k, \dots, \frac{n}{2}-\frac{k}{2}$. We design a $C_k$-factor in the feasibility graph of the original tournament.
For each $j \in \{0, \frac{k}{2}, k, \dots, \frac{n}{2}-\frac{k}{2}\}$, we construct a cycle $(a_{1+j},b_{2+j},a_{3+j}, \dots, a_{\frac{k}{2}+j},b_{1+j}, a_{2+j},b_{3+j},\dots,b_{\frac{k}{2}+j})$ of length $k$ in $G$. By construction, these edges are not used in any other round; here we use the fact that $\frac{k}{2}$ is odd. We refer to \Cref{fig:basetournament} for an example of one cycle for $k=10$. Since each vertex is in exactly one cycle in each round, the construction yields a feasible round of a tournament. Applying this procedure to all rounds yields the base tournament with $\frac{n-2}{4}$ rounds.
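The way two cycles of odd length $\frac{k}{2}$ are interleaved into a single cycle of length $k$ can be made explicit. The following sketch is only meant to reproduce the vertex order illustrated in \Cref{fig:basetournament}, with the two small cycles given as lists of vertices in cycle order.
\begin{verbatim}
def interleave_cycles(a_cycle, b_cycle):
    # Combine the cycles a_1,...,a_{k/2} and b_1,...,b_{k/2} (with k/2
    # odd) into one cycle of length k alternating between the copies.
    half = len(a_cycle)
    assert half == len(b_cycle) and half % 2 == 1
    return [(a_cycle if t % 2 == 0 else b_cycle)[t % half]
            for t in range(2 * half)]

# interleave_cycles(['a1','a2','a3','a4','a5'],
#                   ['b1','b2','b3','b4','b5'])
# yields ['a1','b2','a3','b4','a5','b1','a2','b3','a4','b5']
\end{verbatim}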
\begin{figure}[t]
\begin{minipage}{0.48\textwidth}
\centering
\begin{tikzpicture}[scale=0.7]
\draw (0,0) ellipse (4cm and 1.1cm);
\draw (0,-2.5) ellipse (4cm and 1.1cm);
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (-2,0){};
\node[left] at (-2,0) {$a_1$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (-1,-0.5){};
\node[left] at (-1,-0.5) {$a_2$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (1,-0.5){};
\node[right] at (1,-0.5) {$a_3$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (2,0){};
\node[right] at (2,0) {$a_4$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (0,0.5){};
\node[above] at (0,0.5) {$a_5$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (-2,-2.5){};
\node[left] at (-2,-2.5) {$b_1$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (-1,-3){};
\node[left] at (-1,-3) {$b_2$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (1,-3){};
\node[right] at (1,-3) {$b_3$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (2,-2.5){};
\node[right] at (2,-2.5) {$b_4$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (0,-2){};
\node[above] at (0,-2) {$b_5$};
\draw (-2,0) -- (-1,-0.5);
\draw (-1,-0.5) -- (1,-0.5);
\draw (1,-0.5) -- (2,0);
\draw (2,0) -- (0,0.5);
\draw (0,0.5) -- (-2,0);
\draw (-2,-2.5) -- (-1,-3);
\draw (-1,-3) -- (1,-3);
\draw (1,-3) -- (2,-2.5);
\draw (2,-2.5) -- (0,-2);
\draw (0,-2) -- (-2,-2.5);
\node at (0,-4){};
\end{tikzpicture}
\end{minipage}
\begin{minipage}{0.48\textwidth}
\centering
\begin{tikzpicture}[scale=0.7]
\draw (0,0) ellipse (4cm and 1.1cm);
\draw (0,-2.5) ellipse (4cm and 1.1cm);
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (-2,0){};
\node[left] at (-2,0) {$a_1$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (-1,-0.5){};
\node[above] at (-1,-0.5) {$a_2$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (1,-0.5){};
\node[above] at (1,-0.5) {$a_3$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (2,0){};
\node[right] at (2,0) {$a_4$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (0,0.5){};
\node[left] at (0,0.5) {$a_5$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (-2,-2.5){};
\node[left] at (-2,-2.5) {$b_1$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (-1,-3){};
\node[left] at (-1,-3) {$b_2$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (1,-3){};
\node[left] at (1,-3) {$b_3$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (2,-2.5){};
\node[right] at (2,-2.5) {$b_4$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (0,-2){};
\node[below] at (0,-2) {$b_5$};
\draw (-2,0) -- (-1,-3);
\draw (-1,-3) -- (1,-0.5);
\draw (1,-0.5) -- (2,-2.5);
\draw (2,-2.5) -- (0,0.5);
\draw (0,0.5) -- (-2,-2.5);
\draw (-2,-2.5) -- (-1,-0.5);
\draw (-1,-0.5) -- (1,-3);
\draw (1,-3) -- (2,0);
\draw (2,0) -- (0,-2);
\draw (0,-2) -- (-2,0);
\node at (0,-4){};
\end{tikzpicture}
\end{minipage}
\caption{Construction of the base tournament. We transform two cycles of length $5$ into one cycle of length $10$.}\label{fig:basetournament}
\end{figure}
For each edge $e=\{a_{\bar{j}},a_j\}$ with $j\neq \bar{j}$ which is deleted in the feasibility graph of the tournament within $A$, we delete the edges $\{a_{\bar{j}},b_j\}$ and $\{a_j,b_{\bar{j}}\}$ in the feasibility graph. After the base tournament, all edges between $A$ and $B$ except for the edges $(a_1,b_1), (a_2,b_2), \dots , (a_{\frac{n}{2}},b_\frac{n}{2})$ are deleted in the feasibility graph.
In the rest of the proof, we extend the base tournament by two additional rounds. These two rounds are designed in such a way that after the rounds there is exactly one edge connecting a vertex from $A$ with one from $B$. To extend the base tournament by one round, we construct the cycles of the $C_k$-factor in the following way. For $j\in\{0, \frac{k}{2}, k, \dots, \frac{n}{2}-\frac{k}{2}\}$, we construct the cycle $(a_{1+j},b_{1+j},b_{2+j},a_{2+j}, \ldots,b_{\frac{k}{2}-2 +j}, b_{\frac{k}{2}-1 +j}, b_{\frac{k}{2} +j}, a_{\frac{k}{2} +j},a_{\frac{k}{2}-1 +j})$, see \Cref{fig:extendround1}. Since all edges within $A$ and $B$ are part of the feasibility graph, as well as all edges $(a_{j'},b_{j'})$ for $j' \in \{ 1, \ldots , \frac{n}{2}\}$, this is a feasible construction of a $C_k$-factor and thus an extension of the base tournament.
\begin{figure}[t]
\centering
\begin{tikzpicture}[scale=0.9]
\draw (0,0) ellipse (4cm and 1cm);
\draw (0,-2.5) ellipse (4cm and 1cm);
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (-2,0){};
\node[left] at (-2,0) {$a_1$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (-1,-0.5){};
\node[left] at (-1,-0.5) {$a_2$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (1,-0.5){};
\node[right] at (1,-0.5) {$a_3$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (2,0){};
\node[right] at (2,0) {$a_4$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (0,0.5){};
\node[left] at (0,0.5) {$a_5$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (-2,-2.5){};
\node[left] at (-2,-2.5) {$b_1$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (-1,-3){};
\node[left] at (-1,-3) {$b_2$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (1,-3){};
\node[left] at (1,-3) {$b_3$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (2,-2.5){};
\node[right] at (2,-2.5) {$b_4$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] at (0,-2){};
\node[left] at (0,-2) {$b_5$};
\draw (-2,0) -- (-2,-2.5);
\draw (-2,-2.5) -- (-1,-3);
\draw (-1,-3) -- (-1,-0.5);
\draw (-1,-0.5) -- (1,-0.5);
\draw (1,-0.5) -- (1,-3);
\draw (1,-3) -- (2,-2.5);
\draw (2,-2.5) -- (0,-2);
\draw (0,-2) -- (0,0.5);
\draw (0,0.5) -- (2,0);
\draw (2,0) -- (-2,0);
\end{tikzpicture}
\caption{An example of one cycle in the construction that is used for the extension of the base tournament.}\label{fig:extendround1}
\end{figure}
After the extension of the base tournament by one round the feasibility graph has the following structure. The degree of all vertices equals $\frac{n}{2}-2$ and the only edges between vertices from $A$ and $B$ are
\[\left\{(a_{\frac{k}{2}-1+j}, b_{\frac{k}{2}-1+j}) \mid j \in \left\{0, \frac{k}{2}, k, \dots, \frac{n}{2} - \frac{k}{2}\right\}\right\}\;.\]
We will construct one more round such that
after this round, there is only one of the matching edges remaining in the feasibility graph.
In order to do so, we will construct the $C_k$-factor with cycles $(C_1,\ldots, C_\frac{n}{k})$ by a greedy procedure as follows. Cycles $C_1, \dots, C_{\frac{n}{2k}-\frac{1}{2}}$ will all contain two matching edges and the other cycles none. In order to simplify notation we set
\[A_M = \left\{a_{\frac{k}{2}-1+j} \mid j \in \left\{0, \frac{k}{2}, k, \dots, \frac{n}{2} - \frac{k}{2}\right\}\right\}\;,\]
and $A_{-M} = A \setminus A_M$. We have $|A_{-M}| = \frac{n}{2}-\frac{n}{k}$. We define $B_M$ and $B_{-M}$ analogously. For some cycle $C_{z}$, $z\leq\frac{n}{2k}-\frac{1}{2}$, we greedily pick two of the matching edges. Let $(a_{\ell},b_{\ell})$ and $(a_r,b_r)$ be these two matching edges. To complete the cycle, we show that we can always construct a path from $a_{\ell}$ to $a_r$ by picking vertices from $A_{-M}$ and from $b_{\ell}$ to $b_r$ by vertices from $B_{-M}$. Assuming that we have already constructed cycles $C_1,\ldots, C_{z-1}$, there are still
\begin{align*}
\frac{n}{2} - \frac{n}{k} - (z-1) \left(\frac{k}{2}-2\right)
\end{align*}
unused vertices in the set $A_{-M}$. Even after choosing some vertices for cycle $z$ the number of unused vertices in $A_{-M}$ is at least
\[\frac{n}{2} - \frac{n}{k} - z \left(\frac{k}{2}-2\right) \geq \frac{n}{2} - \frac{n}{k} - z \frac{k}{2} \geq \frac{n}{2} - \frac{n}{k} - \frac{n}{2k} \frac{k}{2} = \frac{n}{4} - \frac{n}{k} \geq \frac{n}{12}\;.\]
Let $N(v)$ denote the neighborhood of vertex $v$. The greedy procedure that constructs a path from $a_{\ell}$ to $a_r$ works as follows. We set vertex $a_{\ell}$ active. For each active vertex $v$, we pick one of the vertices $a \in N(v) \cap A_{-M}$, delete $a$ from $A_{-M}$ and set $a$ active. We repeat this until we have chosen $\frac{k}{2}-3$ vertices. Next, we pick a vertex in $N(v) \cap A_{-M} \cap N(a_r)$ in order to ensure that the path ends at $a_r$. Since $|A_{-M}| \geq \frac{n}{12}$, we observe
\[|N(v) \cap A_{-M} \cap N(a_r)| \geq \frac{n}{12} - 1-2\;,\]
so there is always a suitable vertex as $n\geq 9 k\geq 54$.
The construction for the path from $b_{\ell}$ to $b_r$ is analogous.
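The greedy path-building step can also be phrased algorithmically. The following Python fragment is only an illustrative sketch of the argument above (it is not part of the proof); it assumes the current feasibility graph is given as a dictionary mapping each vertex to the set of its neighbours, and that \texttt{pool} holds the still unused vertices of $A_{-M}$.
\begin{verbatim}
# Illustrative sketch (ours) of the greedy path construction.
# graph: dict mapping a vertex to the set of its neighbours in the
# feasibility graph; pool: set of still unused vertices of A_{-M}.
def greedy_path(graph, a_l, a_r, pool, inner):
    """Return a path a_l, v_1, ..., v_inner, a_r with all v_i from pool."""
    path, active = [a_l], a_l
    for step in range(inner):
        if step < inner - 1:
            candidates = graph[active] & pool
        else:
            # the last inner vertex must additionally be adjacent to a_r
            candidates = graph[active] & pool & graph[a_r]
        # nonempty by the counting argument above (|pool| >= n/12)
        v = next(iter(candidates))
        pool.discard(v)
        path.append(v)
        active = v
    return path + [a_r]
\end{verbatim}
Here \texttt{inner} is the number of interior vertices of the path, i.e.\ $\frac{k}{2}-2$.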
For cycles $C_{\frac{n}{2k}+\frac{1}{2}}, \dots, C_\frac{n}{k}$, there are still $\frac{n}{4}+\frac{k}{4}$ leftover vertices within $A$ and within $B$.
The degree of each vertex within the set of remaining vertices is at least $\frac{n}{4}+\frac{k}{4}-3$. This is large enough to apply the Aigner-Brandt theorem as $i\geq 9$ and $k\geq 6$.
In this way, we construct a $C_k$-factor in the feasibility graph. This means we can extend the tournament by one more round. In total we constructed a tournament of $\frac{n+2}{4}+1$ rounds, which is obviously equal to $\lfloor \frac{n+2}{4} \rfloor +1$.
To see that this tournament cannot be extended further, consider the feasibility graph. Most of the edges within $A$ and $B$ are still present, while between $A$ and $B$ there is only one edge left. This means a $C_k$-factor can only consist of cycles that are entirely in $A$ or in $B$. Since $\abs{A}=\abs{B}$ and the number of cycles $\frac{n}{k}=i$ is odd, there is no $C_k$-factor in the feasibility graph and thus the constructed tournament is not extendable.
\end{proof}
\section{Conclusion and Outlook}
In this work, we studied the social golfer problem and the Oberwolfach problem from an optimization perspective. We presented bounds on the number of rounds that can be guaranteed by a greedy algorithm.
For the social golfer problem the provided bounds are tight. Assuming El-Zahar's conjecture \citep{el1984circuits} holds, a gap of one remains for the Oberwolfach problem. This gives a performance guarantee for the optimization variant of both problems. Since both a clique- and cycle-factor can be found in polynomial time for graphs with high degree, the greedy algorithm is a $\frac{k-1}{2k^2-3k-1}$-approximation algorithm for the social golfer problem and a $\frac{1}{3+\epsilon}$-approximation algorithm for any fixed $\epsilon>0$ for the Oberwolfach problem.
Given some tournament it would be interesting to analyze the complexity of deciding whether the tournament can be extended by an additional round. Proving \ensuremath{\mathsf{NP}}\xspace-hardness seems particularly complicated since one cannot use any regular graph for the reduction proof, but only graphs that are feasibility graphs of a tournament.
Lastly, the general idea of greedily deleting particular subgraphs $H$ from base graphs $G$ can also be applied to different choices of $G$ and $H$.
\section*{Acknowledgement}
This research started after supervising the Master's thesis of David Kuntz. We thank David for valuable discussions.
\bibliographystyle{apalike}
\section{Introduction}
Sierpi\'{n}ski and Riesel numbers are not easy to find.
To disqualify an odd positive integer as a Sierpi\'{n}ski number
or a Riesel number, one need only locate a prime in the appropriate infinite list.
With four exceptions, $k = 47, 103, 143, 197$, all of the first 100 odd positive
integers, $1 \le k \le 199$, are disqualified as Sierpi\'{n}ski numbers by
finding at least one prime in the first eight elements of the infinite
list~\cite{oeis}:
\[ k 2^{1} + 1, k 2^{2} + 1, k 2^{3} + 1, \ldots, k 2^{8} + 1. \]
Both $k = 103$ and $k = 197$ are eliminated by finding a prime in the list no later
than $k 2^{16} + 1$ \cite{oeis}, leaving $47$ and $143$ as the only possible
Sierpi\'{n}ski numbers less than $200$. It turns out that $143 \cdot 2^{53} + 1$
and $47 \cdot 2^{583} + 1$ are prime~\cite{oeis}, eliminating them. Thus, there
are no Sierpi\'{n}ski numbers in the range $1 \le k \le 199$. The situation is
similar for Riesel numbers.
In 1960, W.~Sierpi\'{n}ski~\cite{sierpinski} proved, for
\[k = 15511380746462593381, \]
every member in the infinite list, given by (\ref{eq:sierpinski}),
is divisible by one of the prime factors of the first six Fermat numbers.
For a nonnegative integer $n$, the \emph{Fermat number} $F_{n}$ is given by
\[ F_{n} = 2^{2^n}+1. \]
The first five Fermat numbers are prime and $F_{5}$ is the product of two primes:
\[ F_{0} = 3, F_{1} = 5, F_{2} = 17, F_{3} = 257, F_{4} = 65537, \]
\[ F_{5} = 4294967297 = 641 \cdot 6700417. \]
Thus $(3\ 5\ 17\ 257\ 641\ 65537\ 6700417)$ is a cover for
$k = 15511380746462593381$, showing it to be a Sierpi\'{n}ski number.
Sierpi\'{n}ski's original proof is described in \cite[page 374]{sierpinski1}
and \cite{jones}.
In 1962, J.~Selfridge (unpublished) proved that $78557$ is a Sierpi\'{n}ski
number by showing that
\[ (3\ 5\ 7\ 13\ 19\ 37\ 73) \]
is a cover~\cite{17orbust}.
Later, in 1967, Selfridge and Sierpi\'{n}ski conjectured that
$78557$ is the smallest Sierpi\'{n}ski number~\cite{17orbust}.
The distributed computing project Seventeen or Bust~\cite{17orbust} is devoted
to proving this conjecture, disqualifying every $k < 78557$, by finding an $n$
that makes $k \cdot 2^{n} + 1$ prime. For example, $19249 \cdot 2^{13018586} + 1$,
a $3918990$-digit prime, eliminated $19249$~\cite{proth}. When this project
started in 2002, all but $17$ values of $k$ had already been disqualified.
Currently six values of $k$ remain to be eliminated.
Earlier, in 1956, but less well known than Sierpi\'{n}ski's work,
H.~Riesel~\cite{riesel} showed $509203$ is a Riesel number with cover
$(3\ 5\ 7\ 13\ 17\ 241)$. It is possible for the same odd positive integer
to be both a Sierpi\'{n}ski number and a Riesel number.
An example~\cite{filaseta} is $k = 143665583045350793098657$.
\section{Covers Into ACL2 Proofs}
Given an odd positive integer, $k$, with a Sierpi\'{n}ski cover, $\mathcal{C}$,
here is the process used to verify that $k$ is a Sierpi\'{n}ski number. There
is a similar process for verifying Riesel numbers from their covers.
\begin{enumerate}
\item For each $d$ in $\mathcal{C}$, find positive integer $b_{d}$ and
nonnegative integer $c_{d}$ so that for every nonnegative integer $i$,
$d$ is a factor of $k \cdot 2^{b_{d} \cdot i + c_{d}} + 1$.
In practice, every $d$ in $\mathcal{C}$ is an odd prime smaller than $k$.
\begin{enumerate}
\item Search for positive integer $b$ such that $d$ is a factor of
$2^{b} - 1$. Since $d$ is an odd prime, it turns out that
such a $b$ will always exist\footnote{For the mathematically
literate: The well-known Fermat's Little Theorem ensures the
claimed existence.}
among $1, 2, \ldots, d-1$.
Let $b_{d}$ be the first\footnote{Thus, being mathematically
precise, $b_{d}$ is just the order of $2$ in the multiplicative
group of the integers modulo $d$.}
such $b$. \label{induction}
\item Search for nonnegative integer $c$ such that $d$ is a factor
of $k \cdot 2^{c} + 1$. If such a $c$ exists, then one exists
among $0, 1, \ldots, b_{d}-1$. Let $c_{d}$ be the
first\footnote{If $d$ does not divide $k$, then
$2^{c_{d}} \equiv -(1/k)\pmod d$, so $c_{d}$ is the \emph{discrete
logarithm}, base $2$, of $-(1/k)$ in the integers modulo $d$.}
such $c$, if it exists. \label{base}
\item Assuming $c_{d}$ exists, use induction on $i$, to prove that
for every nonnegative integer $i$, $d$ is a factor of
$k \cdot 2^{b_{d} \cdot i + c_{d}} + 1$. \label{result}
The base case, when $i = 0$, follows from \ref{base} above.
The induction step, going from $i = j$ to $i = j+1$, follows
from \ref{induction} above:
\begin{equation}
k 2^{b_{d}(j+1) + c_{d}} + 1 = [k 2^{b_{d}j + c_{d}} \cdot (2^{b_{d}} - 1)]
+ [k 2^{b_{d}j + c_{d}} + 1] \label{eq:sum}
\end{equation}
By \ref{induction}, $d$ is a factor of the left summand of
(\ref{eq:sum}) and $d$ is a factor of the right summand by
the induction hypothesis.
\end{enumerate}
\item For each positive integer $n$, find $d$ in $\mathcal{C}$ and nonnegative
integer $i$ so that $n = b_{d} \cdot i + c_{d}$. If such $d$ and $i$
exist, then, by \ref{result}, $d$ is a factor of
$k \cdot 2^{b_{d} \cdot i + c_{d}} + 1 = k \cdot 2^{n} + 1$.
To ensure that such $d$ and $i$ exist for every positive $n$, only a
finite number of cases need be considered: Let $\ell$ be the least
common multiple of all the $b_{d}$'s found for the $d$'s in $\mathcal{C}$.
Check for each
\[ n \in \{0, 1, 2, \ldots, \ell - 1 \}, \]
that there always is a $d$ in $\mathcal{C}$ that satisfies the equation
\[ \bmod(n, b_{d}) = c_{d}. \]
\end{enumerate}
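As an aside, the arithmetic behind steps 1 and 2 is easy to sketch outside of ACL2. The following Python fragment is our own illustration, not part of the generated events; it assumes the cover consists of odd primes and uses \texttt{math.lcm} (Python~3.9 or later).
\begin{verbatim}
# Illustrative sketch (ours) of the cover-checking arithmetic described above.
from math import lcm

def order_of_2(d):
    # smallest b >= 1 with 2^b = 1 (mod d); exists because d is an odd prime
    b, power = 1, 2 % d
    while power != 1:
        power, b = (power * 2) % d, b + 1
    return b

def first_c(k, d, b):
    # smallest c in 0..b-1 with d | k*2^c + 1, or None if no such c exists
    for c in range(b):
        if (k * pow(2, c, d) + 1) % d == 0:
            return c
    return None

def is_sierpinski_cover(k, cover):
    pairs = []
    for d in cover:
        b = order_of_2(d)
        c = first_c(k, d, b)
        if c is None:
            return False
        pairs.append((b, c))
    ell = lcm(*(b for b, _ in pairs))
    return all(any(n % b == c for b, c in pairs) for n in range(ell))

print(is_sierpinski_cover(78557, [3, 5, 7, 13, 19, 37, 73]))  # True
\end{verbatim}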
This process has not been formally verified in ACL2. For example, we don't
bother to check that every member of $\mathcal{C}$ is an odd prime. Instead,
for each individual $k$ and $\mathcal{C}$, ACL2 events are generated that would
prove $k$ is a Sierpi\'{n}ski number, if all the events succeed. If some of
the events fail, then, as usual when using ACL2, further study of the failure
is required, in the hope of taking corrective action.
The generation of these events is controlled by the macros
\texttt{verify-sierpinski} and \texttt{verify-riesel}. These macros
take three arguments: the name of a witness function that will find a
factor for a given $k 2^{n} \pm 1$, the number $k$ that is a
Sierpi{\'{n}}ski or Riesel number, and the cover $\mathcal{C}$ for
$k$. The macros then generate the proof, following the plan outlined
in this section.
For each $d$ in $\mathcal{C}$, $b_{d}$ and $c_{d}$ from 1a and 1b, are computed.
They are needed to define the witness function and to state the theorems
mentioned in 1c, which are then proved.
For example, the proof that 78557 is a Sierpi{\'{n}}ski number defines
this witness function:
\begin{verbatim}
(DEFUN WITNESS (N)
(IF (INTEGERP N)
(COND ((EQUAL (MOD N 2) 0) 3)
((EQUAL (MOD N 4) 1) 5)
((EQUAL (MOD N 3) 1) 7)
((EQUAL (MOD N 12) 11) 13)
((EQUAL (MOD N 18) 15) 19)
((EQUAL (MOD N 36) 27) 37)
((EQUAL (MOD N 9) 3) 73))
0))
\end{verbatim}
The rightmost numbers, in this definition, form the cover, the corresponding
$b_{d}$'s are the leftmost numbers, and the middle numbers are the $c_{d}$'s.
So $\mathcal{C} = (3\ 5\ 7\ 13\ 19\ 37\ 73)$, $b_{73} = 9$, and $c_{73} = 3$.
The theorem, from 1c, for $d = 73$ is
\begin{verbatim}
(DEFTHM WITNESS-LEMMA-73
(IMPLIES (AND (INTEGERP N)
(>= N 0))
(DIVIDES 73
(+ 1
(* 78557
(EXPT 2
(+ 3
(* 9 N)))))))
:HINTS ...)
\end{verbatim}
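A direct numerical spot check of this statement (ours, outside of ACL2) is immediate:
\begin{verbatim}
# Spot check: 73 divides 78557 * 2^(9 i + 3) + 1 for the first few i.
for i in range(50):
    assert (78557 * 2**(9 * i + 3) + 1) % 73 == 0
\end{verbatim}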
Four properties are proved about the witness function, establishing
78557 is a Sierpi{\'{n}}ski number:
\begin{verbatim}
(DEFTHM WITNESS-NATP
(AND (INTEGERP (WITNESS N))
(<= 0 (WITNESS N)))
:HINTS ...)
\end{verbatim}
\begin{verbatim}
(DEFTHM WITNESS-GT-1
(IMPLIES (INTEGERP N)
(< 1 (WITNESS N)))
:HINTS ...)
(DEFTHM WITNESS-LT-SIERPINSKI
(IMPLIES (AND (INTEGERP N)
(<= 0 N))
(< (WITNESS N)
(+ 1 (* 78557 (EXPT 2 N))))))
(DEFTHM WITNESS-DIVIDES-SIERPINSKI-SEQUENCE
(IMPLIES (AND (INTEGERP N)
(<= 0 N))
(DIVIDES (WITNESS N)
(+ 1 (* 78557 (EXPT 2 N)))))
:HINTS ...)
\end{verbatim}
As suggested above in 2, these properties can be proved by showing
every integer is ``covered'' by one of the cases given in the
\texttt{COND}-expression used in the definition of the witness function.
\begin{verbatim}
(DEFTHM WITNESS-COVER-ALL-CASES
(IMPLIES (INTEGERP N)
(OR (EQUAL (MOD N 2) 0)
(EQUAL (MOD N 4) 1)
(EQUAL (MOD N 3) 1)
(EQUAL (MOD N 12) 11)
(EQUAL (MOD N 18) 15)
(EQUAL (MOD N 36) 27)
(EQUAL (MOD N 9) 3)))
:RULE-CLASSES NIL
:HINTS ...)
\end{verbatim}
To prove this, we first demonstrate that these cases are exhaustive when
$n$ is replaced by $\bmod(n,36)$ (where $36$ is the least common multiple
of all the moduli above). This can be checked, essentially, by computation.
\begin{verbatim}
(DEFTHM WITNESS-COVER-ALL-CASES-MOD-36
(IMPLIES (INTEGERP N)
(OR (EQUAL (MOD (MOD N 36) 2) 0)
(EQUAL (MOD (MOD N 36) 4) 1)
(EQUAL (MOD (MOD N 36) 3) 1)
(EQUAL (MOD (MOD N 36) 12) 11)
(EQUAL (MOD (MOD N 36) 18) 15)
(EQUAL (MOD (MOD N 36) 36) 27)
(EQUAL (MOD (MOD N 36) 9) 3)))
:RULE-CLASSES NIL
:HINTS ...)
\end{verbatim}
The actual modular equivalences
that need to be proved depend on both the number 78557 and its cover.
Although the theorem that is being proved is obviously true, there
does not appear to be a way to prove it once and for all in ACL2, not
even using \texttt{encapsulate}. Instead, a pair of theorems very
much like the ones we have described needs to be proved from scratch
for each different Sierpi{\'{n}}ski or Riesel number. As experienced
ACL2 users, we are concerned that ACL2 will simply fail to prove this
theorem for some combination of numbers and their covers. However, we
have used these macros to generate the proof for each of the
Sierpi{\'{n}}ski and Riesel numbers \emph{with covers} listed in the
appendix, and all of the proofs have gone through automatically.
Note that the appendix essentially\footnote{Given a Sierpi{\'{n}}ski or
Riesel number $k$ and its cover $\mathcal{C}$, infinitely many other examples
can be constructed: Let $P$ be the product of the numbers in $\mathcal{C}$ and
let $i$ be a positive integer. Then $k + 2 \cdot i \cdot P$ is also a
Sierpi{\'{n}}ski or Riesel number with the same cover.}
contains all the Sierpi{\'{n}}ski and Riesel numbers known to us.
\section{Numbers Without Covers}
There are odd positive integers, shown to be Sierpi\'{n}ski (or Riesel) numbers,
that have no known covers. ACL2 proofs have been constructed for these numbers.
For example~\cite{filaseta}, $k = 4008735125781478102999926000625$ is a
Sierpi\'{n}ski number, but no (complete) cover is known. For all
positive integers $n$, if $\bmod( n, 4) \ne 2$, then $k \cdot 2^{n} + 1$ has
a factor among the members of $(3\ 17\ 97\ 241\ 257\ 673)$.
To show $k$ is a Sierpi\'{n}ski number, a factor of $k \cdot 2^{n} + 1$
must be found for all positive integers $n$ such that $\bmod( n, 4) = 2$.
Such a factor is constructed using these facts:
\begin{itemize}
\item $k = 44745755^{4}$ is a fourth power
\item $4x^{4} + 1 = (2x^{2} + 2x + 1) \cdot (2x^{2} - 2x + 1)$
\end{itemize}
Let $i = 44745755$, so $k = i^{4}$. Then
\begin{eqnarray}
k \cdot 2^{4n+2} + 1 & = & 2^{2}(i \cdot 2^{n})^{4} + 1 \nonumber \\
& = & 4(i \cdot 2^{n})^{4} + 1 \nonumber \\
& = & [2(i \cdot 2^{n})^{2} + 2(i \cdot 2^{n}) + 1]
\cdot
[2(i \cdot 2^{n})^{2} - 2(i \cdot 2^{n}) + 1]
\label{eq:product}
\end{eqnarray}
The left factor of (\ref{eq:product}) algebraically reduces to show
\[ 4004365181040050 \cdot 2^{2 \lfloor n / 4 \rfloor}
\mbox{} + 89491510 \cdot 2^{\lfloor n / 4 \rfloor} + 1 \]
is a factor of $k \cdot 2^{n} + 1$, whenever $\bmod( n, 4) = 2$.
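This factorization is easy to confirm numerically. The following sketch (ours, with variable names that are not from the original development) checks the stated factor for the first few admissible exponents.
\begin{verbatim}
# Check the stated factor of k * 2^n + 1 for n = 2 (mod 4), k = 44745755^4.
i = 44745755
k = i**4
for n in range(2, 100, 4):          # n = 2, 6, 10, ...; all have mod(n,4) = 2
    m = n // 4                      # floor(n/4)
    # 2*i*i = 4004365181040050 and 2*i = 89491510, as displayed above
    factor = 2*i*i * 4**m + 2*i * 2**m + 1
    assert (k * 2**n + 1) % factor == 0
\end{verbatim}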
A Riesel number, $k$, with no known cover, is given in Appendix A.
In this example, $k = a^{2}$ is a square and
\begin{eqnarray*}
k \cdot 2^{2n} - 1 & = & a^{2} \cdot 2^{2n} - 1 \\
& = & (a2^{n})^{2} - 1 \\
& = & (a2^{n} + 1) \cdot (a2^{n} - 1)
\end{eqnarray*}
shows how to factor $k \cdot 2^{m} - 1$ when $m$ is even and positive.
A (partial) cover, listed in Appendix A, gives a constant factor for
each $k \cdot 2^{m} - 1$, when $m$ is odd and positive.
\section{Conclusions}
Given a Sierpi\'{n}ski or Riesel number, $k$, and its cover, we have described
ACL2 macros that generate events verifying that each integer, in the appropriate
infinite list, has a smaller factor in the cover.
For the few known Sierpi\'{n}ski or Riesel numbers with no known covers,
hand-crafted ACL2 proofs have been constructed verifying that each
integer, in the appropriate infinite list, has a smaller factor.
\bibliographystyle{eptcs}
\section{Introduction}
The distribution of the number of goals scored in association football (soccer) matches has been investigated
by various authors over the last half century \citep{moroney56,maher82,dixon98,greenhough01,bittner07}.
The emphasis has usually been on finding models that describe the distributions observed in large data-bases of
match results, often with the objective of forecasting results, of optimising
playing or betting strategies, or of studying the efficiency of the betting market. We here use the results of this work
not for forecasting but to consider what, if anything, can be deduced from the result of a match
about the relative strengths of the teams.
A football match can be regarded as an experiment to determine which of the two teams is in some sense
superior, or perhaps one should say ``.. is superior given
the date and circumstances of the match". The statistical models of goal numbers that have been developed
have major implications for the probability that the experiment gives a correct result -- that is to say that
``the best team won". These have not been widely discussed, and here we quantify them and extend some of the considerations
to tournaments involving many teams and matches.
In the simplest models which have been considered, goal scoring is regarded as a Bernoulli process in which
the probability of team $A$ scoring in time interval $dt$ is $\lambda_a dt$, where $\lambda_a$ is constant,
and similarly that for team $B$ is $\lambda_b dt$. This leads to the probability of the result $(N_a,N_b)$
being given by the product of two univariate Poisson distributions :
\begin{equation}
\wp\left\{ (N_a,N_b)|(\alpha_a,\alpha_b) \right\} \; = \; {{\alpha_a^{N_a} \exp(-\alpha_a)}\over{N_a!}}\:{{\alpha_b^{N_b} \exp(-\alpha_b)}\over{N_b!}}
\; = \;{\alpha_a}^{N_a} {\alpha_b}^{N_b} {{\exp(-\alpha_a-\alpha_b)}\over{N_a!N_b!}}
\label{conditional_prob_2_univariate}
\end {equation}
with expectation values $\alpha_a = \lambda_a T$ and $\alpha_b = \lambda_b T$, where $T$ is the match duration.
In practice the $\lambda$ are not constant. Variation of $\lambda$ during a match would not by itself
invalidate the Poisson model as a mean level can be used. The well-known ``home team advantage" implies that
$\lambda$ is likely to depend on where the match is played \citep{lee77,clarke_norman95}.
This is sometimes accommodated by analysing separately
the score at home and away matches. More difficult to handle is the fact that it might be expected that for psychological
or strategic reasons $\lambda$ might depend on the number of goals already scored by either or both of the two teams.
There is statistical evidence that this is indeed the case.
In considering the distribution of goal scores at an aggregate level, it was noted from an early stage that
there is an excess of high scores compared with a Poisson distribution. Maher \cite{maher82}
pointed out that the negative binomial distribution used by Reep et al. \cite{reep71} and, implicitly,
by Moroney \cite{moroney56} to provide a better description of the tail of the distribution can be regarded as the weighted sum of
Poisson distributions with different means. Thus it is consistent with the expected effect of including
results obtained with different $\alpha$ in the aggregate.
Greenhough et al. \cite{greenhough01} found that the high score tails in some datasets could not be modelled by either Poisson or
negative binomial distributions and were better described by using extremal statistics. Bittner et al. \cite{bittner07}
explain the excess in terms of a dependance of $\alpha$ on the number of goals already scored -- a dependence that
they ascribe to `football fever' -- a self affirmation in which goals encourage more goals. This effect appears to dominate over one in which a winning team either relaxes or plays a purely defensive game.
It is obviously a simplification to model each team's score independently of the other. Modifying the simple
univariate Poisson model of equation (1) to allow for a correlation between the two scores leads
to a bivariate Poisson distribution for $\wp(N_a,N_b)$. Maher \cite{maher82} used a bivariate Poisson model to correct
the tendency of simpler models to underestimate the number of draws. Lee \cite{lee99} has discussed
such models in the context of Australian rugby league scores and compared them with others.
Crowder et al \cite{crowder02} have applied them to football results, and Karlis and Ntzoufras \cite{karlis03} to
both football and water polo. In some of their models Bittner et al.
\cite{bittner07}
allow for the correlation by making the
scoring rate depend on the number of goals scored by \textbf{both} teams, potentially in different ways.
Karlis and Ntzoufras \cite{karlis05} developed an inflated bivariate Poisson distribution to take account simultaneously of both correlations and non-Poissonian tails.
We here consider the level of confidence that, given the statistical uncertainty implied by models such as those discussed above,
one can have in the outcome of a match. First (in Section 2) the simple model of Equation 1 is used. In Section 2.2 we show that
the conclusions are little changed by the use of more sophisticated models. Section 3 examines the implications for a tournament
involving a series of matches.
\section{Level of confidence in the outcome of a match}
If a football match is a well designed experiment the winning team -- that which has scored the greatest number of goals at
the end of the match -- will be the one with a higher level of skill. By making certain simplifying assumptions the
probability that the experiment gives the wrong result for purely statistical reasons can be quantified.
When considering the outcome of a single match, many of the issues which complicate the analysis of aggregate
scores can be ignored. We will put to one side issues of whether a team has lost its form, changed its manager
or is at home or away, and we will consider that the experiment has led to the correct result if the team
that is stronger, on the day and in the particular circumstances of the match, wins.
If it were possible to replay a match many times in exactly the same circumstances then after a sufficient number of matches one team could eventually be shown to be
superior to the other, with whatever level of confidence was required, but for some fraction of individual matches the score would imply a reversed ranking.
We do not in practice know $\alpha_a, \alpha_b$ but after the match we know the final score, ($N_a, N_b$). Given the number of goals scored by each team
and assuming that each follows a Poisson distribution independent of the other, then equation \ref{conditional_prob_2_univariate} allows the probability
$ \wp \left\{ (N_a, N_b) | (\alpha_a, \alpha_b)\right\}$ to be found as a function of $\alpha_a, \alpha_b$, but we are more
concerned with $ \wp \left\{ (\alpha_a, \alpha_b) | (N_a, N_b)\right\} $.
Bayes' theorem allows us to write
\begin{equation}
\wp \left\{ (\alpha_a, \alpha_b) | (N_a, N_b)\right\}
= \wp\left\{ (N_a, N_b) | (\alpha_a, \alpha_b)\right\} {{\wp\left\{ \alpha_a, \alpha_b\right\} }\over {\wp\left\{ N_a, N_b\right\}}}
\end{equation}
We will initially assume no prior knowledge about the strength of the teams. This means that, before the match, any combination of $\alpha_a, \alpha_b$ is equally
likely, or in other words that the prior probability ${\wp\{\alpha_a, \alpha_b\}}$ is constant. For a given result $N_a,N_b$,
Equation \ref{conditional_prob_2_univariate}
then also gives the relative probability $\wp\left\{ (\alpha_a, \alpha_b) | (N_a, N_b)\right\}$.
In fact the objective of the experiment is only to know which team is superior, that is to say whether $\alpha_a > \alpha_b$ or $\alpha_a < \alpha_b$.
The convention in football and most games is the Bayesian one -- one adopts the solution that has the highest probability
of producing the observed result. In the absence of prior information, the case that is most likely to lead to the result $(N_a,N_b)$ is
$\alpha_a = N_a, \alpha_b = N_b$, so if $N_a > N_b$ then we deduce that $\alpha_a > \alpha_b$ and we declare team $A$ to be superior.
But a range of solutions surrounding the best one is also allowed.
To find the probability $w$ that the result does not correctly reflect the abilities of the teams, we need to integrate
over the relevant part of $\alpha_a, \alpha_b$ space. For $N_a<N_b$
\begin{equation}
w(N_a,N_b) \; = \;\wp\left\{ (\alpha_a > \alpha_b) | (N_a, N_b)\right\} = { \int_0^\infty \int_{\alpha_b}^\infty \wp\left\{ (N_a, N_b) | (\alpha_a, \alpha_b)\right\} d\alpha_a d\alpha_b }
\end{equation}
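For reference, this double integral is straightforward to evaluate numerically. The following Python sketch (ours, using \texttt{scipy} and assuming the flat prior) reproduces, for example, $w \approx 0.25$ for a 1--0 result and $w \approx 0.31$ for a 2--1 result.
\begin{verbatim}
# Numerical sketch (ours) of w(N_a, N_b) for N_a < N_b under a flat prior.
import numpy as np
from math import factorial
from scipy.integrate import dblquad

def w(Na, Nb):
    def joint(alpha_a, alpha_b):          # product of the two Poisson terms
        return (alpha_a**Na * alpha_b**Nb * np.exp(-alpha_a - alpha_b)
                / (factorial(Na) * factorial(Nb)))
    # outer integral over alpha_b, inner over alpha_a from alpha_b to infinity
    val, _ = dblquad(joint, 0, np.inf, lambda ab: ab, lambda ab: np.inf)
    return val

print(w(0, 1), w(1, 2))   # approximately 0.25 and 0.31
\end{verbatim}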
\subsection{If two unknown univariate Poisson teams play each other...}
Suppose we have any model that gives a probability of different scores as a function of a pair of expectation values $\alpha_a, \alpha_b$
(or of some other parameters characterising the two teams). For a given final score, we can now evaluate the
probability that the match (experiment) gave a correct or a misleading result.
Figure \ref{fig_flat} gives results using the simple univariate Poisson model of Equation \ref{conditional_prob_2_univariate}. It can be seen that
the probability of a false result is considerable unless the goal difference is very high. For differences less than 3--4 goals the result lacks
the 90\% confidence which within quantitative disciplines is frequently considered a minimum acceptable level of confidence in the outcome of an experiment.
The majority of final scores that occur in top quality football fail to reach even `1-sigma' confidence.
\begin{figure}[h]\centerline{\rotatebox{-90}{\scalebox{0.5}{\includegraphics{prob_plot3_flat.pdf}}}}
\caption{ The probability $W$ that a particular result $N_a,N_b$ does not correctly represent the relative abilities of the two teams. Calculated with
flat prior probability functions. The probabilities are meaningful only for integer numbers of goals, but interpolated contours are shown to define
zones in the plot corresponding to $W<10$\% (continuous lines) and to $W<32$\% (corresponding approximately to a 1$\sigma$ result). The dotted line encloses 50\%
of the results in FIFA world cup matches}
\label{fig_flat}
\vspace{0.1cm}
\end{figure}
\subsection{More complex models}
As has already been discussed, the use of univariate Poisson distributions for the two teams is an approximation.
We use here as an example the final scores during the FIFA world cup series 1938-2006 (after extra time where applicable, but without penalty shoot-outs).
The distribution of number of goals scored is shown in Figure \ref{fits}. There is an excess of high scores compared with a
Poisson distribution having the same mean (b), as seen by many authors in other datasets. A better fit is provided by a negative binomial distribution (c)
with parameters adjusted to maximise the likelihood, though there are still indications of a slight excess of high scores.
Re-evaluating the data shown in Figure \ref{fig_flat} with the negative binomial fit changes the values very little.
\begin{figure}[h]\centerline{\scalebox{0.6}{\includegraphics{fits.pdf}}}
\caption{ (a) The distribution of the number of goals per match scored by teams in the FIFA world cup 1938-2006. (b) A Poisson distribution with the same mean (1.43).
(c) A negative binomial fit. (d) The distribution of the expectation values of Poisson distributions which would have to be combined to produce (c)
(normalised to a maximum of 100).}
\label{fits}
\vspace{0.1cm}
\end{figure}
Strictly, the inclusion of results after extra time must have some effect. For example,
the scores will not be Poissonian if a decision to prolong the match depends on the score.
Thus some small part of the ``supra-Poissonian" variance must be due to including data from extended duration matches.
The effect of extra time in those matches where extra time was played is to reduce the fraction of drawn matches
from about 25\% to 12.3\%. However, the impact on the data in Figure \ref{fits}
of using results after normal time rather than after extra time is to shift the points only
by less than the size of the symbols.
More importantly, the assumption of a uniform prior is obviously invalid -- we know that there are no teams around
that regularly score thousands of goals per match!
The distribution of $\alpha$ must actually be rather narrow, otherwise analyses of large databases would not
find even approximately a Poisson score distribution. If we use narrower prior probability distributions for
$\wp\left\{\alpha_a\right\}$, $\wp\left\{\alpha_b \right\}$ (keeping them the same for $A$ and $B$, because we want to start
the experiment with no presumption about the outcome) the significance
which should be attached to the outcome of a match will be further reduced. The experiment is trying to differentiate between two teams
already known to be close in ability.
The negative binomial distribution can be expressed as a weighted mixture of Poissonian ones:
\begin{equation}
f(n) = {\Gamma(r+n) \over{n! \Gamma(r)}} p^r (1-p)^n = \int_0^\infty Poisson(n|\alpha)Gamma\left(\alpha| r,(1-p)/p\right) d\alpha
\end{equation}
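This identity can be verified numerically; the sketch below (ours, using \texttt{scipy}'s parametrisations, with arbitrary illustrative values of $r$ and $p$) compares the two sides for small $n$.
\begin{verbatim}
# Numerical check (ours) of the negative binomial = Gamma-mixed Poisson identity.
import numpy as np
from scipy import stats
from scipy.integrate import quad

r, p = 1.75, 0.55                        # arbitrary illustrative parameters
for n in range(6):
    lhs = stats.nbinom.pmf(n, r, p)
    rhs, _ = quad(lambda a: stats.poisson.pmf(n, a)
                  * stats.gamma.pdf(a, r, scale=(1 - p) / p), 0, np.inf)
    assert abs(lhs - rhs) < 1e-7
\end{verbatim}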
Figure \ref{fits}(d) shows the Gamma distribution describing the decomposition of (c) into Poissonians with different expectation values.
This can be interpreted as showing the intrinsic range of $\alpha$ values. Using a prior of this form increases the probability $w$ of a misleading result,
as seen by comparing Figure \ref{fig_prior} with Figure \ref{fig_flat}.
If some of the high score tail is due to `goal fever' or other effects such as the general downward drift in mean scores over the 58 years covered
by the data, then the spread in $\alpha$ will be even narrower. Thus two teams playing each other are likely to be even closer in ability and the match outcome even more uncertain.
\begin{figure}[h]\centerline{\rotatebox{-90}{\scalebox{0.5}{\includegraphics{prob_plot3_prior.pdf}}}}
\caption{ As Figure \ref{fig_flat} but with prior probabilities following the form of curve (d) in Figure \ref{fits}}
\label{fig_prior}
\vspace{0.1cm}
\end{figure}
We have considered the possibility of using bivariate distributions, but for this dataset there appears to be no
correlation between the scores of the two teams and so there is no reason to do so.
\section{The situation for a tournament}
Tournaments are sometimes organised such that the fate of a team does not depend on a single match but that they have the
possibility of compensating a bad result by other good ones. In this way, by performing multiple experiments, the statistical significance of the
outcome can be increased. On the other hand it is frequently the case that the eventual winner has to pass through
many eliminating rounds, increasing the probability of error. Some studies of tournament design have considered the effects of the unreliability
of the result of a single match and how to maximise the probability that the best team/player goes forward to the next round or wins.
Most work of this sort has assumed that Gaussian noise is introduced into a comparison process, often in the context of tournaments (such as Wimbledon)
where a ranking or seeding of the competitors is used in selecting pairings ({\it e.g.} \cite{adler_etal95,glickman08}).
In soccer, as we have seen, the statistics are close to Poissonian and in the FIFA World Cup used an example here,
in recent series for the first round of the final competition teams have been grouped into ``little leagues" of 4 teams
using some degree of seeding but in combination with a random draw.
Multi-match tournaments offer an opportunity for verifying some of the ideas discussed here.
Often all combinations of group of teams play each other. If the result of each match provided a valid comparison
of the relative abilities of the two teams, the situation that A beats B beats C beats A should never arise. We refer to this
as an intransitive triplet. Note that up to this point it has only been assumed that the relative ability of two teams at a
particular time and in particular circumstances is to be tested.
We now have to imagine that a team's ability does not change and that there is a real sense in which one team may be superior to another.
But even in the absence of changes, an anomalous combination of results can arise. If the true ranking is $A>B>C$ but the outcomes of each
of the 3 matches has probability $w$ of not corresponding to that ranking, then there is a probability $w(1-w)$
that an intransitive triplet will result.
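The value $w(1-w)$ arises because exactly two of the eight possible outcome patterns are cyclic; a quick Monte Carlo sketch (ours, assuming independent match outcomes with a common error probability $w$) reproduces it.
\begin{verbatim}
# Monte Carlo sketch (ours): fraction of intransitive triplets when each
# match is reversed independently with probability w.
import random

def intransitive_fraction(w, trials=200000):
    cyclic = 0
    for _ in range(trials):
        ab = random.random() >= w    # True: A beats B (true ranking A > B > C)
        bc = random.random() >= w    # True: B beats C
        ac = random.random() >= w    # True: A beats C
        if (ab and bc and not ac) or (not ab and not bc and ac):
            cyclic += 1
    return cyclic / trials

print(intransitive_fraction(0.20))   # close to 0.20 * 0.80 = 0.16
\end{verbatim}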
The FIFA world cup results provide a database which includes 355 examples of triplets.
Of the 147 which do not involve a drawn match, 17 (12\%)
are intransitive. This seems comparatively low, but we note that even if the match outcomes were entirely random the fraction expected would only be 25\%.
An approximate estimate of the number which might be expected can be obtained by noting that the scorelines of the non-drawn
matches in this database have uncertainties averaging $w=$20.0\% (here and in the discussion which follows,
values from Figure \ref{fig_flat} have been used as they are the most optimistic).
This corresponds to $w(1-w)=$ 16\% or 23.5$\pm$4.8 intransitive triplets expected, reasonably consistent with the 17 seen.
While multiple combinations of teams playing each other can reduce the uncertainty in the outcome, like many other
competitions the final stages of recent FIFA world cup series involve a knockout. 16 teams are reduced to 1 in 4 stages. Even if the best team reaches the
16, if it is to gain the cup it must avoid a false result in all 4 of its last games. As draws are resolved by a penalty shootout, which may be treated as
nearly random, the appropriate mean value of $w$ is that including draws, which is not 20\%,
but 27\%\def\thefootnote{\dagger}\footnote{We note that in a tournament $w$ may not be constant, but may increase in later stages as teams become more equally matched. For simplicity we adopt a mean value.}.
The best team has only a probability of about 28\% (that is, $(1-0.27)^4$) of winning the cup, even if it reaches the last 16. For the actual match scores which led Italy through its last
4 matches to the 2006 cup the corresponding value comes to 30\%.
\vspace{5mm}
\section{Conclusions}
It is apparent from Figures \ref{fig_flat} and \ref{fig_prior} that the scores which most frequently arise correspond to relatively
high probabilities of a misleading outcome. In the recent FIFA World Cup only 5 matches among 64 had scores
corresponding to better than 90\% confidence in the result and one third had results which should be classified as `$<1\sigma$'.
Even on very optimistic assumptions there is less than one chance in three that it was the best team that won the cup.
The possibility of increasing the size of football (soccer) goal mouths to make the game more interesting
has been discussed and an attempt has been made to use somewhat dubious simple dynamics to quantify the likely
effect of a specific change in goal size on the number of goals scored \citep{mira06}. The present analysis cannot be
used to estimate by how much the mean score would have to be increased to achieve a
given level of confidence in the result without considering the likely difference in the level of
skills of the two teams. In principle one could imagine continuing the match with successive periods of extra time until the goal
difference becomes large enough to yield a chosen level of confidence. Such open ended matches would not
be popular with those planning television coverage (though the undefined duration of tennis matches is reluctantly accommodated).
In either case it is clear that the character of the game would be entirely changed.
The sportswriter Grantland Rice once wrote \citep{rice}
{\sl
\vspace{6mm}\\
``When the One Great Scorer comes to mark beside your name,\\
He marks -- not that you won nor lost --\\
but how you played the game."
\vspace{6mm}\\
}
Perhaps it's just as well, for in soccer the one bears little relationship to the other.
\section{Introduction}
Given $n$ cities with their pairwise distances the \emph{traveling salesman problem} (TSP)
asks for a shortest tour that visits each city exactly once.
This problem is known to be NP-hard~\cite{Kar1972} and cannot be approximated
within any constant factor unless P=NP~\cite{SG1976}.
For \emph{metric TSP} instances, i.e., TSP instances where
the distances satisfy the triangle inequality,
a $3/2$ approximation algorithm was presented by Christofides in 1976~\cite{Chr1976}.
In spite of many efforts no improvement on this approximation ratio for the general metric
TSP has been achieved so far.
One approach to obtain a better approximation algorithm for the metric TSP is
based on the \emph{subtour LP}. This LP is a relaxation of an integer program for the TSP
that was first used by Dantzig, Fulkerson, and Johnson in 1954~\cite{DFJ1954}.
If the cities are numbered from $1$ to $n$ and $c_{ij}$ denotes the distance between city $i$ and city $j$ then
the subtour LP can be formulated as follows:\\
\noindent\framebox{
\parbox{0.96\linewidth}{\vspace*{2mm}\hspace*{10mm}
$\displaystyle \text{minimize: } \sum_{1\le i<j\le n} c_{ij}\cdot x_{ij}$
\begin{eqnarray*}
\hspace*{-5mm}\text{subject to:}\\[3mm]
\sum_{j\not = i} x_{ij} & = & 1 ~~\text{ for all } i \in \{1,\ldots,n\} \\
\sum_{j\not = i} x_{ji} & = & 1 ~~\text{ for all } i \in \{1,\ldots,n\} \\
\sum_{i,j\in S} x_{ij} & \le & |S| - 1 ~~\text{ for all } \emptyset\not = S\subsetneq \{1,\ldots,n\}\\
0 & \le & x_{ij} ~\le~ 1\\[-6mm]
\end{eqnarray*}}}\\[6mm]
This LP has an exponential number of constraints but can be solved in
polynomial time via the ellipsoid method as the separation problem can be solved efficiently~\cite{GLS1981}.
The \emph{integrality ratio} of the subtour LP for the metric TSP is the supremum
of the length of an optimum TSP tour over the optimum solution of the subtour LP.
Wolsey~\cite{Wol1980} has shown that the integrality ratio of the subtour LP for metric TSP is at most $3/2$.
A well known conjecture states that the integrality ratio of the subtour LP for metric TSP
is $4/3$. This conjecture seems to have been mentioned for the first time in 1990~\cite[page 35]{Wil1990} but according
to~\cite{Goe2012} it was already well known several years before.
A proof of this conjecture yields a polynomial time algorithm
that approximates the value of an optimum TSP tour within a factor of $4/3$.
It is known that the integrality ratio of the subtour LP is at least $4/3$ as there exists
a family of metric TSP instances whose integrality ratio converges to $4/3$~(see for example~\cite{Wil1990}).
For the metric TSP the lower and upper bound of $4/3$ and $3/2$ on the integrality ratio
of the subtour LP have not been improved for more than 25 years. Therefore, people became
interested to study the integrality ratio of the subtour LP for special cases of the metric TSP.
The \emph{graphic TSP} is a special case of the metric TSP where the distances between the $n$ cities
are the lengths of shortest paths in an undirected connected graph on these $n$ cities.
For graphic TSP the integrality ratio
of the subtour LP is at least $4/3$~\cite{Wil1990} and at most 1.4~\cite{SV2012}.
In the $1$-$2$-TSP the distances between two cities are either 1 or 2. Here the largest known lower bound on the
integrality ratio of the subtour LP is $10/9$~\cite{Wil1990} while the smallest known upper bound is
$5/4$~\cite{QSWZ2014}.
In the \emph{Euclidean TSP} the cities are points in the plane and their distance is the Euclidean
distance between the two points. In this case the best known lower bound on the integrality ratio
seems to be $8/7$ (mentioned in~\cite{Wol1980}). The best known upper bound is as in the general metric case $3/2$.
In this paper we will improve the lower bound for the integrality ratio of the Euclidean TSP by presenting
a family of Euclidean TSP instances for which the integrality ratio of the subtour LP converges to $4/3$.
We will prove this result in Section~\ref{sec:construction}. Using a more careful analysis we prove an
explicit formula for the integrality ratio in Section~\ref{sec:ExactFormula}.
\section{Euclidean instances with integrality ratio 4/3}\label{sec:construction}
We will now describe our construction of a family of Euclidean TSP instances
for which the integrality ratio of the subtour LP converges to $4/3$.
Each instance of the family contains equidistant points on three parallel lines.
More precisely, it contains the $3 n$ points with coordinates
$(i, j\cdot d)$ for $i=1, \ldots, n$ and $j=1,2,3$ where $d$ is the distance between the parallel lines.
We will denote these instances by $G(n,d)$. Figure~\ref{fig:G(18,3)} shows the instance $G(18,3)$.
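The instances are trivial to generate; a minimal Python sketch (ours) suffices:
\begin{verbatim}
# Point set of G(n, d): n unit-spaced points on each of three parallel lines.
def G(n, d):
    return [(i, j * d) for j in (1, 2, 3) for i in range(1, n + 1)]

points = G(18, 3)     # the 54 points of the instance in Figure 1
\end{verbatim}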
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=0.5]
\foreach \x in {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16}
\draw[blue] (\x, 6) -- (\x+1, 6);
\draw[blue] ( 0, 6) -- ( 0, 3);
\draw[blue] (17, 6) -- (17, 3);
\foreach \x in {0,1,3,4,5,6,7,9,10,11,13,14,15}
\draw[blue] (\x, 3) -- (\x+1, 3);
\foreach \x in {0,1,2,3,5,6,8,9,10,11,12,13,15,16}
\draw[blue] (\x, 0) -- (\x+1, 0);
\draw[blue] ( 2, 3) -- ( 0, 0);
\draw[blue] ( 3, 3) -- ( 4, 0);
\draw[blue] ( 8, 3) -- ( 5, 0);
\draw[blue] ( 9, 3) -- ( 7, 0);
\draw[blue] (12, 3) -- ( 8, 0);
\draw[blue] (13, 3) -- (14, 0);
\draw[blue] (16, 3) -- (15, 0);
\draw[blue] (17, 3) -- (17, 0);
\foreach \x in {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17}
\foreach \y in {0,3,6}
\fill (\x, \y) circle(1mm);
\foreach \x in {1,2,3}
\draw (\x,3) node[anchor = south] {\small $g_{\x}$};
\draw (4,3) node[anchor = south] {\small $\ldots$};
\draw (16,3) node[anchor = south] {\small $g_{n-2}$};
\end{tikzpicture}
\caption{A TSP tour for the instance $G(18,3)$.}
\label{fig:G(18,3)}
\end{figure}
The instances $G(n,d)$ belong to the class of so called \textit{convex-hull-and-line TSP}.
These are Euclidean TSP instances where all points that do not lie on the boundary of the convex hull
of the point set lie
on a single line segment inside the convex hull.
Deineko, van Dal, and Rote~\cite{DDR1994} have shown that
an optimum TSP tour for a convex-hull-and-line TSP can be found in polynomial time.
Following the notation in~\cite{DDR1994}
we denote the point $(i+1, 2 \cdot d)$ in the instance $G(n,d)$ by $g_i$ for $i=1, \ldots, n-2$
(see Figure~\ref{fig:G(18,3)}).
Thus, the set ${\cal G} := \{g_1, \ldots, g_{n-2}\}$ contains all points of the instance $G(n,d)$ that do not lie
on the boundary of the convex hull of $G(n,d)$. Let $\cal B$ denote the set of all other points in
$G(n,d)$. An optimum TSP tour for $\cal B$ is obtained by visiting the points in $\cal B$
in their cyclic order~\cite{DDR1994}. Moreover, this tour is unique. Therefore, we can call two points in $\cal B$
\textit{adjacent} if they are adjacent in the optimum tour for $\cal B$. Let ${\cal B}_l$ denote all points
in $\cal B$ that lie on the two lower lines.
The following structural result is a special case of Lemma~3 in~\cite{DDR1994}:
\begin{lemma}[Lemma~3 in \cite{DDR1994}]
An optimum TSP tour for the instance $G(n,d)$ can be obtained by splitting the set of points
$\cal G$ into $k+1$ segments
$$\{g_1, g_2, \ldots, g_{i_1}\}, ~~~
\{g_{i_1+1}, \ldots, g_{i_2}\}, ~~~ \ldots, ~~~
\{g_{i_k+1}, \ldots, g_{n-2}\}$$
for $0\le k \le n-2$, $0=i_0 < i_1 < i_2 < \cdots < i_k < n-2$,
and inserting each segment between two adjacent points in ${\cal B}_l$.
\end{lemma}
\begin{proof}
This is exactly the statement of Lemma~3 in \cite{DDR1994} except that
$\cal B$ is replaced by ${\cal B}_l$.
Thus, we only have to observe that because of symmetry of the instance $G(n,d)$
we may assume that all segments are inserted into adjacent points in ${\cal B}_l$.
\end{proof}
Let the \textit{cost} of inserting a segment $\{g_i, g_{i+1}, \ldots, g_j\}$ into $\cal B$
be the difference in the length of the optimum tour after and before inserting this segment.
From Lemma~4 in~\cite{DDR1994} it follows that a segment $\{g_i, g_{i+1}, \ldots, g_j\}$
that contains neither $g_1$ nor $g_{n-2}$ must be inserted between two adjacent points from the lower
line of points in $G(n,d)$. As the points have unit distance on the lines, the cost of inserting such a segment
only depends on the number of points contained in the segment.
\begin{lemma}\label{lemma:insertioncost}
For $d\ge 4$ the cost of inserting a segment of $i$ points from $\cal G$ into $\cal B$ is at least
$$ \begin{cases}
i-2 + \max\{2d, i-2\} & \text{if $g_1$ and $g_{n-2}$ do not belong to the segment}\\
i-d + \max\{d, i\} & \text{if $g_1$ or $g_{n-2}$ belong to the segment}
\end{cases}$$
\end{lemma}
\begin{proof}
As already mentioned above it follows from Lemma~4 in~\cite{DDR1994} that a segment
that contains neither $g_1$ nor $g_{n-2}$ must be inserted between two adjacent points from the lower
line of points in $G(n,d)$.
The total cost of inserting such a segment is $i-1$ for the horizontal connection of the
$i$ points in the segment plus the two connections from the end points of the segment to two adjacent points in
the lower line of points in $G(n,d)$ minus 1 (for the edge that is removed between the two adjacent points in
$\cal B$).
Thus we get a lower bound of $i-2 + \max\{2d, i-2\}$.
If $g_1$ or $g_{n-2}$ is contained in the segment then there are two possibilities to
insert the segment which are shown in Figure~\ref{fig:insert_g1}.
For case b) we get again the lower bound $i-2 + \max\{2d, i-2\}$ while in case a) we have a lower bound
of $i-d+\max\{d, i\}$.
Since $i-2 + \max\{2d, i-2\} \ge i-d+\max\{d, i\}$ for $d\ge 4$ the result follows.
\end{proof}
\begin{figure}[ht]
\centering
\begin{tabular}{c@{\hspace*{1.2cm}}c}
\begin{tikzpicture}[scale=0.5]
\draw[blue] ( 0, 6) -- ( 0, 3);
\foreach \x in {0,1,2,3,4,5}
\draw[blue] (\x, 3) -- (\x+1, 3);
\foreach \x in {0,1,2,3,4,5,6}
\draw[blue] (\x, 0) -- (\x+1, 0);
\draw[blue] ( 6, 3) -- ( 0, 0);
\foreach \x in {0,1,2,3,4,5,6}
\foreach \y in {0,3}
\fill (\x, \y) circle(1mm);
\foreach \x in {1,2,3}
\draw (\x,3) node[anchor = south] {\small $g_{\x}$};
\draw (4,3) node[anchor = south] {\small $\ldots$};
\end{tikzpicture}&
\begin{tikzpicture}[scale=0.5]
\draw[blue] ( 0, 6) -- ( 0, 3);
\draw[blue] ( 0, 3) -- ( 0, 0);
\foreach \x in {1,2,3,4,5}
\draw[blue] (\x, 3) -- (\x+1, 3);
\foreach \x in {0,1,2,4,5,6}
\draw[blue] (\x, 0) -- (\x+1, 0);
\draw[blue] ( 1, 3) -- ( 3, 0);
\draw[blue] ( 6, 3) -- ( 4, 0);
\foreach \x in {0,1,2,3,4,5,6}
\foreach \y in {0,3}
\fill (\x, \y) circle(1mm);
\foreach \x in {1,2,3}
\draw (\x,3) node[anchor = south] {\small $g_{\x}$};
\draw (4,3) node[anchor = south] {\small $\ldots$};
\end{tikzpicture}\\[2mm]
a) & b)
\end{tabular}
\caption{The two possibilities to insert into $\cal B$ a segment containing the point $g_1$.}
\label{fig:insert_g1}
\end{figure}
We now can compute a lower bound on the length of an optimum TSP tour for $G(n,d)$ as follows.
\begin{lemma}\label{lemma:approxlength}
Let $d\ge 4$. Then an optimum TSP tour for $G(n,d)$ has length at least
$4n + 2d-2-2n/(d+1)$.
\end{lemma}
\begin{proof}
For $k$ with $1\le k \le n-2$ let $z_1, z_2, \ldots, z_k$ be the number of points contained in the segments of $\cal G$
that are inserted into $\cal B$ for an optimum TSP tour of $G(n,d)$.
We have $\sum_{i=1}^k z_i = n-2$.
The boundary of the convex hull for the point set $\cal B$ has length $2n-2+4d$.
By Lemma~\ref {lemma:insertioncost}
the total length of the tour is at least
\begin{eqnarray*}
2n-2+4d+\sum_{i=1}^{k} (z_i-2+2d) -4d + 4 & = & 3n + 2k (d-1)
\end{eqnarray*}
where the term $-4d + 4\le 0$ is for adjusting for the at most two segments containing $g_1$ and $g_{n-2}$.
On the other hand by Lemma~\ref {lemma:insertioncost}
the total length of the tour is at least
\begin{eqnarray*}
2n-2+4d+\sum_{i=1}^{k} (2z_i-4) - 2d + 8 & = & 4n + 2 - 4k + 2d
\end{eqnarray*}
Here we need the term $-2d+8\le 0$ to adjust for
the at most two segments containing $g_1$ and $g_{n-2}$.
This shows that an optimum TSP tour for $G(n,d)$ has length at least
\begin{equation}
\min_{1\le k\le n-2} \max\{3n + 2k (d-1), 4 (n - k) + 2d + 2\}.
\label{eqn:twolowerbounds}
\end{equation}
The two functions $3n + 2k (d-1)$ and $4 (n - k) + 2d + 2$ are both linear in $k$
and the slopes have opposite sign. Thus, the expression (\ref{eqn:twolowerbounds})
is at least as large as the value at the intersection of these two linear functions.
The two linear functions intersect at $k=1+n/(2d+2)$. The value at the intersection point
is $4(n-1-n/(2d+2))+2d+2 = 4n +2d - 2 - 2n/(d+1)$.
This finishes the proof.
\end{proof}
Now we can state and prove our main result.
\begin{theorem}\label{thm:main}
Let $d$ be a function with $d(n) = \omega(1)$ and $d(n) = o(n)$.
Then the integrality ratio of the subtour LP for the instances $G(n, d(n))$
converges to $4/3$ for $n\to \infty$.
\end{theorem}
\begin{proof}
As $d(n) = \omega(1)$ we may assume that $d(n) \ge 4$.
By Lemma~\ref{lemma:approxlength} the length of any TSP tour for $G(n, d(n))$ is at least
$4n + 2d-2-2n/(d+1)$.
Figure~\ref{fig:newsubtourbound} shows a feasible solution for the subtour LP for $G(n, d(n))$ with
cost $3n-4 + 3 d(n) + \sqrt{d(n)^2 + 1} \le 3n + 4 d(n)$.
Thus, the integrality ratio for $G(n, d(n))$ is at least
\begin{equation}\label{eq:lowerbound}
\frac {4n + 2d(n)-2-2n/(d(n)+1)}{3n + 4 d(n)}
\end{equation}
which tends to $4/3$ for $n\to \infty$
as $d(n) = \omega(1)$ and $d(n) = o(n)$.
\end{proof}
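For concreteness, the lower bound (\ref{eq:lowerbound}) can be evaluated numerically; the following sketch (ours) uses the choice $d(n)=\sqrt{n}$ and illustrates the slow convergence towards $4/3$.
\begin{verbatim}
# Evaluating the lower bound on the integrality ratio of G(n, d) for d = sqrt(n).
from math import sqrt

def ratio_lower_bound(n, d):
    tour_lb = 4*n + 2*d - 2 - 2*n/(d + 1)   # lower bound on an optimum tour
    lp_cost = 3*n + 4*d                     # cost of the feasible LP solution
    return tour_lb / lp_cost

for n in (10**3, 10**5, 10**7, 10**9):
    print(n, ratio_lower_bound(n, sqrt(n)))   # tends to 4/3 = 1.333...
\end{verbatim}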
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=0.5]
\foreach \x in {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16}
\draw[blue] (\x, 6) -- (\x+1, 6);
\foreach \x in {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15}
\draw[blue] (\x, 3) -- (\x+1, 3);
\foreach \x in {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16}
\draw[blue] (\x, 0) -- (\x+1, 0);
\draw[blue] ( 0, 6) -- ( 0, 3);
\draw[blue] (17, 6) -- (17, 3);
\draw[blue, dashed] ( 0, 3) -- ( 0, 0);
\draw[blue, dashed] ( 1, 3) -- ( 0, 0);
\draw[blue, dashed] ( 0, 3) -- ( 1, 3);
\draw[blue, dashed] (17, 3) -- (17, 0);
\draw[blue, dashed] (16, 3) -- (17, 0);
\draw[blue, dashed] (16, 3) -- (17, 3);
\foreach \x in {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17}
\foreach \y in {0,3,6}
\fill (\x, \y) circle(1mm);
\end{tikzpicture}
\caption{A feasible solution to the subtour LP for the instance $G(18,3)$.
The dashed lines correspond to variables with value~$1/2$ while all other lines
correspond to variables with value~1.}
\label{fig:newsubtourbound}
\end{figure}
The proofs of Lemma~3 and Lemma~4 in~\cite{DDR1994} make only use of the fact
that optimum TSP tours for certain subsets of the point set do not intersect themselves.
This property holds in case of the instance $G(n,d)$ for all $L^p$-norms with $p\in\mathbb{N}$.
Moreover, the proofs of Lemma~\ref{lemma:insertioncost}
and Lemma~\ref{lemma:approxlength} also hold for arbitrary $L^p$-norms.
Therefore, Theorem~\ref{thm:main} holds for all $L^p$-norms with $p\in\mathbb{N}$,
as $3n+4d(n)$ is an upper bound for an optimum solution for the subtour LP for
any $L^p$-norm.
\section{The exact integrality ratio for $G(n, \sqrt{n-1})$}\label{sec:ExactFormula}
The lower bound~(\ref{eq:lowerbound}) proven in Section~\ref{sec:construction} attains its maximum
for $d(n) = \Theta(\sqrt{n})$. In this section we prove an explicit formula
for the integrality ratio of the instances $G(n, \sqrt{n-1})$. This leads to an improved convergence of
the proven lower bound. The structural results for an optimum TSP tour in the instances
$G(n,d)$ proven in this section may be of independent interest.
\begin{lemma}\label{lemma:cheapestinsertion}
For the instance $G(n,d)$ the cheapest cost of inserting into $\cal B$ a segment of $k$ points of $\cal G$
that contains neither $g_1$ nor $g_{n-2}$ is
$$ \begin{cases} k-2+\sqrt{(k-2)^2+4\cdot d^2} & \text{if $k$ is even}\\[2mm]
k-2+ \frac12 \cdot\left(\sqrt{(k-1)^2+4\cdot d^2} + \sqrt{(k-3)^2+4\cdot d^2}\right)
& \text{if $k$ is odd}
\end{cases}$$
\end{lemma}
\begin{proof}
By shifting the instance $G(n,d)$ we may assume that the $k$ points in the segment have the
coordinates $(i, d)$ for $i=1, \ldots, k$.
The cost of inserting this segment between the two points $(a,0)$ and $(a+1, 0)$
with $a\in\{1,\ldots,k\}$ is $(k-1) + \sqrt{(a-1)^2+d^2} + \sqrt{(k-a-1)^2+d^2} - 1$.
The minimum is attained for $a=\lfloor \frac k 2 \rfloor$.
\end{proof}
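The minimisation over the insertion position can also be checked by brute force; the following sketch (ours) confirms the closed form for small values of $k$ and $d\ge 4$.
\begin{verbatim}
# Brute-force check (ours) of the closed form for the cheapest insertion cost.
from math import sqrt

def brute_force(k, d):
    return min((k - 1) + sqrt((a - 1)**2 + d**2)
                       + sqrt((k - a - 1)**2 + d**2) - 1
               for a in range(1, k + 1))

def closed_form(k, d):
    if k % 2 == 0:
        return k - 2 + sqrt((k - 2)**2 + 4*d**2)
    return k - 2 + (sqrt((k - 1)**2 + 4*d**2) + sqrt((k - 3)**2 + 4*d**2)) / 2

for d in range(4, 8):
    for k in range(1, 40):
        assert abs(brute_force(k, d) - closed_form(k, d)) < 1e-9
\end{verbatim}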
If a segment of $\cal G$ contains $g_1$ or $g_{n-2}$ then there exist two different possibilities for
inserting it into $\cal B$, see Figure~\ref{fig:insert_g1}. The following lemma states that
if $d$ is sufficiently large, then possibility a) is the cheaper one.
For the proof of this lemma and in some other proofs we will make use of the inequality
\begin{equation}\label{eq:sqrt}
a+\sqrt{b^2+c} ~ \ge ~ \sqrt{(a+b)^2 +c} ~~~~~ \text{ for all } a,b,c \in \mathbb{R}_+
\end{equation}
\begin{lemma}\label{lemma:cheapest_insertion_g1}
Let $d\ge 4$.
For the instance $G(n,d)$ the cheapest cost of inserting into $\cal B$ a segment of $k$ points of $\cal G$
that contains either $g_1$ or $g_{n-2}$ is
$$ k-d+\sqrt{k^2+d^2}.$$
\end{lemma}
\begin{proof}
There exist two different possibilities to insert into $\cal B$ a segment of $k$ points of $\cal G$
that contains the point $g_1$. These two possibilities are shown in Figure~\ref{fig:insert_g1}a) and b).
In case a) the cost of inserting the segment is $k-d+\sqrt{k^2+d^2}.$
In case b) the cost of insertion is by Lemma~\ref{lemma:cheapestinsertion} at least
$k-2+\sqrt{(k-2)^2+4\cdot d^2}$.
We claim that for $d\ge 4$ we have $k-d+\sqrt{k^2+d^2} \le k-2+\sqrt{(k-2)^2+4\cdot d^2}$.
This is equivalent to the statement that for $d\ge 4$ we have
$$ h(d) ~:=~ d-2+\sqrt{(k-2)^2+4\cdot d^2}-\sqrt{k^2+d^2} ~\ge~ 0. $$
Now, $h(4) = 2+\sqrt{(k-2)^2+64}-\sqrt{k^2+16} \ge \sqrt{k^2+64}-\sqrt{k^2+16} \ge 0$ by (\ref{eq:sqrt}).
Moreover, we have $h'(d) = \frac{4 d}{\sqrt{4 d^2+(k-2)^2}}-\frac{d}{\sqrt{d^2+k^2}}+1 > 0$
which proves the claim.
\end{proof}
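The case distinction above can also be illustrated numerically: for $d\ge 4$, the cost of possibility a) never exceeds the lower bound on the cost of possibility b). A small Python sketch, with assumed ranges for $k$ and $d$, follows.
\begin{verbatim}
# Illustration of Lemma (cheapest_insertion_g1): case a) is cheapest once d >= 4.
import math

def case_a(k, d):
    return k - d + math.sqrt(k**2 + d**2)

def case_b_bound(k, d):              # lower bound from Lemma (cheapestinsertion)
    return k - 2 + math.sqrt((k - 2)**2 + 4*d**2)

for d in [4, 5, 10, 25]:
    for k in range(1, 200):
        assert case_a(k, d) <= case_b_bound(k, d) + 1e-9
\end{verbatim}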
From Lemma~\ref{lemma:cheapestinsertion} and Lemma~\ref{lemma:cheapest_insertion_g1} it
follows that for $d\ge 4$ there always exists an optimum tour of $G(n,d)$
which has a \textit{z-structure} as shown in Figure~\ref{fig:z-structure}. This means
that the tour consists of a sequence of alternately oriented z-shaped paths that cover all
points in the lower two lines. These z-shaped paths are connected by single edges of length~1
and the tour is closed by adding the topmost horizontal line and two vertical connections to
the middle line. Each tour with such a z-structure can be specified by a \textit{z-vector}
which contains as entries the number of points covered by each z on the middle line.
\begin{figure*}[ht]
\centering
\begin{tikzpicture}[scale=0.496]
\tikzstyle{help lines}=[gray!90,very thin]
\foreach \x in {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26}
\draw[blue] (\x, 6) -- (\x+1, 6);
\draw[blue] ( 0, 6) -- ( 0, 3);
\draw[blue] (27, 6) -- (27, 3);
\foreach \x in {0,1,2, 4,5,6,7,8, 10,11,12,13,14,15,16 ,18,19,20,21,22,23, 25,26}
\draw[blue] (\x, 3) -- (\x+1, 3);
\foreach \x in {0,1,2,3,4,5, 7,8,9,10,11,12, 14,15,16,17,18,19 ,21,22,23,24,25,26}
\draw[blue] (\x, 0) -- (\x+1, 0);
\foreach \x in {0,1,2, 4,5, 7,8, 10,11,12, 14,15,16 ,18,19, 21,22,23, 25,26}
\draw[blue, thick] (\x, 3) -- (\x+1, 3);
\foreach \x in {0,1,2 ,4,5, 7,8, 10,11,12, 14,15,16, 18,19 ,21,22,23 ,25,26}
\draw[blue, thick] (\x, 0) -- (\x+1, 0);
\draw[blue, thick] ( 3, 3) -- ( 0, 0);
\draw[blue, thick] ( 4, 3) -- ( 6, 0);
\draw[blue, thick] ( 9, 3) -- ( 7, 0);
\draw[blue, thick] (10, 3) -- (13, 0);
\draw[blue, thick] (17, 3) -- (14, 0);
\draw[blue, thick] (18, 3) -- (20, 0);
\draw[blue, thick] (24, 3) -- (21, 0);
\draw[blue, thick] (25, 3) -- (27, 0);
\draw[style=help lines] ( 0,-0.5) -- ( 0,3.5);
\draw[style=help lines] ( 3,-0.5) -- ( 3,3.5);
\draw[style=help lines] ( 4,-0.5) -- ( 4,3.5);
\draw[style=help lines] ( 6,-0.5) -- ( 6,3.5);
\draw[style=help lines] ( 7,-0.5) -- ( 7,3.5);
\draw[style=help lines] ( 9,-0.5) -- ( 9,3.5);
\draw[style=help lines] (10,-0.5) -- (10,3.5);
\draw[style=help lines] (13,-0.5) -- (13,3.5);
\draw[style=help lines] (14,-0.5) -- (14,3.5);
\draw[style=help lines] (17,-0.5) -- (17,3.5);
\draw[style=help lines] (18,-0.5) -- (18,3.5);
\draw[style=help lines] (20,-0.5) -- (20,3.5);
\draw[style=help lines] (21,-0.5) -- (21,3.5);
\draw[style=help lines] (24,-0.5) -- (24,3.5);
\draw[style=help lines] (25,-0.5) -- (25,3.5);
\draw[style=help lines] (27,-0.5) -- (27,3.5);
\foreach \x in {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27}
\foreach \y in {0,3,6}
\fill (\x, \y) circle(1mm);
\end{tikzpicture}
\caption{The z-structure of a TSP tour for the instance $G(28,3)$. The tour shown has the z-vector
$(4,3,3,4,4,3,4,3)$.}
\label{fig:z-structure}
\end{figure*}
The length of a tour can now easily be computed using its z-vector.
Set
$$ c(i) ~:=~ 2(i-1) + \sqrt{(i-1)^2 + d^2}.$$
Then $c(i)$ is the length of a z-shaped path covering $i$ points in the middle row.
\begin{lemma}
\label{lemma:TotalTourCost}
Let $d\ge 4$ and $k\ge 2$. Then the total length of a TSP tour for $G(n,d)$
corresponding to a z-vector $(z_1, z_2, \ldots, z_k)$ is
$$ n + k + 2 d - 2 + \sum_{i=1}^k c(z_i) .$$
\end{lemma}
\begin{proof}
The length of all the z-shaped subpaths of the tour is $\sum_{i=1}^k c(z_i)$. These $k$ subpaths are connected
by $k-1$ edges of length~1. In addition there is one path of length $n-1$ connecting all the points in the upper row
plus two vertical edges of length $d$ each which connect the upper row with the middle row.
\end{proof}
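As a small illustration of Lemma~\ref{lemma:TotalTourCost}, the tour length belonging to a given z-vector can be evaluated directly; the Python sketch below (illustrative only) does this for the tour of $G(28,3)$ with z-vector $(4,3,3,4,4,3,4,3)$ shown in Figure~\ref{fig:z-structure}.
\begin{verbatim}
# Tour length from a z-vector via Lemma (TotalTourCost).
import math

def c(i, d):
    # length of a z-shaped path covering i points in the middle row
    return 2*(i - 1) + math.sqrt((i - 1)**2 + d**2)

def tour_length(z, n, d):
    k = len(z)
    assert sum(z) == n
    return n + k + 2*d - 2 + sum(c(z_i, d) for z_i in z)

print(tour_length((4, 3, 3, 4, 4, 3, 4, 3), n=28, d=3))
\end{verbatim}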
The next result shows that there always exists an optimum TSP tour for $G(n,d)$ which has a very special structure.
\begin{lemma}\label{lemma:zvector}
Let $d\ge 4$, $k\ge 2$ and $(z_1, z_2, \ldots, z_k)$ be the z-vector of an optimum TSP tour
for $G(n,d)$. Then $z_i \in \{\lfloor n/k \rfloor, \lceil n/k \rceil\}$.
\end{lemma}
\begin{proof}
Suppose this is not the case. Then there exist $i, j \in \{1,\ldots, k\}$ with $z_i \ge z_j+2$.
As the function $c$ is convex we have $c(z_i -1) + c(z_j+1) < c(z_i) + c(z_j)$.
By Lemma~\ref{lemma:TotalTourCost} this implies that $(z_1, z_2, \ldots, z_k)$ cannot be the
z-vector of an optimum TSP tour for $G(n,d)$.
\end{proof}
The following result shows how to bound the length of a TSP tour for $G(n,d)$ with given z-vector
solely by the length of the z-vector.
\begin{lemma}\label{lemma:TourLength}
Let $d\ge 4$ and $k\ge 2$. Then the length of an optimum TSP tour for $G(n,d)$ corresponding to a z-vector
$(z_1, z_2, \ldots, z_k)$ is at least
$$n+k+2d - 2 + k \cdot c(\frac n k)~ .$$
\end{lemma}
\begin{proof}
If $(z_1, z_2, \ldots, z_k)$ is the z-vector of an optimum TSP tour for $G(n,d)$ then we may assume that
$z_1 \le z_2\le \ldots \le z_k$ as reordering the $z_i$-values does not change the length of the tour.
Together with Lemma~\ref{lemma:zvector} this shows that there exists an optimum TSP tour for $G(n,d)$
which corresponds to the z-vector $z_1, \ldots, z_j, z_{j+1}, \ldots, z_k$ with
$z_1 = \ldots = z_j = \lfloor n/k \rfloor$, $z_{j+1} = \ldots = z_k = \lceil n/k \rceil$,
and $\sum_{i=1}^k z_i = n$.
As $c$ is convex we have $\sum_{i=1}^k c(z_i) \ge k\cdot c(\frac n k)$.
\end{proof}
The next result shows that there always exists an optimum TSP tour for $G(n,d)$
which corresponds to an even length z-vector.
\begin{lemma}\label{lemma:evenk}
Let $d\ge 4$ and $n$ be even. Then there exists an optimum TSP tour for $G(n,d)$ which corresponds to an even length
z-vector. Moreover, the length of an optimum TSP tour for $G(n,d)$ is at least:
$$ \min_{1\le k \le n/2} n+2k+2d - 2 + 2k \cdot c(\frac n {2k})~ .$$
\end{lemma}
\begin{proof}
By Lemma~\ref{lemma:TourLength} we only have to rule out that there can exist an optimum TSP tour
for $G(n,d)$ which corresponds to a z-vector of odd length.
By Lemma~\ref{lemma:cheapest_insertion_g1} and its proof we know that if the length of the z-vector
is odd, then it must be~1.
A TSP tour for $G(n,d)$ corresponding to a z-vector of length one has length
\begin{equation} \label{eq:k=1tourlength}
3d + 3n -4 + \sqrt{(n-2)^2 + d^2}
\end{equation}
A tour corresponding to a z-vector of length~$2$ has length
\begin{equation} \label{eq:k=2tourlength}
2d + 3n -4 + 2 \sqrt{\left(\frac n2 -1\right)^2 + d^2}
\end{equation}
Now using (\ref{eq:sqrt}) we
have $$d+\sqrt{(n-2)^2 + d^2} ~\ge~ \sqrt{(n-2)^2 + 4 d^2} ~=~ 2\sqrt{\left(\frac n2 -1\right)^2 + d^2}~ .$$
This implies (\ref{eq:k=1tourlength})$\ge$(\ref{eq:k=2tourlength}) and therefore the minimum
is always attained for an even length z-vector.
\end{proof}
\begin{theorem}\label{thm:opttourlength}
Let $n\ge 17$ be even. Then an optimum TSP tour for $G(n, \sqrt{n-1})$ has length
$ 4n - 4 + 2 \sqrt{n-1}$.
\end{theorem}
\begin{proof}
Let
\begin{eqnarray*}
f(k,d) &:=& n+2k+2d-2+2k\cdot c\left(\frac{n}{2k}\right)\\
& = & n+2k+2d-2+2k\cdot \left( 2 \cdot(\frac{n}{2k} - 1) + \sqrt{\left(\frac{n}{2k} - 1\right)^2 +d^2} \right)\\
& = & 3n-2k+2d-2 + \sqrt{\left(n-2k\right)^2 +4 d^2 k^2}
\end{eqnarray*}
For $d=\sqrt{n-1}$ we get
\begin{eqnarray*}
f(k,\sqrt{n-1}) & = & 3n-2k+2\sqrt{n-1}-2 + \sqrt{\left(n-2k\right)^2 +4 (n-1) k^2} \\
& = & 3n-2k+2\sqrt{n-1}-2 + \sqrt{n^2 +4 n k (k-1)} \\
\end{eqnarray*}
From Lemma~\ref{lemma:evenk} and its proof we know that there exists a TSP tour in
$G(n, \sqrt{n-1})$ of length $f(1,\sqrt{n-1})$. We now claim that
\begin{equation}\label{eq:tourlength}
f(k,\sqrt{n-1}) > f(1,\sqrt{n-1}) ~~~ \text{ for } k=2, 3, \ldots, n/2.
\end{equation}
By Lemma~\ref{lemma:evenk} this finishes the proof.
To prove (\ref{eq:tourlength}) let $g(k) := f(k,\sqrt{n-1})$.
We will show that $g'(k) > 0$ for $k>1$.
We have
\begin{eqnarray*}
g'(k) = \frac{4kn -2n - 2 \sqrt{n^2 +4 n k (k-1)}}{\sqrt{n^2 +4 n k (k-1)}}
\end{eqnarray*}
Now we have
\begin{eqnarray*}
(k-1) n &\ge& k-1 \\
\Rightarrow~~~~~ k^2 n^2 - k n^2 &\ge& nk^2 - n k \\
\Rightarrow~~~~~ 4k^2 n^2 + n^2 - 4 k n^2 &\ge& n^2 + 4nk^2 - 4n k \\
\Rightarrow~~~~~ 2kn-n &\ge& \sqrt{n^2 +4 n k (k-1)} \\
\end{eqnarray*}
As the first inequality is strict for $k>1$, this proves that $g'(k) > 0$ for $k>1$.
\end{proof}
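Theorem~\ref{thm:opttourlength} can be sanity-checked numerically by comparing the closed form with a brute-force minimization of $f(k,\sqrt{n-1})$ over $k$; a short Python sketch (illustrative, for a range of even $n$):
\begin{verbatim}
# Check: min over k of f(k, sqrt(n-1)) equals 4n - 4 + 2*sqrt(n-1) for even n >= 18.
import math

def f(k, n, d):
    return 3*n - 2*k + 2*d - 2 + math.sqrt((n - 2*k)**2 + 4*d**2*k**2)

for n in range(18, 202, 2):
    d = math.sqrt(n - 1)
    best = min(f(k, n, d) for k in range(1, n//2 + 1))
    assert abs(best - (4*n - 4 + 2*math.sqrt(n - 1))) < 1e-8
\end{verbatim}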
\begin{lemma}\label{lemma:subtourbound}
The optimum solution of the subtour LP for the instance $G(n,\sqrt{n-1})$ has value
$3n-4 + 3\sqrt{n-1} + \sqrt{n}$.
\end{lemma}
\begin{proof}
An optimum solution to the subtour LP is indicated in Figure~\ref{fig:newsubtourbound}.
It is easily verified that the value is as claimed.
\end{proof}
Now we can state and prove an explicit formula for the integrality ratio of the instances
$G(n,\sqrt{n-1})$.
\begin{theorem}\label{thm:explicitformula}
Let $n\ge 17$ be even.
The integrality ratio of the subtour LP for the instance $G(n,\sqrt{n-1})$
is
$$ \frac{4n-4+2\sqrt{n-1}} {3n-4 + 3\sqrt{n-1} + \sqrt{n}}~.$$
\end{theorem}
\begin{proof}
This result immediately follows from Theorem~\ref{thm:opttourlength}
and Lemma~\ref{lemma:subtourbound}.
\end{proof}
Using Theorem~\ref{thm:explicitformula} one obtains for example
that the integrality ratio of $G(18,\sqrt{17})$ is $1.14$.
This is much larger than the lower bound
$1.01$ which can be obtained from~(\ref{eq:lowerbound}).
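Both numbers can be reproduced directly from Theorem~\ref{thm:explicitformula} and~(\ref{eq:lowerbound}); a short Python check (values rounded to two decimals):
\begin{verbatim}
# Integrality ratio of G(18, sqrt(17)) versus the earlier lower bound.
import math

n, d = 18, math.sqrt(17)
exact = (4*n - 4 + 2*math.sqrt(n - 1)) / (3*n - 4 + 3*math.sqrt(n - 1) + math.sqrt(n))
bound = (4*n + 2*d - 2 - 2*n/(d + 1)) / (3*n + 4*d)
print(round(exact, 2), round(bound, 2))       # 1.14 and 1.01
\end{verbatim}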
The approach used to prove Theorem~\ref{thm:explicitformula}
can also be applied to other choices of $d(n)$.
Choosing $d(n) = \sqrt{n/2-1}$ one obtains an integrality ratio of
$$ \frac{4n-6+2\sqrt{n/2-1}} {3n-4 + 3\sqrt{n/2-1} + \sqrt{n/2}}~.$$
This value is larger than the value stated in Theorem~\ref{thm:explicitformula}.
However, the proof gets a bit more complicated as
now one has to prove that the function $f(k,\sqrt{n/2-1})$
appearing in the proof of Theorem~\ref{thm:opttourlength}
attains its minimum for $k=2$.
\section*{Acknowledgement}
We thank Jannik Silvanus for useful discussions and an anonymous referee for pointing out an
improved formulation of Lemma~\ref{lemma:approxlength}.
\bibliographystyle{plain}
\label{intro}
Why do home teams in sport win more often than visiting teams? Researchers from, among other disciplines, psychology, economics, and statistics, have long been trying to figure that out.\footnote{Initial research into the home advantage included, among other sources, \cite{schwartz1977home, courneya1992home} and \cite{nevill1999home}. Works in psychology \citep{agnew1994crowd, unkelbach2010crowd}, economics \citep{forrest2005home, dohmen2016referee}, and statistics \citep{buraimo201012th, lopez2018often} are also recommended.}
One popular suggestion for the home advantage (HA) is that fans impact officiating \citep{moskowitz2012scorecasting}. Whether it is crowd noise \citep{unkelbach2010crowd}, duress \citep{buraimo201012th, lopez2016persuaded}, or the implicit pressure to appease \citep{garicano2005favoritism}, referee decision-making appears cued by factors outside of the run of play. If those cues tend to encourage officials to make calls in favor of the home team, it could account for some (or all) of the benefit that teams receive during home games.
A unique empirical approach for understanding HA contrasts games played in empty stadia to those played with fans, with the goal of teasing out the impact that fans have on HA. If fans impact referee decision making, it stands that an absence of fans would decrease HA. As evidence, both \cite{pettersson2010behavior} (using Italian soccer in 2007) and \cite{reade2020echoes} (two decades of European soccer) found that games without fans resulted in a lower HA.
The Coronavirus (Covid-19) pandemic resulted in many changes across sport, including the delay of most 2019-2020 soccer seasons. Beginning in March of 2020, games were put on pause, eventually made up in the summer months of 2020. Roughly, the delayed games account for about a third of regular season play. Make-up games were played as ``ghost games" -- that is, in empty stadia -- as the only personnel allowed at these games were league, club, and media officials. These games still required visiting teams to travel and stay away from home as they normally would, but without fans, they represent a natural experiment with which to test the impact of fans on game outcomes.
Within just a few months of these 2020 ``ghost games'', more than 10 papers have attempted to understand the impact that eliminating fans had on game outcomes, including scoring, fouls, and differences in team performances. The majority of this work used linear regression to infer causal claims about changes to HA. By and large, research overwhelmingly suggests that the home advantage decreased by a significant amount, in some estimates by an order of magnitude of one-half \citep{mccarrick2020home}. In addition, most results imply that the impact of no fans on game outcomes is homogeneous with respect to league.
The goal of our paper is to expand the bivariate Poisson model \citep{karlis2003analysis} in order to tease out any impact of the lack of fans on HA. The benefits of our approach are plentiful. First, bivariate Poisson models consider home and visitor outcomes simultaneously. This helps account for correlations in outcomes (i.e, if the home team has more yellow cards, there is a tendency for the away team to also have more cards), and more accurately accounts for the offensive and defensive skill of clubs \citep{jakewpoisson}. We simulate soccer games at the season level, and compare regression models (including bivariate Poisson) with respect to home advantage estimates. We find that the mean absolute bias in estimating a home advantage when using linear regression models is about six times larger when compared to bivariate Poisson. Second, we separate out each league when fit on real data, in order to pick up on both (i) inherent differences in each league's HA and (ii) how those differences are impacted by ``ghost games.'' Third, we use a Bayesian version of the bivariate Poisson model, which allows for probabilistic interpretations regarding the likelihood that HA decreased within each league. Fourth, in modeling offensive and defensive team strength directly in each season, we can better account for scheduling differences pre- and post-Covid with respect to which teams played which opponents. Altogether, findings are inconclusive regarding a drop in HA post-Covid. While in several leagues a drop appears more than plausible, in other leagues, HA actually increases.
The remainder of this paper is outlined as follows. Section \ref{Sec2} reviews post-Covid findings, and \ref{Sec3} describes our implementation of the bivariate Poisson model. Section \ref{Sec4} uses simulation to motivate the use of bivariate Poisson for soccer outcomes, Sections \ref{Sec5} and \ref{Sec6} explore the data and results, and Section \ref{Sec7} concludes.
\section{Related literature}
\label{Sec2}
To date, we count 11 efforts that have attempted to estimate post-Covid changes to soccer's HA. The estimation of changes to HA has varied in scope (the number of leagues analyzed ranges from 1 to 41), method, and finding. Table \ref{tab:1} summarizes these papers, highlighting the number of leagues compared, whether leagues were treated together or separately, methodology (split into linear regression or correlation based approaches), and overview of finding. For clarity, we add a row highlighting the contributions of this manuscript.
\begin{table}
\caption{Comparison of post-Covid research on home advantage in football. HA: Home advantage. Correlation-based approaches include Chi-square and Mann-Whitney tests. Linear Regression includes univariate OLS-based frameworks and $t$-tests. Poisson Regression assumes univariate Poisson. Papers are sorted by Method and number of Leagues. Note that this manuscript is the first paper among those listed which employs a Bayesian framework for model fitting.}
\label{tab:1}
\begin{tabular}{lllll}
\hline\noalign{\smallskip}
Paper &Leagues & Method & Finding \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\cite{sors2020sound} & 8 (Together) & Correlation & Drop in HA\\
\cite{leitner2020no} & 8 (Together) & Correlations & Drop in HA \\
\cite{endrich2020home} & 2 (Together) & Linear Regression & Drop in HA \\
\cite{fischer2020does} & 3 (Separate) & Linear Regression & Mixed \\
\cite{dilger2020no} & 1 (NA) & Linear Regression, Correlations & Drop in HA\\
\cite{krawczyk2020home} & 4 (Separate) & Linear Regression & Mixed\\
\cite{ferraresi2020team} & 5 (Together) & Linear Regression & Drop in HA \\
\cite{reade2020echoes} & 7 (Together) & Linear Regression & Drop in HA \\
\cite{jimenez2020home} & 8 (Separate) & Linear Regression, Correlations & Mixed\\
\cite{scoppa2020social} & 10 (Together) & Linear Regression & Drop in HA\\
\cite{cueva2020animal} & 41 (Together) & Linear Regression & Drop in HA\\
\cite{mccarrick2020home} & 15 (Together) & Linear, Poisson Regression & Drop in HA\\
\cite{brysoncausal} & 17 (Together) & Linear, Poisson Regression & Mixed\\
\hline\noalign{\smallskip}
Benz and Lopez (this manuscript) &17 (Separate) & Bivariate Poisson Regression & Mixed\\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}
\end{table}
Broadly, methods consider outcome variables $Y$ as a function of $T$ and $T'$, the home advantages pre- and post-Covid, respectively, as well as $W$, where $W$ possibly includes game and team characteristics. Though it is infeasible to detail the choice of $W$ and $Y$ across each of the papers, a few patterns emerge.
Several papers consider team strength, or proxies thereof, as part of $W$. This could include fixed effects for each team \citep{ferraresi2020team, cueva2020animal, brysoncausal}, other proxies for team strength \citep{mccarrick2020home, fischer2020does, krawczyk2020home}, and pre-match betting odds \citep{endrich2020home}. The \cite{cueva2020animal} research is expansive, including 41 leagues across 30 countries, and finds significant impacts on home and away team fouls, as well as foul differential. Other pre-match characteristics in $W$ include whether the game is a rivalry and team travel \citep{krawczyk2020home}, as well as match referee and attendance \citep{brysoncausal}.
Choices of $Y$ include metrics such as goals, goal differential, points (3/1/0), yellow cards, yellow card differential, whether or not each team won, and other in-game actions such as corner kicks and fouls. Several authors separately develop models for multiple response variables. Linear regression, and versions of these models including $t$-tests, stands out as the most common approach for modeling $Y$. This includes models for win/loss outcomes \citep{cueva2020animal}, goal differential \citep{brysoncausal, krawczyk2020home}, and fouls \citep{scoppa2020social}. Two authors model goals with Poisson regression \citep{mccarrick2020home, brysoncausal}. \cite{mccarrick2020home} used univariate Poisson regression models of goals, points and fouls, finding that across the entirety of 15 leagues, the home advantage dropped from 0.29 to 0.15 goals per game, while \cite{brysoncausal} found a significant drop in yellow cards for the away team using univariate Poisson regression.
In addition to choice of $Y$, $W$, and method, researchers have likewise varied in the decision to treat each league separately. As shown in Table \ref{tab:1}, all but three papers have taken each of the available leagues and used them in a single statistical model. Such an approach boasts the benefit of deriving an estimated change in HA that can be broadly applied across soccer, but requires assumptions that (i) HA is homogeneous between leagues and (ii) differences in HA post-Covid are likewise equivalent.
Our approach will make two advances that none of the papers in Table \ref{tab:1} can. First, we model game outcomes using an expanded version of the bivariate Poisson regression model, one originally designed for soccer outcomes \citep{karlis2003analysis}. This model allows us to control for team strength, account for and estimate the correlation in game outcomes, and better model ties. Second, we will show that the assumption of a constant HA between leagues is unjustified. In doing so, we highlight that the frequent choice of combining leagues into one uniform model has far-reaching implications with respect to findings.
\section{Methods}
\label{Sec3}
Poisson regression models assume the response variable has a Poisson distribution, and model the logarithm of the response as a linear combination of explanatory variables.
Let $Y_{Hi}$ and $Y_{Ai}$ be outcomes observed in game $i$ for the home ($H_i$) and away teams ($A_i$), respectively. For now we assume $Y_{Hi}$ and $Y_{Ai}$ are goals scored, but will likewise apply a similar framework to yellow cards. The response $(Y_{Hi}, Y_{Ai})$ is bivariate Poisson with parameters $\lambda_{1i}, \lambda_{2i}, \lambda_{3i}$ if
\begin{equation}\label{eqn:1}
\begin{gathered}
(Y_{Hi}, Y_{Ai}) = BP(\lambda_{1i}, \lambda_{2i}, \lambda_{3i}),
\end{gathered}
\end{equation}
\noindent where $\lambda_{1i} + \lambda_{3i}$ and $\lambda_{2i} + \lambda_{3i}$ are the goal expectations of $Y_{Hi}$ and $Y_{Ai}$, respectively, and $\lambda_{3i}$ is the covariance between $Y_{Hi}$ and $Y_{Ai}$. As one specification, let
\begin{equation}\label{eqn:2}
\begin{gathered}
\textrm{log}(\lambda_{1i}) = \mu_{ks} + T_k + \alpha_{H_{i}ks} + \delta_{A_{i}ks}, \\
\textrm{log}(\lambda_{2i}) = \mu_{ks} + \alpha_{A_{i}ks} + \delta_{H_{i}ks}, \\
\textrm{log}(\lambda_{3i}) = \gamma_{k}.
\end{gathered}
\end{equation}
\noindent In Model (\ref{eqn:2}), $\mu_{ks}$ is an intercept term for expected goals in season $s$ (which we assume to be constant), $T_k$ is a home advantage parameter, and $\gamma_{k}$ is a constant covariance, all of which correspond to league $k$. The explanatory variables used to model $\lambda_{1i}$ and $\lambda_{2i}$ correspond to factors likely to impact the home and away team's goals scored, respectively. Above, $\lambda_{1i}$ is a function of the home team's attacking strength ($\alpha_{H_{i}ks}$) and away team's defending strength ($\delta_{A_{i}ks}$), while $\lambda_{2i}$ is a function of the away team's attacking strength ($\alpha_{A_{i}ks}$) and home team's defending strength ($\delta_{H_{i}ks}$), all corresponding to league $k$ during season $s$. For generality, we refer to $\alpha_{ks}$ and $\delta_{ks}$ as general notation for attacking and defending team strengths, respectively. In using $\mu_{ks}$, $\alpha_{ks}$ and $\delta_{ks}$ are seasonal effects, centered at 0, such that $\alpha_{ks} \sim N(0, \sigma^2_{att, k})$ and $\delta_{ks} \sim N(0, \sigma^2_{def, k})$.
If $\lambda_3 = 0$ in Equation \ref{eqn:2}, then $Y_{H} \perp \!\!\! \perp Y_{A}$, and the bivariate Poisson reduces to the product of two independent Poisson distributions. Using observed outcomes in soccer from 1991, \cite{karlis2003analysis} found that assuming independence of the Poisson distributions was less suitable for modeling ties when compared to using bivariate Poisson. More recently, however, \cite{groll2018dependency} suggest using $\lambda_3 = 0$, as there are now fewer ties when compared to 1991. Structural changes to professional soccer -- leagues now reward three points for a win and one point for a tie, instead of two points for a win and one point for a tie -- are likely the cause, and thus using $\lambda_3 = 0$ in models of goal outcomes is more plausible.
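The role of $\lambda_{3}$ as the covariance term is easiest to see through the trivariate reduction $Y_H = X_1 + X_3$, $Y_A = X_2 + X_3$ with independent Poisson variables $X_j$. The following Python sketch (not the Stan code used later; the rates are assumed, illustrative values) simulates from this construction and recovers $\lambda_3$ as the empirical covariance.
\begin{verbatim}
# Trivariate-reduction simulation of the bivariate Poisson in Model (1).
import numpy as np

rng = np.random.default_rng(42)
lam1, lam2, lam3 = 1.4, 1.1, 0.1          # illustrative rates
x1 = rng.poisson(lam1, size=100_000)
x2 = rng.poisson(lam2, size=100_000)
x3 = rng.poisson(lam3, size=100_000)
y_home, y_away = x1 + x3, x2 + x3

print(y_home.mean(), y_away.mean())       # approx lam1 + lam3, lam2 + lam3
print(np.cov(y_home, y_away)[0, 1])       # approx lam3
\end{verbatim}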
There are a few extensions of bivariate Poisson to note. \cite{karlis2003analysis} propose diagonally inflated versions of Model (\ref{eqn:1}), and also include team indicators for both home and away teams in $\lambda_{3}$, in order to test whether the home or away teams controlled the amount of covariance in game outcomes. However, models fit on soccer goals did not warrant either of these additional parameterizations. \cite{baio2010bayesian} use a Bayesian version of the bivariate Poisson that explicitly incorporates shrinkage to team strength estimates. Additionally, \cite{koopman2015dynamic} allows for team strength specifications to vary stochastically within a season, as in a state-space model \citep{glickman1998state}. Though Models (\ref{eqn:2}) and (\ref{eqn:3}) cannot pick up team strengths that vary within a season, estimating such trends across 17 leagues would be difficult to scale; \cite{koopman2015dynamic}, for example, looked only at the English Premier League. Inclusion of time-varying team strengths, in addition to an assessment of team strengths post-Covid versus pre-Covid, is an opportunity for future work.
\subsection{Extending bivariate Poisson to changes in the home advantage}
\subsubsection{Goal Outcomes}
We extend Model (\ref{eqn:2}) to consider post-Covid changes in the HA for goals using Model (\ref{eqn:3}).
\begin{equation}\label{eqn:3}
\begin{gathered}
(Y_{Hi}, Y_{Ai}) = BP(\lambda_{1i}, \lambda_{2i}, \lambda_{3i}), \\
\textrm{log}(\lambda_{1i}) = \mu_{ks} + T_k\times(I_{pre-Covid}) + T'_k\times(I_{post-Covid}) + \alpha_{H_{i}ks} + \delta_{A_{i}ks}, \\
\textrm{log}(\lambda_{2i}) = \mu_{ks} + \alpha_{A_{i}ks} + \delta_{H_{i}ks}, \\
\textrm{log}(\lambda_{3i}) = \gamma_{k},
\end{gathered}
\end{equation}
\noindent where $T_k'$ is the post-Covid home advantage in league $k$, and $I_{pre-Covid}$ and $I_{post-Covid}$ are indicator variables for whether or not the match took place before or after the restart date shown in Table \ref{tab:3}. Of particular interest will be the comparison of estimates of $T_k$ and $T_k'$.
\subsubsection{Yellow Card Outcomes}
A similar version, Model (\ref{eqn:4}), is used for yellow cards. Let $Z_{Hi}$ and $Z_{Ai}$ be the yellow cards given to the home and away teams in game $i$. We assume $Z_{Hi}$ and $Z_{Ai}$ are bivariate Poisson such that
\begin{equation}\label{eqn:4}
\begin{gathered}
(Z_{Hi}, Z_{Ai}) = BP(\lambda_{1i}, \lambda_{2i}, \lambda_{3i}), \\
\textrm{log}(\lambda_{1i}) = \mu_{ks} + T_k\times(I_{pre-Covid}) + T'_k\times(I_{post-Covid}) + \tau_{H_{i}ks} , \\
\textrm{log}(\lambda_{2i}) = \mu_{ks} + \tau_{A_{i}ks}, \\
\textrm{log}(\lambda_{3i}) = \gamma_{k},
\end{gathered}
\end{equation}
\noindent where $\tau_{ks} \sim N(0, \sigma_{team, k}^2)$. Implicit in Model (\ref{eqn:4}), relative to Models (\ref{eqn:2}) and (\ref{eqn:3}), is that teams control their own yellow card counts, and not their opponents', and that tendencies for team counts to correlate are absorbed in $\lambda_{3i}$.
\subsubsection{Model Fits in Stan}
We use Stan, open-source statistical software designed for Bayesian inference with MCMC sampling, to fit models for both goals and yellow cards separately for each league $k$. We choose Bayesian MCMC approaches over the EM algorithm \citep{karlis2003analysis, karlis2005bivariate} to obtain both (i) posterior distributions of the change in home advantage and (ii) posterior probabilities that home advantage declined in each league. No paper referenced in Table \ref{tab:1} assessed HA change probabilistically.
We fit two versions of Models (\ref{eqn:3}) and (\ref{eqn:4}), one with $\lambda_3 = 0$, and a second with $\lambda_3 > 0$. For models where $\lambda_3 = 0$, prior distributions for the parameters in Models (\ref{eqn:3}) and (\ref{eqn:4}) are as follows. These prior distributions are non-informative and do not impose any outside knowledge on parameter estimation.
\begin{equation}
\begin{gathered}
\mu_{ks} \sim N(0, 25), \\
\alpha_{ks} \sim N(0, \sigma_{att, k}^2),\\
\delta_{ks} \sim N(0, \sigma_{def, k}^2), \\
\tau_{ks} \sim N(0, \sigma_{team, k}^2), \\
\sigma_{att,k} \sim \text{Inverse-Gamma}(1, 1), \\
\sigma_{def,k} \sim \text{Inverse-Gamma}(1, 1),\\
\sigma_{team,k} \sim \text{Inverse-Gamma}(1, 1),\\
T_k \sim N(0, 25), \\
T'_k \sim N(0, 25) \nonumber
\end{gathered}
\end{equation}
For models with $\lambda_3 > 0$, empirical Bayes priors were used for $T_k$ and $T'_k$ in order to aid convergence. Namely, let $\widehat{T}_{k}$ and $\widehat{T'}_{k}$ be the posterior mean estimates of the pre-Covid and post-Covid HA for league $k$, respectively, from the corresponding model with $\lambda_3 = 0$. We let
\begin{equation}
\begin{gathered}
\overline{T}_{.} = \text{mean(}\{\widehat{T}_1, ..., \widehat{T}_{17}\}) \\
\overline{T'}_{.} = \text{mean(}\{\widehat{T'}_1, ..., \widehat{T'}_{17}\}) \\
s = 3\times\text{SD(}\{\widehat{T}_1, ..., \widehat{T}_{17}\}) \\
s' = 3\times\text{SD(}\{\widehat{T'}_1, ..., \widehat{T'}_{17}\})
\nonumber
\end{gathered}
\end{equation}
Priors on $T_k$, $T'_k$, and $\gamma_{k}$ for the variants of Models (\ref{eqn:3}) and (\ref{eqn:4}) with $\lambda_3 > 0$ are as follows:
\begin{equation}
\begin{gathered}
T_k \sim N(\overline{T}_{.}, s^2) \\
T'_k \sim N(\overline{T'}_{.}, s'^2) \\
\gamma_{k} \sim N\left(0, \frac{1}{2}\right) \text{ (Goals)} \\
\gamma_{k} \sim N(0, 2) \text{ (Yellow Cards)}
\nonumber
\end{gathered}
\end{equation}
The priors on $T_k$ and $T'_k$ are weakly informative; the prior variance is 9 times as large as the observed variance of $\{\widehat{T}_1, ..., \widehat{T}_{17}\}$ estimated in the corresponding $\lambda_3 = 0$ model variation. As $\gamma_k$ represents the correlation term for goals/yellow cards and exists on the log-scale, its priors are not particularly informative, and they allow for values of $\lambda_3$ that far exceed typical numbers of goals and yellow cards per game. Overall, our use of priors is not motivated by a desire to incorporate domain expertise; instead, the Bayesian framework is used to obtain posterior probabilities as a tool to assess changes in HA.
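As a concrete illustration of the empirical Bayes step, the hyperparameters can be computed directly from the 17 posterior means of the $\lambda_3 = 0$ fits; the Python sketch below uses the pre-Covid posterior means later reported in Table~\ref{tab:4} purely as an example, and assumes a sample standard deviation.
\begin{verbatim}
# Empirical Bayes hyperparameters for the lambda_3 > 0 prior on T_k.
import numpy as np

t_hat = np.array([0.161, 0.239, 0.409, 0.306, 0.234, 0.231, 0.346, 0.315,
                  0.356, 0.254, 0.236, 0.271, 0.246, 0.191, 0.256, 0.204,
                  0.180])                 # lambda_3 = 0 posterior means (Table 4)

prior_mean = t_hat.mean()                 # \bar{T}
prior_sd = 3 * t_hat.std(ddof=1)          # s = 3 x SD (sample SD assumed here)
print(prior_mean, prior_sd)               # prior for T_k is N(prior_mean, prior_sd^2)
\end{verbatim}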
For models with $\lambda_3 = 0$, Models (\ref{eqn:3}) and (\ref{eqn:4}) were fit using 3 parallel chains, each made up of 7000 iterations, and a burn-in of 2000 draws. When $\lambda_3 > 0$ was assumed, Models (\ref{eqn:3}) and (\ref{eqn:4}) were fit using 3 parallel chains, each with 20000 iterations, and a burn-in of 10000 draws. Parallel chains were used to improve the computation time needed to draw a suitable number of posterior samples for inference. Posterior samples were drawn using the default Stan MCMC algorithm, Hamiltonian Monte Carlo (HMC) with No U-Turn Sampling (NUTS) \mbox{\citep{standocs}}.
To check for model convergence, we examine the $\widehat R$ statistic \citep{gelman1992, brooks1998} for each parameter. If $\widehat R$ statistics are near 1, that indicates convergence \citep{bda3}. To check for the informativeness of a parameter's posterior distribution, we use effective sample size (ESS, \cite{bda3}), which uses the relative independence of draws to equate the posterior distribution to the level of precision achieved in a simple random sample.
For goals, we present results for Model (\ref{eqn:3}) with $\lambda_3 = 0$, and for yellow cards, we present results with Model (\ref{eqn:4}) and $\lambda_3 > 0$. Henceforth, any reference to those models assumes such specifications, unless explicitly stated otherwise. All data and code for running and replicating our analysis are found at \url{https://github.com/lbenz730/soccer\_ha\_covid}.
\section{Simulation}
\label{Sec4}
\subsection{Simulation Overview}
Most approaches for evaluating bivariate Poisson regression have focused on model fit \citep{karlis2005bivariate} or prediction. For example, \cite{ley2019ranking} found bivariate Poisson matched or exceeded predictions of paired comparison models, as judged by rank probability score, on unknown game outcomes. \cite{tsokos2019modeling} also compared paired comparison models to bivariate Poisson, with a particular focus on methods for parameter estimation, and found the predictive performances to be similar. As will be our suggestion, \cite{tsokos2019modeling} treated each league separately to account for underlying differences in the distributions of game outcomes. Bivariate Poisson models have also compared favorably with betting markets \citep{koopman2015dynamic}.
We use simulations to better understand the accuracy and operating characteristics of bivariate Poisson and other models in terms of estimating soccer's home advantage. There are three steps to our simulations: (i) deriving team strength estimates, (ii) simulating game outcomes under assumed home advantages, and (iii) modeling the simulated game outcomes to estimate that home advantage. Exact details of each of these three steps are shown in the Appendix; we summarize here.
We derive team strength estimates to reflect both the range and correlations of attacking and defending estimates found in the 17 professional soccer leagues in our data. As in \mbox{\cite{jakewpoisson}}, team strength estimates are simulated across single seasons of soccer using the bivariate Normal distribution. To assess if the correlation of team strengths (abbreviated as $\rho*$) affects home advantage estimates, we use $\rho* \in \{-0.8, -0.4, 0\}$, where negative values reflect that teams that typically score more goals also tend to allow fewer goals.
Two data generating processes are used to simulate home and away goal outcomes. The first reflects Model (\ref{eqn:2}), where goals are simulated under a bivariate Poisson distribution. The second reflects a bivariate Normal distribution. Although bivariate Poisson is more plausible for soccer outcomes \mbox{\citep{karlis2003analysis}}, using bivariate Normal allows us to better understand how a bivariate Poisson model can estimate HA under alternative generating processes. For both data generating processes, we fix a simulated home advantage $T*$, for $T* \in \{0, 0.25, 0.5\}$, to roughly reflect ranges of goal differential benefits for being the home team, as found in Figure 1 of \mbox{\cite{brysoncausal}}.
Three candidate models are fit. First, we use linear regression models of goal differential as a function of home and away team fixed effects and a term for the home advantage, versions of which were used by \mbox{\cite{brysoncausal}}, \mbox{\cite{scoppa2020social}}, \mbox{\cite{krawczyk2020home}} and \mbox{\cite{endrich2020home}}. Second, we use Bayesian paired comparison models, akin to \cite{tsokos2019modeling} and \mbox{\cite{ley2019ranking}}, where goal differential is modeled as a function of differences in team strength, as well as the home advantage. Finally, we fit Model (\ref{eqn:2}) with $\lambda_3 = 0$. Recall that when $\lambda_3 = 0$, the bivariate Poisson in Equation \ref{eqn:2} reduces to the product of two independent Poisson distributions. The $\lambda_3 = 0$ bivariate Poisson model variant was chosen for use in simulations given that such a choice has proven suitable for modeling goal outcomes in recent years \mbox{\citep{groll2018dependency}}, and furthermore the $\lambda_3 = 0$ variant of Model (\ref{eqn:3}) will be presented in Section \ref{sec:612}.
A total of 100 seasons were simulated for each combination of $\rho*$ and $T*$ using each of the two data generating process, for a total of 1800 simulated seasons worth of data.
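The sketch below condenses one such simulated season under the bivariate Poisson data generating process (team strengths drawn from a bivariate normal with correlation $\rho*$, plus a fixed home advantage). The team-strength variance and the placement of the home advantage on the log scale are simplifying assumptions made only for this illustration; the full simulation details are given in the Appendix.
\begin{verbatim}
# One simulated double-round-robin season under a bivariate Poisson DGP
# with lambda_3 = 0 (illustrative parameter values).
import itertools
import numpy as np

rng = np.random.default_rng(1)
n_teams, mu, T_star, rho_star = 20, 0.15, 0.25, -0.4

sigma2 = 0.04                                  # assumed team-strength variance
cov = np.array([[sigma2, rho_star*sigma2],
                [rho_star*sigma2, sigma2]])
strengths = rng.multivariate_normal([0.0, 0.0], cov, size=n_teams)
att, dfn = strengths[:, 0], strengths[:, 1]

games = []
for home, away in itertools.permutations(range(n_teams), 2):
    lam_home = np.exp(mu + T_star + att[home] + dfn[away])
    lam_away = np.exp(mu + att[away] + dfn[home])
    games.append((home, away, rng.poisson(lam_home), rng.poisson(lam_away)))
\end{verbatim}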
\subsection{Simulation Results}
Table \ref{tab:2} shows mean absolute bias (MAB) and mean bias (MB) of home advantage estimates from each of the three candidate models (linear regression, paired comparison, and bivariate Poisson) under the two data generating processes (bivariate Poisson and bivariate Normal). Each bias is shown on the goal difference scale.
\begin{table}[ht]
\centering
\begin{tabular}{|c|cc|cc|cc|}
\hline
& \multicolumn{6}{c|}{$T* = 0$}\\
& \multicolumn{2}{c}{$\rho* = -0.8$}& \multicolumn{2}{c}{$\rho* = -0.4$}& \multicolumn{2}{c|}{$\rho* = 0$}\\
\cline{2-7}
Model & MAB & MB & MAB & MB & MAB & MB\\
\hline
&\multicolumn{6}{c|}{\textbf{Data Generating Process: Bivariate Poisson}}\\
\hline
Bivariate Poisson & 0.058 & -0.005 & 0.051 & -0.005 & 0.053 & -0.003 \\
Paired Comparisons & 0.065 & -0.005 & 0.058 & -0.005 & 0.059 & -0.003 \\
Linear Regression & 0.399 & 0.020 & 0.403 & -0.090 & 0.382 & -0.029 \\
\hline
&\multicolumn{6}{c|}{\textbf{Data Generating Process: Bivariate Normal}}\\
\hline
Bivariate Poisson & 0.058 & 0.006 & 0.060 & -0.010 & 0.061 & 0.007 \\
Paired Comparisons & 0.059 & 0.006 & 0.061 & -0.010 & 0.062 & 0.008 \\
Linear Regression & 0.460 & 0.036 & 0.480 & -0.070 & 0.446 & 0.032 \\
\hline
& \multicolumn{6}{c|}{$T* = 0.25$}\\
& \multicolumn{2}{c}{$\rho* = -0.8$}& \multicolumn{2}{c}{$\rho* = -0.4$}& \multicolumn{2}{c|}{$\rho* = 0$}\\
\cline{2-7}
Model & MAB & MB & MAB & MB & MAB & MB\\
\hline
&\multicolumn{6}{c|}{\textbf{Data Generating Process: Bivariate Poisson}}\\
\hline
Bivariate Poisson & 0.061 & 0.001 & 0.061 & 0.000 & 0.064 & 0.015 \\
Paired Comparisons & 0.075 & 0.034 & 0.075 & 0.034 & 0.082 & 0.049 \\
Linear Regression & 0.424 & 0.100 & 0.474 & 0.036 & 0.425 & -0.054 \\
\hline
&\multicolumn{6}{c|}{\textbf{Data Generating Process: Bivariate Normal}}\\
\hline
Bivariate Poisson & 0.073 & -0.019 & 0.068 & -0.017 & 0.084 & -0.015 \\
Paired Comparisons & 0.074 & -0.015 & 0.068 & -0.013 & 0.085 & -0.010 \\
Linear Regression & 0.485 & -0.070 & 0.454 & 0.070 & 0.427 & -0.006 \\
\hline
& \multicolumn{6}{c|}{$T* = 0.5$}\\
& \multicolumn{2}{c}{$\rho* = -0.8$}& \multicolumn{2}{c}{$\rho* = -0.4$}& \multicolumn{2}{c|}{$\rho* = 0$}\\
\cline{2-7}
Model & MAB & MB & MAB & MB & MAB & MB\\
\hline
&\multicolumn{6}{c|}{\textbf{Data Generating Process: Bivariate Poisson}}\\
\hline
Bivariate Poisson & 0.065 & 0.001 & 0.072 & -0.012 & 0.071 & -0.004 \\
Paired Comparisons & 0.094 & 0.069 & 0.091 & 0.047 & 0.089 & 0.056 \\
Linear Regression & 0.453 & 0.138 & 0.485 & 0.036 & 0.529 & 0.083 \\
\hline
&\multicolumn{6}{c|}{\textbf{Data Generating Process: Bivariate Normal}}\\
\hline
Bivariate Poisson & 0.070 & -0.021 & 0.067 & 0.007 & 0.063 & -0.004 \\
Paired Comparisons & 0.070 & -0.015 & 0.069 & 0.013 & 0.063 & 0.002 \\
Linear Regression & 0.549 & 0.060 & 0.450 & -0.021 & 0.549 & -0.042 \\
\hline
\end{tabular}
\caption{Mean absolute bias (MAB) and mean bias (MB) in 1800 estimates of the home advantage in a single season of soccer games between 20 teams, 100 for each combination of data generating process, team strength correlation ($\rho*$) and home advantage ($T*$).
Estimates produced using linear regression, paired comparison, and bivariate Poisson regression models. The mean absolute bias for bivariate Poisson regression compares favorably; when the data generating process of goal outcomes is bivariate Poisson, bivariate Poisson models most accurately estimate the home advantage. Furthermore, when the data generating process of goal outcomes is bivariate normal, bivariate Poisson and paired comparison models perform similarly, with the bivariate Poisson model slightly more accurate.}
\label{tab:2}
\end{table}
When goal outcomes are simulated using the bivariate Poisson distribution, bivariate Poisson model estimates of home advantage average an absolute bias of roughly 0.06-0.07, between 11 and 31 percent lower than the corresponding absolute biases from paired comparison models. Furthermore, for larger home advantages, the paired comparison model is directionally biased and tends to overestimate the home advantage.
Both bivariate Poisson and paired comparison models compare favorably to linear regression. The absolute biases from linear regression models vary between 0.40 and 0.53, and tend to increase with larger home advantages. More generally, when using these models across a full season's worth of soccer games, one could expect the estimate of the home advantage from a linear regression (with home and away team fixed effects) to be off by nearly half a goal (in unknown direction), which is about six times the amount of bias shown when estimating using bivariate Poisson.
When goal outcomes are simulated using the bivariate normal distribution, bivariate Poisson and paired comparison models capture the known home advantage with equivalent accuracy (mean absolute biases within $3\%$ of one another, with bivariate Poisson slightly better). Linear regression performs poorly under this data generating process as well, with average absolute biases ranging from 0.427 to 0.549.
Overall, there do not seem to be any noticeable patterns across $\rho*$, the range of correlation between team strengths.
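For completeness, the two summary metrics in Table~\ref{tab:2} are computed as follows; this is a minimal sketch with made-up estimates, where $T*$ denotes the known simulated home advantage.
\begin{verbatim}
# Mean absolute bias (MAB) and mean bias (MB) of home advantage estimates.
import numpy as np

t_true = 0.25                                     # known simulated HA
t_est = np.array([0.31, 0.22, 0.19, 0.27, 0.30])  # made-up estimates from 5 seasons

mab = np.mean(np.abs(t_est - t_true))
mb = np.mean(t_est - t_true)
print(mab, mb)
\end{verbatim}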
\section{Data}
\label{Sec5}
\begin{table}
\caption{Breakdown of leagues used in analysis. Data consists of the 5 most recent seasons between 2015-2020. \# of games corresponds to sample sizes for the goals model. Due to different levels of missingness between goals and yellow cards in the data, 5 leagues have a smaller \# of games in their respective pre-Covid yellow card sample, while 1 league has a smaller \# of games in its post-Covid yellow card sample. Restart date refers to the date that the league resumed play after an interrupted 2019-20 season or a delayed start of the 2020 season (Norway/Sweden). }
\label{tab:3}
\begin{tabular}{llccccc}
\hline\noalign{\smallskip}
League & Country & Tier & Restart Date & \Centerstack{Pre-Covid \\ Games} & \Centerstack{Post-Covid \\ Games} & \Centerstack{\# of \\ Team-Seasons} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
German Bundesliga & Germany & 1 & 2020-05-16 & 1448 & 82 & 90 \\
German 2. Bundesliga & Germany & 2 & 2020-05-16 & 1449 & 81 & 90 \\
Danish Superliga & Denmark & 1 & 2020-05-31 & 1108 & 74 & 68 \\
Austrian Bundesliga & Austria & 1 & 2020-06-02 & 867 & 63 & 54 \\
Portuguese Liga & Portugal & 1 & 2020-06-03 & 1440 & 90 & 90 \\
Greek Super League & Greece & 1 & 2020-06-06 & 1168 & 58 & 78 \\
Spanish La Liga 2 & Spain & 2 & 2020-06-10 & 2233 & 129 & 110 \\
Spanish La Liga & Spain & 1 & 2020-06-11 & 1790 & 110 & 100 \\
Turkish Super Lig & Turkey & 1 & 2020-06-13 & 1460 & 70 & 90 \\
Swedish Allsvenskan & Sweden & 1 & 2020-06-14 & 960 & 198 & 80 \\
Norwegian Eliteserien & Norway & 1 & 2020-06-16 & 960 & 175 & 80 \\
English Premier League & England & 1 & 2020-06-17 & 1808 & 92 & 100 \\
Italy Serie B & Italy & 2 & 2020-06-17 & 2046 & 111 & 105 \\
Swiss Super League & Switzerland & 1 & 2020-06-19 & 836 & 65 & 50 \\
Russian Premier Liga & Russia & 1 & 2020-06-19 & 1136 & 64 & 80 \\
English League Championship & England & 2 & 2020-06-20 & 2673 & 113 & 120 \\
Italy Serie A & Italy & 1 & 2020-06-20 & 1776 & 124 & 100\\
\end{tabular}
\end{table}
The data used for this analysis are comprised of games from 17 soccer leagues in 13 European countries spanning 5 seasons between 2015 and 2020. The leagues selected for use in this analysis were among the first leagues to return to play following a suspension of the season due to the Covid-19 pandemic. Typically, European countries have hierarchies of leagues (also referred to as divisions, tiers, or flights), with teams competing to be promoted to a better league and/or to avoid being relegated to the league below. For each of the 13 countries used in this analysis, the top league in that country was selected. Additionally, 2nd tier leagues were included for England, Spain, Italy and Germany, the countries among European soccer's ``Big 5'' to resume domestic competition (the fifth ``Big 5'' country, France, cancelled the conclusion of its leagues' 2019-20 seasons). Only games from intra-league competition were used in this analysis, and games from domestic inter-league cup competitions (such as England's FA Cup), and inter-country competitions (such as the UEFA Champions League), were dropped. A full summary of the leagues and games used in this paper is presented in Table \ref{tab:3}.
Data was scraped from Football Reference \citep{fbref} on 2020-10-28. For each league, the five most recent seasons' worth of data were pulled, not including the ongoing 2020-21 season. For 15 of the 17 leagues, this included the 2015-16 season through the 2019-20 season. Unlike the majority of European soccer leagues, which run from August-May, the top flights in Sweden and Norway run from March-November. These leagues never paused an ongoing season due to the Covid-19 pandemic, but rather delayed the start of their respective 2020 seasons until June. As a result, the data used in this analysis comprises five full seasons' worth of data for all the leagues outside of Sweden and Norway, while those two countries have four full seasons of data, plus all games in the 2020 season through 2020-10-28.
Throughout this analysis, we refer to pre-Covid and post-Covid samples. For each league, the pre-Covid sample constitutes all games prior to the league's restart date, listed in Table \ref{tab:3}, while the post-Covid sample is comprised of all games on or after the league's restart date. In nearly all cases, the league's restart date represents a clean divide between games that had fans in attendance and games that did not have any fans in attendance due to Covid restrictions. One exception is a German Bundesliga game between Borussia Monchengladbach and Cologne on 2020-03-11 that was played in an empty stadium just before the German Bundesliga paused its season. Additionally, seven games in Italy Serie A were played under the same circumstances. While leagues returned from their respective hiatuses without fans in attendance, some, such as the Danish Superliga, Russian Premier Liga, and Norwegian Eliteserien, began allowing very reduced attendance by the end of the sample.
Unfortunately, attendance numbers attained from Football Reference were not always available and/or accurate, and as such, we cannot systematically identify the exact number of games in the sample that had no fans in attendance prior to the league suspending games, or the exact number of games in the post-Covid sample that had fans in attendance. Relatedly, there are several justifications for using the pre-Covid/post-Covid sample split based on league restart date:
\begin{enumerate}
\item Any number of games in the pre-Covid sample without fans in attendance is minute compared to the overall size of any league's pre-Covid sample.
\item Several-month layoffs with limited training are unprecedented, and possibly impact team strengths and player skill, which in turn may impact game results in the post-Covid sample beyond any possible change in home advantage.
\item Any games in a league's post-Covid sample that were played in front of fans had significantly reduced attendance compared to the average attendance of a game in the pre-Covid sample.
\item The majority of leagues don't have a single game in the post-Covid sample with any fans in attendance, while all leagues have games in the post-Covid sample without fans.
\end{enumerate}
Games from the recently started 2020-21 season are not considered, as leagues have diverged from one another in terms of off-season structure and policies allowing fans to return to the stands.
Games where home/away goals were unavailable were removed for the goals model, and games where home/away yellow cards were unavailable were removed for the yellow cards model. The number of games displayed in Table \ref{tab:3} reflects the sample sizes used in the goals model. The number of games where goal counts were available always matched or exceeded the number of games where yellow card counts were available. Across 5 leagues, 92 games from the pre-Covid samples used in Model (\ref{eqn:3}) were missing yellow card counts, and had to be dropped when fitting Model (\ref{eqn:4}) (2 in Italy Serie B, 2 in the English League Championship, 12 in the Danish Superliga, 34 in the Turkish Super Lig, and 42 in Spanish La Liga 2). Four games had to be dropped from the Russian Premier Liga's post-Covid sample for the same reason.
\section{Results}
\label{Sec6}
\subsection{Goals}
\subsubsection{Model Fit}
\label{sec:611}
Results from goals Model (\ref{eqn:3}), using $\lambda_3 = 0$ for all leagues, are shown below. We choose Model (\ref{eqn:3}) with $\lambda_3= 0$ because, across our 17 leagues of data, the correlation in home and away goals per game varied between -0.16 and 0.07.
Using this model, all $\widehat R$ statistics ranged from 0.9998 to 1.003, providing strong evidence that the model properly converged. Additionally, the effective sample sizes are provided in Table \ref{tab:apdx1}. ESS are sufficiently large, especially for the HA parameters of interest, $T_k$ and $T'_k$, suggesting enough draws were taken to conduct inference.
Figure \ref{fig:3} (in the Appendix) shows an example of posterior means of attacking ($\alpha_{ks}$) and defensive ($\delta_{ks}$) team strengths for one season of the German Bundesliga. In Figure \ref{fig:3}, the top team (Bayern Munich) stands out with top offensive and defensive team strength metrics. However, the correlation between offensive and defensive team strength estimates is weak, reflecting the need for models to incorporate both aspects of team quality.
\subsubsection{Home Advantage}
\label{sec:612}
The primary parameters of interest in Model (\ref{eqn:3}) are $T_k$ and $T'_k$, the pre- and post-Covid home advantages for each league $k$, respectively. These HA terms are shown on a log-scale, and represent the additional increase in the home team's log goal expectation, relative to a league average ($\mu_{ks}$), and after accounting for team and opponent ($\alpha_{ks}$ and $\delta_{ks}$) effects.\footnote{In our simulations in Section \ref{Sec4}, we transformed HA estimates to the goal difference scale, in order to compare to estimates from linear regression.}
Posterior distributions for $T_k$ and $T'_k$ are presented in Figure \ref{fig:1}. Clear differences exist between several of the 17 leagues' posterior distributions of $T_k$. For example, the posterior mean of $T_k$ in the Greek Super League is 0.409, or about 2.5 times the posterior mean in the Austrian Bundesliga (0.161). The non-overlapping density curves between these leagues add further support for our decision to estimate $T_k$ separately for each league, as opposed to one $T$ across all of Europe.
\begin{figure}
\centering
\includegraphics[scale=0.10]{figure1.png}
\caption{Posterior distributions of $T_k$ and $T'_k$, the pre-Covid and post-Covid HAs for goals. Larger values of $T_k$ and $T'_k$ indicate larger home advantages. Prior to the Covid-19 pandemic, the Greek Super League and Norwegian Eliteserien had the largest home advantages for goals, while the Austrian Bundesliga and Swiss Super League had the smallest home advantages for goals. Across the 17 leagues in the sample, a range of differences exist between posterior distributions of $T_k$ and $T'_k$.}
\label{fig:1}
\end{figure}
\begin{table}[ht]
\caption{Comparison of posterior means for pre-Covid and post-Covid goals HA parameters from Model (\ref{eqn:3}), $\widehat{T}_{k}$ and $\widehat{T'}_{k}$, respectively. Larger values of $T_k$ and $T'_k$ indicate larger home advantages. Relative and absolute differences between $\widehat{T'}_{k}$ and $\widehat{T}_{k}$ are also shown. Probabilities of decline in HA without fans, $P(T'_k < T_k)$, are estimated from posterior draws. We estimate the probability of a decline in HA without fans to exceed 0.9 in 7 of 17 leagues, and to exceed 0.5 in 11 of 17 leagues.}
\label{tab:4}
\centering
\begin{tabular}{lccccc}
\hline \\
League & $\widehat{T}_{k}$ & $\widehat{T'}_{k}$ & $\widehat{T'}_{k}$ - $\widehat{T}_{k}$ & \% Change & $P(T'_k < T_k)$ \\ \\
\hline
Austrian Bundesliga & 0.161 & -0.202 & -0.363 & -225.7\% & 0.999 \\
German Bundesliga & 0.239 & -0.024 & -0.263 & -110.2\% & 0.995 \\
Greek Super League & 0.409 & 0.167 & -0.243 & -59.3\% & 0.972 \\
Spanish La Liga & 0.306 & 0.149 & -0.157 & -51.3\% & 0.959 \\
English League Championship & 0.234 & 0.114 & -0.119 & -51.1\% & 0.912 \\
Swedish Allsvenskan & 0.231 & 0.108 & -0.123 & -53.3\% & 0.907 \\
Spanish La Liga 2 & 0.346 & 0.232 & -0.114 & -32.9\% & 0.903 \\
Italy Serie B & 0.315 & 0.232 & -0.083 & -26.4\% & 0.825 \\
Norwegian Eliteserien & 0.356 & 0.295 & -0.061 & -17.1\% & 0.745 \\
Russian Premier Liga & 0.254 & 0.204 & -0.050 & -19.6\% & 0.655 \\
Danish Superliga & 0.236 & 0.206 & -0.030 & -12.9\% & 0.610 \\
Turkish Super Lig & 0.271 & 0.290 & 0.019 & 7.0\% & 0.419 \\
English Premier League & 0.246 & 0.264 & 0.018 & 7.2\% & 0.416 \\
German 2. Bundesliga & 0.191 & 0.249 & 0.058 & 30.5\% & 0.266 \\
Portuguese Liga & 0.256 & 0.338 & 0.082 & 32.2\% & 0.194 \\
Italy Serie A & 0.204 & 0.292 & 0.088 & 43.4\% & 0.125 \\
Swiss Super League & 0.180 & 0.362 & 0.182 & 101.1\% & 0.043 \\
\hline
\end{tabular}
\end{table}
Table \ref{tab:4} compares posterior means of $T_k$ (denoted $\widehat{T}_{k}$) with those of $T'_k$ (denoted $\widehat{T'}_{k}$) for each of the 17 leagues. The posterior mean for HA without fans is smaller than the corresponding posterior mean for HA with fans $(\widehat{T'}_{k} < \widehat{T}_{k})$ in 11 of the 17 leagues. In the remaining 6 leagues, our estimate of post-Covid HA is larger than pre-Covid HA ($\widehat{T'}_{k} > \widehat{T}_{k}$).
Our Bayesian framework also allows for probabilistic interpretations regarding the likelihood that HA decreased within each league. Posterior probabilities of HA decline, $P(T'_k < T_k)$, are also presented in Table \ref{tab:4}. The 3 leagues with the largest declines in HA, both in absolute and relative terms, were the Austrian Bundesliga $(\widehat{T}_{k} = 0.161, \widehat{T'}_{k} = -0.202)$, the German Bundesliga $(\widehat{T}_{k} = 0.239, \widehat{T'}_{k} = -0.024)$, and the Greek Super League $(\widehat{T}_{k} = 0.409, \widehat{T'}_{k} = 0.167)$. The Austrian Bundesliga and German Bundesliga were the only 2 leagues to have post-Covid posterior HA estimates below 0, perhaps suggesting that HA disappeared in these leagues altogether in the absence of fans. We find it interesting to note that among the leagues with the 3 largest declines in HA are the leagues with the highest (Greek Super League) and lowest (Austrian Bundesliga) pre-Covid HA.
We estimate the probability that HA declined in the absence of fans, $P(T'_k < T_k)$, to be 0.999, 0.995, and 0.972 in the top flights in Austria, Germany, and Greece, respectively. These 3 leagues, along with the English League Championship (0.912), Swedish Allsvenskan (0.907), and both tiers in Spain (0.959 for Spanish La Liga, 0.903 for Spanish La Liga 2), comprise seven leagues where we estimate a decline in HA with probability at least 0.9.
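Operationally, each such probability is simply the share of joint posterior draws in which the post-Covid HA falls below the pre-Covid HA. A minimal Python sketch follows; the draws below are simulated stand-ins, not output from the fitted Stan models.
\begin{verbatim}
# Posterior probability of a decline in HA from MCMC draws.
import numpy as np

rng = np.random.default_rng(7)
draws_T = rng.normal(0.24, 0.04, size=15_000)        # stand-in pre-Covid draws
draws_T_prime = rng.normal(0.11, 0.09, size=15_000)  # stand-in post-Covid draws

prob_decline = np.mean(draws_T_prime < draws_T)
print(prob_decline)
\end{verbatim}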
Two top leagues -- the English Premier League $(\widehat{T}_{k} = 0.246, \widehat{T'}_{k} = 0.264)$ and Italy Serie A $(\widehat{T}_{k} = 0.204, \widehat{T'}_{k} = 0.292)$ -- were among the six leagues with estimated post-Covid HA greater than pre-Covid HA. The three leagues with largest increase in HA without fans were the Swiss Super League $(\widehat{T}_{k} = 0.180, \widehat{T'}_{k} = 0.362)$, Italy Serie A $(\widehat{T}_{k} = 0.204, \widehat{T'}_{k} = 0.292)$, and the Portuguese Liga $(\widehat{T}_{k} = 0.256, \widehat{T'}_{k} = 0.338)$.
Figure \ref{fig:4} (provided in the Appendix) shows the posterior distributions of $T_k - T'_k$, the change in goals home advantage, in each league. Though this information is also partially observed in Table \ref{tab:4} and Figure \ref{fig:1}, the non-overlapping density curves for the change in HA provide additional evidence that post-Covid changes were not uniform between leagues.
Fitting Model (\ref{eqn:3}) with $\lambda_3 > 0$ did not noticeably change inference with respect to the home advantage. For example, the probability that HA declined when assuming $\lambda_3 > 0$ was within 0.10 of the estimates shown in Table \ref{tab:4} in 14 of 17 leagues. In only one of the leagues did the estimated probability of HA decline exceed 0.9 with $\lambda_3 = 0$ and fail to exceed 0.9 with $\lambda_3 > 0$ (Swedish Allsvenskan: $P(T'_k < T_k) = 0.907$ w/ $\lambda_3 = 0$ and $0.897$ w/ $\lambda_3 > 0$).
\subsection{Yellow Cards}
\subsubsection{Model Fit}
The yellow cards model presented in this paper is Model (\ref{eqn:4}), using $\lambda_3 > 0$ for all leagues. Unlike with goals, where there was inconsistent evidence of a correlation in game-level outcomes, the correlation between home and away yellow cards per game varied between 0.10 and 0.22 among the 17 leagues.
$\widehat R$ statistics for Model (\ref{eqn:4}) ranged from 0.9999 to 1.013, providing strong evidence that the model properly converged. Effective sample sizes (ESS) for each parameter in Model (\ref{eqn:4}) are provided in Table \ref{tab:apdx2}. ESS are sufficiently large, especially for the HA parameters of interest $T_k$ and $T'_k$, suggesting that enough draws were taken to conduct inference.
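For readers wishing to reproduce such checks, the sketch below recomputes these diagnostics from raw posterior draws using the ArviZ library; the (chains, draws) array is an illustrative placeholder, and the exact fitting toolchain is not assumed here:
\begin{verbatim}
import numpy as np
import arviz as az

# Placeholder draws with shape (chains, draws); not the fitted model's output.
rng = np.random.default_rng(1)
draws = rng.normal(-0.3, 0.05, size=(4, 2500))

print(az.rhat(draws))  # split R-hat; values near 1 indicate convergence
print(az.ess(draws))   # bulk effective sample size
\end{verbatim}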
\subsubsection{Home Advantage}
\label{sec:622}
\begin{figure}
\includegraphics[scale=0.10]{figure2.png}
\caption{Posterior distributions of $T_k$ and $T'_k$, the pre-Covid and post-Covid HAs for yellow cards. Smaller (i.e. more negative) values of $T_k$ and $T'_k$ indicate larger home advantages. Prior to the Covid-19 pandemic, the English League Championship and Greek Super League had the largest home advantages for yellow cards, while the Swedish Allsvenskan and Turkish Super Lig had the smallest home advantages for yellow cards. Across the 17 leagues in the sample, a range of differences exist between posterior distributions of $T_k$ and $T'_k$.}
\label{fig:2}
\end{figure}
\begin{table}[ht]
\caption{Comparison of posterior means for pre-Covid and post-Covid yellow cards HA parameters from Model (\ref{eqn:4}), $\widehat{T}_{k}$ and $\widehat{T'}_{k}$, respectively. In the context of yellow cards, smaller (i.e. more negative) values of $T_k$ and $T'_k$ indicate larger home advantages. Relative and absolute differences between $\widehat{T'}_{k}$ and $\widehat{T}_{k}$ are also shown. Probabilities of decline in HA without fans, $P(T'_k > T_k)$, are estimated from posterior draws. We estimate the probability of a decline in HA without fans to exceed 0.9 in 5 of 17 leagues, and to exceed 0.5 in 15 of 17 leagues.}
\label{tab:5}
\centering
\begin{tabular}{lccccc}
\hline \\
League & $\widehat{T}_{k}$ & $\widehat{T'}_{k}$ & $\widehat{T'}_{k}$ - $\widehat{T}_{k}$ & \% Change & $P(T'_k > T_k)$ \\ \\
\hline \\
Russian Premier Liga & -0.404 & 0.037 & 0.441 & 109.1\% & 0.997 \\
German Bundesliga & -0.340 & 0.039 & 0.379 & 111.4\% & 0.986 \\
Portuguese Liga & -0.415 & -0.008 & 0.406 & 98.0\% & 0.984 \\
German 2. Bundesliga & -0.392 & 0.090 & 0.482 & 123.0\% & 0.982 \\
Spanish La Liga 2 & -0.359 & -0.169 & 0.190 & 52.9\% & 0.917 \\
Danish Superliga & -0.331 & -0.010 & 0.321 & 96.9\% & 0.878 \\
Austrian Bundesliga & -0.251 & 0.063 & 0.314 & 125.1\% & 0.863 \\
Greek Super League & -0.429 & -0.261 & 0.168 & 39.2\% & 0.829 \\
Italy Serie B & -0.397 & -0.223 & 0.174 & 43.8\% & 0.799 \\
Spanish La Liga & -0.269 & -0.094 & 0.176 & 65.3\% & 0.719 \\
Swedish Allsvenskan & -0.196 & -0.063 & 0.132 & 67.6\% & 0.682 \\
English League Championship & -0.478 & -0.393 & 0.085 & 17.7\% & 0.675 \\
Norwegian Eliteserien & -0.323 & -0.266 & 0.057 & 17.8\% & 0.615 \\
Turkish Super Lig & -0.199 & -0.122 & 0.077 & 38.8\% & 0.599 \\
Swiss Super League & -0.327 & -0.282 & 0.045 & 13.7\% & 0.581 \\
English Premier League & -0.293 & -0.366 & -0.073 & -24.9\% & 0.376 \\
Italy Serie A & -0.344 & -0.489 & -0.145 & -42.1\% & 0.240 \\
\hline
\end{tabular}
\end{table}
As with Model (\ref{eqn:3}) in Section \ref{sec:612}, the primary parameters of interest in Model (\ref{eqn:4}) are $T_k$ and $T'_k$, the pre- and post-Covid home advantages for each league $k$, respectively. Unlike with goals, where values of $T_k$ are positive, teams want to avoid yellow cards, and thus estimates of $T_k$ are negative. Accordingly, a post-Covid decrease in yellow card HA is reflected by $T_k < T'_k$.
As in Section \ref{sec:612}, $T_k$ and $T'_k$ are on the log scale and represent the additional increase in the home team's log yellow card expectation, relative to a league average ($\mu_{ks}$) after accounting for team and opponent ($\tau_{ks}$) tendencies. Additionally, note that the same value of $T_k$ represents a larger home advantage in a league where fewer cards are shown (i.e. smaller $\mu_{ks}$).
Posterior distributions for $T_k$ and $T'_k$ are presented in Figure \ref{fig:2}. Posterior means of $T_k$ range from -0.196 (Swedish Allsvenskan) to -0.478 (English League Championship).
Table \ref{tab:5} compares posterior means of $T_k$ (denoted $\widehat{T}_{k}$) with those of $T'_k$ (denoted $\widehat{T'}_{k}$) for each of the 17 leagues for the yellow cards model. Posterior means of $T_k$ are smaller than the corresponding posterior means of $T'_k$ $(\widehat{T}_{k} < \widehat{T'}_{k})$ in 15 of the 17 leagues, suggesting that yellow card HA declined in nearly every league examined in the absence of fans.
The two leagues with the largest relative declines in HA were the German 2. Bundesliga $(\widehat{T}_{k} = -0.392, \widehat{T'}_{k} = 0.090)$ and the Austrian Bundesliga $(\widehat{T}_{k} = -0.251, \widehat{T'}_{k} = 0.063)$. In addition to the top Austrian division and the 2nd German division, $\widehat{T'}_{k} > 0$ in the German Bundesliga $(\widehat{T}_{k} = -0.340, \widehat{T'}_{k} = 0.039)$ and Russian Premier Liga $(\widehat{T}_{k} = -0.404, \widehat{T'}_{k} = 0.037)$.
Posterior probabilities of HA decline, $P(T'_k > T_k)$, are also presented in Table \ref{tab:5}. This probability exceeds 0.9 in 5 of 17 leagues: Russian Premier Liga (0.997), German Bundesliga (0.986), Portuguese Liga (0.984), German 2. Bundesliga (0.982), and Spanish La Liga 2 (0.917).
Alternatively, $\widehat{T}_{k} > \widehat{T'}_{k}$ in 2 leagues, the English Premier League $(\widehat{T}_{k} = -0.293, \widehat{T'}_{k} = -0.366)$ and Italy Serie A $(\widehat{T}_{k} = -0.344, \widehat{T'}_{k} = -0.489)$. However, given the overlap in the pre-Covid and post-Covid density curves, this does not appear to be a significant change.
Figure \ref{fig:5} (provided in the Appendix) shows the posterior distributions of $T_k - T'_k$, the change in yellow card home advantage, in each league. In Figure \ref{fig:5}, there is little, if any, overlap between estimates of the change in Serie A's yellow card home advantage and, for example, the changes in the German 2. Bundesliga and the Russian Premier Liga, adding to the evidence that post-Covid changes in HA are not uniform across leagues.
Fitting Model (\ref{eqn:4}) with $\lambda_3 = 0$ changed inference with respect to the home advantage slightly more than was the case between the two variants of Model (\ref{eqn:3}). For example, the probability that HA declined when assuming $\lambda_3 = 0$ was within 0.10 of the estimates shown in Table \ref{tab:5} in only 9 of 17 leagues. With $\lambda_3 = 0$, we estimated the probability that HA declined to be 0.979 in the Austrian Bundesliga and 0.944 in the Danish Superliga, compared to 0.863 and 0.874, respectively, with $\lambda_3 > 0$. Other notable differences include the English Premier League and Italy Serie A, whose estimated probabilities of HA decline rose from 0.075 and 0.073 to 0.376 and 0.240, respectively. Such differences are to be expected given the much larger observed correlation in yellow cards as compared to goals, and suggest that failure to account for correlation in yellow cards between home and away teams might lead to faulty inference and incorrect conclusions about significant decreases (or increases) in home advantage.
\subsection{Examining Goals and Yellow Cards Simultaneously}
To help characterize the relationship between changes in our two outcomes of interest, Figure \ref{fig:6} (shown in the Appendix) shows the pre-Covid and post-Covid HA posterior means for both goals and yellow cards in the 17 leagues. Each arrow in Figure \ref{fig:6} starts at the posterior means of pre-Covid HA for yellow cards and goals and ends at the posterior means of post-Covid HA for yellow cards and goals.
Of the 17 leagues examined in this paper, 11 fall into the case where yellow cards and goals both experienced a decline in HA. In four leagues, the German Bundesliga, Spanish La Liga 2, Greek Super League, and Austrian Bundesliga, the probability that HA declined was greater than 0.8 for both outcomes of interest.
Despite the posterior mean HA for goals being higher post-Covid than pre-Covid, the Turkish Super Lig, German 2. Bundesliga, Portuguese Liga, and Swiss Super League show a possible decrease in yellow card HA. For example, we estimate the probability that HA for yellow cards declined to be 0.984 for the Portuguese Liga and 0.982 for the German 2. Bundesliga.
Both the English Premier League and Italy Serie A show posterior mean HAs that are higher post-Covid for both outcomes. Of the four countries where multiple leagues were examined, only Spain's pair of leagues showed similar results (a decline in HA for both outcomes). No league showed posterior means with a lower HA for goals but not for yellow cards.
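The bookkeeping in this subsection amounts to cross-tabulating the direction of the change in posterior mean HA for the two outcomes. A minimal sketch, using a hand-filled subset of the posterior means reported above and in Table \ref{tab:5} (the data frame is illustrative, not our full results table):
\begin{verbatim}
import pandas as pd

ha = pd.DataFrame({
    "league": ["English Premier League", "Italy Serie A", "German Bundesliga"],
    "goals_pre":  [ 0.246,  0.204,  0.229],
    "goals_post": [ 0.264,  0.292, -0.024],
    "cards_pre":  [-0.293, -0.344, -0.340],
    "cards_post": [-0.366, -0.489,  0.039],
})

# A decline in goals HA is post < pre; for yellow cards it is post > pre.
ha["goals_declined"] = ha["goals_post"] < ha["goals_pre"]
ha["cards_declined"] = ha["cards_post"] > ha["cards_pre"]
print(pd.crosstab(ha["goals_declined"], ha["cards_declined"]))
\end{verbatim}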
\section{Discussion}
\label{Sec7}
Our paper utilizes bivariate Poisson regression to estimate changes to the home advantage (HA) in soccer games played during the summer months of 2020, after the outbreak of Covid-19, relative to games played pre-Covid. Evidence from the 17 leagues we examined is mixed. In some leagues, evidence is overwhelming that HA declined for both yellow cards and goals. Alternatively, other leagues suggest the opposite, with some evidence that HA increased. Additionally, we use simulation to highlight the appropriateness of bivariate Poisson for home advantage estimation in soccer, particularly relative to the oft-used linear regression.
The diversity in league-level findings highlights the challenges in reaching a single conclusion about the impact of playing without fans, and implies that alternative causal mechanisms are also at play. For example, two of the five major European leagues are the German Bundesliga and Italy's Serie A. In the German Bundesliga, evidence strongly points to decreased HA ($> 99\%$ with goals), which is likely why \cite{fischer2020betting} found that broadly backing away Bundesliga teams represented a profitable betting strategy. But in Serie A, we only find a 10 percent probability that HA decreased with goal outcomes. These two results do not mesh into one common theme. Likewise, Figures \ref{fig:1}-\ref{fig:2} and Figures \ref{fig:4}-\ref{fig:5} imply that both (i) HA and (ii) changes in HA are not uniform by league.
Related, there are other changes post Covid-19 outbreak, some of which differ by league. These include, but are not limited to:
1. Leagues adopted rules allowing for five substitutions, instead of three substitutions per team per game. This rule change likely favors teams with more depth (potentially the more successful teams), and suggests that using constant estimates of team strength pre-Covid and post-Covid could be inappropriate.\footnote{As shown in Table \ref{tab:3}, however, we are limited by the number of post-Covid games in each league.}
2. Certain leagues restarted play in mid-May, while others waited until the later parts of June. An extra month away from training and club facilities could have impacted team preparedness.
3. Covid-19 policies placed restrictions on travel and personal life. When players returned to their clubs, they did so in settings that potentially impacted their training, game-plans, and rest. Additionally, all of these changes varied by country, adding credence to our suggestion that leagues be analyzed separately.
Taken wholly, estimates of the impact on HA post-Covid are less a statement about the causal effect of a lack of fans \citep{mccarrick2020home, brysoncausal}, and more a statement about changes due to both a lack of fans \emph{and} changes to training due to Covid-19. Differences in the latter could more plausibly be responsible for the heterogeneous changes we observe in HA post-Covid.
Given league-level differences in both HA and the change in HA, we do not recommend looking at the impact of ``ghost games'' using a single-number estimate alone. However, a comparison to \cite{mccarrick2020home}, who suggest an overall decline in per-game goals HA from 0.29 to 0.15 (48\%), is helpful for context. As shown in Table \ref{tab:4}, our median league-level decline in goals HA, on the log scale, is 0.07. Extrapolating from Model (\ref{eqn:3}), assuming attacking and defending team strengths of 0, and using the posterior mean for $\mu_k$ averaged across the 17 leagues, this equates to a decline in the per-game goals HA from 0.317 to 0.243 (23\%). This suggests the possibility that, when using bivariate Poisson regression, the overall change in HA is attenuated when compared to current literature.
We are also the first to offer suggestions on the simultaneous changes in HA for yellow cards and goals. While traditional soccer research has used yellow cards as a proxy for referee decisions benefiting the home team, we find that it is not always the case that changes in yellow card HA are linked to changes in goal HA. In two leagues, the German 2. Bundesliga and Portuguese Liga, there are overwhelming decreases in yellow card HA (probabilities of a decrease of at least 98\% in each), but small increases in the net HA given to home team goals. Among other explanations, this suggests that yellow cards are not directly tied to game outcomes. It could be the case that, for example, visiting teams in certain leagues fouled less often on plays that did not impact chances of scoring or conceding goals. Under this hypothesis, yellow cards are not a direct proxy for a referee-driven home advantage, and instead imply changes to player behavior without fans, as suggested by \cite{leitner2020analysis}. Alternatively, having no fan support could cause home players to incite away players less frequently. Said FC Barcelona star Lionel Messi \citep{messi}, ``It's horrible to play without fans. It's not a nice feeling. Not seeing anyone in the stadium makes it like training, and it takes a lot to get into the game at the beginning.''
Finally, we use simulations to highlight limitations of using linear regression with goal outcomes in soccer. The mean absolute bias in HA estimates is roughly six times higher when using linear regression, relative to bivariate Poisson. Absolute bias when estimating HA using bivariate Poisson also compares favorably to paired comparison models. Admittedly, our simulations are naive, and one of our two data generating processes for simulated game outcomes aligns with the same Poisson framework as the one we use to model game results. This, however, is supported by a wide body of literature, including \cite{reep1968skill}, \cite{reep1971skill}, \cite{dixon1997modelling}, and \cite{karlis2000modelling}. Despite this history, linear regression remains a common tool for soccer research (as shown in Table \ref{tab:1}); as an alternative, we hope these findings encourage researchers to consider the Poisson distribution.
\section*{Declarations}
\subsection*{Conflict of interest}
The authors declare that they have no conflict of interest. The authors would like to note that this work is not endorsed by, nor associated with, Medidata Solutions, Inc.
\subsection*{Funding}
Not Applicable.
\subsection*{Data Availability}
All data used in this project are open source, and come from Football Reference \citep{fbref}. We make our cleaned, analysis-ready dataset available at \url{https://github.com/lbenz730/soccer_ha_covid/tree/master/fbref_data}.
\subsection*{Code Availability}
All code for scraping data, fitting models, and conducting analyses has been made available for public use at \url{https://github.com/lbenz730/soccer_ha_covid}.
\clearpage
\bibliographystyle{spbasic}
\section{Introduction}
\label{sec:intro}
In-game win probability models provide the likelihood that a certain team will win a game given the current state of the game.
These models have become very popular during the last few years, mainly because they can provide the backbone for in-game decisions as well as play-call and personnel evaluations.
Furthermore, they can potentially improve the viewing experience of the fans.
Among other events, Super Bowl 51 sparked a lot of discussion around the validity and accuracy of these models \cite{ringer17}.
During Super Bowl 51, late in the third quarter, the probability assigned by some of these models to the New England Patriots to win was less than 0.1\%, or in other words a 1 in 1,000 Super Bowls comeback \cite{statsbylopez}.
Clearly the result of the game was not the one projected at that point and hence, critiques of these models appeared.
Of course, this is a single observation and a large part of the discussion was generated because of the high profile of the game and selection bias.
Nevertheless, designing and evaluating in-game win probability models is in general important if we are to rely on them for on-field decision making and for evaluating teams, players and coaches.
Furthermore, the proprietary nature of many of these models makes it imperative to develop ones that are open and can be further inspected, reviewed and improved by the community, while simpler, interpretable models are preferable to more complicated and harder-to-interpret ones, provided quality is not sacrificed.
For example, \cite{lock2014using} is a very similar model to {{\tt iWinRNFL}}, with similar performance.
However, it makes use of a more complex model, namely ensemble learning methods, which can be harder to interpret directly than a linear model.
We would like to emphasize that our study does not aim at discrediting existing win probability models, but rather at exploring the ability of simple and interpretable models to achieve similar performance and at identifying when they can {\em fail}.
Therefore, in this paper we present the design of {{\tt iWinRNFL}}, an in-game win probability model for NFL.
Our model was trained using NFL play-by-play data from 7 NFL seasons between 2009-2015, and while {{\tt iWinRNFL}} is simple - at its core is based on a generalized linear model - our evaluations show that it is well-calibrated over the whole range of probabilities.
The real {\bf robustness} question in these types of models is how well the predicted probabilities capture the actual winning chances of a team at a given point in the game.
In other words, what is the {\em reliability} of our predictions; i.e., what fraction of the in-game instances where a team was given x\% probability of winning actually ended up with this team winning?
Ideally, we would like this fraction to be also x\%.
For instance, the fact that New England Patriots won Super Bowl 51 even though they were given just 0.1\% probability to do so at the end of the third quarter, is not a {\em failure of the math}.
In fact, this number tells us that roughly for every 1,000 times that a team is in the same position there will be 1 instance (on expectation) where the trailing team will win.
Of course, the order with which we observe this instance is arbitrary, that is, it can be the first time we ever observe this game setting (i.e., a team trailing with 25 points by the end of the third quarter), which further intensifies the opinion that {\em math failed}.
Our evaluations show that this is the case for {{\tt iWinRNFL}} over the whole range of probabilities.
Furthermore, we evaluate more complex, non-linear, models, and in particular a Bayesian classifier and a neural network, using the same set of features.
The reason for examining more complex models is the presence of boundary effects from the finite duration of a game, which can create several non-linearities \cite{winston2012mathletics}; these are the cases in which simple, linear models can fail.
For example, when we examine the relationship between a drive's time length and a drive's yard length, a linear model between these two variables explains approximately 42\% of the variance.
However, when focusing at the end of the half/game, this linear model explains much less of this variance.
In fact, the closer we get to the end of the half/game, the lower the quality of the linear model as we can see in Figure \ref{fig:non-linear}.
In this figure, we present the variance explained ($R^2$) by a linear model between the two variables for two different types of drives, namely, drives that started $\tau$ minutes before the end of the half/game and drives that started outside of this window.
As we can see, the closer we get to the end of the half/game, the less variance this linear relationship can explain, serving as evidence that there can indeed be much more severe non-linearities towards the end of the second and fourth quarters, as compared to earlier in each half.
However, as we will see in our evaluations, overall there are no significant performance improvements from these non-linear models over {{\tt iWinRNFL}}, despite these possibly un-modeled non-linear factors contributing to win probability!
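A minimal sketch of the comparison underlying Figure \ref{fig:non-linear}, assuming a drive-level data frame with (hypothetical) columns for drive time length, drive yard length, and minutes remaining in the half when the drive started:
\begin{verbatim}
import numpy as np
import pandas as pd

def r2_by_window(drives: pd.DataFrame, tau_minutes: float) -> dict:
    """R^2 of a linear fit of yard length on time length, split by whether
    the drive started within tau minutes of the end of the half."""
    late = drives["mins_to_half_end"] <= tau_minutes
    out = {}
    for label, grp in (("late", drives[late]), ("early", drives[~late])):
        slope, intercept = np.polyfit(grp["time_len"], grp["yard_len"], 1)
        resid = grp["yard_len"] - (slope * grp["time_len"] + intercept)
        ss_tot = ((grp["yard_len"] - grp["yard_len"].mean()) ** 2).sum()
        out[label] = 1.0 - (resid ** 2).sum() / ss_tot
    return out
\end{verbatim}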
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.45]{plots/non-linear
\caption{A linear hypothesis between time and yard length of a drive cannot explain a lot of the variance towards drives at the end of the half/game. }
\label{fig:non-linear}
\end{center}
\end{figure}
The rest of the paper is organized as follows:
Section \ref{sec:background} briefly presents background on (in-game) win probability models for NFL.
Section \ref{sec:model} presents our model(s), while Section \ref{sec:evaluation} presents the model evaluations.
Finally, Section \ref{sec:conclusions} concludes our work.
\section{Background}
\label{sec:background}
Predicting the outcome of a sports game - and in particular American football in our case - has been of interest for several decades now.
For example, \cite{stern91} used data from the 1981, 1983 and 1984 NFL seasons and found that the distribution of the win margin is normal with mean equal to the pregame point spread and standard deviation a little less than 14 points.
He then used this observation to estimate the probability distribution of the number of games won by a team.
The most influential work in the space, Win Probability Added, was developed by \cite{wpa}, who uses various variables such as field position, down, etc., to predict the win probability added after every play.
This work forms the basis for ESPN's prediction engine, which uses an ensemble of machine learning models.
Inspired by the early work and observations from Stern, Winston developed an in-game win-probability model \cite{winston2012mathletics}, which was further adjusted from Pro-Football Reference to form their P-F-R win probability model using the notion of expected points \cite{pfrmodel}.
More recently, \cite{lock2014using} provided a random forest model for in-game win probability, while \cite{gambletron} created a system that uses real-time data from betting markets to estimate win probabilities.
As alluded to above \cite{lock2014using} used very similar covariates with {{\tt iWinRNFL}} and the performance of the two models is very similar.
One of the reasons the authors used random forests is the ability of the model to account for non-linear interactions between the covariates.
Our work shows that the improvements (if any) over simpler, more interpretable (generalized) linear models do not justify the use of complex models.
In the wake of Super Bowl 51, \cite{rosenheck17} developed a simulation system that considers the strength of the offensive and defensive units, the current score differential and the field position, and simulates the rest of the game several times to obtain the current win probability.
This approach is computationally expensive and, while the goal of Rosenheck was to retrospectively simulate the outcome of Super Bowl 51 under the assumption that the overtime rules of the NFL are similar to those of college football, its applicability in its current form is mainly for post-game analysis.
Finally, similar in-game win probability models exist for other sports (e.g., \cite{buttrey2011estimating,stern1994brownian}).
One of the main reasons we introduce {{\tt iWinRNFL}} despite the presence of several in-game win probability models is the fact that the majority of them are hard to reproduce either because they are proprietary or because they are not described in great detail.
Furthermore, an older model might not be applicable anymore ``as-is'' given the changes the game undergoes over the years (e.g., offenses rely much more on the passing game today as compared to a decade ago, changes in the rules etc.).
Our main objective with this work is to provide an {\em open} and fully {\em reproducible} win probability model for the NFL\footnote{Source code and data will be made publicly available.}.
\section{The iWinRNFL model}
\label{sec:model}
In this section we are going to present the data we used to develop our in-game probability model as well as the design details of {{\tt iWinRNFL}}.
{\bf Data: }In order to perform our analysis we utilize a dataset collected from NFL's Game Center for all the regular season games between the seasons 2009 and 2015.
We access the data using the Python {\tt nflgame} package \cite{nflgame}.
The dataset includes detailed play-by-play information for every game that took place during these seasons.
Figure \ref{fig:pbp} presents an illustrative sample of the game logs we obtain.
This information is used to obtain the state of the game that will drive the design of {{\tt iWinRNFL}}.
In total, we collected information for 1,792 regular season games and a total of 295,844 snaps/plays.
\begin{figure*}[t]
\begin{center}
\includegraphics[scale=0.3]{plots/pbp
\caption{Through Python's {\tt nflgame} API we obtain a detailed log for every regular season NFL game between 2009-2015.}
\label{fig:pbp}
\end{center}
\end{figure*}
{\bf Model: }
{{\tt iWinRNFL}} is based on a logistic regression model that calculates the probability of the home team winning given the current status of the game as:
\begin{equation}
\Pr(H=1| \mathbf{x})= \frac{\exp(\mathbf{w}^T\cdot\mathbf{x})}{1+\exp(\mathbf{w}^T\cdot\mathbf{x})}
\label{eq:reg}
\end{equation}
where $H$ is the dependent random variable of our model representing whether the home team wins or not, $\mathbf{x}$ is the vector with the independent variables, while the coefficient vector $\mathbf{w}$ includes the weights for each independent variable and is estimated using the corresponding data.
In order to describe the status of the game we use the following variables:
\begin{enumerate}
\item {\bf Ball Possession Team:} This binary feature captures whether the home or the visiting team has the ball possession
\item {\bf Score Differential:} This feature captures the current score differential (home - visiting)
\item {\bf Timeouts Remaining:} This feature is represented by two independent variables - one for the home and one for the away team - and they capture the number of timeouts remaining for each of the teams
\item {\bf Time Elapsed: } This feature captures the time elapsed since the beginning of the game
\item {\bf Down:} This feature represents the down of the team in possession
\item {\bf Field Position:} This feature captures the distance covered by the team in possession from their own yard line
\item {\bf Yards-to-go:} This variable represents the number of yards needed for a first down
\item {\bf Ball Possession Time: } This variable captures the time that the offensive unit of the home team is on the field
\item {\bf Rating Differential: } This variable represents the difference in the ratings for the two teams (home - visiting)
\end{enumerate}
The last independent variable is representative of the strength difference between the two teams.
The rating of each team $T$ represents how many points better (or worse) $T$ is compared to a league-average team.
This rating differential {\em dictates} the win-probability at the beginning of the game, and its importance fades as the game progresses as we will see.
Appendix A describes in detail how we obtain these ratings, as well as other feature alternatives for representing the strength difference.
\iffalse
Most of the existing models that include such a variable are using the spread for each game as provided by the betting industry.
We choose not to do so for the following reason.
The objective of the betting line is not to predict game outcomes but rather distribute money across the different bets.
Exactly because of this objective the line is changing during the week before the game.
While this line can change due to new information for the competing teams (e.g., injury updates), the line is mainly changing when a particular team has accumulated the majority of the bets.
In this case it will also be hard to choose which line to use (e.g., the opening, the closing or some average of them).
Therefore, we choose to use the win percentage differential of the two teams as an indicator of their strength (even though this has its own issues given the uneven schedule in NFL).
However, note that if one would like to use the point spread as a variable this can be easily incorporated in the model.
\fi
Furthermore, we have included in the model three interaction terms between the ball possession team variable and (i) the down count, (ii) the yards-to-go, and (iii) the field position variables.
This is crucial in order to capture the correlation between these variables and the probability of the home team winning.
More specifically, the interpretation of these three variables (down, yards-to-go and field position) is different depending on whether the home or visiting team possesses the ball and these interaction terms will allow the model to better distinguish between the two cases.
Finally, we have added an interaction term between the time lapsed and (i) the team ratings differential and (ii) the score differential, in order to examine whether and how the importance of these covariates changes as the game progresses.
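A minimal sketch of how a logistic regression with these main effects and interaction terms can be fit with the {\tt statsmodels} formula interface; the data frame below is a synthetic stand-in for our per-play dataset and its column names are illustrative assumptions:
\begin{verbatim}
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the per-play dataset (~295,844 snaps in reality).
rng = np.random.default_rng(2)
n = 5000
snaps = pd.DataFrame({
    "home_possession": rng.integers(0, 2, n),
    "down": rng.integers(1, 5, n),
    "yards_to_go": rng.integers(1, 21, n),
    "field_position": rng.integers(1, 100, n),
    "score_diff": rng.integers(-21, 22, n),
    "time_elapsed": rng.uniform(0, 1, n),
    "rating_diff": rng.normal(0, 3, n),
    "home_timeouts": rng.integers(0, 4, n),
    "away_timeouts": rng.integers(0, 4, n),
    "home_possession_time": rng.uniform(0, 1, n),
})
snaps["home_win"] = rng.integers(0, 2, n)  # placeholder labels

# '*' expands to main effects plus the corresponding interaction terms.
formula = (
    "home_win ~ home_possession * (C(down) + yards_to_go + field_position)"
    " + time_elapsed * (score_diff + rating_diff)"
    " + home_timeouts + away_timeouts + home_possession_time"
)
fit = smf.logit(formula, data=snaps).fit()
print(fit.summary())
\end{verbatim}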
Table \ref{tab:iwinrnfl} presents the coefficients of the logistic regression model of {{\tt iWinRNFL}} with standardized independent variables for better comparisons.
\begin{table}[ht]
\begin{center}
\def\sym#1{\ifmmode^{#1}\else\(^{#1}\)\fi}
\begin{tabular}{l*{1}{c}}
\toprule
&\multicolumn{1}{c}{Winner}\\
\midrule
Possession Team (H) & -0.88\sym{***}\\
Score Differential & 1.41\sym{***}\\
Home Timeouts & 0.06\sym{***}\\
Away Timeouts & -0.06\sym{***}\\
Ball Possession Time & -0.46\sym{***}\\
Time Lapsed & 0.43\sym{***}\\
Rating Differential & 1.72\sym{***}\\
Down (1) & -0.39\sym{***} \\
Down (2) & -0.29\sym{***} \\
Down (3) & -0.20\sym{***}\\
Down (4) & -0.05\sym{***} \\
Field Position & -0.41\sym{***} \\
Yards-to-go & 0.07\sym{***} \\
{\bf Interaction terms} & \\
Possession Team (H)$\cdot$ Down (1) & 0.65\sym{***} \\
Possession Team (H)$\cdot$ Down (2) & 0.47\sym{***} \\
Possession Team (H)$\cdot$ Down (3) & 0.30\sym{***} \\
Possession Team (H)$\cdot$ Down (4) & 0.08\sym{***} \\
Possession Team (H)$\cdot$ Field Position & 1.05\sym{***}\\
Possession Team (H)$\cdot$ Yards-to-go & -0.18\sym{***}\\
Time Lapsed $\cdot$ Rating Differential & -0.65\sym{***}\\
Time Lapsed $\cdot$ Score Differential & 2.88\sym{***}\\
\midrule
Observations & 295,844 \\
\bottomrule
\multicolumn{2}{l}{\footnotesize \sym{.} \(p<0.1\), \sym{*} \(p<0.05\), \sym{**} \(p<0.01\), \sym{***} \(p<0.001\)}\\
\end{tabular}
\end{center}
\caption{Standardized logistic regression coefficients for {{\tt iWinRNFL}}.}
\label{tab:iwinrnfl}
\end{table}
As we can see, all of the factors considered are statistically significant for estimating the current win-probability for the home team.
Particular emphasis should be given to the interaction terms.
More specifically, we see that, as one might have expected, having the ball on a first down provides a higher win probability as compared to a third or fourth down (for the same yards-to-go).
Similarly, the probability of winning for the home team increases when its offensive unit is closer to the goal line (Field Position variable), while fewer yards to go for a first down are also associated with a larger win probability.
Furthermore, an interesting point is the symmetric impact of the number of timeouts left for the home and visiting team.
With regard to the teams' strength difference, this appears to be crucial for the win probability at the beginning of the game, but its impact fades as time lapses.
This is evident from the negative coefficient of the interaction term (Time Lapsed $\cdot$ Rating Differential).
In particular, the effect of the team rating differential on the win probability also depends on the time lapsed, and it is equal to $1.72-0.65\cdot(\text{Time Lapsed})$.
In other words, the coefficient for the rating differential (i.e., 1.72) captures only the effect of the rating differential at the very start of the game (i.e., when Time Lapsed is 0).
In contrast, as the game progresses the impact of the score differential on the (home) win probability increases, and it is equal to $1.41+2.88\cdot(\text{Time Lapsed})$.
Figure \ref{fig:rtg-effect} shows the declining impact of team rating differential as the game progresses in contrast with the increasing impact of current score differential on win probability.
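As a small worked illustration of these two expressions (the grid for the standardized time-lapsed variable below is an assumption made purely for display):
\begin{verbatim}
import numpy as np

# Standardized coefficients from the iWinRNFL regression table.
RATING_MAIN, RATING_X_TIME = 1.72, -0.65
SCORE_MAIN, SCORE_X_TIME = 1.41, 2.88

t = np.linspace(0.0, 1.0, 5)  # illustrative (standardized) time-lapsed values
for ti in t:
    rating_effect = RATING_MAIN + RATING_X_TIME * ti
    score_effect = SCORE_MAIN + SCORE_X_TIME * ti
    print(f"time={ti:.2f}  rating effect={rating_effect:.2f}  "
          f"score effect={score_effect:.2f}")
\end{verbatim}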
Finally, it is worth noting that the intercept of the model is 0.
One could have expected the intercept to capture the home field advantage \cite{kpele-plosone}, but the teams' rating differential has already included the home edge (see Appendix A).
In the following section we provide a detailed performance evaluation of {{\tt iWinRNFL}}.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.5]{plots/rating-score-effect
\caption{The effect of the team's strength differential decays as the game progresses, while that of the score differential increases significantly ($x$-axis is in logarithmic scale for better visualization).}
\label{fig:rtg-effect}
\end{center}
\end{figure}
\section{Model Evaluation}
\label{sec:evaluation}
Before describing and computing our evaluation metrics, we will briefly describe two alternative models for estimating the win probability.
In particular, we use the same features as above, but we evaluate two non-linear models, namely, a {\bf (naive) Bayesian classifier} and a {\bf feedforward neural network} (FNN).
A naive Bayes classifier computes the conditional probability of each class (home team win/loss in our case) for a data instance with independent variables {\bf x}=$(x_1,x_2,\dots,x_n)$, assuming conditional independence between the features given the true class $C_k$; i.e., $\Pr[x_i | x_1,\dots,x_{i-1},x_{i+1},\dots,x_n,C_k] = \Pr[x_i | C_k]$.
Under this assumption the conditional probability of class $C_k$ for a data instance satisfies $\Pr[C_k | \mathbf{x}] \propto \Pr[C_k] \prod_{i=1}^n \Pr[x_i|C_k]$.
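A minimal sketch of such a classifier (scikit-learn's Gaussian naive Bayes is one concrete choice for the per-feature conditional densities; the feature matrix below is synthetic rather than our standardized game-state data):
\begin{verbatim}
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for the standardized game-state features.
rng = np.random.default_rng(3)
X = rng.normal(size=(5000, 10))          # 10 game-state features
y = rng.integers(0, 2, size=5000)        # home team won / lost

nb = GaussianNB().fit(X, y)
p_home_win = nb.predict_proba(X)[:, 1]   # Pr[home win | x] for each snap
\end{verbatim}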
\begin{figure*}
\centering
\begin{tikzpicture}[shorten >=1pt,->,draw=black!50, node distance=2.5cm]
\tikzstyle{every pin edge}=[<-,shorten <=1pt]
\tikzstyle{neuron}=[circle,fill=black!25,minimum size=17pt,inner sep=0pt]
\tikzstyle{input neuron}=[neuron, fill=green!50];
\tikzstyle{output neuron}=[neuron, fill=red!50];
\tikzstyle{hidden neuron}=[neuron, fill=blue!50];
\tikzstyle{hidden neuron2}=[neuron, fill=blue!50];
\tikzstyle{annot} = [text width=4em, text centered]
\node[input neuron, pin=left: Ball Possession] (I-1) at (0,5) {};
\node[input neuron, pin=left: Score Differential] (I-2) at (0,4) {};
\node[input neuron, pin=left: Home Timeouts] (I-3) at (0,3) {};
\node[input neuron, pin=left: Away Timeouts] (I-4) at (0,2) {};
\node[input neuron, pin=left: Time Lapsed] (I-5) at (0,1) {};
\node[input neuron, pin=left: Down] (I-6) at (0,0) {};
\node[input neuron, pin=left: Field Position] (I-7) at (0,-1) {};
\node[input neuron, pin=left: Yards-to-Go] (I-8) at (0,-2) {};
\node[input neuron, pin=left: Ball Possession Time] (I-9) at (0,-3) {};
\node[input neuron, pin=left: Rating Differential] (I-10) at (0,-4) {};
\path[yshift=0.5cm]
node[hidden neuron] (H-1) at (2.5cm,3 cm) {};
\path[yshift=0.5cm]
node[hidden neuron] (H-2) at (2.5cm,2 cm) {};
\path[yshift=0.5cm]
node[hidden neuron] (H-3) at (2.5cm,1 cm) {};
\path[yshift=0.5cm]
node[hidden neuron] (H-4) at (2.5cm,0 cm) {};
\path[yshift=0.5cm]
node[hidden neuron] (H-5) at (2.5cm,-1 cm) {};
\path[yshift=0.5cm]
node[hidden neuron] (H-6) at (2.5cm,-2 cm) {};
\path[yshift=0.5cm]
node[hidden neuron2] (HH-1) at (5cm,1.5 cm) {};
\path[yshift=0.5cm]
node[hidden neuron2] (HH-2) at (5cm,0.5 cm) {};
\path[yshift=0.5cm]
node[hidden neuron2] (HH-3) at (5cm,-0.5 cm) {};
\node[output neuron,pin={[pin edge={->}]right:Home WP}, right of=HH-2] (O) {};
\foreach \source in {1,...,10}
    \foreach \dest in {1,...,6}
        \path (I-\source) edge (H-\dest);
\foreach \source in {1,...,6}
    \foreach \dest in {1,...,3}
        \path (H-\source) edge (HH-\dest);
\foreach \source in {1,...,3}
    \path (HH-\source) edge (O);
\end{tikzpicture}
\caption{The Feedforward Neural Network we used for the win probability includes two hidden layers (blue nodes).} \label{fig:fnn}
\end{figure*}
We also build a win probability model using a feedforward neural network with 2 hidden layers (Figure \ref{fig:fnn}).
The first hidden layer has a size of 6 nodes, while the second hidden layer has a size of 3 nodes.
While the goal of our work is not to identify the {\em optimal} architecture for the neural network, we have experimented with different numbers and sizes of hidden layers and this architecture provided us with the best performance on a validation set\footnote{The performance of other architectures was not much different.}.
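A minimal sketch of this architecture with scikit-learn's {\tt MLPClassifier} (two hidden layers of sizes 6 and 3; the data below is again a synthetic stand-in for the standardized features):
\begin{verbatim}
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(5000, 10))    # 10 standardized game-state features
y = rng.integers(0, 2, size=5000)  # home win indicator

fnn = MLPClassifier(hidden_layer_sizes=(6, 3), max_iter=2000, random_state=0)
fnn.fit(X, y)
p_home_win = fnn.predict_proba(X)[:, 1]
\end{verbatim}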
We begin by evaluating how well the output probabilities of {{\tt iWinRNFL}} follow what happens in reality.
When a team is given a 70\% probability of winning at a given state of the game, this essentially means that if the game was played from that state onwards 1,000 times, the team is expected to win approximately 700 of them.
Of course, it is clear that we cannot have the game played more than once, so one way to evaluate the probabilities of our model is to consider all the instances where the model provided a probability of $x\%$ for a team winning and calculate the fraction of those instances that ended up in a win for this team.
Ideally we would expect this fraction to be $x\%$ as well.
This is exactly the definition of the reliability curve of a probability model.
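A minimal sketch of this reliability computation, binning predicted probabilities into 0.05-wide bins and comparing each bin's mean prediction with the observed win fraction (inputs are assumed to be arrays of predicted probabilities and binary outcomes):
\begin{verbatim}
import numpy as np

def reliability_curve(p_pred, y_true, bin_width=0.05):
    """(mean predicted prob, observed win fraction, count) per probability bin."""
    p = np.clip(np.asarray(p_pred, dtype=float), 0.0, 1.0 - 1e-9)
    y = np.asarray(y_true, dtype=float)
    bins = (p / bin_width).astype(int)
    rows = []
    for b in np.unique(bins):
        mask = bins == b
        rows.append((p[mask].mean(), y[mask].mean(), int(mask.sum())))
    return rows
\end{verbatim}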
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.3]{plots/fullwinprob2.pdf
\caption{{{\tt iWinRNFL}} is as well-calibrated as the non-linear models examined over the whole range of probabilities.}
\label{fig:fullwp}
\end{center}
\end{figure}
In order to obtain these results we split our data in a training and test set in a 70-30\% proportion respectively.
Figure \ref{fig:fullwp} presents the results on our test set, where we used bins of a 0.05 probability range.
In particular, as we can see the predicted probabilities match very well with the actual outcome of these instances.
The fitted line ($R^2 = 0.998$) has a slope of 0.98 (with a 95\% confidence interval of [0.97,1.01]), while the intercept is 0.008 (with a 95\% confidence interval [-0.001, 0.02]).
Simply put the line is for all practical purposes the $y=x$ line, which translates to a fairly consistent and accurate win probability.
We have also calculated the accuracy of our binary predictions on the test set.
In particular, the home team is projected to win if $\Pr(H=1| \mathbf{x})>0.5$.
The accuracy of {{\tt iWinRNFL}} is equal to 76.5\%.
The accuracy of the other two models examined is very similar, with naive Bayes exhibiting a 75\% accuracy, while the feedforward neural network has an accuracy of 76.3\%.
Another metric that has been traditionally used in the literature to evaluate the performance of a probabilistic prediction is the Brier score $\beta$ \cite{brier1950verification}.
In the case of a binary probabilistic prediction the Brier score is calculated as:
\begin{equation}
\beta = \dfrac{1}{N}\sum_{i=1}^N (\pi_i-y_i)^2
\label{eq:brier}
\end{equation}
where $N$ is the number of observations, $\pi_i$ is the probability assigned to instance $i$ being equal to 1 and $y_i$ is the actual (binary) value of instance $i$.
The Brier score takes values between 0 and 1 and does not evaluate the accuracy of the predicted probabilities but rather the calibration of these probabilities, that is, the level of certainty they provide.
The lower the value of $\beta$ the better the model performs in terms of calibrated predictions.
{{\tt iWinRNFL}} exhibits a Brier score $\beta$ of 0.158.
Typically the Brier score of a model is compared to a baseline value $\beta_{base}$ obtained from a {\em climatology} model \cite{mason2004using}.
A climatology model assigns the same probability to every observation (that is, home team win in our case), which is equal to the fraction of positive labels in the whole dataset.
Hence, in our case the climatology model assigns a probability of 0.57 to each observation, since 57\% of the instances in the dataset resulted to a home team win.
The Brier score for this reference model is $\beta_{base}=0.26$, which is obviously of lower quality as compared to our model.
Both the naive Bayes and the FNN models exhibit similar performance with {{\tt iWinRNFL}} with Brier scores of 0.163 and 0.156 respectively.
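A minimal sketch of this comparison (the predictions and outcomes below are illustrative; the climatology baseline simply assigns every observation the overall home-win rate):
\begin{verbatim}
import numpy as np

def brier(p_pred, y_true):
    p_pred = np.asarray(p_pred, dtype=float)
    y_true = np.asarray(y_true, dtype=float)
    return float(np.mean((p_pred - y_true) ** 2))

rng = np.random.default_rng(5)
y = (rng.uniform(size=10000) < 0.57).astype(float)             # 57% home wins
p_model = np.clip(y * 0.7 + 0.3 * rng.uniform(size=10000), 0, 1)

beta_model = brier(p_model, y)
beta_base = brier(np.full_like(y, y.mean()), y)                # climatology
print(beta_model, beta_base)
\end{verbatim}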
As alluded to above, one of the reasons we examined the performance of more complex, non-linear models is the fact that the finite duration of the half/game can introduce non-linearities that cannot be captured by {{\tt iWinRNFL}}.
Therefore, apart from the overall performance of the different models, we would also like to examine the performance of {{\tt iWinRNFL}} as a function of the time elapsed from the beginning of the game and compare it with the naive Bayes and FNN.
More specifically, we consider 5-minute intervals during the game; e.g., the first interval includes predictions that took place during the first 5 minutes of the game, while interval 7 includes predictions that took place during the first 5 minutes of the third quarter.
Figure \ref{fig:performance-time} depicts our results.
As we can see, the performance of all models is very similar and improves as the game progresses, as one might have expected.
Furthermore, the prediction accuracy during the beginning of the game is very close to the state-of-the-art prediction accuracy of pre-game win probability models \cite{kpele-plosone}.
This is again expected since at the beginning of the game the teams' rating differential is important, while as the game progresses the importance of this covariate reduces as we saw earlier (see Figure \ref{fig:rtg-effect}).
More importantly, we see some improvement of the FNN over {{\tt iWinRNFL}}, particularly with regards to the Brier score and towards the end of the game (intervals 11 and 12), but this improvement is marginal and cannot practically justify the use of a more complex model over a simple (interpretable) generalized linear model.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.3,angle=270]{plots/brier-accuracy.pdf
\caption{All models' performance improves later in the game, while FNN provides only incremental improvements over {{\tt iWinRNFL}} towards the end of the half/game.}
\label{fig:performance-time}
\end{center}
\end{figure}
\iffalse
\begin{table*}
\centering
\begin{tabular}{c||*{2}{c|}c|c}
& \multicolumn{2}{|c|}{Probability line} & & \\ \hline \hline
Quarter & Slope & Intercept & Brier score & Accuracy \\ \hline
1 & [0.84,0.91] & [0.05,0.088] & 0.21 & 0.64 \\ \hline
2 & [0.87,0.96] & [0.037,0.069] & 0.18 & 0.72 \\ \hline
3 & [0.97,1.03] & [-0.01,0.04] & 0.14 & 0.78 \\ \hline
4 & [0.98,1.21] & [-0.11,0.04] & 0.11 & 0.85 \\ \hline
\end{tabular}
\caption{The performance of our model improves as the game progresses.}
\label{tab:results}
\end{table*}
\fi
{\bf Anecdote {\em Evaluation}: }
As mentioned in the introduction, one of the motivating events for {{\tt iWinRNFL}} has been Super Bowl 51.
Super Bowl 51 has been labeled the biggest comeback in Super Bowl history.
Late in the third quarter New England Patriots were trailing by 25 points and Pro Football Reference was giving Patriots a 1:1000 chance of winning the Super Bowl, while ESPN a 1:500 chance \cite{statsbylopez}.
PFR thus considered this a once-in-a-millennium comeback.
While this can still be the case\footnote{As mentioned earlier, odds of 1:x do not mean that we observe x {\em failures} first and then the one {\em success}.}, in retrospect the Patriots' win highlights that these models might be too optimistic and overconfident in favor of the team that is ahead.
On the contrary, the lowest probability during the game assigned to the Patriots by {{\tt iWinRNFL}} for winning the Super Bowl was 2.1\% or about 1:50.
We would like to emphasize here that the above does not mean that {{\tt iWinRNFL}} is ``{\em better}'' than other win probability models.
However, it is a simple and most importantly transparent model that assigns win probabilities in a conservative (i.e., avoids ``over-reacting''), yet accurate and well-calibrated way.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.4]{plots/sb51.png}
\caption{The lowest in-game win probability assigned to the Patriots by {{\tt iWinRNFL}} during Super Bowl 51 was 2.1\%, i.e., 1 in 50 chances. During the OT the model does not perform very accurately due to the sparsity of the relevant data.}
\label{fig:sb51}
\end{center}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
In this paper, motivated by several recent comebacks that seemed improbable at the moment, we designed {{\tt iWinRNFL}}, a simple and open generalized linear model for in-game win probabilities in NFL.
{{\tt iWinRNFL}} uses 10 independent variables to assess the win probability of a team for any given state of the game.
Our evaluations indicate that the probabilities provided by our model are consistent and well-calibrated.
We have also explored more complex models using the same covariates, and they do not provide improvements over {{\tt iWinRNFL}} significant enough to justify the use of a more complex probability model over a simple and interpretable one.
We would like to reiterate that our study does not aim at discrediting existing win probability models, but rather at exploring whether simple and interpretable models can achieve performance similar to more complicated ones.
One crucial point is that similar types of models need to be re-evaluated frequently.
The game changes rapidly, both due to changes in the rules and because of changes in players' skills, or even due to analytics.
This is true not only in the NFL but in other sports/leagues as well.
For example, see the explosion of three-point shots in basketball, or the number of NFL teams that run a pass-first offense.
Similar changes can have an implication on how {\em safe} a score differential of $x$ points is, since teams can cover the difference faster.
For example, this can manifest itself in a different coefficient for the interaction term between time elapsed and score differential.
Hence, the details of the model can also change due to these changes.
Finally, win probability models, while currently used mainly by the media to enhance sports storytelling, can form a central component of the evaluation of NFL players; NFL teams might already be doing this, but - understandably - in a proprietary manner.
In particular, the added win probability from each player can form the dependent variable in an (adjusted) plus-minus type of regression.
Nevertheless, the latter is a very challenging technical problem in itself, given the severe collinearities that can appear due to the high overlap between the personnel of different snaps.
In the future, we plan to explore similar applications.
\small
\section{The \MakeLowercase{i}W\MakeLowercase{in}RNFL model}
\label{sec:model}
In this section we are going to present the data we used to develop our in-game probability model as well as the design details of {{\tt iWinRNFL}}.
{\bf Data: }In order to perform our analysis we utilize a dataset collected from NFL's Game Center for all the regular season games between the seasons 2009 and 2016.
We access the data using the Python {\tt nflgame} API \cite{nflgame}.
The dataset includes detailed play-by-play information for every game that took place during these seasons.
This information is used to obtain the state of the game that will drive the design of {{\tt iWinRNFL}}.
In total, we collected information for 2,048 regular season games and a total of 338,294 snaps/plays.
{\bf Model: }
{{\tt iWinRNFL}} is based on a logistic regression model that calculates the probability of the home team winning given the current status of the game as:
\begin{equation}
\Pr(H=1| \mathbf{x})= \frac{\exp(\mathbf{w}^T\cdot\mathbf{x})}{1+\exp(\mathbf{w}^T\cdot\mathbf{x})}
\label{eq:reg}
\end{equation}
where $H$ is the dependent random variable of our model representing whether the home team wins or not, $\mathbf{x}$ is the vector with the independent variables, while the coefficient vector $\mathbf{w}$ includes the weights for each independent variable and is estimated using the corresponding data.
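As a minimal illustration of Equation (\ref{eq:reg}), the sketch below evaluates the home-win probability for a given coefficient vector and game state; the numerical values are illustrative placeholders and not the fitted coefficients.
\begin{verbatim}
import numpy as np

def home_win_probability(w, x):
    # Pr(H = 1 | x) = exp(w.x) / (1 + exp(w.x))
    z = np.dot(w, x)
    return np.exp(z) / (1.0 + np.exp(z))

# Illustrative weights and standardized game-state features; the first entry
# plays the role of the intercept (x_0 = 1).
w = np.array([0.57, 0.41, 3.59, 0.75])
x = np.array([1.0, 1.0, 0.5, 0.2])

print(home_win_probability(w, x))  # probability that the home team wins
\end{verbatim}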
For a game of infinite duration a linear model could be a very good approximation.
However, the boundary effects from the finite duration of a game create several non-linearities \cite{winston2012mathletics}.
For this reason, we enhance our model - using the same set of features - with a Support Vector Machine classifier with radial kernel for the last three minutes of regulation.
In order to obtain a probability output from the SVM classifier, we further use Platt's scaling \cite{platt1999probabilistic}:
\begin{equation}
\Pr(H=1| \mathbf{x})= \frac{1}{1+\exp{(Af(\mathbf{x})+B)}}
\label{eq:platt}
\end{equation}
where $f(\mathbf{x})$ is the uncalibrated decision value produced by the SVM classifier:
\begin{equation}
f(\mathbf{x}) = \sum_{i} \alpha_i y_i k(\mathbf{x}_i, \mathbf{x}) + b
\label{eq:svm}
\end{equation}
where $k(\mathbf{x},\mathbf{x}')$ is the kernel used for the SVM.
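As a minimal sketch, the late-game component can be set up as follows using scikit-learn, where the option {\tt probability=True} fits a Platt-style sigmoid on the decision values $f(\mathbf{x})$; the training data and feature dimensions are illustrative placeholders.
\begin{verbatim}
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Illustrative late-game training data: 10 game-state features per snap and a
# binary label indicating whether the home team eventually won.
X = rng.normal(size=(500, 10))
y = (X[:, 1] + 0.3 * rng.normal(size=500) > 0).astype(int)

# RBF-kernel SVM; probability=True calibrates the decision values f(x) with a
# Platt-style sigmoid so that probabilities can be returned.
clf = SVC(kernel="rbf", probability=True, random_state=0)
clf.fit(X, y)

new_state = rng.normal(size=(1, 10))
print(clf.predict_proba(new_state)[0, 1])  # Pr(home team wins)
\end{verbatim}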
Figure \ref{fig:iwinrNFL} depicts the simple flow chart of {{\tt iWinRNFL}}.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.35]{plots/iwinrNFL.pdf}
\caption{{{\tt iWinRNFL}} includes a linear and a non-linear component.}
\label{fig:iwinrNFL}
\end{center}
\end{figure}
In order to describe the status of the game we use the following variables:
\begin{enumerate}
\item {\bf Ball Possession Team:} This binary feature captures whether the home or the visiting team has the ball possession
\item {\bf Score Differential:} This feature captures the current score differential (home - visiting)
\item {\bf Timeouts Remaining:} This feature is represented by two independent variables - one for the home and one for the away team - and they capture the number of timeouts remaining for each of the teams
\item {\bf Time Elapsed: } This feature captures the time elapsed since the beginning of the game
\item {\bf Down:} This feature represents the down of the team in possession
\item {\bf Field Position:} This feature captures the distance covered by the team in possession from their own yard line
\item {\bf Yards-to-go:} This variable represents the number of yards needed for a first down
\item {\bf Ball Possession Time: } This variable captures the time that the offensive unit of the home team is on the field
\item {\bf Ranking Differential: } This variable represents the difference in win percentage between the two teams (home - visiting)
\end{enumerate}
The last independent variable is representative of the power ranking difference between the two teams.
Most of the existing models that include such a variable are using the Vegas line spread for each game.
We choose not to do so for the following reason.
The objective of the Vegas line is not to predict game outcomes but rather to distribute the wagered money across the different bets.
Precisely because of this objective, the line changes during the week before the game.
While the line can move due to new information about the competing teams (e.g., injury updates), it mainly moves when a particular team has accumulated the majority of the bets.
It would also be hard to choose which line to use (e.g., the opening line, the closing line, or some average of the two).
Therefore, we choose to use the win percentage differential of the two teams as an indicator of their strength (even though this has its own issues given the uneven schedule in NFL).
However, note that if one would like to use the point spread as a variable this can be easily incorporated in the model.
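As an illustration, the sketch below encodes a single game state into a feature vector using the variables listed above; the dictionary keys and the ordering of the features are illustrative rather than the exact encoding of our implementation.
\begin{verbatim}
def game_state_features(state):
    # Map a play-by-play game state (a dict with illustrative keys) to the
    # feature vector x used by the win-probability model.
    return [
        1.0 if state["possession_home"] else 0.0,       # ball possession team
        state["home_score"] - state["away_score"],      # score differential
        state["home_timeouts"],                         # timeouts remaining (home)
        state["away_timeouts"],                         # timeouts remaining (away)
        state["seconds_elapsed"],                       # time elapsed
        state["down"],                                  # down
        state["yardline"],                              # field position
        state["yards_to_go"],                           # yards-to-go
        state["home_possession_seconds"],               # ball possession time
        state["home_win_pct"] - state["away_win_pct"],  # ranking differential
    ]

example_state = {
    "possession_home": True, "home_score": 21, "away_score": 17,
    "home_timeouts": 2, "away_timeouts": 3, "seconds_elapsed": 2400,
    "down": 2, "yardline": 45, "yards_to_go": 7,
    "home_possession_seconds": 1300, "home_win_pct": 0.62, "away_win_pct": 0.48,
}
print(game_state_features(example_state))
\end{verbatim}
In practice, the raw features are standardized before fitting the model, and the point spread could replace the ranking differential without changing the rest of the encoding.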
Table \ref{tab:iwinrnfl} presents the coefficients of the logistic regression model of {{\tt iWinRNFL}} with standardized independent variables for better comparisons.
\begin{table}[ht]
\begin{center}
\def\sym#1{\ifmmode^{#1}\else\(^{#1}\)\fi}
\begin{tabular}{l*{1}{c}}
\toprule
&\multicolumn{1}{c}{(1)}\\
&\multicolumn{1}{c}{Winner}\\
\midrule
Possession Team (H) & 0.41\sym{***}\\
& (49.19) \\
\addlinespace
Score Differential & 3.59\sym{***}\\
& (247.34) \\
\addlinespace
Home Timeouts & 0.12\sym{***}\\
& (8.74) \\
\addlinespace
Away Timeouts & -0.11\sym{***}\\
& (-12.47) \\
\addlinespace
Ball Possession Time & -0.05.\\
& (-1.66) \\
\addlinespace
Time Lapsed & -0.05.\\
& (-1.66) \\
\addlinespace
Down & -0.01 \\
& (0.04) \\
\addlinespace
Field Position & 0.02\sym{**} \\
& (2.71) \\
\addlinespace
Yards-to-go & -0.01 \\
& (0.23) \\
\addlinespace
Rating differential & 0.75\sym{***}\\
& (80.47) \\
\addlinespace
Intercept & 0.57\sym{*}\\
& (2.09) \\
\midrule
Observations & 338,294 \\
\bottomrule
\multicolumn{2}{l}{\footnotesize \textit{t} statistics in parentheses}\\
\multicolumn{2}{l}{\footnotesize \sym{$_.$} \(p<0.1\), \sym{*} \(p<0.05\), \sym{**} \(p<0.01\), \sym{***} \(p<0.001\)}\\
\end{tabular}
\end{center}
\caption{Standardized logistic regression coefficients for {{\tt iWinRNFL}}.}
\label{tab:iwinrnfl}
\end{table}
As one might have expected, the current score differential exhibits the strongest correlation with the in-game win probability.
The only factors that do not appear to be statistically significant predictors of the dependent variable are the down and the yards-to-go.
Even though the corresponding coefficients are negative, as one might expect (e.g., being at an earlier down gives a team more chances to advance the ball), they are not significant predictors of the win probability.
On the contrary, all else being equal, timeouts appear to be quite important since they can help a team stop the clock, while teams with a better win percentage appear to have an advantage as well, since this can be a sign of a better team.
In the following section we provide a detailed evaluation of {{\tt iWinRNFL}}.
\section{Introduction}
\label{sec:intro}
In-game win probability models provide the likelihood that a certain team will win a game given the current state of the game.
Such models have become very popular during the last few years, mainly because they can provide the backbone for in-game decisions but also because they can potentially add to the viewing experience of the fans.
\subsection{Evolution of team performances}
\label{sec:results-team-performances}
Figure~\ref{fig:teamgrowth} shows the evolution of the game ratings for Manchester City, Real Madrid, and Barcelona computed as a 15-game moving average since the start of the 2016/2017 season. We compute a team's game rating by summing the values for all the team's actions, which corresponds to summing the ratings for all the team's players in a particular game. The average game rating for Manchester City has been steadily increasing since the end of the 2016/2017 season, which was their first under the management of Pep Guardiola. Manchester City seem unbeatable and topped the Premier League table with 43 points from a possible 45 in their opening 15 games of the 2017/2018 season.
In contrast, Real Madrid had a poor start to the 2017/2018 season and ranked only fourth in the Primera Division after 14 games with 28 points from a possible 42. Their Portuguese star player Cristiano Ronaldo seems to be completely out of shape and does not appear near the top of our rankings. Rivals Barcelona finished their 2016/2017 season on a high with seven consecutive victories in their final league games of the season. The \textit{Blaugrana} also had an excellent start to their 2017/2018 season but have been struggling to convincingly win games more recently. The evolution of their game ratings suggests Barcelona might have been overperforming and are now regressing towards their regular level.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{teamgrowth-tomdv2.pdf}
\caption{The evolution of the game ratings for Manchester City, Real Madrid, and Barcelona computed as a 15-game moving average since the start of the 2016/2017 season. A team's game rating is computed by summing the values for all its actions.}
\label{fig:teamgrowth}
\end{figure}
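As a minimal sketch, the game ratings and their 15-game moving average can be computed from a table of per-action values as follows; the column names and the toy values are illustrative.
\begin{verbatim}
import pandas as pd

# Illustrative per-action values: one row per on-the-ball action together
# with the estimated value of that action.
actions = pd.DataFrame({
    "team": ["Manchester City"] * 8,
    "game_id": [1, 1, 1, 2, 2, 3, 3, 3],
    "action_value": [0.02, -0.01, 0.05, 0.03, 0.01, -0.02, 0.04, 0.06],
})

# A team's game rating is the sum of the values of all its actions in a game.
ratings = (actions.groupby(["team", "game_id"])["action_value"]
                  .sum().rename("game_rating").reset_index())

# 15-game moving average of the game ratings, as plotted above.
ratings["moving_avg"] = (
    ratings.groupby("team")["game_rating"]
           .transform(lambda s: s.rolling(15, min_periods=1).mean()))
print(ratings)
\end{verbatim}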
Figure~\ref{fig:teamdiff1617} shows the average contribution per game for the goalkeepers, defenders, midfielders, and strikers of Barcelona, Real Madrid, and Manchester City during the 2016/2017 season. Barcelona's front line, which consisted of Neymar, Luis Su\'arez, and Lionel Messi in most games, was responsible for the largest share of their average contribution per game. In contrast, Real Madrid's midfielders contributed more than their strikers, while Manchester City's midfielders and strikers contributed roughly equally.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{teamdiff1617-lotteb.pdf}
\caption{The average contribution per game for the goalkeepers, defenders, midfielders, and strikers of Barcelona, Real Madrid, and Manchester City during the 2016/2017 season.}
\label{fig:teamdiff1617}
\end{figure}
Similarly, Figure~\ref{fig:teamdiff1718} shows the average contribution per game for each line of Barcelona, Real Madrid, and Manchester City during the 2017/2018 season. Despite their loss of Neymar to Paris Saint-Germain, Barcelona still have the strongest attack by far. Real Madrid have seen their average contribution per game go down in midfield and offense, while Manchester City have seen notable increases in both those lines.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{teamdiff1718-lotteb.pdf}
\caption{The average contribution per game for the goalkeepers, defenders, midfielders, and strikers of Barcelona, Real Madrid, and Manchester City during the 2017/2018 season.}
\label{fig:teamdiff1718}
\end{figure}
\section*{Acknowledgements}
Tom Decroos is supported by the Research Foundation-Flanders (FWO-Vlaanderen). Jesse Davis is partially supported by the KU Leuven Research Fund (C22/15/015) and FWO-Vlaanderen (G.0356.12, SBO-150033).
\section{Related work}
\label{sec:related-work}
Although the valuation of player actions is an important task with respect to player recruitment and valuation, this subject has remained virtually unexplored in the soccer analytics community due to the challenges resulting from the dynamic and low-scoring nature of soccer. The approaches from \citet{norstebo2016valuing} for soccer, \citet{routley2015markov} for ice hockey, and \citet{cervone2014pointwise} for basketball come closest to our framework. They address the task of valuing individual actions by modeling each game as a Markov game~\citep{littman1994markov}. In contrast to \citet{norstebo2016valuing} and \citet{routley2015markov}, which divide the pitch into a fixed number of zones, our approach models the precise spatial locations of each action. Unlike \citet{cervone2014pointwise}, which is restricted to valuing only three types of on-the-ball actions, our approach considers any relevant on-the-ball action during a game. However, our definitions of player actions, action sets and games are similar to those used by these works as well as earlier research for soccer~\citep{rudd2011framework, hirotsu2002using}, American football~\citep{goldner2012markov}, and baseball~\citep{tango2007book}.
Most of the related work on soccer either focuses on a limited number of player-action types like passes and shots or fails to account for the circumstances under which the actions occurred. \citet{decroos2017starss}, \citet{knutson2017introducing}, and \citet{gregory2017how} address the task of valuing the actions leading up to a goal attempt, whereas \citet{bransen2017valuing} addresses the task of valuing individual passes. The former approaches naively assign credit to the individual actions by accounting for a limited amount of contextual information only, while the latter approach is limited to a single type of action only.
Furthermore, this work is also related to the work on expected-goals models, which estimate the probability of a goal attempt resulting in a goal \citep{lucey2014quality,caley2015premier,altman2015beyond,mackay2016introducing,aalbers2016expected,mackay2017predicting}. In our framework, computing the expected-goals value of a goal attempt boils down to estimating the value of the game state prior to the goal attempt.
\section{Conclusion}
\label{sec:conclusion}
This paper introduced an advanced soccer metric named \algoname that quantifies the performances of players during games. Our metric values any individual player action on the pitch based on its expected influence on the scoreline. In contrast to most existing metrics, our metric offers the benefits that it (1) values all types of actions (e.g., passes, crosses, dribbles, and shots), (2) bases its valuation on the game context, and (3) reasons about an action's possible effect on the subsequent actions. Intuitively, the player actions that increase a team's chance of scoring receive positive values while those actions that decrease a team's chance of scoring receive negative values.
We presented \algoname as a concrete instantiation of our more general action-valuing framework named \frameworkname for use with play-by-play event data. Several illustrative use cases based on an analysis of the data for the top five European leagues highlighted the inner workings of \algoname. Furthermore, we also proposed a language for representing play-by-play event data that is designed with the goal of facilitating data analysis.
A limitation of \algoname is its focus on valuing on-the-ball actions whereas defensive skill often manifests itself through positioning and anticipation abilities that are used to deny certain action possibilities. Therefore, including full optical tracking data would be an interesting direction for future research.
\subsection{Identification of the players who stand out}
\label{sec:results-outperformers}
One talent pipeline often exploited by larger clubs is identifying the players on less successful top division clubs whose skills have the potential to flourish in a more competitive environment. Thus, a natural question to ask is: Can our player rating metric help identify promising talent toiling at lesser clubs that larger clubs could target in the transfer market? When scouting such players from an objective perspective, one challenge is that the value of a metric often will partially reflect the team context. In this case, that means being surrounded by less-talented players, which may adversely affect a player's rating. Therefore, to find players that stand out compared to their teammates' performances, we look at the highest-ranked players on teams who finished outside the top 5 in their respective league. Table~\ref{tbl:outliers} lists the players who stood out at smaller clubs during the 2016/2017 season.
\begin{table}[H]
\centering
\tabcolsep=0.05cm
\begin{tabular}{clllr}
\toprule
\textbf{Rank} & \textbf{Player} & \textbf{Team} & \textbf{Position} & \textbf{Rating}\\
\midrule
1 & Junior Stanislas & Bournemouth & Winger & 0.58\\
2 & Dimitri Payet & West Ham United & Winger & 0.55\\
3 & Iago Aspas & Celta de Vigo & Central striker & 0.52\\
4 & Max Kruse & SV Werder Bremen & Central striker & 0.50\\
5 & Ryad Boudebouz & Montpellier & Attacking midfielder & 0.47\\
6 & Fin Bartels & SV Werder Bremen & Central striker & 0.46\\
7 & Allan Saint-Maximin & Bastia & Winger & 0.46\\
8 & Ross Barkley & Everton & Winger & 0.44\\
9 & Romelu Lukaku & Everton & Central striker & 0.44\\
10 & Federico Viviani & Bologna & Central midfielder & 0.43\\
\bottomrule
\end{tabular}
\caption{The highest-ranked players on teams who finished outside the top 5 in their respective league during the 2016/2017 season according to our metric.}
\label{tbl:outliers}
\end{table}
Table~\ref{tbl:outliers} contains a number of interesting names. Junior Stanislas plays winger for Bournemouth in the English Premier League, and he is especially strong at shooting. Bournemouth performed exceptionally well in the 2016/2017 season, finishing 9th after finishing 16th the previous season. Another interesting player is Ryad Boudebouz, an attacking midfielder for Montpellier last season. He has since been transferred to Real Betis, but was on the wish list for a number of other clubs as well. The list also contains a number of recognized talents such as Dimitri Payet, who was a key performer for France at EURO 2016, Romelu Lukaku, who moved to Manchester United after the 2016/2017 season and is playing well there, and Ross Barkley, who moved to Chelsea in the previous winter transfer window.
\section{Action types}
\label{sec:action-types}
Table~\ref{tbl:action-types} provides an overview of the action types in the dataset alongside their descriptions.
\begin{table}[H]
\begin{tabular}{llll}
\toprule
\textbf{Action type} & \textbf{Description} & \textbf{Successful?} & \textbf{Special result} \tabularnewline\midrule
Pass & Normal pass in open play & Reaches teammate & Offside \tabularnewline\midrule
Cross & Cross into the box & Reaches teammate & Offside \tabularnewline\midrule
Throw-in & Throw-in & Reaches teammate & - \tabularnewline\midrule
Crossed corner & Corner crossed into the box & Reaches teammate & Offside \tabularnewline\midrule
Short corner & Short corner & Reaches teammate & Offside \tabularnewline\midrule
Crossed free-kick & Free kick crossed into the box & Reaches teammate & Offside \tabularnewline\midrule
Short free-kick & Short free-kick & Reaches teammate & Offside \tabularnewline\midrule
Take on & Dribble past opponent & Keeps possession & - \tabularnewline\midrule
Foul & Foul & Always fail & Red or yellow card \tabularnewline\midrule
Tackle & Tackle on the ball & Regains possession & Red or yellow card \tabularnewline\midrule
Interception & Interception of the ball & Always success & - \tabularnewline\midrule
Shot & Shot attempt not from penalty or free-kick & Goal & Own goal \tabularnewline\midrule
Shot from penalty & Penalty shot & Goal & Own goal \tabularnewline\midrule
Shot from free-kick & Direct free-kick on goal & Goal & Own goal \tabularnewline\midrule
Save by keeper & Keeper saves a shot on goal & Always success & - \tabularnewline\midrule
Claim by keeper & Keeper catches a cross & Does not drop the ball & - \tabularnewline\midrule
Punch by keeper & Keeper punches the ball clear & Always success & - \tabularnewline\midrule
Pick-up by keeper & Keeper picks up the ball & Always success & - \tabularnewline\midrule
Clearance & Player clearance & Always success & - \tabularnewline\midrule
Bad touch & Player makes a bad touch and loses the ball & Always fail & - \tabularnewline\midrule
Dribble & Player dribbles at least 3 meters with the ball & Always success & - \tabularnewline\midrule
Run without ball & Player runs without the ball & Always success & - \tabularnewline
\bottomrule
\end{tabular}
\caption{Overview of the action types in the data set alongside their descriptions. The \textit{Successful?} column specifies the condition the action needs to fulfill to be considered successful, while the \textit{Special result} column lists additional possible result values.}
\label{tbl:action-types}
\end{table}
\section{Five best-ranked players per position for the 2016/2017 season}
\label{sec:best-players-2016-2017}
This section lists the five best-ranked players per position for the 2016/2017 season.
\includegraphics[width=.8\textwidth]{2016-2017_Central-Strikers_table.pdf}
\includegraphics[width=.8\textwidth]{2016-2017_Wingers_table.pdf}
\includegraphics[width=.8\textwidth]{2016-2017_Midfielders_table.pdf}
\includegraphics[width=.8\textwidth]{2016-2017_Wingbacks_table.pdf}
\includegraphics[width=.8\textwidth]{2016-2017_CentreBacks_table.pdf}
\includegraphics[width=.8\textwidth]{2016-2017_Goalkeepers_table.pdf}
\section{Five best-ranked players per position for the 2017/2018 season}
\label{sec:best-players-2017-2018}
This section lists the five best-ranked players per position for the 2017/2018 season.
\includegraphics[width=.8\textwidth]{2017-2018_Central-Strikers_table.pdf}
\includegraphics[width=.8\textwidth]{2017-2018_Wingers_table.pdf}
\includegraphics[width=.8\textwidth]{2017-2018_Midfielders_table.pdf}
\includegraphics[width=.8\textwidth]{2017-2018_Wingbacks_table.pdf}
\includegraphics[width=.8\textwidth]{2017-2018_CentreBacks_table.pdf}
\includegraphics[width=.8\textwidth]{2017-2018_Goalkeepers_table.pdf}
\subsection{Selection of 2016/2017 team of the season}
\label{sec:results-best-players}
Figure~\ref{fig:lineup20162017} shows the best possible line-up for the 2016/2017 season according to our metric. For each position, the line-up includes the highest-ranked player who played at least 900 minutes, which is the equivalent of ten full games, in that particular position. The offensive line features the likes of Eden Hazard (Chelsea), the inevitable Lionel Messi (Barcelona), and teenage star Kylian Mbapp\'e, who joined Paris Saint-Germain on a loan from AS Monaco last summer. The French striker will move to the French giants on a permanent basis next summer for a transfer fee rumoured to be around 90 million euros.\footnote{\url{https://www.transfermarkt.com/kylian-mbappe/profil/spieler/342229}} The midfield consists of Kevin De Bruyne (Manchester City), Isco (Real Madrid), and Cesc F\`abregas (Chelsea), who were all key figures for their respective teams during the previous campaign. However, the composition of the defensive line is somewhat more surprising. Serie A centre backs Vlad Chirices (Napoli) and Leonardo Bonucci (Juventus) combine their strength with excellent passing abilities. Bundesliga wing-backs Markus Suttner (FC Ingolstadt 04) and Lukasz Piszczek (Borussia Dortmund) are known for overlapping and providing support in offense. Goalkeeper Jordan Pickford got relegated with Sunderland last season but moved to Everton over the summer nevertheless. These somewhat surprising names in the defensive line reveal one limitation of \algoname. That is, the algorithm only values on-the-ball actions, while defending is often more about preventing your opponent from gaining possession of the ball by clever positioning and anticipation. More specifically, goalkeepers are rewarded for their interventions but not punished for the goals they concede.
The inclusion of Eden Hazard in our \textit{Team of the 2016/2017 Season} shows the strength of our metric at identifying impactful players. The Belgian winger, who had a crucial role in Chelsea's Premier League title, is the seventh-highest rated player on our metric but ranks only 133rd in terms of goals and assists per 90 minutes with 10 goals and 3 assists. Similarly, wing-back Lukasz Piszczek ranks 19th on our metric but only appears in 292nd position for goals and assists per 90 minutes with 5 goals and 1 assist. In contrast, notable omissions from the team are high-profile players like Robert Lewandowski (54th), \'Alvaro Morata (61st), Edinson Cavani (77th), and Edin Dzeko (265th), who were all directly involved in more than one goal or assist per 90 minutes in the 2016/2017 season.
Figure~\ref{fig:lineup20172018} shows the best possible line-up for the 2017/2018 season up through November 5th 2017 according to our metric. For each position, the line-up includes the highest-ranked player who played at least 450 minutes in that particular position. The average rating for the players for the 2017/2018 season (0.659) is significantly higher than the average rating for the players on the 2016/2017 season (0.551). However, we expect the average rating to regress towards the average for last season as the season progresses.
Appendix~\ref{sec:best-players-2016-2017} lists the five highest-rated players in each position for the 2016/2017 season. Appendix~\ref{sec:best-players-2017-2018} lists the five highest-rated players in each position for the 2017/2018 season until November 5th 2017.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{Lineup_2016_2017.png}
\caption{The best possible line-up for the 2016/2017 season according to our metric. For each position, the line-up includes the highest-ranked player who played at least 900 minutes in that particular position.}
\label{fig:lineup20162017}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{Lineup_2017_2018.png}
\caption{The best possible line-up for the 2017/2018 season until November 5th 2017 according to our metric. For each position, the line-up includes the highest-ranked player who played at least 450 minutes in that particular position.}
\label{fig:lineup20172018}
\end{figure}
\section{\repname: A language for representing player actions}
\label{sec:representation}
Valuing player actions requires a dedicated language that is \textit{human-interpretable}, \textit{simple}, and \textit{complete} in order to accurately define and describe these actions. Human-interpretability allows reasoning about what happens on the pitch and verifying whether the action values correspond to soccer experts' intuitions. Simplicity reduces the chance of making mistakes when automatically processing the language. Completeness makes it possible to express all the information required to value actions in their full context.
Based on domain knowledge and feedback from soccer experts, we introduce \repname (\repfull). \repname represents each action as a tuple of nine attributes:
\begin{description}
\item[StartTime:] the exact timestamp for when the action started;
\item[EndTime:] the exact timestamp for when the action ended;
\item[StartLocation:] the $(x,y)$ location where the action started;
\item[EndLocation:] the $(x,y)$ location where the action ended;
\item[Player:] the player who performed the action;
\item[Team:] the team of the player;
\item[Type:] the type of the action;
\item[BodyPart:] the body part used by the player for the action;
\item[Result:] the result of the action.
\end{description}
We distinguish between 21 possible types of actions including, among others, \textit{passes}, \textit{crossed corners}, \textit{dribbles}, \textit{runs without ball}, \textit{throw-ins}, \textit{tackles}, \textit{shots}, \textit{penalty shots}, \textit{clearances}, and \textit{keeper saves}. These action types are interpretable and specific enough to accurately describe what happens on the pitch yet general enough such that similar actions have the same type.
Depending on the type of the action, we consider up to four different body parts and up to six possible results. The possible body parts are \textit{foot}, \textit{head}, \textit{other}, and \textit{none}. The two most common results are \textit{success} and \textit{fail}, which indicate whether or not the action had its intended result, for example, a pass reaching a teammate or a tackle recovering the ball. The four other possible results are \textit{offside} for passes resulting in an off-side call, \textit{own goal}, \textit{yellow card}, and \textit{red card}.
We represent a game as a sequence of action sets, where each action set describes the actions performed by the players in between two consecutive touches of the ball. More formally, each action set $A$ consists of one on-the-ball action and $n-1$ off-the-ball actions, where $n$ is the total number of players on the pitch. Each game is a sequence of action sets $<A_1,A_2,\ldots, A_m>$, where $m$ is the total number of touches of the ball.
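As an illustration, a \repname action and a game could be represented in code as follows; the class and field names mirror the attributes above but are otherwise illustrative.
\begin{verbatim}
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Action:
    start_time: float                    # seconds since kick-off
    end_time: float
    start_location: Tuple[float, float]  # (x, y) coordinates on the pitch
    end_location: Tuple[float, float]
    player: str
    team: str
    type: str                            # one of the 21 action types, e.g. "pass"
    body_part: str                       # "foot", "head", "other" or "none"
    result: str                          # "success", "fail", "offside", ...

# An action set groups the actions between two consecutive touches of the
# ball; a game is the sequence of its action sets.
ActionSet = List[Action]
Game = List[ActionSet]

example = Action(12.3, 13.1, (35.0, 20.0), (48.0, 25.0),
                 "Lionel Messi", "Barcelona", "pass", "foot", "success")
game: Game = [[example]]
\end{verbatim}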
In addition to being human-interpretable, simple and complete, \repname has the added advantage of being able to naturally unify both event data and tracking data collected by providers such as Wyscout, Opta, and STATS. The representations used by these companies have multiple different objectives (e.g., providing information to the media or informing clubs) and are not necessarily designed to facilitate data analysis. Furthermore, each representation uses a slightly different terminology when describing the events that occur during a game. \repname is an attempt to unify the existing description languages into a common vocabulary that enables subsequent data analysis. The following sections operate on data in the \repname format.
\section{Introduction}
How will a player's actions impact his or her team's performances in games? This question is among the most relevant questions that needs to be answered when a professional soccer club is considering whether to sign a player. Nevertheless, the task of objectively quantifying the impact of the individual actions performed by soccer players during games remains largely unexplored to date. What complicates the task is the low-scoring and dynamic nature of soccer games. While most actions do not impact the scoreline directly, they often do have important longer-term effects. For example, a long pass from one flank to the other may not immediately lead to a goal but can open up space to set up a goal chance several actions down the line.
To help fill the gap in objectively quantifying player performances, we propose a novel advanced soccer metric that assigns a value to any individual player action on the pitch, be it with or without the ball, based on its impact on the game outcome. Intuitively, our action values reflect the actions' expected influence on the scoreline. That is, an action valued at +0.05 is expected to contribute 0.05 goals in favor of the team performing the action, whereas an action valued at -0.05 is expected to yield 0.05 goals for their opponent. Unlike most existing advanced metrics, our proposed metric considers all types of actions (e.g., passes, crosses, dribbles, take-ons, and shots) and accounts for the circumstances under which each of these actions happened as well as their possible longer-term effects.
Our metric was designed to take a step towards addressing three important limitations of most existing advanced soccer metrics~\citep{routley2015markov}. The first limitation is that existing metrics largely ignore actions other than goals and shots. The soccer analytics community's focus has very much been on the concept of the expected value of a goal attempt in recent years \citep{lucey2014quality,caley2015premier,altman2015beyond,mackay2016introducing,aalbers2016expected,mackay2017predicting}. The second limitation is that existing approaches tend to assign a fixed value to each action, regardless of the circumstances under which the action was performed. For example, many pass-based metrics treat passes between defenders in the defensive third of the pitch without any pressure whatsoever and passes between attackers in the offensive third under heavy pressure from the opponents similarly. The third limitation is that most metrics only consider short-term effects and fail to account for an action's effects a bit further down the line. These limitations render many of the existing metrics virtually useless for player recruitment purposes.
Using our metric, we analyzed the 2016/2017 campaign to construct a \textit{Team of the 2016/2017 Season}. When applied to on-the-ball actions like passes, dribbles, and shots alone, Barcelona's Lionel Messi unsurprisingly headlines the team as the highest-ranked player. His average action value per game last season was 26\% higher than his nearest competitor's. Other members featuring on the team include forward Kylian Mbapp\'e then playing for AS Monaco, Real Madrid midfielder Isco, Manchester City playmaker Kevin De Bruyne as well as Chelsea teammates Eden Hazard and Cesc F\`abregas. To identify young talent, we also ranked the best players under 21 years old from the 2016/2017 season according to our metric. Teenage star Mbapp\'e, who moved to French giants Paris Saint-Germain last summer, tops this list. He appears ahead of his fellow countrymen Ousmane Demb\'el\'e, who moved to Barcelona from Borussia Dortmund over the summer, and midfielder Maxime Lopez of Olympique Marseille.
In summary, this paper presents the following four contributions:
\begin{enumerate}
\item \repname: A powerful but flexible language for representing player actions, which is described in Section~\ref{sec:representation}.
\item \frameworkname: A general framework for valuing player actions based on their contributions to the game outcome, which is introduced in Section~\ref{sec:framework}.
\item \algoname: An algorithm for valuing on-the-ball player actions as a concrete instance of the general framework, which is outlined in Section~\ref{sec:algorithm}.
\item A number of use cases showcasing our most interesting results and insights, which are presented in Section~\ref{sec:results}.
\end{enumerate}
\section{Use cases}
\label{sec:results}
In this section, we present a number of use cases to demonstrate the possible applications of our proposed metric. We focus our analysis on the English Premier League, Spanish Primera Division, German 1. Bundesliga, Italian Serie A, and the French Ligue 1. We apply the \algoname algorithm to 9582 games played since the start of the 2012/2013 season. We only include league games and thus ignore all friendly, cup, and European games. We train the predictive models on the games in the 2012/2013 through 2015/2016 seasons and report results for the 2016/2017 season as well as the ongoing 2017/2018 season until Sunday November 5th 2017. We represent each game as a sequence of roughly 1750 on-the-ball actions. The most frequently occurring actions in our dataset are passes (53\%) and dribbles (24\%). In contrast, shots are much rarer and represent just 1.4\% of the actions with only 11\% of them resulting in a goal.
The remainder of this section is structured as follows. Section~\ref{sec:results-intuition} explains the intuition behind our metric by means of Kevin De Bruyne's goal for Manchester City against Arsenal on Sunday November 5th 2017.
Section~\ref{sec:results-distributions} provides insights into the distribution of the action values.
Section~\ref{sec:results-best-players} shows the best possible line-up for the 2016/2017 season based on our metric.
Section~\ref{sec:results-best-talents} discusses the five highest-rated players born after January 1st 1997 for the 2016/2017 season. Section~\ref{sec:results-outperformers} identifies a number of players who stood out at smaller clubs during the 2016/2017 season.
Section~\ref{sec:results-playing-styles} explains how our metric can be used to compare players in terms of their playing styles.
Section~\ref{sec:results-team-performances} shows how the performances of Manchester City, Real Madrid, and Barcelona have evolved since the start of the 2016/2017 season.
Section~\ref{sec:deployment} discusses how our metric is used by SciSports, a Dutch data analytics company providing expertise to soccer clubs.
\input{tex/results_intuition.tex}
\input{tex/results_distributions.tex}
\input{tex/results_best_players.tex}
\input{tex/results_best_talents.tex}
\input{tex/results_outperformers.tex}
\input{tex/results_playing_styles.tex}
\input{tex/results_team_performances.tex}
\input{tex/results_deployment.tex}
\subsection{Distribution of the action values}
\label{sec:results-distributions}
Figure~\ref{fig:nr_action_mean} shows the number of actions that players execute on average per 90 minutes and the average value of their actions for those players who played at least 900 minutes during the 2016/2017 season. Naturally, there is a tension between these two quantities. If a player performs a high number of actions, then it is harder for each action to have a high value. The 15 highest-rated players according to our metric are highlighted in red.
The grey dotted isoline shows the gap in total contribution between Messi and other players. This isoline is curved since a player's total contribution is computed as the average value per action (\emph{x-axis}) multiplied by the number of actions per 90 minutes (\emph{y-axis}).
The plot shows that strikers like Harry Kane (Tottenham Hotspur), Luis Su\'arez (Barcelona), Kylian Mbapp\'e (AS Monaco), and Pierre-Emerick Aubameyang (Borussia Dortmund) are less involved in the game as they perform a relatively low number of actions on average. However, the actions they do perform tend to be highly valued. In contrast, players like Arjen Robben (Bayern Munich), Eden Hazard (Chelsea), and Philippe Coutinho (Liverpool) perform more actions although the average value of their actions is considerably lower. Cesc F\`abregas (Chelsea), Isco (Real Madrid), and James Rodr\'iguez (Real Madrid) perform more actions per 90 minutes than them while maintaining a higher average value per action. Finally, as shown by the isoline and more traditional statistics,\footnote{\url{https://fivethirtyeight.com/features/lionel-messi-is-impossible/}} Lionel Messi is clearly in a class of his own.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{nr_value_actions-tomd.pdf}
\caption{Scatter plot that contrasts the average number of actions performed per 90 minutes with the average value of these actions for each player who played at least 900 minutes during the 2016/2017 season. The 15 highest-rated players according to our metric are highlighted in red.}
\label{fig:nr_action_mean}
\end{figure}
For nine positions on the pitch, Figure~\ref{fig:distr_pos} shows the distribution of the average ratings per game for those players who played at least 900 minutes during the 2016/2017 season. The highest-rated player for each position is highlighted in red.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{distr_pos-tomd.pdf}
\caption{Distribution of average per game rating for players who played at least 900 minutes in the 2016/2017 season.}
\label{fig:distr_pos}
\end{figure}
\subsection{Deployment in the soccer industry}
\label{sec:deployment}
The SciSports Datascouting department leverages our action values for providing data-driven advice to soccer clubs and soccer associations with respect to player recruitment and opponent analysis. Until recently, the SciSports datascouts almost exclusively relied upon more traditional metrics and statistics as well as the company's SciSkill Index, which ranks all professional soccer players in the world in terms of their actual and expected future contributions to their teams' performances. The SciSkill Index provides intuitions about the general level of a player, whereas our action values offer more insights into how each player contributes to his team's performances. While our action values are currently only available for internal use by the SciSports datascouts, they will also be made available in the SciSports Insight\footnote{\url{https://insight.scisports.com}} online scouting platform.
\subsection{Characterization of playing styles}
\label{sec:results-playing-styles}
Clubs are beginning to consider player types during the recruitment process in order to focus on identifying those players who best fit a team's preferred style of play (e.g., short passes and high defending vs. long balls and defensive play). Currently, scouts and experts are typically tasked with judging playing style. These experts' time is almost always the limiting resource in the player recruitment process, which makes it difficult to consider the entire pool of players. Therefore, advanced metrics offer the potential to help select a set of players that are worthy of additional attention. The metrics can be used to assess a player's ability at performing different types of actions. With our metric, this can be accomplished by computing a player's total value per 90 minutes for each type of action.
To showcase this use case, we analyze the playing styles of Lionel Messi, Harry Kane, and Kylian Mbapp\'e,
who are all counted among the best forward players in the world. Figure~\ref{fig:playercharacteristics} shows the total contributions per 90 minutes for the passes, crosses, dribbles, and shots performed by these three players. Messi rates excellent at all four aspects and is an \textit{allrounder}. In comparison to Messi, Kane rates poorly at passing, dribbling and particularly crossing. However, he outperforms Messi in shooting and is clearly a \textit{finisher}, which is also reflected in the fact that he has scored 23 goals while providing only one assist in the ongoing season. In comparison to Messi, Mbapp\'e only rates poorly at passing and even outperforms him in crossing.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{playercharacteristics-lotteb.pdf}
\caption{Overview of the total contribution per 90 minutes for different types of actions for Lionel Messi, Harry Kane, and Kylian Mbappé.}
\label{fig:playercharacteristics}
\end{figure}
As another use case, consider FC Barcelona's attempts to offset the loss of Neymar by acquiring Borussia Dortmund's Ousmane Demb\'{e}l\'{e} and Liverpool's Philippe Coutinho. Figure \ref{fig:neymar} compares Demb\'{e}l\'{e}, Coutinho and Neymar's total values per 90 minutes for four action types. According to our metric, both Demb\'{e}l\'{e} and Coutinho's passes receive a much higher value than Neymar's. Demb\'{e}l\'{e} is the best crosser, with Neymar and Coutinho receiving nearly identical values for this skill. Neymar is a superior dribbler, and is ranked as the third best dribbler out of all players we analyzed in the 2016/2017 season. However, Demb\'{e}l\'{e} is also exceptionally strong at dribbling and is ranked as the tenth best dribbler, whereas Coutinho is ranked thirty-fourth. From a stylistic perspective, this breakdown suggests that Demb\'{e}l\'{e} was a reasonable target in that he comes close to replicating Neymar's signature skill of dribbling.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{playercharacteristics2-lotteb.pdf}
\caption{Overview of the total contribution per 90 minutes for different types of actions for Neymar, Ousmane Demb\'el\'e, and Philippe Coutinho.}
\label{fig:neymar}
\end{figure}
\subsection{Identification of young talents}
\label{sec:results-best-talents}
Table~\ref{tbl:talents2016} shows the five highest-rated players born after January 1st 1997 who played at least 900 minutes during the 2016/2017 season. Kylian Mbapp\'{e}, who is recognized as one of the biggest talents in the world, tops this list with a rating nearly twice as high as his nearest competitor. He has seamlessly transitioned from Monaco to Paris Saint-Germain this season, and has continued to gain acclaim for his play.
Allan Saint-Maximin, who played in midfield for Bastia in the French Ligue 1 last season, is ranked second. His play earned him both a transfer to Nice after the season and plaudits from the soccer intelligentsia.\footnote{\href{http://www.squawka.com/news/allan-saint-maximin-the-monaco-wonderkid-you-havent-heard-of-yet-and-europes-take-on-king/919430}{http://www.squawka.com/news/allan-saint-maximin-the-monaco-wonderkid-\\you-havent-heard-of-yet-and-europes-take-on-king/919430}} Ousmane Demb\'{e}l\'{e} is also a huge talent, who parlayed his outstanding season for Borussia Dortmund into a summer move to FC Barcelona, where he was injured early in the season. Maxime Lopez and Malcom play in the Ligue 1 and remained with their respective clubs where they continue to play well and are attracting significant interest from bigger clubs.
\begin{table}[H]
\centering
\begin{tabular}{cllclr}
\toprule
\textbf{Rank} & \textbf{Player} & \textbf{Team} & \textbf{Age} & \textbf{Position} & \textbf{Rating} \tabularnewline
\midrule
1 & Kylian Mbapp\'e & AS Monaco & 18 & Central striker & 0.82 \tabularnewline
2 & Allan Saint-Maximin & Bastia & 20 & Winger & 0.46 \tabularnewline
3 & Ousmane Demb\'el\'e & Borussia Dortmund & 20 & Winger & 0.38 \tabularnewline
4 & Maxime Lopez & Olympique Marseille & 19 & Attacking midfielder & 0.30 \tabularnewline
5 & Malcom & Girondins Bordeaux & 20& Winger & 0.26 \tabularnewline
\bottomrule
\end{tabular}
\caption{The highest-ranked players born after January 1st 1997 during the 2016/2017 season according to our metric.}
\label{tbl:talents2016}
\end{table}
Next, we broaden the age range and also consider players under 23 years old. Figure~\ref{fig:growth} shows the 15-game moving average for our metric for Leroy San\'{e}, Mikel Oyarzabal, and Karol Linetty. Leroy San\'{e} was a big signing for Pep Guardiola in the summer of 2016, and is widely recognized for his high level of play this season with Manchester City. Mikel Oyarzabal currently plays for mid-table Primera Division team Real Sociedad. However, the 20-year-old winger, who debuted for the Spanish national team last year, is being linked with big clubs throughout Europe.
Karol Linetty is a 22-year-old central midfielder playing for Sampdoria in Serie A. He is much less well known than the other two players, but our metric suggests he is playing at a level commensurate with these more highly touted youngsters, and hence the Pole may be one to watch.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{growth-player-tomd.pdf}
\caption{The 15-game moving average for our metric for Leroy San\'{e} (Manchester City), Mikel Oyarzabal (Real Sociedad), and Karol Linetty (Sampdoria) since the start of the 2016/2017 season.}
\label{fig:growth}
\end{figure}
\section{\algoname: An algorithm for valuing on-the-ball actions}
\label{sec:algorithm}
In this section, we describe the \algoname (\algofull) algorithm for valuing on-the-ball player actions as an instantiation of our general framework. As a data source, we consider play-by-play event data, which means that each action set contains exactly one on-the-ball action and no other actions. We employ machine learning to estimate the probabilities $P_{hg}$ and $P_{vg}$ from the stream of actions. Consequently, we frame this as a binary classification problem and train a probabilistic classifier to estimate the probabilities. Our implementation involves three key tasks: (1) transforming the stream of actions into a feature-vector format, (2) selecting and training a probabilistic classifier, and (3) aggregating the individual action values to arrive at a rating for a player.
\subsection{Constructing features}
Applying standard machine learning algorithms requires converting the sequence of action sets $<A_1,A_2,\ldots, A_m>$ describing an entire game into examples in the feature-vector format. Thus, one training example is constructed for each game state $S_i$. A game state $S_i$ is labeled positive if the team possessing the ball after action set $A_i$ scored a goal within the next ten actions. A goal in this time frame could arise from either a converted shot by the team possessing the ball after $A_i$ or an own goal by the opposing team.
For each example, instead of defining features based on the entire current game state $S_i = <A_1,...,A_i>$, we only consider the previous three action sets $<A_{i-2},A_{i-1},A_i>$. Approximating the game state in this manner offers several advantages. First, most machine learning techniques require examples to be described by a fixed number of features. Converting game states with varying numbers of actions, and hence different amounts of information, into this format would necessarily result in a loss of information. Second, considering a small window focuses attention on the most relevant aspects of the current context. The number of action sets to consider in the approximation is a parameter of the approach, and three sets was empirically found to work well as shown in Section \ref{sec:estimating-probabilities}.
Since each action set $A_i$ only consists of one on-the-ball action $a_i$ in our data source, we denote the actions we consider as $<a_{i-2},a_{i-1},a_i>$.
From these actions, we define features that will impact the probability of a goal being scored in the near future. Based on the \repname representation, we consider three categories of features.
First, for each of the three actions, we define a number of categorical and real-valued features based on information explicitly included in the \repname representation. There are categorical features for an action's $Type$, $Result$, and $BodyPart$. Similarly, there are continuous features for the $(x,y)$-coordinates of its start location, the $(x,y)$-coordinates of its end location, and the time elapsed since the start of the game.
Second, we define a number of complex features that combine information within an action and across consecutive actions. Within each action, these include (1) the distance and angle to the goal for both the action's start and end locations, and (2) the distance covered during the action in both the $x$ and $y$ directions. Between two consecutive actions, we compute the distance and elapsed time between the start position and time of an action, and the end position and time of the next action. These features provide an intuition about the current speed of play in the game. Additionally, there is also a feature indicating whether the ball changed possession between these two actions.
Finally, to capture the game context, we add as features (1) the number of goals scored in the game by the team possessing the ball after action $a_i$, (2) the number of goals scored in the game by the defending team after action $a_i$, and (3) the goal difference in the game after action $a_i$.
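The sketch below illustrates the labeling of game states and a simplified version of this feature construction; the lightweight action records, the goal coordinates, and the exact indexing of the ten-action window are illustrative assumptions.
\begin{verbatim}
from collections import namedtuple
import math

# Lightweight illustrative action record (a subset of the attributes above).
A = namedtuple("A", "team type result start_time start_location end_location")

GOAL = (105.0, 34.0)  # assumed coordinates of the centre of the opponent goal

def label_game_states(actions, window=10):
    # Label state i positive if the team in possession scores within the next
    # `window` actions (converted shot, or own goal by the opposing team).
    labels = []
    for i, a in enumerate(actions):
        future = actions[i:i + window]
        scored = any(
            (b.type == "shot" and b.result == "success" and b.team == a.team)
            or (b.result == "owngoal" and b.team != a.team)
            for b in future)
        labels.append(int(scored))
    return labels

def state_features(actions, i, k=3):
    # Approximate game state S_i by its last k actions and flatten them.
    feats = {}
    for j, a in enumerate(actions[max(0, i - k + 1):i + 1]):
        dx, dy = GOAL[0] - a.end_location[0], GOAL[1] - a.end_location[1]
        feats[f"type_{j}"] = a.type
        feats[f"result_{j}"] = a.result
        feats[f"x_end_{j}"], feats[f"y_end_{j}"] = a.end_location
        feats[f"dist_to_goal_{j}"] = math.hypot(dx, dy)
        feats[f"time_{j}"] = a.start_time
    return feats

actions = [
    A("Barcelona", "pass", "success", 10.0, (50, 30), (60, 32)),
    A("Barcelona", "dribble", "success", 12.0, (60, 32), (70, 34)),
    A("Barcelona", "shot", "success", 14.0, (70, 34), (105, 34)),
]
print(label_game_states(actions), state_features(actions, 2))
\end{verbatim}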
\subsection{Estimating probabilities}
\label{sec:estimating-probabilities}
We investigated which learner to use as well as the number of actions prior to the action of interest to consider. To properly evaluate our classifiers, we used play-by-play event data for Europe's top five competitions. We trained models on all game states for the 2012/2013 through 2014/2015 seasons and predicted the goal probabilities for all game states for the 2015/2016 season.
First, we investigated which learner to use for this task. Logistic Regression is the prevalent method in the soccer analytics community, while Random Forest and Neural Network are popular choices for addressing machine-learning tasks. We compared the performance of these three learners as implemented in the H2O software package\footnote{\url{https://www.h2o.ai}} on three commonly-used evaluation metrics in probabilistic classification~\citep{ferri2009experimental}: (1) logarithmic loss, (2) area under the receiver operating characteristic curve (ROC AUC), and (3) Brier score. A Random Forest classifier with 1000 trees won on all metrics and achieved a ROC AUC of 79.7\%. Furthermore, it was the best calibrated classifier as shown in Figure~\ref{fig:calibration-classifier}. Our observation that Random Forest outperforms Logistic Regression on the task of probabilistically predicting goals is in line with earlier work~\cite{decroos2017predicting}.
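The sketch below illustrates such a comparison on synthetic data, using scikit-learn classifiers as a stand-in for the H2O implementation; the feature matrix and labels are illustrative placeholders.
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss, roc_auc_score, brier_score_loss

rng = np.random.default_rng(1)

# Illustrative stand-in for the game-state feature matrix and the binary
# "goal scored within the next ten actions" labels.
X = rng.normal(size=(5000, 20))
y = (X[:, 0] + 0.5 * rng.normal(size=5000) > 1.5).astype(int)
X_train, X_test, y_train, y_test = X[:4000], X[4000:], y[:4000], y[4000:]

models = [("logistic regression", LogisticRegression(max_iter=1000)),
          ("random forest", RandomForestClassifier(n_estimators=1000,
                                                   random_state=1))]
for name, model in models:
    model.fit(X_train, y_train)
    p = model.predict_proba(X_test)[:, 1]
    print(name,
          "logloss=%.4f" % log_loss(y_test, p),
          "auc=%.3f" % roc_auc_score(y_test, p),
          "brier=%.4f" % brier_score_loss(y_test, p))
\end{verbatim}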
\begin{figure}[H]
\includegraphics[width=\textwidth]{calibration-tomd.pdf}
\caption{Calibration curves of the three classifiers under consideration. The probabilities produced by the Random Forest model are calibrated better than the probabilities produced by the other two models.}
\label{fig:calibration-classifier}
\end{figure}
Second, we investigated the number of previous actions to consider. Adding too few actions might leave valuable contextual information unused, while adding too many actions can make the feature set unnecessarily noisy. We trained five different Random Forest classifiers, varying the number of previous actions from one to five, as shown in Table~\ref{tbl:eval-actionnb}. We found that three actions is the best number, which is in line with earlier work by~\citet{mackay2017predicting}.
\begin{table}[H]
\centering
\begin{tabular}{crrr}
\toprule
\textbf{Actions} & \textbf{Logarithmic loss} & \textbf{ROC AUC} & \textbf{Brier score} \tabularnewline
\midrule
1 &0.0548 &0.7955 &0.0107 \tabularnewline
2 &0.0546 &0.7973 &0.0107 \tabularnewline
\textbf{3} &\textbf{0.0546} &\textbf{0.7977} &\textbf{0.0107} \tabularnewline
4 &0.0546 &0.7970 &0.0107 \tabularnewline
5 &0.0547 &0.7965 &0.0107 \tabularnewline
\bottomrule
\end{tabular}
\caption{Comparison of five Random Forest models taking into account a varying number of actions prior to the action of interest. For the logarithmic loss and the Brier score a lower value is better, while for the ROC AUC a higher value is better. The best results are in bold.}
\label{tbl:eval-actionnb}
\end{table}
\subsection{Rating players}
To this point, our method assigns a value to each individual action. However, our method also allows aggregating the individual action values into a player rating for multiple time granularities as well as along several different dimensions. A player rating could be derived for any given time frame, where the most natural ones would include a time window within a game, an entire game, or an entire season. Regardless of the given time frame, we compute a player rating in the same manner. Since spending more time on the pitch offers more opportunities to contribute, we compute the player ratings per 90 minutes of game time. For each player, we first sum the values for all the actions performed during the given time frame, then divide this sum by the total number of minutes he played and finally multiply this ratio by 90 minutes.
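The aggregation described above boils down to the following computation (a sketch; treating players without any recorded playing time as unrated is an assumption of the sketch):
\begin{verbatim}
def rating_per_90(action_values, minutes_played):
    # Sum the values of all actions the player performed in the time frame,
    # normalise by playing time and rescale to 90 minutes of game time.
    if minutes_played == 0:
        return None
    return sum(action_values) / minutes_played * 90.0
\end{verbatim}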
Players can also be compared along several different axes. First, players have different positions, and the range of values for the rating may be position dependent. Therefore, comparisons could be done on a per-position basis. Similarly, some players are versatile and what position they play may vary depending on the game. Therefore, it may be interesting to examine a player's rating for each position he or she plays. Second, instead of summing over all actions, it is possible to compute a player's rating for each action type. This would allow constructing a player profile, which may enable identifying different playing styles.
\section{\frameworkname: A framework for valuing player actions}
\label{sec:framework}
Broadly speaking, most actions in a soccer game are performed with the intention of (1) increasing the chance of scoring a goal, or (2) decreasing the chance of conceding a goal. Given that the influence of most actions is temporally limited, one way to assess an action's effect is by calculating how much it alters the chances of both scoring and conceding a goal in the near future. We treat the effect of an action on scoring and conceding separately as these effects may be asymmetric in nature and context dependent.
In this section, we introduce the \frameworkname (\frameworkfull) framework for valuing actions performed by players. In our framework, valuing an action boils down to estimating the probabilities that a team will score and concede a goal in the near future for both the game state before the action was performed and the game state after the action was performed.
Now, we will more formally define our metric. For ease of exposition, we will use $h$ to denote the home team and $v$ the visiting team, and will focus on the perspective of the home team. Given any game state $S_i=\langle A_1, \ldots, A_{i}\rangle$, we need to estimate the short-term probability of a home goal ($hg$) and a visiting goal ($vg$), which we denote by:
\begin{eqnarray*}
P_{hg}(S_i) &=& P(hg \in F^k_i | S_i) \\
P_{vg}(S_i) &=& P(vg \in F^k_i | S_i)
\end{eqnarray*}
where $F^k_i = \langle A_{i+1}, \ldots, A_{i+k}\rangle$ is the sequence of $k$ action sets that follow action set $A_i$, and $k$ is a user-defined parameter. These probabilities form the basis of our action-rating framework.
Valuing an action requires assessing the \emph{change} in probability for both $P_{hg}$ and $P_{vg}$ as a result of action set $A_i$ moving the game from state $S_{i-1}$ to state $S_i$.\footnote{The challenge of distributing the payoffs of the joint actions that a group takes across the individuals constituting the group goes beyond the scope of this paper but is a well-studied topic in the field of cooperative game theory~\citep{driessen2013cooperative}. The Shapley value is one possible solution to this challenge and has been successfully applied to soccer already~\citep{altman2016finding}.} The change in probability of the home team scoring can be computed as:
\begin{equation*}
\Delta P_{hg} = P_{hg}(S_i) - P_{hg}(S_{i-1}).
\end{equation*}
\noindent This change will be positive if the action increased the probability that the home team will score. The change can be computed in an analogous manner for $P_{vg}$ as:
\begin{equation*}
\Delta P_{vg} = P_{vg}(S_i) - P_{vg}(S_{i-1}).
\end{equation*}
Finally, before combining these two terms, we must contend with the subtlety that the ball may change possession as a result of $A_i$. To account for this, we always normalize the value to be computed from the perspective of the team that has possession after the $i^{th}$ action set. If the home team has possession after action set $A_i$, then the value is calculated as:
\begin{equation*}
V(A_i) = \Delta P_{hg} - \Delta P_{vg}
\end{equation*}
For this valuing scheme, higher scores represent more valuable actions, so the change in $P_{vg}$ is subtracted from the change in $P_{hg}$ because it is advantageous for the home team to decrease its chance of conceding. If the visiting team had possession after action set $A_i$, the two terms would be swapped.
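In code, the valuing scheme reads as follows (a sketch; the four probabilities are assumed to be produced by a probabilistic classifier such as the one described in Section~\ref{sec:estimating-probabilities}):
\begin{verbatim}
def action_value(p_hg_before, p_vg_before, p_hg_after, p_vg_after,
                 home_has_ball_after):
    # Change in the short-term scoring and conceding probabilities caused
    # by moving the game from state S_{i-1} to state S_i.
    d_hg = p_hg_after - p_hg_before
    d_vg = p_vg_after - p_vg_before
    # Normalise to the perspective of the team in possession after A_i.
    return d_hg - d_vg if home_has_ball_after else d_vg - d_hg
\end{verbatim}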
The \frameworkname framework provides a simple approach to valuing actions that is independent of the representation used to describe the actions. The strength of the framework lies in the fact that it transforms the subjective task of valuing an action into the objective task of predicting the likelihood of a future event in a natural way. One possible limitation is that game-state transitions correspond to on-the-ball actions, whereas some off-the-ball actions (e.g., a smart overlap from a wing-back) can span several consecutive on-the-ball actions. As a result, accurately valuing such off-the-ball actions would require the additional step of aggregating the values of the constituting subactions.
\subsection{Intuition behind the action values}
\label{sec:results-intuition}
Figure~\ref{fig:de-bruyne} visualizes the goal from Manchester City midfielder Kevin De Bruyne against Arsenal on Sunday November 5th 2017. The table at the top of the figure shows the action values assigned to the shot that resulted in the goal as well as the twelve prior actions.
\begin{figure*}[h!]
\centering
\includegraphics[width=.8\textwidth]{De_Bruyne.pdf}
\caption{Visualization of Kevin De Bruyne's 19th-minute goal for Manchester City against Arsenal on Sunday November 5th 2017. The table at the top shows the values assigned to each of the actions performed in the build-up to the shot.}
\label{fig:de-bruyne}
\end{figure*}
The attack starts with Argentine forward Sergio Ag\"uero who first takes on an opponent (Action~1), then dribbles into the box (Action~2), and finally delivers a cross that fails to reach a teammate (Action~3), which gets a negative value of -0.045. The clearance from Arsenal defender Laurent Koscielny (Action~4) is collected by De Bruyne, who attempts a shot on target (Action~5). The Belgian midfielder sees his shot saved by Arsenal goalkeeper Petr Cech (Action~6), whose save gets a positive value of 0.014. However, Manchester City are able to recover the ball, which returns to De Bruyne following passes from Leroy San\'e (Action~7) and Fabian Delph (Action~8). De Bruyne first dribbles a bit towards the middle of the pitch (Action~9) and sets up a one-two pass with teammate Fernandinho (Actions~10~and~11), then dribbles into the box (Action~12), and finally sends the ball into the lower-right corner of the goal with a powerful driven shot (Action~13). The dribble into the box and the shot get positive values of 0.040 and 0.888, respectively.
The attack leading to De Bruyne's goal is a clear example of how our metric works. Actions increasing a team's chances of scoring (e.g., a dribble or pass to a more dangerous location on the pitch like Actions~11~and~12) or decreasing the opponent's chances of scoring (e.g., a clearance and a save by the goalkeeper like Actions~4~and~6) receive positive values, whereas actions decreasing a team's chances of scoring like the failed cross from Ag\"uero (Action~3) receive negative values. In this particular game, the 19th-minute goal from De Bruyne is the highest-valued action, while a 47th-minute foul from Arsenal's Nacho Monreal causing a penalty is the lowest-valued action.
\section{Introduction}
\subsection{Problem formulation}
Football is a typical low-scoring game, and matches are frequently decided by single events. These events may be extraordinary individual performances, individual errors, injuries, refereeing errors or just lucky coincidences. Moreover, most tournaments feature teams and players in exceptional shape that have a strong influence on the outcome of the tournament. One consequence is that every now and then alleged underdogs win tournaments while reputed favorites drop out as early as the group phase.
The above effects are notoriously difficult to forecast.
Despite this fact, every team has its strengths and weaknesses (e.g., defense and attack) and most of the results reflect the qualities of the teams. In order to model both the random effects and the ``deterministic'' drift, forecasts should be given in terms of probabilities.
Among football experts and fans alike there is mostly a consensus on the top favorites, e.g. Senegal, Cameroon or Egypt, and more debate on possible underdogs. However, most of these predictions rely on subjective opinions and are not quantifiable. An additional difficulty is the complexity of the tournament, with billions of different outcomes, making it very difficult to obtain accurate guesses of the probabilities of certain events. In the particular case of the African championship it is even harder to estimate the strengths of the participating teams, or to determine how much these strengths diverge, since many teams and players are not as well known as those from Europe or South America. Hence, the focus of this article is not to make an exact forecast, which seems unreasonable due to the many unpredictable events, but to make the discrepancy between the participating teams \textit{quantifiable} and to measure the chances of each team. This approach is underlined by the fact that supporters of the participating teams typically study the tournament structure after the group draw in order to figure out whether their team has a rather easy or hard path to the final. Hence, the aim is to quantify the difficulty for each team to proceed to the different stages of the tournament.
\subsection{State of the art}
We give some background on modelling football matches. A series of statistical models have been proposed in the literature for the prediction of football outcomes. They can be divided into two broad categories. The first one, the result-based model, models directly the probability of a game outcome (win/loss/draw), while the second one, the score-based model, focusses on the prediction of the exact match score. In this article the second approach is used since the match score is a non-negligible, very important factor in the group phase of the championship and it also implies a model for the first one. In contrast to the FIFA World Cup, where the two best teams in each group of the preliminary round qualify for the round of $16$, the situation becomes more difficult in the Africa Cup of Nations 2019, where also the four best third-placed teams in the group phase qualify for the round of $16$. As we have seen in former World Cups before 1994 or during the European Championship 2016, in most cases the goal difference is the crucial criterion which decides whether a third-placed team moves on to the round of 16 or is eliminated in the preliminary round. This underlines the importance and necessity of estimating the exact score of each single match and not only the outcome (win/loss/draw).
There are several models for this purpose and most of them involve a Poisson model. The easiest model, \cite{Le:97}, assumes independence of the goals scored by each team and that each score can be modeled by a Poisson regression model. Bivariate Poisson models were proposed earlier by \cite{Ma:82} and extended by \cite{DiCo:97} and \cite{KaNt:03}. A short overview on different Poisson models and related models like generalised Poisson models or zero-inflated models are given in \cite{ZeKlJa:08} and \cite{ChSt:11}.
Possible covariates for the above models may be divided into two major categories: those containing ``prospective'' information and those containing ``retrospective'' information. The first category contains other forecasts, especially bookmakers' odds, see e.g. \cite{LeZeHo:10a}, \cite{LeZeHo:12} and references therein. This approach relies on the fact that bookmakers have a strong economic incentive to rate the result correctly and that they can be seen as experts in the matter of the forecast of sport events. However, their forecast models remain undisclosed and rely on information that is not publicly available.
The second category contains only historical data and no other forecasts.
Models based on the second category allow to explicitly model the influence of the covariates (in particular, attack/defense strength/weakness). Therefore, this approach is pursued using a Poisson regression model for the outcome of single matches.
Since the Africa Cup of Nations 2019 is a more complex tournament, involving for instance effects such as group draws, e.g. see \cite{De:11}, and dependences of the different matches, Monte-Carlo simulations are used to forecast the whole course of the tournament. For a more detailed summary on statistical modeling of major international football events, see \cite{GrScTu:15} and references therein.
Different similar models based on Poisson regression of increasing complexity (including discussion, goodness-of-fit analysis and a comparison in terms of scoring functions) were analysed and used in \cite{gilch-mueller:18} for the prediction of the FIFA World Cup 2018. Among the models therein, in this article we will make use of the most promising Poisson model and omit further comparison and validation of different (similar) models. The model under consideration will not only be used to estimate the teams' chances of winning the Africa Cup but also to answer questions such as how the possible qualification of third-ranked teams in the group phase affects the chances of the top favourites.
Moreover, since the tournament structure of the Africa Cup of Nations 2019 has changed in this edition to 24 participating teams, a comparison with previous editions of this tournament seems to be quite difficult due to the heavy influence of possible qualifiers for the round of 16 as third-ranked teams.
Finally, let me say some words on the data available for feeding our regression model.
These days a lot of data on possible covariates for forecast models is available. \cite{GrScTu:15} performed a variable selection on various covariates and found that the three most significant retrospective covariates are the FIFA ranking followed by the number of Champions league and Euro league players of a team. In this article the Elo ranking (see \texttt{http://en.wikipedia.org/wiki/World\_Football\_Elo\_Ratings}) is preferably considered instead of the FIFA ranking (which is a simplified Elo ranking since July 2018), since the calculation of the FIFA ranking changed over time and the Elo ranking is more widely used in football forecast models. See also \cite{GaRo:16} for a discussion on this topic and a justification of the Elo ranking. At the time of this analysis the composition and the line ups of the teams have not been announced and hence the two other covariates are not available.
This is one of the reasons that the model under consideration is solely based on the Elo points and matches of the participating teams on neutral ground since 2010. The obtained results show that, despite its simplicity, the model shows a good fit, and the obtained forecast is conclusive and gives \textit{quantitative insights} into each team's chances. In particular, we quantify the chances of each team to proceed to the different stages of the tournament, which also allows us to compare the challenge for each team to reach the final.
\subsection{Questions under consideration}
\label{subsec:goals}
The simulation in this article works as follows: each single match is modeled as $G_{A}$:$G_{B}$, where $G_{A}$ (resp.~$G_{B}$) is the number of goals scored by team A (resp.~by team B). To make matters harder, not only a single match is forecast but the course of the whole tournament. Even the most probable tournament outcome has a probability very close to zero of actually being realized. Hence, deviations of the true tournament outcome from the model's most probable one are not only possible, but most likely. However, simulations of the tournament yield estimates of the probabilities for each team to reach the different stages of the tournament and make the different teams' chances \textit{quantifiable}. In particular, we are interested in giving quantitative insights into the following questions:
\begin{enumerate}
\item How are the probabilities that a team wins its group or will be eliminated in the group stage?
\item Which team has the best chances to become new African champion?
\item What is the effect of the fact that the four best third-ranked teams in the group phase qualify for the round of 16? How does it affect the chances of the top favourites?
\end{enumerate}
As we will see, the model under consideration in this article favors Senegal (followed by Nigeria) to win the Africa Cup of Nations 2019.
\section{The model}
\label{sec:model}
\subsection{Involved data}
The model used in this article was proposed in \cite{gilch-mueller:18} (together with several similar bi-variate Poisson models) as \textit{Nested Poisson Regression} and is based on the World Football Elo ratings of the teams. This rating builds on the Elo rating system, see \cite{Elo:78}, but includes modifications that take various football-specific variables (like home advantage, goal difference, etc.) into account. The Elo ranking is published by the website \texttt{eloratings.net}.
The Elo ratings as they were on 12 April 2019 for the top $5$ participating nations (in this rating) are as follows:
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
Senegal & Nigeria & Morocco & Tunisia & Ghana \cr
\hline
1764 & 1717 & 1706 & 1642 & 1634 \cr
\hline
\end{tabular}
\end{center}
The forecast of the outcome of a match between teams $A$ and $B$ is modelled as
$$
G_A\ : \ G_B,
$$
where $G_A$ (resp. $G_{B}$) is the number of goals scored by team $A$ (resp. $B$).
The model is based on a Poisson regression model, where we assume $(G_{A}, G_{B})$ to be a bivariate Poisson distributed random variable;
see \cite[Section 8]{gilch-mueller:18} for a discussion on other underlying distributions for $G_A$ and $G_B$. The distribution of $(G_{A}, G_{B})$
will depend on the current Elo ranking $\elo{A}$ of team $A$ and Elo ranking $\elo{B}$ of team $B$. The model is fitted
using all matches of Africa Cup of Nations 2019 participating teams on \textit{neutral} ground between 1.1.2010 and 12.04.2019. Matches where one team plays at home usually show a drift towards the home team's chances, which we want to eliminate. On average, we have $29$ past matches for each team, and even more for the top teams.
In the following subsection we explain the model for forecasting a single match, which in turn is used for simulating the whole tournament and determining the likelihood of the success for each participant.
\subsection{Nested Poisson regression}
We now present a \textit{dependent} Poisson regression approach which will be the base for the whole simulation. The number of goals $G_A$, $G_B$ respectively, shall be a Poisson-distributed random variable with rate $\lambda_{A|B}$, $\lambda_{B|A}$ respectively. As we will see one of the rates (that is, the rate of the weaker team) will depend on the concrete realisation of the other random variable (that is, the simulated number of scored goals of the stronger team).
\par
In the following we will always assume that $A$ has a \textit{higher} Elo score than $B$. This assumption can be justified since the better team usually dictates the weaker team's tactics. Moreover, the number of goals the stronger team scores has an impact on the number of goals of the weaker team. For example, if team $A$ scores $5$ goals it is more likely that $B$ also scores $1$ or $2$ goals, because the defense of team $A$ lacks concentration due to the expected victory. If the stronger team $A$ scores only $1$ goal, it is more likely that $B$ scores no goal or just one, since team $A$ focusses more on the defence and secures the victory.
\par
The Poisson rates $\lambda_{A|B}$ and $\lambda_{B|A}$ are now determined as follows:
\begin{enumerate}
\item In the first step we model the number of goals $\tilde G_{A}$ scored by team $A$ only in dependence of the opponent's Elo score $\elo{}=\elo{B}$. The random variable $\tilde G_{A}$ is modeled as a Poisson distribution with parameter $\mu_{A}$. The parameter $\mu_{A}$ as a function of the Elo rating $\elo{\O}$ of the opponent $\O$ is given as
\begin{equation}\label{equ:independent-regression1}
\log \mu_A(\elo{\O}) = \alpha_0 + \alpha_1 \cdot \elo{\O},
\end{equation}
where $\alpha_0$ and $\alpha_1$ are obtained via Poisson regression.
\item Teams of similar Elo scores may have different strengths in attack and defense. To take this effect into account we model the number of goals team $B$ receives against a team of Elo score $\elo{}=\elo{A}$ using a Poisson distribution with parameter $\nu_{B}$. The parameter $\nu_{B}$ as a function of the Elo rating $\elo{\O}$ is given as
\begin{equation}\label{equ:independent-regression2}
\log \nu_B(\elo{\O}) = \beta_0 + \beta_1 \cdot \elo{\O},
\end{equation}
where the parameters $\beta_0$ and $\beta_1$ are obtained via Poisson regression.
\item Team $A$ shall on average score $\mu_A\bigl(\elo{B}\bigr)$ goals against team $B$, but team $B$ shall have $\nu_B\bigl(\elo{A}\bigr)$ goals against. As these two values rarely coincide, we model the number of goals $G_A$ as a Poisson distribution with parameter
$$
\lambda_{A|B} = \frac{\mu_A\bigl(\elo{B}\bigr)+\nu_B\bigl(\elo{A}\bigr)}{2}.
$$
\item The number of goals $G_B$ scored by $B$ is assumed to depend on the Elo score $E_A=\elo{A}$ and additionally on the outcome of $G_A$. More precisely, $G_B$ is modeled as a Poisson distribution with parameter $\lambda_B(E_A,G_A)$ satisfying
\begin{equation}\label{equ:nested-regression1}
\log \lambda_B(E_A,G_A) = \gamma_0 + \gamma_1 \cdot E_A+\gamma_2 \cdot G_A.
\end{equation}
The parameters $\gamma_0,\gamma_1,\gamma_2$ are obtained by Poisson regression. Hence,
$$
\lambda_{B|A} = \lambda_B(E_A,G_A).
$$
\item The result of the match $A$ versus $B$ is simulated by realizing $G_A$ first and then realizing $G_B$ in dependence of the realization of $G_A$.
\end{enumerate}
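For illustration, the regressions (\ref{equ:independent-regression1})--(\ref{equ:nested-regression1}) can be fitted as sketched below. The sketch uses the \texttt{statsmodels} Python package; the column names \texttt{goals\_for}, \texttt{goals\_against} and \texttt{elo\_opponent} are illustrative assumptions.
\begin{verbatim}
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_team_regressions(matches_A, matches_B):
    # matches_A / matches_B: pandas DataFrames with one row per historical
    # match of team A / team B on neutral ground.
    mu_A = smf.glm("goals_for ~ elo_opponent", data=matches_A,
                   family=sm.families.Poisson()).fit()
    # nu_B: goals conceded by B as a function of the opponent's Elo score.
    nu_B = smf.glm("goals_against ~ elo_opponent", data=matches_B,
                   family=sm.families.Poisson()).fit()
    # lambda_B: goals scored by B given the opponent's Elo score and the
    # number of goals B concedes in that match.
    lam_B = smf.glm("goals_for ~ elo_opponent + goals_against",
                    data=matches_B,
                    family=sm.families.Poisson()).fit()
    return mu_A, nu_B, lam_B
\end{verbatim}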
For a better understanding, we give an example and consider the match Senegal vs. Ivory Coast: Senegal has $1764$ Elo points while Ivory Coast has $1612$ points. Against a team of Elo score $1612$ Senegal is assumed to score on average
$$
\mu_{\textrm{Senegal}}(1612)=\exp(2.73 -0.00145\cdot 1612)=1.48
$$
goals, while against a team of Elo score $1764$ Ivory Coast receives on average
$$
\mu_{\textrm{Ivory Coast}}(1764)=\exp(-4.0158 + 0.00243\cdot 1764)=1.31
$$
goals. Hence, the number of goals, which Senegal will score against Ivory Coast, will be modelled as a Poisson distributed random variable with rate
$$
\lambda_{\textrm{Senegal}|\textrm{Ivory Coast}}=\frac{1.48+1.31}{2}=1.395.
$$
The average number of goals which Ivory Coast scores against a team of Elo score $1764$, provided that it receives $G_A$ goals against, is modelled by a Poisson random variable with rate
$$
\lambda_{\textrm{Ivory Coast}|\textrm{Senegal}}=\exp(1.431 -0.000728\cdot 1764+ 0.137\cdot G_A);
$$
e.g., if $G_A=1$ then $\lambda_{\textrm{Ivory Coast}|\textrm{Senegal}}=1.33$.
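Using the fitted coefficients quoted in this example, the match can be simulated as follows (the random seed is arbitrary):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
elo_sen, elo_civ = 1764, 1612

mu_sen = np.exp(2.73 - 0.00145 * elo_civ)        # about 1.48 goals scored
nu_civ = np.exp(-4.0158 + 0.00243 * elo_sen)     # about 1.31 goals against
lam_sen = 0.5 * (mu_sen + nu_civ)                # about 1.395

g_sen = rng.poisson(lam_sen)                     # realise Senegal's goals first
lam_civ = np.exp(1.431 - 0.000728 * elo_sen + 0.137 * g_sen)
g_civ = rng.poisson(lam_civ)                     # then Ivory Coast's goals
print("Simulated score: Senegal", g_sen, "-", g_civ, "Ivory Coast")
\end{verbatim}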
\par
As a final remark, let me mention that the presented dependent approach may also be justified through the definition of conditional probabilities:
$$
\mathbb{P}[G_A=i,G_B=j] = \mathbb{P}[G_A=i]\cdot \mathbb{P}[G_B=j \mid G_A=i] \quad \forall i,j\in\mathbb{N}_0.
$$
For a comparison of this model with similar Poisson models, we refer once again to \cite{gilch-mueller:18}. In the following subsections we present some regression plots and test the goodness of fit.
\subsection{Regression plots}
As two examples of interest, we sketch in Figure \ref{fig:regression-plot-attack} the results of the regression in (\ref{equ:independent-regression1}) for the number of goals scored by Senegal and Cameroon. The dots show the observed data (i.e., the number of scored goals on the $y$-axis in dependence of the opponent's strength on the $x$-axis) and the line is the estimated mean $\mu_A$ depending on the opponent's Elo strength.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=6cm]{RegressionPlot-Senegal.jpeg}
\hfill
\includegraphics[width=6cm]{RegressionPlot-Cameroon.jpeg}
\end{center}
\caption{Plots for the number of goals scored by Senegal and Cameroon in regression \eqref{equ:independent-regression1}.}
\label{fig:regression-plot-attack}
\end{figure}
Analogously, Figure \ref{fig:regression-plot-defense} sketches the regression in (\ref{equ:independent-regression2}) for the (unconditioned) number of goals against of Nigeria and Egypt in dependence of the opponent's Elo ranking. The dots show the observed data (i.e., the number of goals against in the matches from the past) and the line is the estimated mean $\nu_B$ for the number of goals against.
\begin{figure}
\begin{center}
\includegraphics[width=6cm]{RegressionPlotGAA-Nigeria.jpeg}
\hfill
\includegraphics[width=6cm]{RegressionPlotGAA-Egypt.jpeg}
\end{center}
\caption{Plots for the number of goals against for Nigeria and Egypt in regression \eqref{equ:independent-regression2}.}
\label{fig:regression-plot-defense}
\end{figure}
\subsection{Goodness of fit tests}\label{subsubsection:gof}
We check the goodness of fit of the Poisson regressions in (\ref{equ:independent-regression1}) and (\ref{equ:independent-regression2}) for all participating teams. For each team $\mathbf{T}$ we calculate the following $\chi^{2}$-statistic from the list of past matches:
$$
\chi_\mathbf{T} = \sum_{i=1}^{n_\mathbf{T}} \frac{(x_i-\hat\mu_i)^2}{\hat\mu_i},
$$
where $n_\mathbf{T}$ is the number of matches of team $\mathbf{T}$, $x_i$ is the number of scored goals of team $\mathbf{T}$ in match $i$ and $\hat\mu_i$ is the estimated Poisson regression mean in dependence of the opponent's historical Elo points.
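This statistic and the corresponding $p$-value can be computed as sketched below; using $n_\mathbf{T}$ minus the number of fitted regression coefficients as degrees of freedom is an assumption of this sketch.
\begin{verbatim}
import numpy as np
from scipy.stats import chi2

def poisson_gof(observed_goals, fitted_means, n_params=2):
    x = np.asarray(observed_goals, dtype=float)
    mu = np.asarray(fitted_means, dtype=float)
    stat = np.sum((x - mu) ** 2 / mu)    # chi-square statistic defined above
    dof = len(x) - n_params              # degrees of freedom (assumed choice)
    return stat, chi2.sf(stat, dof)      # statistic and p-value
\end{verbatim}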
\par
We observe that most of the teams have a very good fit, except Namibia with a $p$-value of $0.048$. On average, we have a $p$-value of $0.476$. In Table \ref{table:godness-of-fit} the $p$-values for some of the top teams are given.
\begin{table}[H]
\centering
\begin{tabular}{|l|c|c|c|c|c|}
\hline
Team & Senegal & Nigeria & Egypt & Ivory Coast & South Africa
\\
\hline
$p$-value & 0.74 &0.10 & 0.60 & 0.94 &0.72
\\
\hline
\end{tabular}
\caption{Goodness of fit test for the Poisson regression in (\ref{equ:independent-regression1}) for some of the top teams. }
\label{table:godness-of-fit}
\end{table}
Similarly, we can calculate a $\chi^{2}$-statistic for each team which measures the goodness of fit for the regression in (\ref{equ:independent-regression2}) which models the number of goals against. Here, we get an average $p$-value of $0.67$; see Table \ref{table:godness-of-fit2}.
\begin{table}[H]
\centering
\begin{tabular}{|l|c|c|c|c|c|}
\hline
Team & Senegal & Nigeria & Egypt & Ivory Coast & South Africa
\\
\hline
$p$-value & 0.99 &0.79 & 0.38 & 0.51 &0.76
\\
\hline
\end{tabular}
\caption{Goodness of fit test for the Poisson regression in (\ref{equ:independent-regression2}) for some of the top teams. }
\label{table:godness-of-fit2}
\end{table}
Finally, we test the goodness of fit for the regression in (\ref{equ:nested-regression1}) which models the number of goals against of the weaker team in dependence of the number of goals which are scored by the stronger team. We obtain an average $p$-value of $0.33$; see Table \ref{table:godness-of-fit3}. As a conclusion, the $p$-values suggest good fits.
\begin{table}[H]
\centering
\begin{tabular}{|l|c|c|c|c|c|}
\hline
Team & Senegal & Nigeria & Egypt & Ivory Coast & South Africa
\\
\hline
$p$-value & 0.99 &0.38 & 0.27 & 0.78 &0.74
\\
\hline
\end{tabular}
\caption{Goodness of fit test for the Poisson regression in (\ref{equ:nested-regression1}) for some of the top teams. }
\label{table:godness-of-fit3}
\end{table}
\subsection{Deviance analysis}
We calculate the null and residual deviances for each team for the regressions in (\ref{equ:independent-regression1}), (\ref{equ:independent-regression2}) and (\ref{equ:nested-regression1}). Tables \ref{table:deviance-IndPR1}, \ref{table:deviance-IndPR2} and \ref{table:deviance-NPR1} show the deviance values and the $p$-values for the residual deviance for some of the top teams. Most of the $p$-values are not low, except for Nigeria. We remark that the significance level of the covariates also fluctuates between teams, but it is still reasonable in many cases.
\begin{table}[ht]
\centering
\begin{tabular}{|l|c|c|c|}
\hline
Team & Null deviance & Residual deviance & $p$-value
\\
\hline
Senegal & 28.14 & 26.34 & 0.66 \\
Nigeria & 71.36 & 66.39 & 0.03\\
Egypt & 43.94 & 38.15 & 0.29\\
Cote d'Ivoire & 47.15 & 46.8& 0.71\\
South Africa & 12.0 & 10.49 & 0.65 \\
\hline
\end{tabular}
\caption{Deviance analysis for some top teams in regression (\ref{equ:independent-regression1})}
\label{table:deviance-IndPR1}
\end{table}
\begin{table}[h]
\centering
\begin{tabular}{|l|c|c|c|}
\hline
Team & Null deviance & Residual deviance & $p$-value
\\
\hline
Senegal & 19.41 & 19.21 & 0.94\\
Nigeria & 58.50 & 45.35 & 0.87\\
Egypt & 49.63 & 38.09 & 0.29 \\
Cote d'Ivoire & 69.97 & 59.61 & 0.25 \\
South Africa & 12.14 & 11.92 & 0.53\\
\hline
\end{tabular}
\caption{Deviance analysis for some top teams in regression (\ref{equ:independent-regression2})}
\label{table:deviance-IndPR2}
\end{table}
\begin{table}[h]
\centering
\begin{tabular}{|l|c|c|c|}
\hline
Team & Null deviance & Residual deviance & $p$-value
\\
\hline
Senegal & 28.1 & 24.8 & 0.69\\
Nigeria & 71.4 & 62.1 & 0.05\\
Egypt & 43.94 & 37.98 & 0.25 \\
Cote d'Ivoire & 47.15 & 45.45 & 0.73 \\
South Africa & 12.01 & 10.36 & 0.58\\
\hline
\end{tabular}
\caption{Deviance analysis for some top teams in regression (\ref{equ:nested-regression1})}
\label{table:deviance-NPR1}
\end{table}
\section{Africa Cup of Nations 2019 Simulations}
\label{sec:simulation}
Finally, we come to the simulation of the Africa Cup of Nations 2019, which allows us to answer the questions formulated in Section \ref{subsec:goals}. We simulate each single match of the Africa Cup of Nations 2019 according to the model presented in Section \ref{sec:model}, which in turn allows us to simulate the whole Africa Cup tournament. After each simulated match we update the Elo ranking according to the simulation results. This honours teams that are in good shape during a tournament and perform better than expected.
Overall, we perform $100{,}000$ simulations of the whole tournament, where we reset the Elo ranking at the beginning of each single tournament simulation.
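The structure of this Monte-Carlo study is sketched below. The callables \texttt{simulate\_match} and \texttt{update\_elo} are assumed to implement the nested Poisson model of Section \ref{sec:model} and a standard Elo update, respectively; they are not spelled out here.
\begin{verbatim}
import copy

def simulate_tournament(n_runs, base_elo, schedule,
                        simulate_match, update_elo):
    # base_elo: dict mapping team -> Elo points before the tournament.
    # schedule: ordered list of (team_a, team_b) pairings (group stage first;
    #           the knockout pairings are derived from the group results).
    outcomes = []
    for _ in range(n_runs):
        elo = copy.deepcopy(base_elo)        # reset the ratings for every run
        results = {}
        for team_a, team_b in schedule:
            goals_a, goals_b = simulate_match(elo, team_a, team_b)
            update_elo(elo, team_a, team_b, goals_a, goals_b)
            results[(team_a, team_b)] = (goals_a, goals_b)
        outcomes.append(results)
    return outcomes
\end{verbatim}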
\subsection{Single Matches}
As single matches are the basic element of our simulation, we first visualise how their outcomes are quantified. Group C starts with the match between Senegal and Tanzania. According to our model we obtain the probabilities presented in Figure \ref{table:SN-TZ} for the result of this match: the most probable score is a $2-0$ victory for Senegal, but a $3-0$ or $1-0$ win is also among the most probable scores.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=15cm]{SEN-TANZ.jpg}
\end{center}
\caption{Probabilities for the score of the match Senegal vs. Tanzania in Group C.}
\label{table:SN-TZ}
\end{figure}
\subsection{Group Forecast}
Among football experts and fans, a first natural question after the group draw is how likely it is that the different teams survive the group stage and move on to the round of $16$. Since the individual teams' strengths and weaknesses are rather hard to quantify in terms of hard facts, one of our main aims is to quantify the chances for each participating team to proceed to the round of $16$. With our model we are able to quantify, in terms of probabilities, how the teams will end up in the group stage. In the following tables \ref{tab:groupA}-\ref{tab:groupF} we present these probabilities obtained from our simulation, namely the probabilities of winning the group, finishing as runner-up, qualifying as one of the best third-placed teams, or being eliminated in the group stage. In Group D, the toughest group of all, a head-to-head fight between Morocco, Ivory Coast and South Africa is expected, with a slight advantage for Ivory Coast.
\begin{table}[ht]
\centering
\begin{tabular}{|r|cccc|}
\hline
Team & 1st & 2nd & Qualified as Third & Preliminary Round \\
\hline
Egypt & 51.00 & 28.30 & 11.30 & 9.50 \\
DR of Congo & 32.00 & 31.80 & 15.60 & 20.70 \\
Uganda & 4.70 & 14.10 & 16.00 & 65.10 \\
Zimbabwe & 12.40 & 25.80 & 21.20 & 40.60 \\
\hline
\end{tabular}
\caption{Probabilities for Group A}
\label{tab:groupA}
\end{table}
\begin{table}[ht]
\centering
\begin{tabular}{|r|cccc|}
\hline
Team & 1st & 2nd & Qualified as Third & Preliminary Round \\
\hline
Nigeria & 53.90 & 26.90 & 10.90 & 8.40 \\
Guinea & 25.80 & 31.70 & 17.20 & 25.40 \\
Madagascar & 16.10 & 25.90 & 20.50 & 37.60 \\
Burundi & 4.30 & 15.60 & 17.20 & 62.90 \\
\hline
\end{tabular}
\caption{Probabilities for Group B}
\label{tab:groupB}
\end{table}
\begin{table}[ht]
\centering
\begin{tabular}{|r|cccc|}
\hline
Team & 1st & 2nd & Qualified as Third & Preliminary Round \\
\hline
Senegal & 54.40 & 27.80 & 10.80 & 7.10 \\
Algeria & 28.50 & 31.90 & 17.40 & 22.10 \\
Kenya & 12.30 & 24.80 & 21.20 & 41.70 \\
Tanzania & 4.80 & 15.50 & 16.70 & 63.10 \\
\hline
\end{tabular}
\caption{Probabilities for Group C}
\label{tab:groupC}
\end{table}
\begin{table}[ht]
\centering
\begin{tabular}{|r|cccc|}
\hline
Team & 1st & 2nd & Qualified as Third & Preliminary Round \\
\hline
Morocco & 29.40 & 27.10 & 17.50 & 26.00 \\
Ivory Coast & 33.60 & 28.80 & 16.70 & 20.90 \\
South Africa & 30.40 & 29.00 & 17.20 & 23.40 \\
Namibia & 6.60 & 15.10 & 17.40 & 60.90 \\
\hline
\end{tabular}
\caption{Probabilities for Group D}
\label{tab:groupD}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{|r|cccc|}
\hline
Team & 1st & 2nd & Qualified as Third & Preliminary Round \\
\hline
Tunisia & 49.60 & 28.60 & 13.50 & 8.30 \\
Mali & 32.10 & 37.50 & 19.00 & 11.40 \\
Mauritania & 4.10 & 9.10 & 11.40 & 75.40 \\
Angola & 14.30 & 24.80 & 27.00 & 33.90 \\
\hline
\end{tabular}
\caption{Probabilities for Group E}
\label{tab:groupE}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{|r|cccc|}
\hline
Team & 1st & 2nd & Qualified as Third & Preliminary Round \\
\hline
Cameroon & 38.80 & 42.60 & 11.90 & 6.80 \\
Ghana & 55.70 & 32.00 & 7.90 & 4.40 \\
Benin & 4.60 & 19.70 & 33.70 & 42.00 \\
Guinea-Bissau & 0.90 & 5.70 & 11.00 & 82.30 \\
\hline
\end{tabular}
\caption{Probabilities for Group F}
\label{tab:groupF}
\end{table}
\subsection{Playoff Round Forecasts}
Finally, according to our simulations we summarise the probabilities for each team to win the tournament, to reach certain stages of the tournament or to qualify for the round of 16 as one of the best third-placed teams. The result is presented in Table \ref{tab:nested18}. For example, Senegal will reach at least the quarterfinals with a probability of $67.70\%$, while Ghana has a $17\%$ chance of reaching the final.
The regression model favors Senegal, followed by Nigeria, Ivory Coast and Egypt, to become new football champion of Africa.
\begin{table}[H]
\centering
\begin{tabular}{|r|ccccc|}
\hline
Team & Champion & Final & Semifinal & Quarterfinal & Last16 \\
\hline
Senegal & 15.40 & 25.20 & 41.20 & 67.70 & 92.90 \\
Nigeria & 12.10 & 22.70 & 37.30 & 59.90 & 91.60 \\
Ivory Coast & 10.20 & 17.70 & 31.10 & 51.90 & 79.10 \\
Egypt & 10.10 & 19.20 & 34.60 & 56.60 & 90.60 \\
Ghana & 8.60 & 17.00 & 30.50 & 57.20 & 95.40 \\
South Africa & 8.40 & 15.50 & 28.50 & 48.80 & 76.50 \\
Morocco & 8.30 & 15.30 & 28.20 & 48.20 & 73.90 \\
Tunisia & 5.80 & 11.90 & 23.20 & 45.50 & 91.70 \\
Algeria & 5.10 & 10.30 & 21.40 & 43.30 & 77.80 \\
Guinea & 3.40 & 8.10 & 17.90 & 37.60 & 74.60 \\
Cameroon & 3.00 & 9.00 & 22.30 & 50.70 & 93.30 \\
DR Congo & 3.00 & 7.70 & 19.00 & 40.00 & 79.10 \\
Mali & 1.60 & 5.00 & 13.20 & 32.70 & 88.50 \\
Madagascar & 1.60 & 4.10 & 10.50 & 25.40 & 62.40 \\
Kenya & 1.10 & 3.10 & 9.10 & 23.90 & 58.40 \\
Angola & 1.00 & 2.80 & 8.00 & 22.10 & 66.10 \\
Zimbabwe & 0.40 & 1.80 & 7.40 & 22.80 & 59.50 \\
Namibia & 0.30 & 1.20 & 4.20 & 13.20 & 39.10 \\
Uganda & 0.10 & 0.50 & 2.60 & 10.30 & 34.90 \\
Tanzania & 0.10 & 0.50 & 2.60 & 10.10 & 36.90 \\
Mauritania & 0.10 & 0.40 & 1.50 & 5.90 & 24.40 \\
Benin & 0.10 & 0.60 & 3.40 & 15.10 & 58.00 \\
Burundi & 0.00 & 0.20 & 1.60 & 7.90 & 37.00 \\
Guinea-Bissau & 0.00 & 0.00 & 0.30 & 2.60 & 17.60 \\
\hline
\end{tabular}
\caption{Africa Cup of Nations 2019 simulation results for the teams' probabilities to proceed to a certain stage}
\label{tab:nested18}
\end{table}
\subsection{Simulation without third-placed qualifiers}
\label{subsec:withoutThirds}
One important and often asked question is whether the current tournament structure, which still allows third-placed teams in the preliminary round to qualify for the round of 16, is reasonable or not. In particular, we ask whether this structure is good or bad for the top teams, and we quantify this effect.
Hence, the simulation was adapted such that third-placed teams in the group stage are definitely eliminated, while the winners of those groups that were scheduled to play against a third-ranked team in the round of 16 move directly to the quarterfinals. This leads to the results in Table \ref{table:ohneDritte}: it shows that the top teams now have slightly higher chances of winning the tournament.
\begin{table}[ht]
\centering
\begin{tabular}{|r|ccccc|ccc|}
\hline
Team & Champion & Final & 1/2 & 1/4 & Last16 & 1st & 2nd & Pre.Round \\
\hline
Senegal & 15.80 & 25.40 & 43.50 & 74.10 & 82.20 & 54.50 & 27.70 & 17.90 \\
Nigeria & 14.50 & 28.40 & 45.30 & 72.30 & 80.60 & 53.90 & 26.70 & 19.40 \\
Egypt & 11.70 & 22.60 & 41.10 & 67.90 & 79.30 & 50.50 & 28.70 &20.70 \\
Ivory Coast & 9.90 & 17.00 & 30.40 & 51.50 & 62.50 & 33.50 & 29.00 & 37.50 \\
South Africa & 7.90 & 14.40 & 27.40 & 47.90 & 59.60 & 30.80 & 28.70 & 40.50 \\
Ghana & 7.80 & 16.00 & 28.40 & 53.40 & 87.70 & 55.60 & 32.10 & 12.30 \\
Morocco & 7.60 & 13.50 & 26.20 & 45.90 & 56.50 & 29.20 & 27.20 & 43.60 \\
Algeria & 5.10 & 10.20 & 21.90 & 45.90 & 60.10 & 28.50 & 31.70 & 39.80 \\
Tunisia & 4.70 & 9.60 & 19.00 & 39.00 & 77.70 & 49.30 & 28.50 & 22.20 \\
\hline
\end{tabular}
\caption{Adapted Africa Cup of Nations 2019 simulation results, where third-placed teams are definitely eliminated}
\label{table:ohneDritte}
\end{table}
In Table \ref{table:ohneDritteDifferenzPlayoff} we compare the probabilities of reaching different stages in the case of the adapted tournament (third-ranked teams are definitely eliminated) versus the real tournament structure, which still allows third-ranked teams to qualify for the round of $16$. As one can see, the differences are rather marginal. However, the top favourite teams would profit from the adapted setting slightly. Moreover, many teams have a chance of 10\% or more to qualify for the round of $16$ as one of the best four third-ranked teams. Thus, the chances to win the African championship remain more or less the same, making it neither harder nor easier for top ranked teams to win.
\begin{table}[H]
\centering
\begin{tabular}{|l|rrrrr|}
\hline
Team & Champion & Final & Semifinal & Quarterfinal & Last16 \\
\hline
Senegal & 0.40 & 0.20 & 2.30 & 6.40 & -10.70 \\
Nigeria & 2.40 & 5.70 & 8.00 & 12.40 & -11.00 \\
Egypt & 1.60 & 3.40 & 6.50 & 11.30 & -11.30 \\
Ivory Coast & -0.30 & -0.70 & -0.70 & -0.40 & -16.60 \\
South Africa & -0.50 & -1.10 & -1.10 & -0.90 & -16.90 \\
Ghana & -0.80 & -1.00 & -2.10 & -3.80 & -7.70 \\
Morocco & -0.70 & -1.80 & -2.00 & -2.30 & -17.40 \\
Algeria & 0.00 & -0.10 & 0.50 & 2.60 & -17.70 \\
Tunisia & -1.10 & -2.30 & -4.20 & -6.50 & -14.00 \\
\hline
\end{tabular}
\caption{Difference of probabilities of adapted tournament simulation vs. real tournament structure}
\label{table:ohneDritteDifferenzPlayoff}
\end{table}
\section{Discussion on Related Models}
\label{sec:discussion}
In this section we briefly discuss the Poisson models used in this article and some related models. Of course, the Poisson models we used are not the only natural candidates for modelling football matches. Multiplicative mixtures may lead to overdispersion. Thus, it is desirable to use models with a variance function flexible enough to deal with both overdispersion and underdispersion. One natural model for this is the \textit{generalised Poisson model}, which was suggested by \cite{Co:89}. We omit the details
but remark that this distribution has an additional parameter $\varphi$ which allows to model the variance as $\lambda/\varphi^2$; for more details on generalised Poisson regression we refer to \cite{St:04} and \cite{Er:06}. Estimations of $\varphi$ by generalised Poisson regression lead to the observation that $\varphi$ is close to $1$ for the most important teams; compare with \cite{gilch-mueller:18}. Therefore, no additional gain is given by the use of the generalised Poisson model.
\par
Another related candidate for the simulation of football matches is given by the \textit{negative binomial distribution}, where also another parameter comes into play to allow a better fit. However, the same observations as in the case of the generalised Poisson model can be made, that is, the estimates of the additional parameter lead to a model which is almost just a simple Poisson model. We refer to \cite{JoZh:09} for a detailed comparison of generalized Poisson distribution and negative Binomial distribution.
\par
For further discussion on adaptations and different models, we refer once again to the discussion section in \cite{gilch-mueller:18}.
\section{Conclusion}
A team-specific Poisson regression model for the number of goals scored by teams facing each other in international tournament matches has been used to quantify the chances of the teams participating in the Africa Cup of Nations 2019. The model includes the Elo points of the teams as covariates and uses all matches of the teams since 2010 as underlying data. The fitted model was used for Monte-Carlo simulations of the Africa Cup of Nations 2019. According to these simulations, Senegal (followed by Nigeria) turns out to be the top favorite for winning the title. Besides, for every team the probabilities of reaching the different stages of the cup are calculated.
A major part of the statistical novelty of the presented work lies in the construction of the nested regression model. This model outperforms previously studied models that use (inflated) bivariate Poisson regression when tested on the previous FIFA World Cups 2010, 2014 and 2018; see the technical report \cite{gilch-mueller:18}.
\bibliographystyle{apalike}
\section{Introduction}
\label{sec:intro}
Locating the ball in team sports is an important complement to player detection~\cite{Parisot2017} and tracking~\cite{Kumar2013a}, both to feed sport analytics~\cite{Thomas2017a} and to enrich broadcasted content~\cite{Fernandez2010}.
In the context of real-time automated production of team sports events~\cite{Chen2011,Chen2010d,Chen2010c,Chen2016a,Keemotion}, knowing the ball position with \emph{accuracy} and \emph{without delay} is even more critical.
In the past ten years, the task of detecting the ball has been largely investigated, leading to industrial products like the automated line calling system in tennis or the goal-line technology in soccer~\cite{HawkEye}. However, the ball detection problem remains unsolved for cases of important practical interest because of two main issues.
First, the task becomes quite challenging when the ball is partially occluded due to frequent interactions with players, as often encountered in team sports. Earlier works have generally addressed this problem by considering a calibrated multi-view acquisition setup to increase the candidates detection reliability by checking their consistency across the views~\cite{Parisot2011, Ren2008, intelTrueView, HawkEye, Lampert2012}. To better deal with instants at which the ball is held by players, some works have even proposed to enrich multiview ball detection with cues derived from players tracking~\cite{Wang2014a, Wang2014, Maksai2016}. However, those solutions require the installation of multiple cameras around the field, which is significantly more expensive than single viewpoint acquisition systems~\cite{intelTrueView} and appears difficult to deploy in many venues\footnote{From our personal experience of 100+ installations in professional basketball arenas.}.
Second, the weakly contrasted appearance of the ball, as generally encountered in indoor environments, leads to many false detections. This problem is often handled by exploiting a ballistic trajectory prior in order to discriminate between true and false detected candidates~\cite{Chen2008, Chen2012, Chakraborty2013, Kumar2011, Zhou2013, Yan2006, Parisot2011}.
In addition to this prior, some works also use the cues derived from the players tracking~\cite{Zhang2008, Maksai2016}, or even use the identity and team affiliation of every player in every frame~\cite{Wei2016}.
However, this prior induces a significant delay, making those solutions inappropriate for real-time applications. Furthermore, they require detecting the ball at high frame rates, inducing hardware and computational constraints and making them unusable for single instants applications.
Overall, for scenarios considering instant images in indoor scenes captured from a single viewpoint, the ball detection problem remains largely unsolved. This is especially true for basketball (involving fast dynamics and many player interactions), where state-of-the-art solutions saturate around a $40\%$ detection rate, restricted to the basket area~\cite{Parisot2019b}.
Those limitations force current real-time game analysis solutions to rely on a connected ball~\cite{youtube}, which requires to insert a transmitter within the ball ; or use multiple cameras and multiple servers to process the different data streams~\cite{HawkEye, Pingali2000, intelTrueView}.
\smallskip
In this paper, we propose a learning-based method reaching close to $70\%$ ball detection rate on unseen venues, with very few false positives, demonstrated on basketball images. In order to train and validate our method, we created a new dataset of single view basketball scenes featuring many interactions between the ball and players, and low-contrasted backgrounds.
\smallskip
Similar to most modern image analysis solutions, our method builds on a Convolutional Neural Network (CNN)~\cite{Lecun2015a}.
In earlier works, several CNNs have been designed to address the object detection problem, and among the most accurate, \emph{Mask~R-CNN}~\cite{He2017} has been trained to detect a large variety of objects, including balls.
The experiments presented in Section~\ref{subsec:compare} reveal that applying the universal \emph{Mask~R-CNN} detector to our problem largely fails. However, fine-tuning the pre-trained weights to deal with our dataset significantly improves the detection performance (despite staying $\sim 5\%$ below our method). Moreover, \emph{Mask~R-CNN} is too complex to run in real-time on an affordable architecture.
To provide a computationally simple alternative to \emph{Mask~R-CNN}, a few works have designed a CNN model to specifically detect the ball in a sports context.
\cite{Reno2018a} adopts a \emph{classification} strategy by splitting the image in a grid of overlapping patches, each of which is fed to a CNN that assesses whether or not it contains a ball. However, the model is far from real-time\footnote{We measured $3.6$fps on an Nvidia~RTX~2080~Ti with an overlap of 10~pixels between the patches. See their paper for implementation details.}, and only two different games are considered to train and validate the method.
In the Robocup Soccer context, \cite{Speck2017a} formulates the detection problem as a \emph{regression} task aiming at predicting the coordinates of the ball in the image. The performance remains relatively poor despite the reasonable contrast
between the white ball and the green field. Furthermore, regression of object coordinates is known to be poorly addressed by CNNs~\cite{Liu2018}. Hence, this strategy is expected to poorly generalize to real team sport scenes.
In comparison to those initial attempts to exploit CNNs to detect the ball, the contributions of our work are multiple and multifaceted.
Primarily, we propose to formulate the ball detection problem as a \emph{segmentation} problem, for which CNNs are known to be quite effective~\cite{Chen2017a}. Such formulation is especially relevant in our team sport ball detection context since there is only one object-of-interest that we aim to detect. Hence, the CNN does not need to handle the object instantiation problem.
In addition, we use a pair of consecutive images coming from a fixed (or motion compensated) viewpoint to take advantage of the ball dynamics without the delay caused by a temporal regularization, allowing low-latency applications.
We show that this approach allows fast and reliable ball detection in weakly contrasted or cluttered scenes.
Furthermore, we show that the use of test-time data augmentation\footnote{Generating multiple transformed versions of an input sample, in order to predict an ensemble of outputs for that input sample} permits a significant increase in the detection accuracy at small false positive rates. This is in the continuation of recent works showing that test-time data augmentation can be used to estimate the prediction uncertainty of a CNN model~\cite{Ayhan2018, Wang2019}.
Finally, we make the royalty-free part of our dataset publicly available\footnote{\url{https://sites.uclouvain.be/ispgroup/Softwares/DeepSport}}.
Beyond providing pairs of consecutive images, it offers a representative sample of professional basketball images gathered in multiple different arenas. It features a large variety of game actions and lighting conditions, cluttered scenes and complex backgrounds. This addresses a weakness of several previous works that suffer from a limited set of validation data and consider very few different games~\cite{Parisot2019b, Reno2018a, Speck2017a}.
\section{Method}
\label{sec:model}
\begin{figure*}
\begin{center}
\includegraphics[width=0.9\textwidth]{method_overview2.pdf}
\end{center}
\vspace{-1em}
\Description{The input image with input difference is given to a lightweight CNN. The heatmap it outputs is compared to the target segmentation mask. At inference, a decision rule is used to infer the ball position from the output heatmap.}
\caption{
Our detector is based on a segmentation task performed by a fully convolutional network that outputs a heatmap of the ball position. At inference, a detection rule is used to predict the ball location from the heatmap.}
\label{fig:overview}
\end{figure*}
Our proposed solution is illustrated in Figure~\ref{fig:overview}. It formulates the ball detection problem as a segmentation problem
where a fully convolutional neural network is trained to output a heatmap predicting the ball segmentation mask.
At inference, a detection rule is used to extract a ball candidate from the heatmap.
The fully convolutional workflow makes it possible to work with any input sizes.
\paragraph{CNN implementation.}
Among fully convolutional neural networks, multi-scale architectures are known to better trade-off complexity and accuracy on large images by combining wide shallow branches that manipulate fine image details through a reduced number of layers, with deep narrow branches that access smaller images and can thus afford a deeper sequence of layers~\cite{Yu2018,Mazzini2019,Poudel2019,Zhao2018}. The major benefit of this tradeoff is a fast processing allowing real-time applications.
In practice, the CNN used to evaluate our method is the ICNet implementation available at~\cite{hellochick} except that (i) the 3 input resolutions were changed from {\footnotesize $\{(1),(1/2),(1/4)\}\times\text{input size}$} to {\footnotesize $\{(1),(1),(1/2)\}\times\text{input size}$} in order to better handle the small size of the ball at the lowest resolutions; and (ii) the input layers were adapted to handle the 6 channels input data (see Eq.~(\ref{eq:diff})). The output heatmap is produced by applying a softmax on the last layer and removing the channel relative to the background.
From a computational point of view, the selected CNN offers satisfying real-time performances (see Section~\ref{subsec:compare} Table~\ref{tab:fps}) but any segmentation network could be used instead; and we could expect even higher detection speed using recent fast segmentation networks~\cite{Yu2018,Mazzini2019,Poudel2019}.
\paragraph{Detection rule.}
We infer ball candidates as points lying above a threshold $\tau$ in the predicted heatmap. Since we know {\it a priori} there is only one ball in the scene, the detection rule is further constrained by limiting the number of ball candidates per scene. This approach, named top-$k$, detects (up to) the $k$ highest spots in the heatmap, as long as they are higher than the threshold $\tau$. In practice, to avoid multiple detections for the same heatmap hotspot, highest points in the heatmap are selected first, and their surrounding pixels are ignored for subsequent detections, in a greedy way.
Note that the computational cost associated with this detection rule is negligible compared to the CNN inference.
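A sketch of this detection rule is given below; the size of the suppressed neighbourhood is an illustrative choice, not the exact value used in our implementation.
\begin{verbatim}
import numpy as np

def top_k_detections(heatmap, k=1, tau=0.5, radius=20):
    # Greedily extract up to k peaks above tau, suppressing the pixels
    # around each selected peak to avoid duplicate detections.
    h = heatmap.astype(np.float32)
    detections = []
    for _ in range(k):
        y, x = np.unravel_index(np.argmax(h), h.shape)
        if h[y, x] <= tau:
            break
        detections.append((x, y, float(h[y, x])))
        y0, y1 = max(0, y - radius), min(h.shape[0], y + radius + 1)
        x0, x1 = max(0, x - radius), min(h.shape[1], x + radius + 1)
        h[y0:y1, x0:x1] = -np.inf
    return detections
\end{verbatim}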
\paragraph{Exploiting the ball dynamics.}
When not held by a player, the ball generally moves rapidly, either due to a pass between players, a shot, or dribbles. To provide the opportunity to exploit motion information, we propose to feed the network with the information carried by two consecutive images denoted $\mathcal{I}_a$ and $\mathcal{I}_b$, where each image is composed of the 3 conventional channels in the $RGB$ space.
Two strategies were considered. The first one, which consists of feeding the network with the concatenation of $\mathcal{I}_a$ and $\mathcal{I}_b$ along the channel axis, gave poor results and is not presented in this work. The second strategy consists of concatenating the image of interest with its difference to the previous image. Hence:
\begin{equation}
\small
\text{Input} =
\left(R_{\mathcal{I}_a},G_{\mathcal{I}_a},B_{\mathcal{I}_a},
\lvert R_{\mathcal{I}_a}-R_{\mathcal{I}_b}\rvert ,
\lvert G_{\mathcal{I}_a}-G_{\mathcal{I}_b}\rvert,
\lvert B_{\mathcal{I}_a}-B_{\mathcal{I}_b}\rvert \right)
\label{eq:diff}
\end{equation}
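Building this 6-channel input is straightforward, as sketched below for two aligned $H\times W\times 3$ images:
\begin{verbatim}
import numpy as np

def make_input(img_a, img_b):
    # RGB channels of the image of interest, concatenated with the absolute
    # difference to the previous image from the same (fixed) viewpoint.
    a = img_a.astype(np.float32)
    b = img_b.astype(np.float32)
    return np.concatenate([a, np.abs(a - b)], axis=-1)   # H x W x 6
\end{verbatim}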
\paragraph{Training.}
Because of the custom number of input channels required by our method, no pre-training of the network is done.
The training is performed using the Stochastic Gradient Descent optimization algorithm applied on the mean cross-entropy loss, at the pixel level, between the output heatmap and the binary segmentation mask of the ball.
The meta-parameters were selected based on grid searches. The learning rate has been set to $0.001$ (decay by a factor of $2$ every $40$ epochs), batch size to $4$, and number of epochs to $150$ in all our experiments. The weights obtained at the iteration with the smallest error on the validation set were kept for testing.
The network is fed with $1024\times512$ pixels inputs obtained by a data augmentation process including mirroring (around vertical axis), up- and down- scaling (maintaining the ball size between 15 and 45 pixels, which corresponds to the size range of balls observed by the cameras used to acquire the dataset) and cropping (keeping the ball within the crop).
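A simplified sketch of one such random crop is shown below; the sampling of the scale factor, the crop placement, and the use of OpenCV for resizing are our own assumptions, not details specified above.
\begin{verbatim}
import random
import cv2
import numpy as np

def random_crop(image, mask, ball_diameter, out_w=1024, out_h=512):
    # Rescale so that the annotated ball diameter lies in [15, 45] pixels.
    scale = random.uniform(15.0, 45.0) / ball_diameter
    image = cv2.resize(image, None, fx=scale, fy=scale)
    mask = cv2.resize(mask, None, fx=scale, fy=scale,
                      interpolation=cv2.INTER_NEAREST)
    # Crop an out_w x out_h window that keeps the ball center inside.
    ys, xs = np.nonzero(mask)
    cy, cx = int(ys.mean()), int(xs.mean())
    x0 = int(np.clip(cx - random.randint(0, out_w - 1), 0,
                     max(0, image.shape[1] - out_w)))
    y0 = int(np.clip(cy - random.randint(0, out_h - 1), 0,
                     max(0, image.shape[0] - out_h)))
    image = image[y0:y0 + out_h, x0:x0 + out_w]
    mask = mask[y0:y0 + out_h, x0:x0 + out_w]
    # Mirror around the vertical axis with probability 0.5.
    if random.random() < 0.5:
        image, mask = image[:, ::-1], mask[:, ::-1]
    return image, mask
\end{verbatim}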
\paragraph{Terminology.}
We will use the term \emph{\bf scene} when referring to the original image captured by the camera, and \emph{\bf random-crop} to denote a particular instance of random cropping, scaling, and mirroring parameters. Multiple different random crops can thus be extracted from a single scene.
\section{Validation Methodology}
\label{sec:validation}
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{where_diff_helps.png}
\end{center}
\vspace{-0.5em}
\caption{Image samples (top) and corresponding mean differences to previous image (bottom) for scenes in which the ball is detected when providing the difference to the previous image to the network, but remains undetected by the model that ignores the difference.}
\Description{difference between two consecutive images shows the circular shape of the ball}
\label{fig:where_diff_helps}
\end{figure*}
\paragraph{Dataset.}
\label{sec:dataset}
The experiments were conducted on a rich dataset of 280 basketball scenes coming from professional games that occurred in 30 different arenas on multiple continents. The cameras captured half of a basketball court and have a resolution between 2Mpx and 5Mpx. The resulting images have a definition varying between 65px/m (furthest point on court in the arena with the lowest resolution cameras) and 265px/m (closest point on court in the arena with the highest resolution cameras). For each scene, two consecutive images were captured. The delay between those two captures is 33ms or 40ms, depending on the acquisition frame rate. As shown in Figure~\ref{fig:results}, the dataset presents a large variety of game configurations and various lighting conditions. The ball was manually annotated in the form of a binary segmentation mask. In all images, at least half of the ball is visible.
\paragraph{K-fold testing.}
To validate our work, a $K$-fold training/testing strategy has been adopted in all our experiments. It partitions the dataset into $K$ subsets, named folds, and runs $K$ iterations of the training/testing procedure. Each iteration preserves one fold for testing, and shuffles the other folds before splitting their samples into $90\%$ for the training set and $10\%$ for the validation set.
To assess the generalization capabilities of our model to unseen games and arenas, the $K$ folds were defined so that each fold only contains images from arenas that are not present in the other folds.
In practice, $K$ has been set to 7, which means that each fold contains about 40 scenes coming from 3 or 4 different arenas.
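A possible way to build such arena-disjoint folds is sketched below (the scene representation and field names are hypothetical).
\begin{verbatim}
import random
from collections import defaultdict

def arena_disjoint_folds(scenes, k=7, val_ratio=0.1, seed=0):
    # scenes: list of dicts, each with an 'arena' field.
    rng = random.Random(seed)
    by_arena = defaultdict(list)
    for s in scenes:
        by_arena[s['arena']].append(s)
    arenas = list(by_arena)
    rng.shuffle(arenas)
    folds = [sum((by_arena[a] for a in arenas[i::k]), []) for i in range(k)]
    for i in range(k):
        test = folds[i]
        rest = [s for j, f in enumerate(folds) if j != i for s in f]
        rng.shuffle(rest)
        n_val = int(val_ratio * len(rest))
        yield rest[n_val:], rest[:n_val], test  # train, validation, test
\end{verbatim}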
\paragraph{ROC curves and metrics.}
For a given top-$k$ detection rule, the accuracy is assessed based on ROC curves. Each ROC plots the detection rate (TPR) as a function of the false positive rate (FPR), while progressively changing the detection threshold $\tau$. The detection rate measures the fraction of scenes for which the ball has been detected, while the false positive rate measures the mean number of false candidates that are detected per scene. In practice, a detection is considered as being a true (false) ball detection if it lies inside (outside) the surface covered by the ball in the annotated segmentation mask.
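Given, for each scene, the list of candidate detections with their heatmap scores and true/false labels, one point of the ROC curve can be computed as in the sketch below (variable names are ours).
\begin{verbatim}
def roc_point(scene_candidates, tau):
    # scene_candidates: one list per scene of (score, is_true_ball) pairs.
    n_scenes = len(scene_candidates)
    detected = 0    # scenes in which the ball is detected
    false_pos = 0   # total number of false candidates kept
    for candidates in scene_candidates:
        kept = [c for c in candidates if c[0] >= tau]
        detected += any(is_ball for _, is_ball in kept)
        false_pos += sum(1 for _, is_ball in kept if not is_ball)
    tpr = detected / n_scenes   # detection rate
    fpr = false_pos / n_scenes  # mean false candidates per scene
    return fpr, tpr
\end{verbatim}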
\paragraph{Test-time data augmentation.}
Multiple different random crops from the same scene were considered at test time by aggregating their output heatmaps. Since the ball location is unknown at inference time, only similar random crops (IoU $> 0.9$) were combined. This amounts to providing small variations of an input in which the ball is expected to be detected (see Figure~\ref{fig:stacking_strategy}~(left)).
\section{Experimental results}
\label{sec:results}
This section assesses our Ball Segmentation method (named \emph{BallSeg}) based on the dataset introduced in Section~\ref{sec:validation}.
First, we validate the CNN input and the detection rule.
Then, we compare \emph{BallSeg} with \emph{Mask~R-CNN} fine-tuned on our dataset.
Finally, we use test-time data augmentation to analyze how different random crops impact the detection. In addition, we take advantage of test-time data augmentation to improve detection accuracy at the cost of increased computational complexity, by merging the heatmaps associated with multiple random crops of the same scene.
\subsection{Inputs choice and detection rule validation}
\begin{figure}
\begin{center}
\includegraphics[width=0.45\columnwidth]{roc_single_vs_diff.pdf}\hspace{2em}
\includegraphics[width=0.45\columnwidth]{roc_topk.pdf}
\end{center}
\caption{\emph{BallSeg} ROC curves. Left: adding the difference with the previous image to the network input improves accuracy (top-$1$ detection rule). Right: top-$1$ detection rule achieves better true/false positives trade-offs.}
\Description{ROC curves for image + difference and image only}
\label{fig:roc_baseline}
\end{figure}
We compared the performance of our \emph{BallSeg} when feeding the network with and without the difference between consecutive images. Figure~\ref{fig:roc_baseline} (left) shows that presenting the input as described in Eq.~(\ref{eq:diff}) significantly improves the detection accuracy. Figure~\ref{fig:where_diff_helps} reveals that using the difference with the previous image allows detecting the ball in low-contrast scenes.
Different top-$k$ detection rules are compared in Figure~\ref{fig:roc_baseline} (right). A top-$1$ detection rule achieves better true/false positive trade-offs at low false positive rates (for $\tau=0.01$: {\sc tpr}=$0.66$ and {\sc fpr}=$0.33$). This is far better than all previous methods presented in the introduction, especially given that our dataset includes a large variety of scenes, presenting the ball in all kinds of game situations.
In the rest of the paper, unless specified otherwise, results use a top-$1$ detection rule, and the input described by Eq.~(\ref{eq:diff}).
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth, trim={28pt, 11pt, 21pt, 5pt}, clip]{success.pdf}
\hrule
\includegraphics[width=\textwidth, trim={28pt, 10pt, 21pt, 1pt}, clip]{failures.pdf}
\end{center}
\vspace{-10pt}
\caption{Our \emph{BallSeg} gives accurate results in many different configurations. Top 6 rows: success cases. Bottom 3 rows: failure cases. First column: \emph{Ground-Truth}. Second column: the class ``ball'' from the off-the-shelf \emph{Mask~R-CNN}. Third column: \emph{Mask~R-CNN} fine-tuned on our dataset (\emph{Ball~R-CNN}). Fourth column: our method, using a segmentation CNN (\emph{BallSeg}).}
\label{fig:results}
\Description{success cases and failure cases}
\end{figure*}
\subsection{Comparison with state-of-the-art object detection method}
\label{subsec:compare}
To better evaluate the value of our \emph{BallSeg} model, we compared it with the results obtained with a generic \emph{Mask~R-CNN} model. We used the implementation of \cite{matterport_maskrcnn_2017}, which provides weights trained on the MS-COCO dataset~\cite{coco} and already has a class for the ball.
For a fairer comparison, the \emph{Mask~R-CNN} network has also been fine-tuned on our dataset. The resulting model (denoted \emph{Ball~R-CNN}) was obtained by running a conventional optimization with a stochastic gradient descent optimizer and learning rate $lr$: first, the front part of the network was updated for $n_f$ epochs; then, the whole network was updated for $n_w$ epochs. The $(lr, n_f, n_w)$ triplet was selected by a grid search over the following ranges, with the best $K$-fold mean test set accuracy obtained for $lr=10^{-3}$, $n_f=10$, and $n_w=20$:
\begin{align*}
lr &\in \{10^{-2}, 10^{-3}, 10^{-4}, 10^{-5}\} \\
n_f &\in \{0, 1, 10\} \\
n_w &\in \{0, 1, 10, 20, 100\}
\end{align*}
In Figure~\ref{fig:roc_rcnn}, we observe that \emph{BallSeg} outperforms \emph{Ball~R-CNN}. Besides, the fact that \emph{Mask~R-CNN} largely fails reveals the need for sport-specific datasets for the complex task of ball detection in a team sport context. Figure~\ref{fig:results} shows a visual comparison between \emph{BallSeg}, the off-the-shelf \emph{Mask~R-CNN}, and \emph{Ball~R-CNN} on different image samples from our dataset.
\begin{figure}
\begin{center}
\includegraphics[width=0.45\columnwidth]{roc_mrcnn.pdf}\hspace{2em}
\includegraphics[width=0.45\columnwidth]{roc_mrcnn2.pdf}
\end{center}
\caption{Comparison between \emph{BallSeg} and two computationally complex \emph{Mask~R-CNN} instances: \emph{Mask~R-CNN} pre-trained on MS-COCO, and the same network fine-tuned on our dataset (\emph{Ball~R-CNN}). Left: top-$1$ detection rule; Right: top-$2$ detection rule.}
\label{fig:roc_rcnn}
\Description{ROC curves with Mask-R-CNN against BallSeg}
\end{figure}
In addition to this qualitative analysis, Table~\ref{tab:fps} presents the measured computational complexity of the two models: \emph{BallSeg} using the \emph{ICNet} implementation from~\cite{hellochick} and \emph{Ball~R-CNN} being \emph{Mask~R-CNN} from~\cite{matterport_maskrcnn_2017}.
We observe that, besides reaching a better accuracy than \emph{Ball~R-CNN}, \emph{BallSeg} is significantly faster. Note that those numbers highly depend on the implementation. Indeed, \emph{ICNet} can benefit from a $6\times$ speed improvement once compressed with filter-pruning, while keeping the same accuracy~\cite{icnet_implem,fss,Li2019}.
\newcommand{\shape}[3]{{\footnotesize $ #1 \times #2 \times #3 $}}
\begin{table}
\begin{tabular*}{\columnwidth}{l@{}rrr}
\toprule
& single image & image + diff & image + diff \\
& \shape{1024}{512}{3} & \shape{1024}{512}{6} & \shape{1280}{720}{6} \\
\midrule
\emph{Mask~R-CNN} & 4.33 fps & N/A & N/A \\
\emph{BallSeg} (our method) & 38.39 fps & 24.67 fps & 12.08 fps \\
\bottomrule
\end{tabular*}
\caption{Framerate of the two methods compared on an Nvidia~GTX~1080~Ti, with a batch size of $2$, without using filter-pruning optimization. The segmentation approach is significantly faster than the state-of-the-art \emph{Mask~R-CNN}.}
\label{tab:fps}
\end{table}
\subsection{Test-time data augmentation}
\label{subsec:testtimeaugment}
This section investigates how different random crops of the same scene impact the detection. We first observe
that ball segmentation errors rarely affect all random crops. In other words, prediction errors correspond to high-entropy output distributions, which by definition correspond to a high uncertainty. This is in line with the observation made by~\cite{Ayhan2018, Wang2019} that wrong predictions are associated with high uncertainty levels, i.e., with a large diversity of predictions for transformed inputs.
More interestingly, our experiments also reveal that false positives are not spatially consistent across the ensemble of heatmaps. This is an important novel observation, since it gives the opportunity to significantly increase ($\sim10\%$) the detection accuracy at small false positive rates by aggregating the ensemble of heatmaps obtained from multiple different random crops.
\subsubsection{Random crops consistency and failure case analysis}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{topk.pdf}
\caption{1000 random-crops were created for each scene.
Distribution, over the 280 dataset scenes, of the percentage of random crops in which the ball is detected for top-$1$, top-$2$, and top-$3$ detection rules.}
\label{fig:topk_helps}
\end{center}
\Description{multiple random crops are considered}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{accuracy_vs_number_of_random_crops.pdf}
\caption{1000 random-crops were created for each scene.
Ball detection rate as a function of the number of random crops considered for a scene, for a top-$1$ detection rule. The ball is detected in the scene if it is detected in at least one of the random crops considered.}
\label{fig:multiple_random_crops}
\end{center}
\Description{multiple random crops are considered}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=0.33 \textwidth, trim={30pt, 10pt, 20pt, 10pt}, clip]{strasbourg_covering.png}
\includegraphics[width=0.33 \textwidth, trim={30pt, 10pt, 20pt, 10pt}, clip]{strasbourg_hitmap.png}
\includegraphics[width=0.33 \textwidth, trim={30pt, 10pt, 20pt, 10pt}, clip]{strasbourg_heatmap.png}
\end{center}
\vspace{-10pt}
\caption{Averaging the heatmaps over random image samples helps to discriminate the ball among candidates. Left: Five random crops having similar IoU in a given scene are used; Middle: Scatter of their heatmaps' most salient points; Right: Mean heatmap intensity using the five input heatmaps.}
\Description{The agregation of multiple heatmaps}
\label{fig:stacking_strategy}
\end{figure*}
Figure~\ref{fig:topk_helps} presents the distribution, over our 280 scenes, of the percentage of random crops in which the ball is detected for top-$1$, top-$2$, and top-$3$ detection rules and $\tau=0$.
We observe that when the ball is not detected in all random crops (about half of the scenes), it is generally detected in some random crops.
This suggests that the detection rate could be improved by combining multiple random crops (see Section~\ref{subsec:tradeoff}). This opportunity is supported by Figure~\ref{fig:multiple_random_crops}, which presents, for a top-$1$ detection rule, the ball detection rate as a function of the number of random crops considered for a scene. We observe that for $93\%$ of the dataset scenes, when 20 different random crops are considered for each scene, the ball is detected as top-$1$ in at least one of those random crops.
\subsubsection{Accuracy/complexity trade-off}
\label{subsec:tradeoff}
This section investigates how to improve the true/false positive trade-off based on the computation of more than one heatmap per scene.
It builds on the fact that the ball has a higher chance to be detected if more random crops are considered (see Figure~\ref{fig:multiple_random_crops}),
and on the observation that the ball generally induces a significant spot in the heatmap of every random crop, even if this spot is not always the highest one (see Figure~\ref{fig:topk_helps}, where a top-$2$ or top-$3$ detection rule increases the number of scenes in which the ball is detected in all random crops).
Figure~\ref{fig:stacking_strategy} shows an example in which the heatmaps predicted for distinct random crops of the same scene are more consistent at the actual ball location than at the false candidates.
From this observation, we propose to merge the heatmaps of different random image samples by averaging their intensity. Figure~\ref{fig:roc_multiple_images} shows that applying a top-$1$ detection rule on the averaged heatmap significantly improves the true vs. false positive rate trade-off ($\sim 10\%$ detection rate increase at a small false positive rate), even when only two random image samples are averaged.
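A minimal sketch of this merging step is given below, assuming the per-crop heatmaps have already been mapped back to a common coordinate frame; the top-$1$ rule is then applied to the returned average.
\begin{verbatim}
import numpy as np

def merge_heatmaps(heatmaps):
    # heatmaps: list of HxW arrays predicted for similar random crops
    # of the same scene, aligned to the same coordinate frame.
    return np.mean(np.stack(heatmaps, axis=0), axis=0)
\end{verbatim}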
\begin{figure}
\begin{center}
\includegraphics[width=0.45\columnwidth]{roc_Nimages_stacked_1.pdf}\hspace{2em}
\includegraphics[width=0.45\columnwidth]{roc_Nimages_stacked_2.pdf}
\caption{Left: top-$1$ and Right: top-$2$ ROC curves obtained when heatmaps coming from $1$, $2$ or $5$ similar random crops of the same scene are averaged.}
\label{fig:roc_multiple_images}
\end{center}
\Description{ROC multiple images}
\end{figure}
This gives the opportunity to improve accuracy at the cost of additional complexity, which in turn can support accurate automatic annotation of unlabelled data for training more accurate models, thereby following the data distillation paradigm~\cite{Radosavovic2018}.
\section{Conclusion}
\label{sec:conclusion}
This paper proposes a new approach for ball detection using a simple camera setup by (i) adopting a CNN-based segmentation paradigm, taking advantage of the ball uniqueness in the scene, and (ii) using two consecutive frames to give cues about the ball motion while keeping a very low latency.
Furthermore, the approach benefits from recent advances made by fast segmentation networks, allowing real-time inference.
In particular, the segmentation network used to demonstrate the approach drastically reduces the computational complexity compared to the conventional \emph{Mask~R-CNN} detector, while achieving better results: close to $70\%$ detection rate (with a very small false positive rate) on unseen games and arenas, based on an arduous dataset.
The dataset made available with this paper is unique in terms of the number of different arenas considered. In addition, we show that, despite being relatively small, it offers enough variety to provide good performance, especially for a method that does not use pre-trained weights.
\section*{Acknowledgments}
This research is supported by the DeepSport project of the Walloon Region, Belgium. C. De Vleeschouwer is funded by the F.R.S.-FNRS (Belgium).
The dataset was acquired using the Keemotion automated sports production system.
We would like to thank Keemotion for participating in this research and letting us use their system for raw image acquisition during live productions, and the LNB for providing rights on their images.
\begin{center}
\begin{tabular}{cc}
\raisebox{0.12em}{\includegraphics[width=2cm]{wallonia.jpg}} & \includegraphics[width=1.8cm]{keemotion.png}
\end{tabular}
\end{center}
\bibliographystyle{ACM-Reference-Format}
\balance
\section{Introduction}\label{sec:introduction}
Tourism is one of the most profitable and fast-growing economic sectors in the world. In 2017, the tourism industry contributed more than 8.27 trillion U.S. dollars to global economy.
The massive scale of the tourism industry calls for more intelligent services to improve user experiences and reduce labor costs of the industry.
Trip recommendation is one of such services. Trip recommendation aims to recommend a sequence of \emph{places of interest} (POIs) for a user to visit
to maximize the user's satisfaction. Such a service benefits users by relieving them from the time and effort of trip planning, which in turn
further boosts the tourism industry.
\begin{figure}
\centering
\includegraphics[width = 3.2 in]{map.eps}
\vspace{-5mm}
\caption{Impact of co-occurring POIs}
\label{fig:map}
\vspace{-5mm}
\end{figure}
Most existing studies on trip recommendations consider POI popularities or user preferences towards the POIs when making recommendations~\cite{cheng2011personalized,lim2015personalized}.
Several recent studies~\cite{chen2016learning,rakesh2017probabilistic} consider the last POI visited when recommending the next POI to visit.
These studies do not model the following two characteristics that we observe from real-world user trips (detailed in Section~\ref{sec:empirical}).
(i) A POI to be recommended is impacted not only by the last POI visited but also all other POIs co-occurring in the same trip.
For example, in Fig.~\ref{fig:map}, a user has just visited ``ST Kilda Beach'' and ``Esplanade Market''. She may be tired after the long walk along the beach and the market. Thus, compared with ``Luna Park'' which is a theme park nearby, the user may prefer a restaurant (e.g., ``Republica ST Kilda'' or ``Claypots Seafood Bar'') to get some rest and food.
The user plans to visit ``Botanic Garden'' later on. Thus, she decides to visit ``Claypots Seafood Bar'' since it is on the way from the beach to the garden.
Here, the visit to ``Claypots Seafood Bar'' is impacted by the visits of not only ``Esplanade Market'' but also ``ST Kilda Beach'' and ``Botanic Garden.''
(ii) POI popularities, user preferences, and co-occurring POIs together impact the POIs to be recommended in a trip. In the example above,
there can be many restaurants on the way to ``Botanic Garden.'' The choice of ``Claypots Seafood Bar'' can be impacted by not only ``Botanic Garden'' but also
the fact that the user is a seafood lover and that ``Claypots Seafood Bar'' is highly rated by other users.
Most existing models~\cite{liu2016exploring, rakesh2017probabilistic} learn the impact of each factor separately and simply combine them by linear summation, which may not reflect the joint impact accurately.
In this study, we model the two observations above with a \emph{context-aware POI embedding model} to jointly learn the impact of POI popularities, user preferences, and co-occurring POIs.
We start with modeling the impact of co-occurring POIs.
Existing studies model the impact of the last POI with a first-order Markov model~\cite{chen2016learning, kurashima2010travel, rakesh2017probabilistic}. Such a model requires a large volume of data
to learn the impact between every pair of adjacent POIs. However, real-world POI visits are sparse and highly skewed. Many POIs may not be adjacent in any trip and their impacts cannot be learned.
Extending such a model to multiple co-occurring POIs requires a higher-order Markov model, which suffers further from the data sparsity limitation.
We address the above data sparsity limitation
by embedding the POIs into a space where POIs that co-occur frequently are close to each other.
This is done based on our observation that a trip can be seen as a ``sentence'' where each POI visit is a ``word.''
The occurrence of a POI in a trip is determined by all the co-occurring words (POIs) in the same sentence (trip).
This enables us to learn a POI embedding similar to the Word2Vec model~\cite{mikolov2013distributed} that embeds words into a space where words with a similar context are close to each other.
To further incorporate the impact of user preferences into the embedding, we project users into the same latent space of the POIs,
where the preferences of each user is modeled by the proximity between the user and the POIs.
We also extend the embedding of each POI by adding a dimension (a bias term) to represent the POI popularity.
We jointly learn the embeddings of users and POIs via \emph{Bayesian Pairwise Ranking}~\cite{rendle2009bpr}.
To showcase the effectiveness of our proposed context-aware POI embedding, we apply it to a trip recommendation problem named TripRec, where a user and her time budget are given.
We propose two algorithms for the problem. The first algorithm, \emph{C-ILP}, models the trip recommendation problem as an integer linear programming problem. It solves the problem
with an integer linear programming technique~\cite{berkelaar2004lpsolve}.
C-ILP offers exact optimal trips, but it may be less efficient for large time budgets.
To achieve a high efficiency, we further propose a heuristic algorithm named \emph{C-ALNS} based on the \emph{adaptive large neighborhood search} (ALNS) technique~\cite{ropke2006adaptive}.
C-ALNS starts with a set of initial trips and optimizes them iteratively by replacing POIs in the trips with unvisited POIs that do not break the user time budget.
We use the POI-user proximity computed by our context-aware POI embedding to guide
the optimization process of C-ALNS. This leads to high quality trips with low computational costs.
This paper makes the following contributions:
\begin{enumerate}
\item We analyze real-world POI check-in data to show the impact of co-occurring POIs
and the joint impact of contextual factors on users' POI visits.
\item
We propose a novel model to learn the impact of all co-occurring POIs rather than just the last POI in the same trip.
We further propose a context-aware POI embedding model to jointly learn the impact of POI popularities, co-occurring POIs, and user preferences on POI visits.
\item
We propose two algorithms C-ILP and C-ALNS to generate trip recommendations based on our context-aware POI embedding model.
C-ILP transforms trip recommendation to an integer linear programming problem and provides exact optimal trips.
C-ALNS adapts the adaptive large neighborhood search technique and provides heuristically optimal trips close to the exact optimal trips with high efficiency.
\item
We conduct extensive experiments on real datasets. The results show that our proposed algorithms outperform state-of-the-art algorithms consistently
in the quality of the trips recommended as measured by the F$_1$-score. Further, our heuristic algorithm C-ALNS produces trip recommendations
that differ in accuracy from those of C-ILP by only 0.2\% while reducing the running time by 99.4\%.
\end{enumerate}
The rest of this paper is structured as follows. Section~\ref{sec:related} reviews related studies.
Section~\ref{sec:empirical} presents an empirical analysis on real-world check-in datasets to show the impact factors of POI visits.
Section~\ref{sec:problem} formulates the problem studied. Section~\ref{sec:model}
details our POI embedding model, and Section~\ref{sec:algorithms} details our trip recommendation algorithms based on the model.
Section~\ref{sec:experiments} reports experiment results. Section~\ref{sec:conclusions} concludes the paper.
\section{Related work}
\label{sec:related}
We compute POI embeddings to enable predicting POI sequences (trips) to be recommended to users.
We review inference models for predicting a POI to be recommended in Section~\ref{sec:lit_model}.
We review trip generation algorithms based on these models in Section~\ref{sec:lit_gen}.
\subsection{POI Inference Model\label{sec:lit_model}}
Most existing inference models for trip recommendations assume POIs to be independent from each other, i.e., the probability of a POI to be recommended is independent from that of any other POIs~\cite{brilhante2013shall,ge2011cost,lim2015personalized, wang2016improving}. For example, Brilhante et al.~\cite{ge2011cost} assume that the probability
of a POI to be recommended is a weighted sum of a popularity score and a user interest score, where the user interest score is computed via user-based collaborative filtering.
Since assuming independence between POIs loses the POI co-occurrence relationships, we do not discuss studies based on this assumption further.
Kurashima et al.~\cite{kurashima2010travel} propose the first work that captures POI dependency. They use the Markov model to capture the dependence of a POI $l_{i+1}$ on its preceding POI $l_{i}$ in a trip as the transition probability from $l_i$ to $l_{i+1}$.
Rakesh et al.~\cite{rakesh2017probabilistic} also assume that each POI visit depends on its preceding POI. They unify such dependency with other factors (e.g., POI popularities) into a latent topic model.
The model represents each user's preference as a probability distribution over a set of latent topics. Each latent topic in turn is represented as a probability distribution over POIs. To capture the dependency between consecutive POI visits, they assume that the probability distribution of a latent topic changes with the preceding POI visit.
Both these two studies~\cite{kurashima2010travel,rakesh2017probabilistic}
suffer from the data sparsity problem as they aim to learn the transition probability between any two adjacent POIs.
For many POI pairs, there may not be enough transitions between them observed in real-world POI check-in data,
because check-ins at POIs are highly skewed towards the most popular POIs.
This may lead to unreliable transition probabilities and suboptimal trip recommendations.
Our model does not require the POIs to be adjacent to learn their transition probability.
This helps alleviate the data sparsity problem, which leads to improved trip recommendations.
Chen et al.~\cite{chen2016learning} also use the Markov model to capture the dependence between POIs. To overcome the data sparsity problem, they factorize the transition probability between two POIs as the product of the pairwise transition probabilities w.r.t. five pre-defined features: POI category, neighborhood (geographical POI cluster membership), popularity (number of distinct visitors), visit counts (total number of check-ins), and average visit duration.
These five features can be considered as an embedding of a POI. Such an embedding is manually designed rather than being learned from the data. It may not reflect the salient features of a POI.
POI dependency is also considered in POI recommendations~\cite{cheng2012fused, liu2017experimental,miao2016s,ye2013s},
which aim to recommend an individual POI instead of a POI sequence. Such studies do not need
to consider the dependence among the POIs in a trip.
For example, Ye et al.~\cite{ye2013s} propose a \emph{hidden Markov model} (HMM) for POI recommendation. This model captures the transition probabilities between POIs assuming the POI categories as the hidden states. To recommend a POI, Ye et al. first predict the POI category of the user's next check-in. Then, they predict a POI according to the user's preferences over POIs within the predicted POI category.
Feng et al.~\cite{feng2015personalized} project POIs into a latent space where the pairwise POI distance represents the transition probabilities between POIs. Liu et al.~\cite{liu2016exploring} also use a latent space for POI recommendations. They first learn the latent vectors of POIs to capture the dependence between POIs. Then, they fix the POI vectors and learn the latent vectors of users from the user-POI interactions.
These studies differ from ours in three aspects: (i) Their models learn the impact of POIs and the impact of user preferences independently, while our model learns the impact of the two factors jointly,
which better captures the data characteristics and leads to an improved trip recommendation quality as shown in our experimental study. (ii) Their models focus on user preferences and do not consider the impact of POI popularities, while ours take both into consideration. (iii) These studies do not consider constraints such as time budgets while ours does.
\subsection{Trip Generation\label{sec:lit_gen}}
Trip recommendation aims to generate a trip, i.e., a sequence of POIs, that meets user constraints and maximizes user satisfaction.
Different user constraints and user satisfaction formulations differentiate trip recommendation studies.
For example, Brilhante et al.~\cite{brilhante2013shall} consider a user given time budget.
They partition historical trips into segments each of which is associated with a time cost. Then, they reduce the trip recommendation problem to a \emph{generalized maximum coverage} (GMC) problem that finds trip segments whose time costs together do not exceed the user time budget, while
a user satisfaction function is maximized.
Gionis et al.~\cite{gionis2014customized} assume a given sequence of POI categories and the minimum and maximum numbers of POIs to recommend for each category.
They use dynamic programming to compute trip recommendations.
Lim et al.~\cite{lim2015personalized} formulate trip recommendation as an \emph{orienteering problem} that recommends a trip given a starting POI, an ending POI,
and a time budget. They adopt the \emph{lpsolve} linear programming package~\cite{berkelaar2004lpsolve} to solve the problem.
To showcase the applicability of our context-aware POI embedding model, we apply it to the trip recommendation problem studied by Lim et al.~\cite{lim2015personalized}.
As we consider the joint impact of contextual factors,
our user satisfaction formulation becomes nonlinear, which cannot be optimized by Lim et al.'s approach.
Among the studies that consider POI dependency,
Hsieh et al.~\cite{hsieh2014mining} and Rakesh et al.~\cite{rakesh2017probabilistic} assume a given
starting POI $l_s$, a given time budget $t_q$, and a given time buffer $b$.
They build a trip recommendation by starting from $l_s$ and progressively adding more POIs to the trip until the trip time reaches $t_q-b$.
They repeatedly add the unvisited POI that has the highest transition probability from the last POI in the trip.
As discussed earlier, their transition probabilities depend only on the last POI but not any other co-occurring POIs.
Chen et al.~\cite{chen2016learning} assume given starting and ending POIs and a time budget.
They formulate trip recommendation as an orienteering problem in a directed graph, where every vertex represents a POI and the weight of an edge represents the transition probability from its source vertex to its end vertex. Our trip recommendation problem shares similar settings. However, Chen et al.'s algorithm does not apply to our problem, as we consider not only the transition probabilities between adjacent POIs
but also the impact of all POIs in a trip.
\section{Observations on POI Check-ins}
\label{sec:empirical}
We start with an empirical study on real-world POI check-in data to observe
users' check-in patterns.
We aim to answer the following three questions: (1) Are users' check-ins at a POI impacted by other POIs co-occurring in the same trip?
(2)~Are users' check-ins at a POI impacted by (other users') historical check-ins at the POI, i.e., the popularity of the POI?
(3)~Are the impact of co-occurring POIs and the impact of POI popularity independent from each other?
\begin{table}
\renewcommand*{\arraystretch}{1.0}
\centering
\small
\begin{scriptsize}
\caption{Dataset Statistics\label{tab:datasets}}
\vspace{-3mm}
\begin{threeparttable}
\begin{tabular}{lllll}
\toprule[1pt]
Dataset & \#users & \#POI visits & \#trips & POIs/trip\\ \midrule[1pt]
Edinburgh & 82,060 & 33,944 & 5,028 & 6.75 \\\midrule
Glasgow & 29,019 & 11,434 & 2,227 & 5.13 \\\midrule
Osaka & 392,420 & 7,747 & 1,115 & 6.95 \\\midrule
Toronto & 157,505 & 39,419 & 6,057 & 6.51 \\\bottomrule[1pt]
\end{tabular}
\vspace{-4mm}
\end{threeparttable}
\end{scriptsize}
\end{table}
We analyze four real check-in datasets used in trip recommendation studies~\cite{chen2016learning,lim2015personalized}.
These four datasets are extracted from the Yahoo!Flickr Creative Commons 100M (YFCC100M) dataset~\cite{thomee2016yfcc100m}.
They contain check-ins in the cities of Edinburgh, Glasgow, Osaka, and Toronto respectively.
Table~\ref{tab:datasets} summarizes the statistics of the four datasets.
For example, the Edinburgh dataset contains 33,944 POI visits from 82,060 users (consecutive check-ins at the same POI are counted as one POI visit).
The POI visits form 5,028 different trips, i.e., sequences of POI visits by the same user within
an eight-hour period. There are 6.75 POI visits per trip on average.
\textbf{Impact of co-occurring POIs.}
To verify the impact of co-occurring POIs, for each POI $l$,
we compute the frequency distribution of the co-occurring POIs of $l$.
If such frequency distributions of different POIs are different, then the POIs can be distinguished by such
frequency distributions, and a POI visit can be determined by visits to the co-occurring POIs.
This verifies the impact of co-occurring POIs.
For each dataset, we perform a hypothesis test on whether
two POIs have different frequency distributions of co-occurring POIs as follows.
We randomly sample $50\%$ of the trips.
From the sampled dataset, for each POI $l$, we compute an $|\mathcal{L}|$-dimensional distribution named the \emph{co-occurrence distribution},
where $\mathcal{L}$ is the set of all POIs in the dataset, and dimension $i$ represents the normalized frequency of POI $l_i$ occurring in the same trip as $l$.
We perform a \emph{chi-square two sample test} for each pair of POIs on their co-occurrence distributions,
where the null hypothesis is that ``the two distributions conform to the same underlying distribution'' and the significance level is 0.05.
If the hypothesis is rejected, we say that the two POIs form an \emph{independent POI pair}.
We generate 100 sample datasets and report the average ratio of independent POI pairs over all POI pairs.
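The test for one POI pair can be sketched as follows, using SciPy's chi-square test of homogeneity on the raw co-occurrence counts; the helper name and the handling of empty columns are our own choices.
\begin{verbatim}
import numpy as np
from scipy.stats import chi2_contingency

def independent_pair(counts_i, counts_j, alpha=0.05):
    # counts_i, counts_j: raw co-occurrence count vectors over all
    # POIs for POI l_i and POI l_j, computed from the sampled trips.
    table = np.vstack([counts_i, counts_j])
    table = table[:, table.sum(axis=0) > 0]  # drop all-zero columns
    _, p_value, _, _ = chi2_contingency(table)
    return p_value < alpha  # True: the pair is an independent POI pair
\end{verbatim}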
Figure~\ref{fig:analysis_contextual} shows the result, where each gray dot represents the ratio of a sample dataset, and the rectangles denote the 25 percentile, median, and 75 percentile. On average, independent POI pairs take up at least 32.5\% (Osaka) and up to 87.5\% (Edinburgh) of all POI pairs.
This means that a non-trivial portion of POIs have different co-occurrence distributions, which confirms the impact of co-occurring POIs.
\begin{figure}[h]
\vspace{-5mm}
\centering
\subfloat[Independent POI pairs]{\includegraphics[width = 1.5 in]{test_context.eps}\label{fig:analysis_contextual}}\hspace{1mm}
\subfloat[Impacted users]{\includegraphics[width = 1.5 in]{test_pop.eps}\label{fig:analysis_pop}}
\vspace{-2mm}
\caption{Observations on POI check-ins}
\label{fig:emp_analysis}
\vspace{-3mm}
\end{figure}
\textbf{Impact of POI popularity.}
POI popularity is commonly perceived to have a major impact on POI visits~\cite{ge2011cost}.
We add further evidence to this perception.
For each city, we randomly split its dataset into two subsets, each of which consists of the POI visits of half of the users.
We use one of the subsets as a \emph{historical dataset}, from which we compute a rank list of the POIs in $\mathcal{L}$ by their number of visits in the historical dataset.
A POI with more visits ranks higher and is considered to be more popular.
We use the other subset as a \emph{testing dataset}. For each user $u$ in the testing dataset, we test whether she visits the popular POIs in $\mathcal{L}$
more often than the less popular POIs. We compute the average rank of her visited POIs. If the average rank is higher than $|\mathcal{L}|/2$ (i.e., her visited POIs fall, on average, in the more popular half of the rank list),
we consider $u$ to be an \emph{impacted user} whose visits are impacted by POI popularity.
We report the ratio of impacted users averaged over 100 runs of the procedure above (with random selection for dataset splitting).
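The procedure can be sketched as follows; we use numeric ranks where rank~1 denotes the most popular POI, so a user is counted as impacted when the average numeric rank of her visited POIs falls in the more popular half. All names are ours.
\begin{verbatim}
import numpy as np

def impacted_user_ratio(history_visits, test_visits_by_user):
    # history_visits: list of POI ids, one entry per POI visit in the
    # historical subset; test_visits_by_user: {user: [visited POI ids]}.
    pois, counts = np.unique(history_visits, return_counts=True)
    order = pois[np.argsort(-counts)]                   # most visited first
    rank = {poi: r + 1 for r, poi in enumerate(order)}  # rank 1 = most popular
    half = len(pois) / 2.0
    impacted = 0
    for user, visited in test_visits_by_user.items():
        ranks = [rank.get(p, len(pois)) for p in visited]
        impacted += np.mean(ranks) < half               # more popular half
    return impacted / len(test_visits_by_user)
\end{verbatim}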
As Fig.~\ref{fig:analysis_pop} shows, all datasets have more than 70\% impacted users, which demonstrates the importance
of POI popularity.
\textbf{Joint impact of co-occurring POIs and POI popularity.}
The empirical study above confirms the impact of co-occurring POIs and the impact of POI popularity.
A side observation when comparing Fig.~\ref{fig:analysis_contextual} and Fig.~\ref{fig:analysis_pop} is that these factors
have a joint impact rather than independent ones. In general, for the cities where
co-occurring POIs have a greater impact, POI popularity has less impact (e.g., Edinburgh), and vice versa (e.g., Osaka).
This brings a challenge
in designing a model that can learn the impact of these factors jointly and can adapt
to the different levels of joint impact across different datasets.
\renewcommand{\arraystretch}{1.0}
\begin{table}
\centering
\small
\caption{Frequently Used Symbols}
\vspace{-3mm}
\label{tab:symbols}
\begin{threeparttable}
\begin{tabular}{c l}
\toprule[1pt]
Symbol & Description\\ \midrule[1pt]
$\mathcal{R}$ & a set of check-in records\\ \midrule
$\mathcal{L}$ & a set of POIs\\ \midrule
$\mathcal{U}$ & a set of users\\ \midrule
$l$ & a POI\\ \midrule
$u$ & a user\\ \midrule
$s^u$ & a trip of user $u$\\ \midrule
$\vec{l}$ & the latent vector of $l$\\ \midrule
$\vec{u}$ & the latent vector of $u$\\ \midrule
$\vec{c(l)}$ & the latent vector of the co-occurring POIs of $l$\\ \bottomrule[1pt]
\end{tabular}
\end{threeparttable}
\vspace{-4mm}
\end{table}
\section{Problem formulation}
\label{sec:problem}
We aim to learn a context-aware POI embedding such that POIs co-occurring more frequently are closer in the embedded space.
We map POIs and users to this embedded space and make trip recommendations based on their closeness in the embedded space.
To learn such an embedding, we use a POI check-in dataset $\mathcal{R}$ (e.g., the datasets summarized in Table~\ref{tab:datasets}). Each check-in record $r\in\mathcal{R}$ is a 3-tuple $\langle u,l,t\rangle$, where $u$ denotes the check-in user, $l$ denotes the POI, and $t$ denotes the check-in time. An example check-in record is $\langle \texttt{10012675@N05},\ \texttt{Art Gallery of Ontario}, \texttt{1142731848}\rangle$, which denotes that user \texttt{10012675@N05} checked in at \texttt{Art Gallery of Ontario} on \texttt{19 Mar 2006} (\texttt{1142731848} in UNIX timestamp format).
\textbf{POI visit and historical trip.}
Let $\mathcal{U}$ be the set of all users and $\mathcal{L}$ be the set of all POIs in the check-in records in $\mathcal{R}$.
We aggregate a user $u$'s consecutive check-ins at the same POI $l$ into a \emph{POI visit}
$v^u = \langle u, l, t_a, t_d\rangle$, where $t_a$ and $t_d$ represent the times of the first and the last (consecutive) check-ins at $l$ by $u$.
With a slight abuse of terminology, we use a POI visit $v^u$ and the corresponding POI $l$ interchangeably as long as the context is clear.
POI visits of user $u$ within a certain time period (e.g., a day) form a \emph{historical trip} of $u$, denoted as $s^u =\langle v_{1}^u, v_{2}^u, \ldots, v_{|s^u|}^u \rangle$.
All historical trips of user $u$ form the \emph{profile} of $u$, denoted as $\mathcal{S}^u=\{s_1^u, s_2^u,\dots, s_{|\mathcal{S}^u|}^u\}$.
We learn the POI embedding from the set $\mathcal{S}$ of all historical trips of all users in $\mathcal{U}$, i.e.,
$\mathcal{S} = \mathcal{S}^{u_1} \cup \mathcal{S}^{u_2} \cup \ldots \cup \mathcal{S}^{u_{|\mathcal{U}|}}$.
We summarize the notation in Table~\ref{tab:symbols}.
\textbf{TripRec query.} To showcase the effectiveness of our POI embedding,
we apply it to a trip recommendation problem~\cite{lim2015personalized}.
This problem aims to recommend a \emph{trip} $tr$ formed by an ordered sequence of POIs to a user $u_q$, i.e., $tr = \langle l_1, l_2, \ldots, l_{|tr|} \rangle$,
such that the value of a \emph{user satisfaction function} is maximized.
We propose a novel user satisfaction function denoted by $S(u_q,tr)$ which is detailed in Section~\ref{sec:algorithms}.
Intuitively, each POI makes a contribution to $S(u_q,tr)$, and the contribution is larger when the POI suits $u_q$'s preference better.
A time budget $t_q$ is used to cap the number of POIs in $tr$. The \emph{time cost} of $tr$, denoted by $tc(tr)$, must not
exceed $t_q$. The time cost $tc(tr)$
is the sum of the \emph{visiting time} at every POI $l_i \in tr$, denoted as $tc_v(l_i)$, and the \emph{transit
time} between every two consecutive POIs $l_i, l_{i+1} \in tr$, denoted as $tc_t(l_i, l_{i+1})$:
\vspace{-1mm}
\begin{equation}
tc(tr) = \sum_{i=1}^{|tr|}tc_v(l_i) + \sum_{i=1}^{|tr|-1}tc_t(l_i, l_{i+1})
\end{equation}
We derive the visiting time $tc_v(l_i)$ as the average time of POI visits at $l_i$:
\vspace{-4mm}
\begin{equation}
tc_v(l_i) = \frac{1}{N_{l_i}}\sum_{u\in \mathcal{U}}\sum_{s^u\in \mathcal{S}^u}\sum_{v^u_{j}\in s^u} (v^u_{j}.t_d-v^u_{j}.t_a)\,\delta(v^u_{j}.l, l_i)
\end{equation}
Here, $N_{l_i}$ represents the total number of POI visits at $l_i$; and $\delta(v^u_{j}.l, l_i)$ is an indicator function that returns 1 if $v^u_{j}.l$ and $l_i$ are the same POI, and 0 otherwise.
The transit time $tc_t(l_i, l_{i+1})$ depends on the transportation mode (e.g., by walk or car), which is orthogonal to our study.
Without loss of generality and following previous studies~\cite{gionis2014customized,lim2015personalized,wang2016improving}, we assume transit by walk
and derive $tc_t(l_i, l_{i+1})$ as the road network shortest path distance between $l_i$ and $l_{i+1}$ divided by an average walking speed of 4 km/h.
Other transit time models can also be used.
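For illustration, the time cost of a candidate trip can be computed as in the sketch below, assuming the average visit durations and the road-network shortest-path distances are available as precomputed lookups (names are ours).
\begin{verbatim}
WALK_SPEED_KMH = 4.0

def trip_time_cost(trip, visit_time, walk_dist_km):
    # trip: sequence of POI ids; visit_time[l]: average visit duration
    # in hours; walk_dist_km[(l_i, l_j)]: shortest-path distance in km.
    total = sum(visit_time[l] for l in trip)
    for l_i, l_j in zip(trip, trip[1:]):
        total += walk_dist_km[(l_i, l_j)] / WALK_SPEED_KMH  # transit time
    return total
\end{verbatim}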
Following~\cite{chen2016learning,lim2015personalized,wang2016improving},
we also require $l_1$ and $l_{|tr|}$ to be at a given starting POI $l_s$ and a given ending POI $l_e$.
For ease of discussion, we call such a trip recommendation problem the \emph{TripRec query}:
\vspace{-2mm}
\begin{definition}[TripRec Query]
A TripRec query $q$ is represented by a 4-tuple $q=\langle u_q, t_q, l_s, l_e\rangle$.
Given a query user $u_q$, a query time budget $t_q$, a starting POI $l_s$, and an ending POI $l_e$,
the TripRec query finds a trip $tr = \langle l_1, l_2, ..., l_{|tr|}\rangle$ that maximizes $S(u_q,tr)$ and satisfies: (i) $tc(tr) \leqslant t_q$,
(ii) $l_1 = l_s$, and (iii) $l_{|tr|} = l_e$.
\end{definition}
\section{Learning a Context-Aware POI Embedding}
\label{sec:model}
Consider a POI $l_i$, a user $u$, and a historical trip $s$ of $u$ that contains $l_i$.
The popularity of $l_i$,
the user $u$, and the other POIs co-occurring in $s$ together form a \emph{context} of $l_i$. Our POI embedding is computed from
such contexts, and hence is named a \emph{context-aware
POI embedding}.
We first discuss how to learn a POI embedding
such that POIs co-occurring more frequently are closer in the embedded space in Section~\ref{sec:model_contextual}.
We further incorporate user preferences and POI popularities into the embedding in Sections~\ref{sec:model_user} and~\ref{sec:model_pop}.
We present an algorithm for model parameter learning in Section~\ref{sec:learningalgo}.
\vspace{-1mm}
\subsection{Learning POI Co-Occurrences}
\label{sec:model_contextual}
Given a POI $l_i$,
we call another POI $l_j$ a \emph{co-occurring POI} of $l_i$, if
$l_j$ appears in the same trip as $l_i$.
The conditional probability $p(l_i|l_j)$, i.e.,
the probability of a trip containing $l_i$
given that $l_j$ is in the trip,
models the \emph{co-occurrence relationship} of $l_i$ over~$l_j$.
To learn $p(l_i|l_j)$, the Markov model is a solution, which views $p(l_i|l_j)$ as a transition probability from $l_j$ to $l_i$.
This model assumes that the transition probability of each POI pair is independent from any other POIs, and there are a total of $|\mathcal{L}|^2$ probabilities to be learned.
Learning such a model requires a large number of check-ins with different adjacent POI combinations.
This may not be satisfied by real-world POI check-in datasets since check-ins are skewed towards
popular POIs. Many pairs of POIs may not be observed in consecutive check-ins.
Learning the transition probability between non-adjacent POIs requires higher-order Markov models, which suffer even more from data sparsity.
To overcome the data sparsity problem and capture the co-occurrence relationships between both adjacent and non-adjacent POIs,
we propose a model to learn $p(l_i|c(l_i))$ instead of $p(l_i|l_j)$, where $c(l_i)$ represents the set of co-occurring POIs of $l_i$.
Our model is inspired
by the \emph{Word2vec} model~\cite{mikolov2013distributed}.
The Word2vec model embeds words into a vector space where each word is placed in close proximity with its \emph{context words}. Given an occurrence of word $w$ in a large text corpus, each word that
occurs within a pre-defined distance to $w$ is regarded as a context word of $w$. This pre-defined distance forms a \emph{context window} around a word.
In our problem, we can view a POI as a ``word'', a historical trip as a ``context window'', the historical trips of a user as a ``document'', and all historical
trips of all the users as a ``text corpus''. Then, we can learn a POI embedding based on the probability distribution of the co-occurring POIs.
Specifically, we use the architecture of \emph{continuous bag-of-words} (CBOW)~\cite{mikolov2013efficient}, which predicts the \emph{target word} given its \emph{context},
to compute the POI embedding. The computation works as follows.
Given a POI $l_i\in \mathcal{L}$, we map $l_i$ into a latent $d$-dimensional real space $\mathbb{R}^d$
where $d$ is a system parameter, $d\ll |\mathcal{L}|$.
The mapped POI, i.e., the POI embedding, is a $d$-dimensional vector $\vec{l_i}$.
When computing the embeddings, we treat each historical trip as a context window: given an occurrence of $l_i$ in a historical trip $s$, we treat $l_i$ as the target POI and all other POIs in $s$
as its co-occurring POIs $c(l_i|s)$, i.e., $c(l_i|s) = \{l|l\in s\setminus \{l_i\}\}$.
In the rest of the paper, we abbreviate $c(l_i|s)$ as $c(l_i)$ as long as the context is clear.
Let $csim(l_i, l_j)$ be the \emph{co-occurrence similarity}
between two POIs $l_i$ and $l_j$. We compute $csim(l_i, l_j)$ as
the dot product of the embeddings of $l_i$ and $l_j$:
\begin{equation}
csim(l_i, l_j) = \vec{l_i}\cdot\vec{l_j}
\end{equation}
Similarly, the co-occurrence similarity between a POI $l_i$ and its set of co-occurring POIs $c(l_i)$, denoted as $csim(l_i, c(l_i))$, is computed as:
\begin{equation}
csim(l_i, c(l_i)) = \vec{l_i}\cdot\vec{c(l_i)}
\end{equation}
Here, $\vec{c(l_i)}$ is computed as an aggregate vector of the embeddings of the POIs in $c(l_i)$.
We follow Wang et al.~\cite{henry2018vector} and aggregate the embeddings by summing them up in each dimension independently:
\begin{equation}
\vec{c(l_i)} = \sum_{l\in c(l_i)} \vec{l}
\end{equation}
Other aggregate functions (e.g.,~\cite{wang2015learning}) can also be used.
Then, the probability of observing $l_i$ given $c(l_i)$ is derived by applying the softmax function on the co-occurrence similarity $csim(l_i, c(l_i))$:
\begin{equation}
p(l_i|c(l_i)) = \frac{e^{csim(l_i, c(l_i))}}{Z(\vec{c(l_i)})} = \frac{e^{\vec{l_i}\cdot\vec{c(l_i)}}}{Z(\vec{c(l_i)})}
\end{equation}
Here, $Z(\vec{c(l_i)}) = \sum_{l\in\mathcal{L}}{e^{\vec{l}\cdot\vec{c(l_i)}}}$ is a normalization term.
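Given a learned embedding matrix, this probability can be evaluated as in the following sketch (variable names are ours).
\begin{verbatim}
import numpy as np

def context_probability(L, target, context):
    # L: |L| x d matrix of POI embeddings; target: index of l_i;
    # context: indices of the co-occurring POIs c(l_i).
    c_vec = L[context].sum(axis=0)   # aggregate context vector
    scores = L @ c_vec               # csim(l, c(l_i)) for every POI l
    scores -= scores.max()           # numerical stabilization
    exp_scores = np.exp(scores)
    return exp_scores[target] / exp_scores.sum()  # softmax over all POIs
\end{verbatim}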
\subsection{Incorporating User Preferences}
\label{sec:model_user}
Next, we incorporate user preferences into our model.
We model a user's preferences towards the POIs as her
``co-occurrence'' with the POIs, i.e., a user $u_j$ is also projected
to a $d$-dimensional embedding space where she is closer to the POIs that she is more likely to visit (i.e., ``co-occur'').
Specifically, the co-occurrence similarity between a POI $l_i$ and a user $u_j$ is computed as:
\begin{equation}
csim(l_i, u_j)=\vec{l_i}\cdot\vec{u_j}
\end{equation}
Thus, the preference of $u_j$ over $l_i$ can be seen as the probability $p(l_i|u_j)$ of observing $l_i$ given $u_j$ in the space. After applying the softmax function over $csim(l_i, u_j)$, $p(l_i|u_j)$ can be computed as:
\begin{equation}
p(l_i|u_j) = \frac{e^{\vec{l_i}\cdot\vec{u_j}}}{Z(\vec{u_j})}
\end{equation}
Here, $Z(\vec{u_j})=\sum_{l\in\mathcal{L}}e^{\vec{l}\cdot\vec{u_j}}$ is a normalization term.
To integrate user preferences with POI co-occurrence relationships, we unify the POI embedding space and the user embedding space into a single embedding space.
In this unified embedding space, the POI-POI proximity reflects POI co-occurrence relationships and the user-POI proximity reflects user preferences.
Intuitively, we treat each user $u_j$ as a ``pseudo-POI''. If user $u_j$ visits POI $l_i$, then $u_j$ (a pseudo-POI) serves as a co-occurring POI of $l_i$.
Thus, the joint impact of user preferences and POI co-occurrences can be modeled by combining the pseudo POI and the actual co-occurring POIs.
Given a set of co-occurring POIs $c(l_i)$ and a user $u_j$, the probability of observing $l_i$ can be written as:
\begin{equation}
p(l_i|c(l_i), u_j)= \frac{e^{\vec{l_i}\cdot(\vec{u_j+c(l_i)})}}{Z(\vec{u_j+c(l_i)})}
\end{equation}
Here, vectors $\vec{u_j}$ and $\vec{c(l_i)}$ are summed up in each dimension, while
$Z(\vec{u_j+c(l_i)}) = \sum_{l\in\mathcal{L}}e^{\vec{l}\cdot(\vec{u_j+c(l_i)})}$ is a normalization term.
\subsection{Incorporating POI Popularity}
\label{sec:model_pop}
We further derive $p(l_i)$ which represents the popularity of $l_i$.
A straightforward model is to count the number of POI visits at $l_i$ and use the normalized frequency as $p(l_i)$.
This straightforward model is used by most existing studies (e.g., ~\cite{gionis2014customized,lim2015personalized,liu2011personalized}).
This model relies on a strong assumption that POI popularity is linearly proportional to the number of POI visits.
This linearity assumption may not hold since popularity may not be the only reason for visiting a POI.
Instead of counting POI visit frequency, we propose to learn the POI popularity jointly with
the impact of co-occurring POIs and user preferences.
Specifically, we add a dimension to the unified POI and user embedding space, i.e.,
we embed the POIs to an $\mathbb{R}^{d+1}$ space. This extra dimension represents the latent popularity of a POI,
and the embedding learned for this space is our \emph{context-aware POI embedding}.
For a POI $l_i$, its embedding now becomes $\vec{l_i}\oplus l_i.p$ where $\oplus$ is a concatenation operator and $l_i.p$
is the latent popularity. The probability $p(l_i)$ is computed by applying the softmax function over $l_i.p$:
\vspace{-1mm}
\begin{equation}
p(l_i) = \frac{e^{l_i.p}}{\sum_{l\in \mathcal{L}}e^{l.p}}
\vspace{-1mm}
\end{equation}
Integrating with the POI contextual relationships and user preferences, the final probability of observing $l_i$ given $u_j$ and $c(l_i)$ can be represented as:
\vspace{-1mm}
\begin{equation}
p(l_i|c(l_i), u_j)= \frac{e^{\vec{l_i}\cdot(\vec{u_j+c(l_i)})+l_i.p}}{Z(\vec{u_j+c(l_i)}+l_i.p)}
\vspace{-1mm}
\end{equation}
Here, $Z(\vec{u_j+c(l_i)}+l_i.p)=\sum_{l\in\mathcal{L}}e^{\vec{l}\cdot(\vec{u_j+c(l_i)})+l.p}$
is a normalization term.
\subsection{Parameter Learning}
\label{sec:learningalgo}
We adopt the \emph{Bayesian Pairwise Ranking} (BPR) approach~\cite{rendle2009bpr} to learn the embeddings of POIs and users. The learning process aims to maximize the posterior of the observations:
\vspace{-1mm}
\begin{equation}
\Theta = \underset{\Theta}{\mathsf{argmax}}\underset{u\in\mathcal{U}}{\prod}\underset{s^u\in\mathcal{S}^u}{\prod}\underset{l\in s^u}{\prod}\underset{l'\notin s^u}{\prod} P(>_{u,c(l)}|\Theta)p(\Theta)
\vspace{-1mm}
\end{equation}
Here, $\Theta$ represents the system parameters to be learned (i.e., user and POI vectors) and $P(>_{u,c(l)}|\Theta)$ represents the pairwise margin given $u$ and $c(l)$ between the probabilities of observing $l$ and observing $l'$.
Maximizing the above objective function is equivalent to maximizing its log-likelihood. Thus, the above equation can be rewritten as follows:
\vspace{-1mm}
\begin{multline}
\Theta = \underset{\Theta }{\mathsf{argmax}}\underset{u\in\mathcal{U}}{\sum}\underset{s^u\in\mathcal{S}^u}{\sum}\underset{l\in s^u}{\sum}\underset{l'\notin s^u}{\sum} \log\sigma \Big(\vec{l}\cdot\vec{c(l)}+\vec{l}\cdot\vec{u}+l.p \\
-\vec{l'}\cdot\vec{c(l)}-\vec{l'}\cdot\vec{u}-l'.p\Big)
\vspace{-1mm}
\end{multline}
Here, $\sigma(\cdot)$ is the sigmoid function and $\sigma(z) = \frac{1}{1+e^{-z}}$.
To avoid overfitting, we subtract a regularization term $\lambda||\Theta||^2$ from the objective function:
\begin{multline}
\vspace{-1mm}
\Theta = \underset{\Theta}{\mathsf{argmax}}\underset{u\in\mathcal{U}}{\sum}\underset{s^u\in\mathcal{S}^u}{\sum}\underset{l\in s^u}{\sum}\underset{l'\notin s^u}{\sum} \log\sigma \Big(\vec{l}\cdot\vec{c(l)}+\vec{l}\cdot\vec{u}+l.p \\
-\vec{l'}\cdot\vec{c(l)}-\vec{l'}\cdot\vec{u}-l'.p\Big) -\lambda||\Theta||^2
\vspace{-1mm}
\end{multline}
We use stochastic gradient descent (SGD) to solve the optimization problem. Given a trip $s^u$ of user $u$, we obtain $|s^u|$ observations
in the form of $\langle u, s^u, l, c(l)\rangle$, where $l \in s^u$. For each observation, we randomly sample $k$ negative POIs not in $s^u$.
Using each sampled POI $l'$, we update $\Theta$ along the ascending gradient direction:
\vspace{-1mm}
\begin{equation}
\Theta \leftarrow \Theta + \eta \frac{\partial}{\partial\Theta}(\log\sigma(z)-\lambda||\Theta||^2)
\vspace{-1mm}
\end{equation}
Here, $\eta$ represents the learning rate and $z = \vec{l}\cdot\vec{c(l)} + \vec{l}\cdot\vec{u} +l.p - \vec{l'}\cdot\vec{c(l)} - \vec{l'}\cdot\vec{u} - l'.p$ represents the margin between the scores of the observed POI $l$ and a sampled non-visited POI $l'$.
We summarize the learning algorithm in Algorithm~\ref{alg:learning}, where $itr_m$ represents a pre-defined maximum number of
learning iterations.
\begin{algorithm}
\caption{Embedding learning\label{alg:learning}}
\SetKwInOut{Input}{input}
\SetKwInOut{Output}{output}
\SetKwFunction{KwAppend}{append}
\LinesNumbered
\Input{$\mathcal{S}$: a set of trips; $itr_m$: max iterations}
\Output{$\Theta$}
Initialize $\Theta$ with Uniform distribution $U(0,1)$\;
$itr\leftarrow 0$\;
\While{$itr \le itr_m$}{
\ForEach{observation $\langle u, s^u, l, c(l)\rangle$}{
Sample a set $\mathcal{L}'$ of $k$ POIs not in $s^u$\;
\ForEach{$l'\in \mathcal{L}'$}{
$\vec{c(l)}\leftarrow \mathsf{aggregate}(\vec{l_i}|l_i\in c(l))$\;
$\delta = 1 - \sigma(z)$\;
$\vec{u}\leftarrow\vec{u} + \eta(\delta(\vec{l}-\vec{l'})-2\lambda\vec{u})$\;
$\vec{l}\leftarrow\vec{l} + \eta(\delta(\vec{u}+\vec{c(l)})-2\lambda\vec{l})$\;
$\vec{l'}\leftarrow\vec{l'} - \eta(\delta(\vec{u}+\vec{c(l)})+2\lambda\vec{l'})$\;
$l.p\leftarrow l.p + \eta(\delta-2\lambda l.p)$\;
$l'.p\leftarrow l'.p - \eta(\delta+2\lambda l'.p)$\;
\ForEach{$l_i\in c(l)$}{
$\vec{l_i}\leftarrow\vec{l_i} + \eta(\delta(\vec{l}-\vec{l'})-2\lambda\vec{l_i})$\;
}
}
}
$itr \leftarrow itr+1$\;
}
\Return{$\Theta$}\;
\end{algorithm}
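For concreteness, the per-negative-sample update of Algorithm~\ref{alg:learning} can be sketched in NumPy as follows; the data layout (separate arrays for user vectors, POI vectors, and popularity biases) and all names are our own choices.
\begin{verbatim}
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bpr_update(U, L, P, u, l, l_neg, context, eta=0.05, lam=0.01):
    # U: user vectors, L: POI vectors, P: POI popularity biases.
    # u, l, l_neg: indices of the user, the observed POI, and the
    # sampled negative POI; context: indices of c(l).
    c = L[context].sum(axis=0)
    z = (L[l] @ c + L[l] @ U[u] + P[l]
         - L[l_neg] @ c - L[l_neg] @ U[u] - P[l_neg])
    delta = 1.0 - sigmoid(z)
    grad_pos = delta * (U[u] + c)
    U[u]     += eta * (delta * (L[l] - L[l_neg]) - 2 * lam * U[u])
    L[l]     += eta * (grad_pos - 2 * lam * L[l])
    L[l_neg] -= eta * (grad_pos + 2 * lam * L[l_neg])
    P[l]     += eta * (delta - 2 * lam * P[l])
    P[l_neg] -= eta * (delta + 2 * lam * P[l_neg])
    for i in context:
        L[i] += eta * (delta * (L[l] - L[l_neg]) - 2 * lam * L[i])
\end{verbatim}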
\vspace{-5mm}
\section{Trip Recommendation}
\label{sec:algorithms}
To showcase the capability of our context-aware POI embedding to capture the latent POI features, we apply it to
the TripRec query as defined in Section~\ref{sec:problem}.
Given a TripRec query $q= \langle u_q, l_s, l_e, t_q\rangle$, the aim is to return a trip $tr = \langle l_1, l_2, \ldots, l_{|tr|}\rangle$ such that
(i) $tr$ satisfies the query constraints, i.e., starting at $l_s$, ending at $l_e$, and the time cost not exceeding $t_q$ (i.e., $tc(tr) \le t_q$),
and (ii) $tr$ is most preferred by user $u_q$.
There may be multiple \emph{feasible trips} that satisfy the query constraints. Let $\mathcal{T}$ be the set of feasible trips.
The problem then becomes selecting the trip $tr \in \mathcal{T}$ that is most preferred.
The strategy that guides trip selection plays a critical role in recommendation quality.
\textbf{Context-aware trip quality score.}
We propose the \emph{context-aware trip quality} (CTQ) score to guide trip selection.
We thus reduce TripRec to an optimization problem of finding the feasible trip with the highest CTQ score.
The CTQ score of a trip $tr$, denoted as $S(u_q,tr)$, is a joint score of two factors: the closeness between $tr$ and query $q$ and the co-occurrence similarity among the POIs in $tr$.
To compute the closeness between $tr$ and $q$, we derive the latent representation $\vec{q}$ of $q$ as an aggregation (e.g., summation) of the vectors $\vec{u_q}$, $\vec{l_s}$, and $\vec{l_e}$.
The closeness between $q$ and a POI $l$, denoted as $clo(q,l)$, is computed as the probability of observing $l$ given $q$:
\vspace{-2mm}
\begin{equation}
clo(q,l) = \frac{e^{\vec{l}\cdot\vec{q}}}{\sum_{l'\in\mathcal{L}}e^{\vec{l'}\cdot\vec{q}}}
\vspace{-1mm}
\end{equation}
The closeness between $q$ and $tr$, denoted as $clo(q,tr)$, is the sum of $clo(q,l)$ for every $l \in tr$:
\vspace{-2mm}
\begin{equation}
clo(q,tr) = \sum_{i=2}^{|tr|-1} clo(q, l_i)
\vspace{-1mm}
\end{equation}
The co-occurrence similarity among the POIs in $tr$ is computed as the sum of the pairwise \emph{normalized occurrence similarity} between any two POIs $l_i$ and $l_j$ in $tr$,
denoted as $ncsim(l_i, l_j)$:
\vspace{-1mm}
\begin{equation}
ncsim(l_i, l_j)=e^{\vec{l_i}\cdot\vec{l_j}}/\sum_{l}\sum_{l':l'\neq l} e^{\vec{l}\cdot\vec{l'}}
\vspace{-1mm}
\end{equation}
Overall, the CTQ score $S(u_q, tr)$ is computed as:
\vspace{-1mm}
\begin{equation}
S(u_q,tr)=\sum_{i=2}^{|tr|-1} \frac{e^{\vec{q}\cdot\vec{l_i}}}{\sum_{l} e^{\vec{q}\cdot\vec{l}}}+ \sum_{i=2}^{|tr|-2}\sum_{j=i+1}^{|tr|-1} \frac{e^{\vec{l_i}\cdot\vec{l_j}}}{\sum_{l}\sum_{l':l'\neq l} e^{\vec{l}\cdot\vec{l'}}}
\end{equation}
Here, we have omitted $l_1$ and $l_{|tr|}$. This is because all feasible trips share the same $l_1$ and $l_{|tr|}$ which are the given starting and ending POIs $l_s$ and $l_e$ in the query.
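For illustration, the CTQ score can be evaluated directly from the learned vectors as in the following sketch (function and variable names are ours; the two softmax denominators range over all POIs and all ordered POI pairs, as in the equations above, and can be precomputed once per query).
\begin{verbatim}
import numpy as np

def ctq_score(q_vec, trip, L):
    """CTQ score S(u_q, tr). `trip` lists POI ids whose first and last entries
    are the fixed starting and ending POIs; L holds the POI vectors."""
    denom_clo  = np.exp(L @ q_vec).sum()                 # closeness normaliser
    sim        = np.exp(L @ L.T)
    denom_csim = sim.sum() - np.trace(sim)               # over all pairs l != l'
    inner = trip[1:-1]                                   # l_1 and l_|tr| are omitted
    clo  = sum(np.exp(L[i] @ q_vec) / denom_clo for i in inner)
    csim = sum(np.exp(L[i] @ L[j]) / denom_csim
               for a, i in enumerate(inner) for j in inner[a + 1:])
    return clo + csim
\end{verbatim}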
\textbf{Problem reduction.}
To generate the feasible trips, we construct a directed graph $G=(V,E)$, where each vertex $v_i \in V$ represents POI $l_i \in \mathcal{L}$ and each edge $\overrightarrow{e_{ij}} \in E$ represents the
transit from $v_i$ to $v_j$. We assign \emph{profits} to the vertices and edges. The profit of vertex $v_i$, denoted as $f(v_i)$, is computed as $f(v_i)=clo(q,l_i)$.
The profit of an edge $\overrightarrow{e_{ij}}$, denoted as $f(\overrightarrow{e_{ij}})$, is computed as $f(\overrightarrow{e_{ij}} )=ncsim(l_i,l_j)$.
For ease of discussion, we use $v_1$ and $v_{|V|}$ to represent the query starting and ending POIs $l_s$ and $l_e$, respectively.
We set the profits of $v_1$ and $v_{|V|}$ as zero, since they are included in every feasible trip.
We further add \emph{costs} to the edges to represent the trip cost. The cost of edge $\overrightarrow{e_{ij}} $, denoted as $tc(\overrightarrow{e_{ij}})$, is the sum of the transit time cost between $l_i$ and $l_j$
and the visiting time cost of $l_j$, i.e., $tc(\overrightarrow{e_{ij}} )=tc_v(l_j)+tc_t(l_i,l_j)$.
Based on the formulation above, recommending a trip for query $q$ can be seen as a variant of the \emph{orienteering problem}~\cite{golden1987orienteering} which finds a path that collects the most profits in $G$ while costs no more than a given budget $t_q$. We thus reduce the TripRec problem to the following constrained optimization problem:
\begin{equation}\label{eq:problem}
\begin{array}{l}
\displaystyle \text{max}\ \sum_{i=1}^{|V|}\sum_{j=1}^{|V|} x_{ij}\cdot f(v_j)+\sum_{i=2}^{|V|-1}\sum_{j=i+1}^{|V|-1}x_i\cdot x_j\cdot f(\overrightarrow{e_{ij}}) \\
\text{s.t.}\ \ \displaystyle \text{(a) } \sum_{i=1}^{|V|} x_{1i} = x_1=1, \quad \text{(b) } \sum_{i=1}^{|V|} x_{i|V|} = x_{|V|}=1\\
\quad \quad \displaystyle \text{(c) } \sum_{j=1}^{|V|} x_{ij} = \sum_{k = 1}^{|V|} x_{ki} = x_i \leqslant 1,\ \forall i\in [2,|V|-1]\\
\quad \quad \displaystyle \text{(d) } tc_v(v_1) + \sum_{i=1}^{|V|-1}\sum_{j=2}^{|V|} x_{ij}\cdot tc(\overrightarrow{e_{ij}}) \leqslant t_q\\
\quad \quad \displaystyle \text{(e) } 2\leqslant p_i\leqslant |V|,\ \forall i\in [2,|V|]\\
\quad \quad \displaystyle \text{(f) } p_i - p_j +1\leqslant (|V|-1)(1-x_{ij}),\ \forall i,j\in [2,|V|]
\end{array}
\end{equation}
Here, $x_{ij}$ and $x_i$ are boolean indicators: $x_{ij}=1$ if edge $\overrightarrow{e_{ij}}$ is selected, and $x_i= 1$ if vertex $v_i$ is selected.
Conditions (a) and (b) restrict the trip to start from $v_1$ and end at $v_{|V|}$. Condition (c) ensures that every selected POI is visited exactly once.
Condition~(d) denotes the time budget constraint. Conditions~(e) and~(f) are adapted from~\cite{miller1960integer}, where $p_i$ denotes the position of $v_i$ in the trip. They ensure no cycles in the trip.
\subsection{The C-ILP Algorithm}\label{sec:cilp}
\vspace{-1mm}
A common approach for the orienteering problems is the \emph{integer linear programming} (ILP) algorithm~\cite{chen2016learning,lim2015personalized}.
However, ILP does not apply directly to our problem. This is because the second term in our objective function in Equation~\ref{eq:problem}, i.e.,
$\sum_{i=2}^{|V|-1}\sum_{j=i+1}^{|V|-1}x_i\cdot x_j\cdot f(\overrightarrow{e_{ij}})$, is nonlinear.
In what follows, we transform Equation~\ref{eq:problem} to a linear form such that the ILP algorithm~\cite{chen2016learning,lim2015personalized}
can be applied to solve our problem. Such an algorithm finds
the exact optimal trip for TripRec. We denote it as the \emph{C-ILP} algorithm for ease of discussion.
Our transformation replaces the vertex indicators $x_i$ and $x_j$ in Equation~\ref{eq:problem}
with a new indicator $x'_{ij}$, where $x'_{ij} = 1$ if both $v_i$ and $v_j$ are selected (not necessarily adjacent).
We further impose $i<j$ in $x'_{ij}$ to reduce the total number of such indicators by half. This does not affect the correctness of the optimization
since $x'_{ij} = x'_{ji}$.
Then, Equation~\ref{eq:problem} is rewritten as follows.
\vspace{-1.5mm}
\begin{equation}\label{eq:problem2}
\begin{array}{l}
\text{max}\ \sum_{i=1}^{|V|}\sum_{j=1}^{|V|} x_{ij}\cdot f(v_j) + \sum_{i=2}^{|V|-2}\sum_{j=i+1}^{|V|-1}x'_{ij}\cdot f(\overrightarrow{e_{ij}})\\
\text{s.t.} \ \ \displaystyle \text{(a) } \sum_{i=1}^{|V|} x_{1i} = 1, \quad \text{(b) } \sum_{i=1}^{|V|} x_{i|V|} = 1\\
\quad \quad \displaystyle \text{(c) } \sum_{j=1}^{|V|} x_{ij} = \sum_{k = 1}^{|V|} x_{ki} \leqslant 1,\ \forall i\in [2,|V|-1]\\
\quad \quad \displaystyle \text{(d) } x'_{ij} = \sum_{k=1}^{|V|}\sum_{m=1}^{|V|} x_{ik}\cdot x_{jm},\ \forall i,j\in [1, |V|-1],i<j\\
\quad \quad \displaystyle \text{(e) } x'_{i|V|} = \sum_{k=1}^{|V|} x_{ik}, \forall i\in [1,|V|-1]\\
\quad \quad \displaystyle \text{(f) } tc_v(v_1) + \sum_{i=1}^{|V|}\sum_{j=1}^{|V|} x_{ij}\cdot tc(\overrightarrow{e_{ij}}) \leqslant t_q\\
\quad \quad \displaystyle \text{(g) }2\leqslant p_i\leqslant |V|,\ \forall i\in [2,|V|]\\
\quad \quad \displaystyle \text{(h) } p_i - p_j +1\leqslant (|V|-1)(1-x_{ij}), \ \forall i,j\in [2,|V|]
\end{array}
\vspace{-1.5mm}
\end{equation}
Here, Conditions (a) to (c) and (f) to (h) are the same as those in Equation~\ref{eq:problem}. Conditions (d) and (e) define the relationships between $x_{ij}$ and $x'_{ij}$. The main idea is that if a trip includes a vertex $v_i$, it must contain an edge starting from $v_i$, or an edge ending at $v_i$ if $v_i=v_{|V|}$.
Thus, for any two vertices $v_i$ and $v_j$ that are not $v_{|V|}$, their indicator $x'_{ij}$ equals 1 if the solution trip contains two edges: one starting from $v_i$ and another starting from $v_j$. For any vertex $v_i$ and the vertex $v_{|V|}$, their indicator $x'_{i|V|}$ equals 1 if the solution trip contains an edge starting from $v_i$.
Using $x'_{ij}$, we transform our objective function into a linear form.
Condition (d) is still non-linear (note $x_{ik}\cdot x_{jm}$). We replace it with three linear constraints:
\vspace{-2mm}
\begin{equation}
\label{eq:transformation}
\begin{array}{l}
\displaystyle x'_{ij}\leqslant \sum_{k=1}^{|V|}x_{ik},\forall i,j\in [1,|V|-1], i<j\\
\displaystyle x'_{ij}\leqslant \sum_{k=1}^{|V|}x_{jk},\forall i,j\in [1,|V|-1], i<j\\
\displaystyle x'_{ij}\geqslant \sum_{k=1}^{|V|}x_{ik} + \sum_{m=1}^{|V|} x_{jm} -1, \forall i,j\in [1,|V|-1], i<j
\end{array}
\end{equation}
To show the correctness of the above transformation,
we consider two cases: (i) At least one vertex (e.g., $v_i$) is not included in the optimal trip; and (ii) Both $v_i$ and $v_j$ are included in the optimal trip. Condition (d) in Equation~\ref{eq:problem2} ensures that $x'_{ij} = 0$ in Case (i) and $x'_{ij}=1$ in Case (ii). We next show that this is also guaranteed by Equation~\ref{eq:transformation}. For Case (i), we have $\sum_{k=1}^{|V|}x_{ik}=0$, which leads to $x'_{ij} \leqslant 0$ according to the first constraint in Equation~\ref{eq:transformation}. Since $x'_{ij}\in [0,1]$, we have $x'_{ij}=0$. For Case (ii), we have $\sum_{k=1}^{|V|}x_{ik}=1$ and $\sum_{k=1}^{|V|}x_{jk}=1$. According to the third constraint in Equation~\ref{eq:transformation}, we have $x'_{ij}\geqslant 1$. Since $x'_{ij}\in [0,1]$, we have $x'_{ij}=1$. Combining the two cases, we show that the above transformation retains the constraints
of Condition (d).
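As an illustration, the linearized program can be handed to an off-the-shelf solver. The sketch below uses the open-source PuLP modeller; it is our own illustrative code (the input containers \texttt{f\_v}, \texttt{f\_e}, \texttt{tc} and the function name are assumptions) and shows only the objective, the time-budget constraint, and the three constraints of Equation~\ref{eq:transformation}; the remaining routing and subtour-elimination constraints are added analogously.
\begin{verbatim}
import pulp

def build_cilp(n, f_v, f_e, tc, t_q):
    """Sets up the linearized C-ILP model for n POIs (0 = start, n-1 = end).
    f_v[j]: vertex profit, f_e[i][j]: edge profit, tc[i][j]: edge time cost."""
    prob = pulp.LpProblem("C_ILP", pulp.LpMaximize)
    idx  = [(i, j) for i in range(n) for j in range(n) if i != j]
    pair = [(i, j) for i in range(n - 1) for j in range(i + 1, n - 1)]
    x  = pulp.LpVariable.dicts("x",  idx,  cat="Binary")   # edge (i,j) selected
    xp = pulp.LpVariable.dicts("xp", pair, cat="Binary")   # both v_i and v_j selected
    # linearized objective: vertex profits plus pairwise co-occurrence profits
    prob += (pulp.lpSum(x[i, j] * f_v[j] for (i, j) in idx) +
             pulp.lpSum(xp[i, j] * f_e[i][j] for (i, j) in pair))
    # linearization of Condition (d)
    for (i, j) in pair:
        out_i = pulp.lpSum(x[i, k] for k in range(n) if k != i)
        out_j = pulp.lpSum(x[j, k] for k in range(n) if k != j)
        prob += xp[i, j] <= out_i
        prob += xp[i, j] <= out_j
        prob += xp[i, j] >= out_i + out_j - 1
    # time budget, Condition (f)
    prob += pulp.lpSum(x[i, j] * tc[i][j] for (i, j) in idx) <= t_q
    # start/end, degree and subtour-elimination constraints (a)-(c), (e), (g), (h)
    # are added in the same fashion and are omitted here for brevity.
    return prob
\end{verbatim}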
\textbf{Algorithm complexity.}
There are $2\cdot |E|$ boolean variables in C-ILP, where $|E|$ represents the number of edges in $G$. To compute the solution, the \emph{lpsolve} algorithm~\cite{berkelaar2004lpsolve} first finds a trip without considering the integer constraints, which can be done in $O(|E|)$ time. Then it refines the trip to find the optimal integral solution. Given a non-integer variable in the current trip, the algorithm splits the solution space into two: one restricting the variable to have at least the ceiling of its current value and the other restricting the variable to have at most the floor of its current value. Then, the algorithm optimizes the two solution spaces and checks if there still exist non-integer variables in the new trip. The algorithm repeats the above procedure until an integral solution is found. The algorithm uses branch-and-bound to guide the search process. It may need to explore all possible combinations in the worst case, which leads to a worst-case time complexity of $O(2^{|E|})$.
\subsection{The C-ALNS Algorithm}\label{sec:calns}
The C-ILP algorithm finds the trip with the highest CTQ score. However, the underlying
integer linear program algorithm may incur a non-trivial running time as shown by the complexity analysis above.
To avoid the high running time of C-ILP,
we propose a heuristic algorithm named C-ALNS that is based on \emph{adaptive large neighborhood search} (ALNS)~\cite{pisinger2007general}.
ALNS is a meta-algorithm to generate heuristic solutions. It starts with an initial solution (a trip in our problem) and then improves the solution iteratively
by applying a destroy and a build operator in each iteration.
The \emph{destroy operator} randomly removes a subset of the elements (POIs) from the current solution.
The \emph{build operator} inserts new elements into the solution to form a new solution. Different destroy/build operators use different heuristic strategies to select the elements to remove/insert.
Executing a pair of destroy and build operators can be viewed as a move to explore a neighborhood of the current solution.
The aim of the exploration is to find a solution with a higher objective function value. The algorithm terminates after a pre-defined maximum number of
iterations $itr_m$ is reached.
As summarized in Algorithm~\ref{alg:ALNS},
our C-ALNS algorithm adapts the ALNS framework as follows:
(i) C-ALNS consists of multiple ($run_m$) ALNS runs (Lines~\ref{algl:ALNS run start} to~\ref{algl:ALNS run end}). The best trip of all runs and its CTQ score are stored as $tr_{opt}$ and $S(u_q, tr_{opt})$.
The best trip within a single run is stored as $tr_{r\_opt}$. The algorithm initializes a solution pool $\mathcal{P}$ (detailed in Section~\ref{sec:solutionIni}) before running ALNS,
where a trip from the solution pool is randomly selected to serve as the initial solution of ALNS.
(ii) C-ALNS uses multiple pairs of destroy operators $\mathcal{D}$ and build operators $\mathcal{B}$ to enable random selection of the operators used in ALNS (detailed in Section~\ref{sec:operators}).
(iii) C-ALNS uses a local search procedure after a new solution is built to explore different visiting orders over the same set of POIs (detailed in Section~\ref{sec:localSearch}).
(iv)~C-ALNS uses a \emph{Simulated Annealing} (SA) strategy to avoid getting trapped in local optima (detailed in Section~\ref{sec:SA}).
\vspace{-2mm}
\begin{algorithm}
\caption{C-ALNS\label{alg:ALNS}}
\SetKwInOut{Input}{input}
\SetKwInOut{Output}{output}
\SetKwFunction{KwAppend}{append}
\LinesNumbered
\Input{POI Graph $G$, Query $q=\langle u_q, l_s, l_e, t_q\rangle$}
\Output{Optimal trip $tr_{opt}$}
$tr_{opt}\leftarrow \emptyset, S(u_q, tr_{opt})\leftarrow -\infty, run \leftarrow 0$\;
initialize the solution pool $\mathcal{P}$\;
\While{$ run \le run_m$\label{algl:ALNS run start}}{
$tr\leftarrow \mathsf{RandomSelect}(\mathcal{P})$\label{algl:solutionIni}\;
$tr_{r\_opt}\leftarrow tr$\;
$temp\leftarrow \tau$\label{algl:tempIni}\;
initialize the weights of $\mathcal{D}$ and $\mathcal{B}$\;
$itr \leftarrow 0$\;
\While{$itr \le itr_m$}{
$\{d,b\}\leftarrow \mathsf{RandSelect}(\mathcal{D},\mathcal{B})$\;
$tr'\leftarrow\mathsf{Apply}(tr, d)$\;
$tr'\leftarrow\mathsf{Apply}(tr', b)$\;
$\mathsf{LocalSearch}(tr')$\label{algl:localsearch}\;
\If{$S(u_q, tr')>S(u_q, tr)\ \text{or} \ x^{U(0,1)}<exp(\frac{S(u_q, tr')-S(u_q, tr)}{temp})$\label{algl:solution_accept}} {
$tr\leftarrow tr'$\;
\If{$S(u_q, tr_{r\_opt})<S(u_q, tr)$}{
$tr_{r\_opt}\leftarrow tr$\;
}
}
$temp\leftarrow temp\times \theta$\label{algl:cooling}\;
update the weights of $\mathcal{D}$ and $\mathcal{B}$\;
}
\If{$S(u_q, tr_{opt})<S(u_q, tr_{r\_opt})$}{
$tr_{opt}\leftarrow tr_{r\_opt}$\;
}
update $\mathcal{P}$\;
}\label{algl:ALNS run end}
\Return{$tr_{opt}$}
\end{algorithm}
\vspace{-2mm}
\subsubsection{The Solution Pool\label{sec:solutionIni}}
We maintain a subset of feasible trips in the solution pool $\mathcal{P}$, where each trip $tr_i$ is stored with its CTQ score as a tuple: $\langle tr_i, S(u_q, tr_i)\rangle$.
At the beginning of each ALNS run, we select a trip from the solution pool $\mathcal{P}$ and use it as the initial trip for the run. The probability of selecting a trip $tr_i$ is computed as $p(tr_i)=S(u_q, tr_i)/\sum_{tr \in \mathcal{P}} S(u_q, tr)$.
At the end of each run, we insert the tuple $\langle tr_{r\_opt}, S(u_q, tr_{r\_opt})\rangle$ into $\mathcal{P}$, where $tr_{r\_opt}$ is the best trip accepted in this run.
We keep $N$ trips with the highest CTQ scores in $\mathcal{P}$, where $N$ is a system parameter.
We initialize the solution pool with three initial trips generated by a low-cost heuristic based algorithm.
This algorithm first creates a trip with two vertices $v_1$ and $v_{|V|}$ corresponding to the starting and ending POIs $l_s$ and $l_e$ of the query.
Then, it iteratively inserts a new vertex into the trip until the time budget is reached.
To choose the next vertex to be added, we use the following three different strategies, yielding the three initial trips (a sketch of this construction is given right after the list):
\begin{itemize}
\item
Choose the vertex $v$ that adds the highest profit to maximize $f^+_{\Delta}(v) = S(u_q, tr')-S(u_q,tr)$, where $tr$ is the current trip and $tr'$ is the trip after adding~$v$.
\item
Choose the vertex $v$ that adds the least time cost, i.e., that minimizes $t^+_{\Delta}(v) = tc(tr')-tc(tr)$.
\item
Choose the most cost-effective vertex $v$ that maximizes $f^+_{\Delta}(v)/t^+_{\Delta}(v)$.
\end{itemize}
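The following sketch of this insertion procedure is illustrative only; \texttt{score} and \texttt{cost} stand for the CTQ score and the trip time cost, and the function name is ours.
\begin{verbatim}
def greedy_trip(vertices, v_start, v_end, t_q, score, cost, key="ratio"):
    """Builds one initial trip by repeated insertion.
    score(tr) = S(u_q, tr); cost(tr) = time cost of tr;
    key in {"profit", "cost", "ratio"} picks one of the three strategies."""
    tr = [v_start, v_end]
    remaining = set(vertices) - {v_start, v_end}
    while remaining:
        best = None                                   # (value, vertex, candidate trip)
        for v in remaining:
            for pos in range(1, len(tr)):             # try every insertion position
                cand = tr[:pos] + [v] + tr[pos:]
                if cost(cand) > t_q:                  # respect the time budget
                    continue
                dp = score(cand) - score(tr)          # profit increment f+(v)
                dt = max(cost(cand) - cost(tr), 1e-9) # time increment  t+(v)
                val = {"profit": dp, "cost": -dt, "ratio": dp / dt}[key]
                if best is None or val > best[0]:
                    best = (val, v, cand)
        if best is None:                              # no feasible insertion left
            break
        _, v, tr = best
        remaining.remove(v)
    return tr
\end{verbatim}
Running it once with each of the three keys yields the three initial trips stored in $\mathcal{P}$.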
\subsubsection{The Destroy and Build Operators\label{sec:operators}}
\textbf{The destroy operator.} Given a trip $tr$ and a removal fraction parameter $\rho\in [0,1]$, a destroy operator removes $\lceil \rho\cdot (|tr|-2)\rceil$ vertices from $tr$.
We use four destroy operators with different removal strategies:
\emph{Random.} This operator randomly selects $\lceil \rho\cdot (|tr|-2)\rceil$ vertices to be removed.
\emph{Least profit reduction.} This operator selects $\lceil \rho\cdot (|tr|-2)\rceil$ vertices with the least profit reduction: $f^-_{\Delta}(v)=S(u_q, tr)-S(u_q,tr')$, where $tr'$ represents the trip
after $v$ is removed from $tr$. We add randomness to this operator. Given the list of vertices in $tr$ sorted in ascending order of their profits, we compute the next vertex to be removed as $(x^{U(0,1)})^{\psi\cdot\rho}(|tr|-2)$. Here, $x$ is a random value generated from the Uniform distribution $U(0,1)$ and the parameter $\psi$ is a system parameter that represents the extent of randomness imposed on this operator. A larger value of $\psi$ leads to less randomness.
\emph{Most cost reduction.} This operator selects $\lceil \rho\cdot (|tr|-2)\rceil$ vertices with the largest cost reduction: $t^-_{\Delta}(v)$. We also randomize this operator in the same way as
the least profit reduction operator.
\emph{Shaw removal.} This operator implements the Shaw removal~\cite{ropke2006adaptive}. It randomly selects a vertex $v$ in $tr$ and removes $\lceil \rho\cdot (|tr|-2)\rceil$ vertices with the smallest distances to $v$. We also randomize this operator as we do above.
\textbf{The build operator.} The build operator adds vertices to $tr$ until the time budget is reached. We use four build operators as follows.
\emph{Most profit increment.} This operator iteratively inserts an unvisited vertex that adds the most profit.
\emph{Least cost increment.} This operator iteratively inserts an unvisited vertex that adds the least time cost.
\emph{Most POI similarity.} This operator randomly selects a vertex $v_i$ in $tr$. Then, it sorts the unvisited vertices by their distances to $v_i$
in our POI embedding space. The unvisited vertices nearest to $v_i$ are added to $tr$.
\emph{Highest potential.} This operator iteratively inserts an unvisited vertex $v_i$ that, together with another unvisited vertex $v_j$, adds the most profit while the two vertices
do not exceed the time budget.
\textbf{Operator choosing.}
We use a roulette-wheel scheme to select the operators to be applied. Specifically, we associate a weight $w$ with each destroy or build operator, which represents how well the operator increased the CTQ score in previous iterations. The probability of selecting an operator $o_i$ equals its normalized weight (e.g., $o_i.w/\sum_{o\in\mathcal{D}} o.w$ if $o_i$ is a destroy operator).
At the beginning of each ALNS run, we initialize the weight of each operator to be 1. After each iteration in a run, we score the applied operators based on their performances.
We consider four scenarios: (i) a new global best trip $tr_{opt}$ is found; (ii) a new local best trip within the run is found; (iii) a local best trip within the run is found but it is not new; and (iv)
the new trip is worse than the previous trip but is accepted by the Simulated Annealing scheme.
We assign different scores for different scenarios. The operator scoring scheme is represented as a vector $\vec{\pi} = \langle \pi_1, \pi_2, \pi_3, \pi_4, \pi_5 \rangle $, where each element corresponds to a scenario, e.g., $\pi_1$ represents the score for Scenario (i), and $\pi_5$ corresponds to any scenario not listed above. We require $\pi_1>\pi_2>\pi_3>\pi_4>\pi_5$.
Given an operator $o_i$, its current weight $o_i.w$ and its score $o_i.\pi$, we update the weight of $o_i$ as $o_i.w\leftarrow\kappa\cdot o_i.w + (1-\kappa)\cdot o_i.\pi$.
Here, $\kappa$ is a system parameter controlling the weight of the scoring action.
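A minimal sketch of the roulette-wheel selection and the weight update (assuming, as described above, that the new weight blends the old weight with the score of the latest application; the function names are ours):
\begin{verbatim}
import random

def select_operator(ops):
    """ops: list of (operator, weight) pairs -> roulette-wheel selection."""
    total = sum(w for _, w in ops)
    r = random.uniform(0, total)
    acc = 0.0
    for op, w in ops:
        acc += w
        if r <= acc:
            return op
    return ops[-1][0]

def update_weight(w, score, kappa=0.9):
    """Blend the old weight with the score of the last application."""
    return kappa * w + (1 - kappa) * score
\end{verbatim}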
\subsubsection{Local Search\label{sec:localSearch}}
The local search function $\mathsf{LocalSearch}$ (Algorithm~\ref{alg:ALNS}, Line~\ref{algl:localsearch})
takes a trip $tr$ as its input and explores trips that consist of the same set of vertices of $tr$
but have different visiting orders. We adapt the \emph{2-opt edge exchange} technique for efficient local exploration. Specifically, the 2-opt edge exchange procedure iteratively performs the following procedure:
(i) remove two edges from $tr$; (ii) among the three sub-trips produced by Step (i), reverse the visiting order of the second sub-trip; (iii) reconnect the three sub-trips.
For example, let $tr = \langle v_1, v_2, v_3, v_4, v_5, v_6 \rangle$. Assume that we remove edges $\overrightarrow{e_{1,2}}$ and $\overrightarrow{e_{4,5}}$, which results in three sub-trips $\langle v_1 \rangle$,
$\langle v_2, v_3, v_4 \rangle$, and $\langle v_5, v_6\rangle$. We swap the visiting order of the second sub-trip and reconnect it with the other two sub-trips, producing a new trip
$\langle v_1, v_4, v_3, v_2, v_5, v_6\rangle$. If the new trip has a lower time cost, we accept the change and proceed to the next pair of unchecked edges.
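The 2-opt exchange can be sketched as follows (illustrative code; \texttt{cost} returns the time cost of a trip, and the start and end vertices are never moved):
\begin{verbatim}
def two_opt(tr, cost):
    """Repeatedly applies 2-opt edge exchanges while they reduce the time cost.
    tr is a list of vertices; cost(tr) returns its total time cost."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tr) - 2):
            for j in range(i + 1, len(tr) - 1):
                cand = tr[:i] + tr[i:j + 1][::-1] + tr[j + 1:]  # reverse middle sub-trip
                if cost(cand) < cost(tr):
                    tr = cand
                    improved = True
    return tr
\end{verbatim}
With $i=1$ and $j=3$ this reproduces the example above, turning $\langle v_1, v_2, v_3, v_4, v_5, v_6\rangle$ into $\langle v_1, v_4, v_3, v_2, v_5, v_6\rangle$.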
\renewcommand{\arraystretch}{1}
\begin{table*}
\vspace{-1mm}
\centering
\scriptsize
\caption{Performance Comparison in Recall, Precision, and F$_1$-score\label{tab:result1}}
\vspace{-3mm}
\begin{tabular}{|l|*3c|*3c|*3c|*3c|}
\hline
City & \multicolumn{3}{|c|}{Edin.} & \multicolumn{3}{|c|}{Glas.} & \multicolumn{3}{|c|}{Osak.} & \multicolumn{3}{|c|}{Toro.} \\ \hhline{|-|---|---|---|---|}
Algorithm & Rec. & Pre. & F$_1$ & Rec. & Pre. & F$_1$ & Rec. & Pre. & F$_1$ & Rec. & Pre. & F$_1$ \\ \hhline{|-|---|---|---|---|}
Random & 0.052 & 0.079 & 0.060 & 0.071 & 0.092 & 0.078 & 0.057 & 0.074 & 0.063 & 0.045 & 0.060 & 0.050 \\
Pop & 0.195 & 0.238 & 0.209 & 0.104 & 0.128 & 0.112 & 0.110 & 0.138 & 0.121 & 0.114 & 0.148 & 0.125 \\
MF & 0.242 & 0.229 & 0.233 & 0.310 & 0.308 & 0.307 & 0.195 & 0.173 & 0.181& 0.408 & 0.410 & 0.407 \\
PersTour & 0.455 & 0.418 & 0.430 & 0.589 & 0.571 & 0.577 & 0.406 & 0.384 & 0.392 & 0.431 & 0.422 & 0.425 \\
POIRank & 0.326 & 0.326 & 0.326 & 0.408 & 0.408 & 0.408 & 0.367 & 0.367 & 0.367 & 0.389 & 0.389 & 0.389\\
M-POIRank & 0.318 & 0.318 & 0.318 & 0.387 & 0.387 & 0.387 & 0.328 & 0.328 & 0.328 & 0.379 & 0.379 & 0.379 \\
C-ILP (proposed) & \textbf{0.555} & \textbf{0.527} & \textbf{0.538} & \textbf{0.659} & \textbf{0.646} & \textbf{0.651} & \textbf{0.497} & \textbf{0.492} & \textbf{0.494} & \textbf{0.618} & \textbf{0.601} & \textbf{0.608}\\
C-ALNS (proposed) & \textit{0.554} &\textit{0.527} & \textit{0.537} & \textit{0.657} & \textit{0.645} & \textit{0.649} & \textit{0.496} & \textit{0.491} & \textit{0.493} & \textit{0.616} & \textit{0.598} & \textit{0.607}
\\ \hline
\end{tabular}
\vspace{-2mm}
\end{table*}
\renewcommand{\arraystretch}{1}
\begin{table*}
\centering
\scriptsize
\caption{Performance Comparison in Recall$^*$, Precision$^*$, and F$_1^*$-score\label{tab:result2}}
\vspace{-3mm}
\begin{tabular}{|l|*3c|*3c|*3c|*3c|}
\hline
City & \multicolumn{3}{|c|}{Edin.} & \multicolumn{3}{|c|}{Glas.} & \multicolumn{3}{|c|}{Osak.} & \multicolumn{3}{|c|}{Toro.} \\ \hhline{|-|---|---|---|---|}
Algorithm & Rec$^*$. & Pre$^*$. & F$_1^*$ & Rec$^*$. & Pre$^*$. & F$_1^*$ & Rec$^*$. & Pre$^*$. & F$_1^*$ & Rec$^*$. & Pre$^*$. & F$_1^*$ \\ \hhline{|-|---|---|---|---|}
PersTour & 0.740 & 0.633 & 0.671 & 0.826 & 0.782 & 0.798 & 0.759 & 0.662 & 0.699 & 0.779 & 0.706 & 0.732 \\
POIRank & 0.700 & 0.700 & 0.700 & 0.768 & 0.768 & 0.768 & 0.745 & 0.745 & 0.745 & 0.754 & 0.754 & 0.754 \\
M-POIRank & 0.697 & 0.697 & 0.697 & 0.762 & 0.762 & 0.762 & 0.732 & 0.732 & 0.732 & 0.751 & 0.751 & 0.751 \\
C-ILP (proposed) & \textbf{0.792} & \textbf{0.754} & \textbf{0.769} & \textbf{0.864} & \textbf{0.844} & \textbf{0.853} & \textbf{0.793} & \textbf{0.740} & \textbf{0.763} & \textbf{0.842} & \textbf{0.800} & \textbf{0.818} \\
C-ALNS (proposed) & \textit{0.792}& \textit{0.752} & \textit{0.768} & \textit{0.862} & \textit{0.843} & \textit{0.852} & \textit{0.792} & \textit{0.739} & \textit{0.762} & \textit{0.841} & \textit{0.798} & \textit{0.815}
\\ \hline
\end{tabular}
\vspace{-5mm}
\end{table*}
\subsubsection{Simulated Annealing\label{sec:SA}}
We adapt the \emph{simulated annealing} (SA) technique to avoid local optima.
Specifically, at the beginning of each ALNS run, we initialize a temperature $temp$ to a pre-defined value $\tau$.
After every iteration, a new trip $tr'$ is generated from a previous trip $tr$. If $S(u_q, tr') < S(u_q, tr)$, we do not discard $tr'$ immediately.
Instead, we further test whether $x < \exp\big(\frac{S(u_q, tr') - S(u_q, tr)}{temp}\big)$, where $x$ is a random value generated from the Uniform distribution $U(0,1)$ (Algorithm~\ref{alg:ALNS}, Line~\ref{algl:solution_accept}).
If yes, we still replace $tr$ with $tr'$. We gradually reduce the possibility of keeping a worse new trip by decreasing the value of $temp$ after each iteration by a pre-defined cooling factor $\theta$ (Algorithm~\ref{alg:ALNS}, Line~\ref{algl:cooling}).
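In code, the acceptance test amounts to the following sketch (names are ours):
\begin{verbatim}
import math, random

def accept(S_new, S_old, temp):
    """SA acceptance of a new trip with CTQ score S_new against the current S_old."""
    if S_new > S_old:
        return True
    x = random.random()                          # x ~ U(0,1)
    return x < math.exp((S_new - S_old) / temp)

# after each iteration: temp *= theta   (starting from temp = tau)
\end{verbatim}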
\textbf{Algorithm complexity.}
C-ALNS has $run_m$ ALNS runs, where each run applies $itr_m$ pairs of destroy-build operators. To apply a destroy operator, the algorithm needs to perform $|tr_{avg}|$ comparisons to choose the vertices to remove. To apply a build operator, the algorithm needs to perform $|V|$ comparisons to choose the vertices to add. Here, $|tr_{avg}|$ represents the average length of feasible trips and $|V|$ represents the number of vertices in $G$. Thus, the time complexity of C-ALNS is $O(run_m\cdot itr_m\cdot (|tr_{avg}| + |V|))$.
\section{Experiments}
\label{sec:experiments}
We evaluate the effectiveness and efficiency of the proposed algorithms empirically in this section.
We implement the algorithms in Java. We run the experiments on a 64-bit Windows machine with 24 GB memory and a 3.4 GHz Intel Core i7-4770 CPU.
\subsection{Settings}
We use four real-world POI check-in datasets from Flickr (cf. Section~\ref{sec:empirical}).
We perform leave-one-out cross-validation on the datasets.
In particular, we use a trip of a user $u$ with at least three POIs as a testing trip $tr^*$.
We use $u$ as the query user, the starting and ending POIs of $tr^*$ as the query starting and ending POIs, and the time cost of $tr^*$ as the query time budget.
We use all the other trips in the dataset for training to obtain the context-aware embeddings for the POIs and $u$.
Let $tr$ be a trip recommended by an algorithm.
We evaluate the algorithms with three metrics:
(i) \emph{Recall} -- the percentage of the POIs in $tr^*$
that are also in $tr$, (ii) \emph{Precision} -- the
percentage of the POIs in $tr$ that are also in $tr^*$,
(iii) \emph{F$_1$-score} -- the harmonic mean of Precision and Recall.
We exclude the starting and ending POIs when computing these three metrics.
To keep consistency with two baseline algorithms~\cite{chen2016learning,lim2015personalized},
we further report three metrics denoted as \emph{Recall$^*$}, \emph{Precision$^*$}, and \emph{F$_1^*$-score}.
These metrics are counterparts of Recall, Precision, and F$_1$-score, but they include the starting and ending POIs in the computation.
We test both our algorithms \textbf{C-ILP} (Section~\ref{sec:cilp}) and \textbf{C-ALNS} (Section~\ref{sec:calns}).
They use the same context-aware POI embeddings as described in Section~\ref{sec:model}.
We learn a 13-dimensional embedding with a learning rate $\eta$ of 0.0005 and a regularization term parameter $\lambda$ of 0.02.
For C-ALNS, we set the removal fraction $\rho$ as 0.2, the operator scoring vector $\vec{\pi}$ as $\langle 10,5,3,1,0\rangle$, the SA initial temperature as 0.3, and the cooling factor as 0.9995.
\textbf{Baseline algorithms.}
We compare with the following five baseline algorithms:
\textbf{Random.} This algorithm repeatedly adds a randomly chosen unvisited POI to the recommended trip until reaching the query time budget.
\textbf{Pop.} This algorithm repeatedly adds the most popular unvisited POI to the recommended trip until reaching the query time budget.
The popularity of a POI is computed as the normalized POI visit frequency.
\textbf{MF.} This algorithm repeatedly adds the unvisited POI with the highest \emph{user interest score}
to the recommended trip until reaching the query time budget.
The user interest score of a POI is computed using \emph{Bayesian Probabilistic Matrix Factorization}~\cite{salakhutdinov2008bayesian} over the matrix of users and POI visits.
\textbf{PersTour~\cite{lim2015personalized}.} This algorithm recommends the trip that meets the time budget
and has the highest sum of \emph{POI scores}.
The POI score of a POI $l$ is the weighted sum of its popularity and user interest score, where the popularity is computed with the same method as in \textbf{Pop.}, and the user interest score is derived from the query user's previous visiting durations at POIs with the same category as $l$. We use a weight of 0.5, which is reported to be optimal~\cite{lim2015personalized}.
\textbf{POIRank~\cite{chen2016learning}.} This algorithm resembles PersTour but differs in how the POI score is computed.
It represents each POI as a feature vector of five dimensions: POI category, neighborhood, popularity, visit counts, and visit duration (cf.~Section~\ref{sec:lit_gen}).
It computes the POI score of each POI using \emph{rankSVM} with linear kernel and L2 loss~\cite{lee2014large}.
We further test its variant \textbf{M-POIRank} where a weighted \emph{transition score} is added to the POI score.
Given a pair of POIs, their transition score is modeled using the Markov model that factorizes the transition probability between the two POIs as
the product of the transition probabilities between the five POI features of the two POIs.
\begin{comment}
We summarize the algorithms tested and the factors considered by the algorithms in Table~\ref{tab:algo_sum}.
\begin{table}[h]
\renewcommand*{\arraystretch}{1.0}
\centering
\small
\caption{Factors Considered in the Algorithms: popularity (pop.), user preference (pref.), Context POIs (con.), and time budget (time)}
\vspace{-3mm}
\begin{threeparttable}
\begin{tabular}{lllll}
\toprule[1pt]
Algorithm & pop. & pref. & con. & time\\ \midrule[1pt]
Random & \ding{55} & \ding{55} & \ding{55} & \checkmark\\ \midrule
Pop & \checkmark & \ding{55} & \ding{55} & \checkmark\\ \midrule
MF & \ding{55} & \checkmark & \ding{55} & \checkmark \\\midrule
PersTour & \checkmark & \checkmark & \ding{55} & \checkmark\\\midrule
POIRank & \checkmark & \ding{55} & \ding{55} & partial\\ \midrule
M-POIRank & \checkmark & \ding{55} & partial & partial\\ \midrule
C-ILP & \checkmark & \checkmark & \checkmark & \checkmark\\\midrule
C-ALNS & \checkmark & \checkmark & \checkmark & \checkmark\\
\bottomrule[1pt]
\end{tabular}
\end{threeparttable}
\vspace{-2mm}
\label{tab:algo_sum}
\end{table}
\end{comment}
\subsection{Results}
\emph{Overall performance.}
We summarize the results in Tables~\ref{tab:result1} and~\ref{tab:result2} (Random, Pop, and MF are uncompetitive and are
omitted in Table~\ref{tab:result2} due to space limit). We highlight the best result in \textbf{bold} and the second best result in \textit{italics}. We see that both C-ILP and C-ALNS consistently outperform the baseline algorithms.
C-ILP outperforms PersTour, the baseline with the best performance, by $25\%, 13\%, 26\%$, and $43\%$ in F$_1$-score on the datasets Edinburgh, Glasgow, Osaka, and Toronto, respectively.
C-ALNS has slightly lower scores than those of C-ILP, but the difference is very small (0.002 on average).
This confirms the capability of our heuristic algorithm C-ALNS to generate high quality trips.
We compare the running times of C-ILP and C-ALNS in Figure~\ref{fig:runtime}. For completeness, we also include the running times of Random, Pop, and PersTour, but omit those of MF and POIRank as they resemble that of PersTour.
C-ALNS outperforms C-ILP and PersTour by orders of magnitude (note the logarithmic scale). The average running times of C-ILP and PersTour are $10^4$~ms and $2.5\times 10^3$~ms, respectively, while that of C-ALNS is only around 300~ms, 600~ms, 60~ms, and 100~ms for the four datasets, respectively. Compared with C-ILP, C-ALNS reaches almost the same F$_1$-score while reducing the running time by up to $99.4\%$. Compared with PersTour, C-ALNS obtains up to $43\%$ improvement in F$_1$-score while reducing the running time by up to $97.6\%$. Random and Pop have the smallest running times but also very low trip quality as shown in Table~\ref{tab:result1}.
\begin{figure}[h]
\vspace{-5.5mm}
\centering
\hspace{-6mm}\subfloat[C-ILP vs. C-ALNS ]{\includegraphics[width = 4.5cm]{runtime.eps}\label{fig:runtime}}\hspace{-8mm}
\subfloat[Impact of factors]{\includegraphics[width = 4.5cm]{self.eps}\label{fig:self}}\hspace{-4mm}
\vspace{-3mm}
\caption{Comparisons among proposed algorithms}
\label{fig:proposed_algorithms}
\vspace{-3mm}
\end{figure}
\emph{Impact of different factors.}
To investigate the contributions of POI popularities, user preferences, and co-occurring POIs in our embeddings,
we implement two variants of C-ILP, namely, \textbf{C-ILP-Pop} and \textbf{C-ILP-Pref}. These two variants
use POI embeddings that learn only POI popularities and that jointly learn POI popularities and user preferences, respectively.
Fig.~\ref{fig:self} shows a comparison among C-ILP-Pop, C-ILP-Pref, and C-ILP.
We see that the F$_1$-score increases as the POI embeddings incorporate more factors.
This confirms the impact of the three factors.
Moreover, we see that on the Edinburgh and Toronto datasets where POIs have more diverse POI co-occurrences (cf.~Section~\ref{sec:empirical}),
the improvement of C-ILP (with co-occurring POIs in the embeddings) over C-ILP-Pref is more significant. This demonstrates the effectiveness
of our model to learn the POI co-occurrences.
We also implement an algorithm that separately learns the impact of POI popularities, user preferences, and co-occurring POIs, denoted as \textbf{C-ILP-Sep}.
The algorithm assumes equal contributions of the three factors when recommending trips.
We see that C-ILP outperforms C-ILP-Sep consistently. This confirms the superiority of joint learning in our algorithm.
\begin{figure}[h]
\vspace{-5mm}
\centering
\hspace{-6mm}\subfloat[POI popularity]{\includegraphics[width = 4.5cm]{pop-f1.eps}\label{fig:pop}}\hspace{-8mm}
\subfloat[User preferences]{\includegraphics[width = 4.5cm]{int-f1.eps}\label{fig:pref}}\hspace{-6mm}
\vspace{-3mm}
\caption{Impact of model learning capability}
\label{fig:model_capability}
\vspace{-3mm}
\end{figure}
\emph{Impact of model learning capability.}
To further show that our proposed POI embedding model has a better learning capability,
we compare C-ILP-Pop with Pop in Fig.~\ref{fig:pop}, since these two algorithms only consider POI popularity.
Similarly, we compare C-ILP-Pref with the baseline algorithms that considers user preferences, i.e., MF and PersTour, in Fig.~\ref{fig:pref}.
In both figures, our models produce trips with higher F$_1$-scores, which confirms the higher learning capability of our models.
\section{Conclusions}
\label{sec:conclusions}
We proposed a context-aware model for POI embedding. This model jointly learns the impact of POI popularities,
co-occurring POIs, and user preferences over the probability of a POI being visited in a trip.
To showcase the effectiveness of this model, we applied it to a trip recommendation problem named TripRec.
We proposed two algorithms for TripRec based on the learned embeddings for both POIs and users.
The first algorithm, C-ILP, finds the exact optimal trip by transforming and solving TripRec as an integer linear programming problem.
The second algorithm, C-ALNS, finds a heuristically optimal trip but with a much higher efficiency based on the adaptive large neighborhood search technique.
We performed extensive experiments on real datasets. The results showed that the proposed algorithms using our
context-aware POI embeddings consistently outperform state-of-the-art algorithms in trip recommendation quality,
and the advantage is up to $43\%$ in F$_1$-score. C-ALNS reduces the running time for trip recommendation by $99.4\%$
compared with C-ILP while retaining almost the same trip recommendation quality, i.e., only 0.2\% lower in F$_1$-score.
\balance
\bibliographystyle{abbrv}
We intend here to model, in a simple way, the effects of point systems on the choice of the levels of effort of teams. We consider two teams, $A$ and $B$. The possible events in a match are denoted $(a,b)\in \mathbb{N}_{0}\times\mathbb{N}_{0}$, where $\mathbb{N}_{0}$ represents the natural numbers plus $0$. Letters $a$ and $b$ stand for the tries scored by teams $A$ and $B$, respectively.\\
To simplify the analysis, we disregard the precise differences between goals, penalty kicks or drop kicks, and just focus on the tries scored and the joint efforts of the teams. In each event, we consider a contest in which two risk - neutral contestants are competing to score a try, and win the points awarded by the points system.\footnote{We consider that teams totally discount the future, and assume that the game can end after they score a try.} The contestants differ in their valuation of the prize. Each contestant $i\in\{A,B\}$ independently exerts an irreversible and costly effort $e_{i}\geq0$, which will determine, through a {\it contest success function} (CSF), which team wins the points. Formally, the CSF maps the profile of efforts $(e_{A},e_{B})$ into probabilities of scoring a try. We adopt the logit formulation, since it is the most widely used in the analysis of sporting contests (Dietl et al., 2011). Its general form was introduced by Tullock (1980), although we use it here with a slight modification:\footnote{When teams exert no effort, the probability of scoring a try is 0. This allows to get a tie as a result.}
$$p_{i}(e_{A},e_{B})= \left\{ \begin{array}{lcc}
\dfrac{e_{i}^{\alpha}}{e_{A}^{\alpha}+e_{B}^{\alpha}} & if & max\{e_{A},e_{B}\}>0
\\0 & & otherwise
\end{array}
\right.$$
The parameter $\alpha>0$ is called the \textquotedblleft discriminatory power\textquotedblright{} of the CSF, measuring the sensitivity of success to the level of effort exerted.\footnote{This CSF satisfies homogeneity. That is, when teams exert the same level of effort, they have the same probabilities of winning the contest. This is a plausible hypothesis when teams have the same level of play.} We normalize it and set $\alpha=1$. Associated to effort there is a cost function $c_{i}(e_{i})$, often assumed linear in the literature,
$$c_{i}(e_{i})=ce_{i}$$
\noindent where $c>0$ is the (constant) marginal cost of effort.\\
The utility or payoff function when the profile of efforts is $(e_{A},e_{B})$ and the score is $(a,b)$, has the following form (we omit the effort argument for simplicity):
$$U_{A}((e_{A},e_{B}),(a,b))=$$
$$p_{A}(f_{A}(a+1,b)+k_{B1}\epsilon)+(1-p_{A}-p_{B})(f_{A}(a,b)+k_{B2}\epsilon)+p_{B}f_{A}(a,b+1)-ce_{A}$$
\noindent for team $A$, and
$$U_{B}((e_{A},e_{B}),(a,b))=$$
$$p_{A}f_{B}(a+1,b)+(1-p_{A}-p_{B})(f_{B}(a,b)+k_{A2}\epsilon)+p_{B}(f_{B}(a,b+1)+k_{A1}\epsilon)-ce_{B}$$
\noindent for team $B$, where
$$k_{B1}=f_{B}(a,b+1)-f_{B}(a+1,b)$$
$$k_{B2}=f_{B}(a,b+1)-f_{B}(a,b)$$
$$k_{A1}=f_{A}(a+1,b)-f_{A}(a,b+1)$$
$$k_{A2}=f_{A}(a+1,b)-f_{A}(a,b)$$
\noindent and
$f_{i}:\mathbb{N}_{0}\times\mathbb{N}_{0}\rightarrow \{0,1,2,3,4,5\}$ depends on the point system we are working with. It is defined on the final scores and yields the points earned by team $i$.
Each point system is characterized by a different function.\\
In the case where no bonus point ($NB$ system) is awarded, we have:\\
$f_{A}^{NB}(a,b)= \left\{ \begin{array}{lcc}
4 & if & a>b
\\ 2 & if & a=b
\\ 0 & if & a<b
\end{array}
\right.$
$f_{B}^{NB}(a,b)= \left\{ \begin{array}{lcc}
0 & if & a>b
\\ 2 & if & a=b
\\ 4 & if & a<b
\end{array}
\right.$\\
When a bonus point is given for scoring $4$ or more tries, and for losing by one try ($+4$ system) the functions are:\\
$f_{A}^{+4}(a,b)= \left\{ \begin{array}{lcccccccc}
4 & if & a>b & and & a<4
\\5 & if & a>b & and & a\geq 4
\\ 2 & if & a=b & or & [a\geq 4 & and & b-a=1] \\
3 & if & a=b & and & a\geq 4\\
0 & if & a<4 & and & b-a>1\\
1 & if & [a+1<b & and & a\geq 4] & or & b-a=1
\end{array}
\right.$
$f_{B}^{+4}(a,b)= \left\{ \begin{array}{lcccccccc}
4 & if & b>a & and & b<4
\\5 & if & b>a & and & b\geq 4
\\ 2 & if & a=b & or & [b\geq 4 & and & a-b=1] \\
3 & if & a=b & and & b\geq 4\\
0 & if &b<4 & and & a-b>1\\
1 & if & [b+1<a & and & b\geq 4] & or & a-b=1
\end{array}
\right.$\\
Finally, when a difference of $3$ tries gives the winning team a bonus point and losing by one try gives the bonus point to the loser ($3+$ system) we have:\\
$f_{A}^{3+}(a,b)= \left\{ \begin{array}{lcccccccccc}
4 & if & 0<a-b<3
\\5 & if &a-b\geq 3
\\ 2 & if & a=b \\
0 & if & b-a>1 \\
1 & if & b-a=1
\end{array}
\right.$
$f_{B}^{3+}(a,b)= \left\{ \begin{array}{lcccccccccc}
4 & if & 0<b-a<3
\\5 & if & b-a\geq 3
\\ 2 & if & a=b \\
0 & if & a-b>1 \\
1 & if & a-b=1
\end{array}
\right.$ \\
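To make the three systems easy to compare, the following sketch computes the points earned by team $A$ as a function of the tries scored (our own illustrative code; it follows the bonus rules as stated, with the try bonus of the $+4$ system applying from four tries onward, and team $B$'s points are obtained by swapping the arguments):
\begin{verbatim}
def points_A(a, b, system):
    """Points for team A when the final score is (a, b) tries.
    'NB': no bonus; '3+': bonus for winning by 3+ tries and for losing by 1;
    '+4': bonus for scoring 4+ tries and for losing by 1."""
    base = 4 if a > b else 2 if a == b else 0
    if system == "NB":
        return base
    if system == "3+":
        return base + (1 if a - b >= 3 else 0) + (1 if b - a == 1 else 0)
    if system == "+4":
        return base + (1 if a >= 4 else 0) + (1 if b - a == 1 else 0)
    raise ValueError(system)
\end{verbatim}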
In all three cases the utility function is the weighted sum of three probabilities, namely that of team $A$ scoring, that of neither team scoring and that of team $B$ scoring. The corresponding weights are the points earned in each case plus the gain of blocking the other team, precluding it from winning points. This gain is defined as the difference between the points that the other team can earn if it scores and the points it actually gets, multiplied by $\epsilon$, where $0<\epsilon\ll1$ is small. This $\epsilon$ is intended to measure the importance of blocking the other team and not letting it score and earn more points. Teams are playing a tournament, so making it hard for the other team to earn points is an incentive (although not a great one) in a match. The way we define the utility function rests on the simple idea that to score four tries, one has to be scored first. This captures the assumption that teams care only about the immediate result of scoring, and not about what can happen later.\\
Under these assumptions, we seek to find the equilibria corresponding to the three point systems. The appropriate notion of equilibrium here is in terms of {\em strict dominant strategies} since the chances of each team are independent of what the other does. Notice that, trivially, each dominant strategies equilibrium is (the unique) Nash equilibrium in the game.\footnote{Nash equilibria exist since the game trivially satisfies the condition of having compact and convex spaces of strategies while the utility functions have the expected probability form, which ensures that the best response correspondence has a fixed point.} Once obtained these equilibria, the next step of the analysis is to compare them, to determine how the degree of offensiveness changes with the change of rules. This comparison is defined in terms of the following relation:
$$(e_{A},e_{B})\succeq (e_{A^{\prime}},e_{B^{\prime}})\ \mbox{if}\ e_{A}+e_{B}\geq e_{A^{\prime}}+e_{B^{\prime}}$$
\noindent while
$$(e_{A},e_{B})\sim (e_{A^{\prime}},e_{B^{\prime}}) \ \ \mbox{in any other case.}$$
\noindent where $(e_{A},e_{B})\succeq (e_{A^{\prime}},e_{B^{\prime}})$ is understood as ``with $(e_{A},e_{B})$ both teams exert more effort than with $(e_{A^{\prime}},e_{B^{\prime}})$''.\\
We look for the maximum number of tries that can be scored by a team, in order to limit the number of cases to analyze. We use the statistics of games played in different tournaments around the world, which show that, on average, teams can get at most $7$ tries ([12]-[23]) (see Section 4). This, in turn, leads to $64$ possible instances ({\em events}).\\
At each event we compare the equilibrium strategies. We thus obtain a ranking of the point systems, based on the $\succeq$ relation. The reaction function of team $i$, describing the best response to any possible effort choice of the other team, can be computed from the following first order conditions:
$$\dfrac{e_{B}}{(e_{A}+e_{B})^{2}}k_{A}=c $$
\noindent for team $A$, and
$$\dfrac{e_{A}}{(e_{A}+e_{B})^{2}}k_{B}=c $$
\noindent for team $B$, where $k_{A}$ and $k_{B}$ obtain by rearranging the constants of the corresponding utility function. The equilibrium $(e_{A}^{\ast},e_{B}^{\ast})$ in pure strategies is characterized by the intersection of the two reaction functions and is given by:
$$(e_{A}^{\ast},e_{B}^{\ast})=(\dfrac{k_{A}^{2}k_{B}}{c(k_{A}+k_{B})^{2}},\dfrac{k_{A}k_{B}^{2}}{c(k_{A}+k_{B})^{2}}) $$
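This closed form is straightforward to evaluate numerically; a small sketch follows (the numerical values are purely illustrative):
\begin{verbatim}
def equilibrium(k_A, k_B, c):
    """Equilibrium efforts (e_A*, e_B*) of the static contest for prizes k_A, k_B
    (obtained from the utility constants) and marginal cost c."""
    total = k_A + k_B
    return (k_A ** 2 * k_B) / (c * total ** 2), (k_A * k_B ** 2) / (c * total ** 2)

# illustrative check with symmetric prizes k_A = k_B = 1 + eps and c = 1:
eps = 0.01
print(equilibrium(1 + eps, 1 + eps, 1.0))   # both efforts equal (1 + eps) / 4
\end{verbatim}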
As an example, consider, without loss of generality, a particular instance (for simplicity we omit the arguments):
\begin{itemize}
\item \textbf{Event (2,0)}\\
\textbf{$NB$ system}
$$U_{A}((e_{A},e_{B}),(2,0))=p_{A}(4+0\epsilon)+(1-p_{A}-p_{B})(4+0\epsilon)+p_{B}4-ce_{A}$$
$$U_{B}((e_{A},e_{B}),(2,0))=p_{A}0+(1-p_{A}-p_{B})(0+0\epsilon)+p_{B}(0+0\epsilon)-ce_{B}$$
The Nash (dominant strategies) equilibrium is given by $(e_{A}^{\ast},e_{B}^{\ast})=(0,0) $\\
\textbf{$3+$ system}
$$U_{A}((e_{A},e_{B}),(2,0))=p_{A}(5+\epsilon)+(1-p_{A}-p_{B})(4+\epsilon)+p_{B}4-ce_{A}$$
$$U_{B}((e_{A},e_{B}),(2,0))=p_{A}0+(1-p_{A}-p_{B})(0+\epsilon)+p_{B}(1+\epsilon)-ce_{B}$$
The equilibrium is $(e_{A}^{\ast},e_{B}^{\ast})=(\dfrac{(1+\epsilon)^{2}(1+\epsilon)}{c(1+\epsilon+1+\epsilon)^{2}},\dfrac{(1+\epsilon)(1+\epsilon)^{2}}{c(1+\epsilon+1+\epsilon)^{2}}) $.\\
\textbf{$+4$ system}
$$U_{A}((e_{A},e_{B}),(2,0))=p_{A}(4+\epsilon)+(1-p_{A}-p_{B})(4+\epsilon)+p_{B}4-ce_{A}$$
$$U_{B}((e_{A},e_{B}),(2,0))=p_{A}0+(1-p_{A}-p_{B})(0+0\epsilon)+p_{B}(1+0\epsilon)-ce_{B}$$
The equilibrium is $(e_{A}^{\ast},e_{B}^{\ast})=(\dfrac{\epsilon^{2}1}{c(1+\epsilon)^{2}},\dfrac{\epsilon1^{2}}{c(1+\epsilon)^{2}}) $.\\
\end{itemize}
Since we assume that $\epsilon$ is sufficiently small, we can infer that teams will exert more effort under the $3+$ system than under the $+4$ system, and the least under the $NB$ system. The comparison of all the possible events yields:
\begin{propi}
The $3+$ system is the Condorcet winner in the comparison among the point systems. By the same token, teams exert more effort under the $+4$ system than in the $NB$ one.
\end{propi}
\begin{proof}
We analyze the 64 possible events. Table $1$ shows the results favoring team $A$. By symmetry, analogous results can be found for team $B$. \\
Consider the following pairwise comparisons:
\begin{itemize}
\item $NB$ vs. $3+$: 22 events rank higher under {\em 3+}, while 7 under {\em NB}.
\item $+4$ vs. $NB$: 22 events for the former against 7 with the latter.
\item $3+$ vs. $+4$: 16 with the former against 6 with {\em +4}.
\end{itemize}
This indicates that $3+$ is the Condorcet winner, while $NB$ is the Condorcet loser.
\end{proof}
\begin{table}[H]~\label{table}
\begin{flushleft}
\resizebox{17cm}{9.5cm}{
\begin{tabular}{||l l l l l||}
\hline
Events & {\em NB} Equilibrium & {\em 3+} Equilibrium & {\em +4} Equilibrium & Ranking \\
\hline
(0,0),(1,1),(2,2) & $ (\dfrac{(4+4\epsilon)^{3}}{4c(4+4\epsilon)^{2}},\dfrac{(4+4\epsilon)^{3}}{4c(4+4\epsilon)^{2}})$ & $(\dfrac{(3+3\epsilon)^{3}}{4c(3+3\epsilon)^{2}},\dfrac{(3+3\epsilon)^{3}}{4c(3+3\epsilon)^{2}})$&$(\dfrac{(3+3\epsilon)^{3}}{4c(3+3\epsilon)^{2}},\dfrac{(3+3\epsilon)^{3}}{4c(3+3\epsilon)^{2}})$& $NB\succ3+\sim +4$ \\
(3,3) & $(\dfrac{(4+4\epsilon)^{3}}{4c(4+4\epsilon)^{2}},\dfrac{(4+4\epsilon)^{3}}{4c(4+4\epsilon)^{2}})$ & $(\dfrac{(3+3\epsilon)^{3}}{4c(3+3\epsilon)^{2}},\dfrac{(3+3\epsilon)^{3}}{4c(3+3\epsilon)^{2}})$&$(\dfrac{(4+4\epsilon)^{3}}{4c(4+4\epsilon)^{2}},\dfrac{(4+4\epsilon)^{3}}{4c(4+4\epsilon)^{2}})$& $NB\sim+4\succ 3+$ \\
(4,4),(5,5),(6,6), (7,7)& $ (\dfrac{(4+4\epsilon)^{3}}{4c(4+4\epsilon)^{2}},\dfrac{(4+4\epsilon)^{3}}{4c(4+4\epsilon)^{2}})$ & $(\dfrac{(3+3\epsilon)^{3}}{4c(3+3\epsilon)^{2}},\dfrac{(3+3\epsilon)^{3}}{4c(3+3\epsilon)^{2}})$&$(\dfrac{(3+3\epsilon)^{3}}{4c(3+3\epsilon)^{2}},\dfrac{(3+3\epsilon)^{3}}{4c(3+3\epsilon)^{2}})$& $NB\succ3+\sim +4$ \\
(1,0),(2,1) & $ (\dfrac{(2+2\epsilon)^{3}}{4c(2+2\epsilon)^{2}},\dfrac{(2+2\epsilon)^{3}}{4c(2+2\epsilon)^{2}})$ & $(\dfrac{(2+2\epsilon)^{3}}{4c(2+2\epsilon)^{2}},\dfrac{(2+2\epsilon)^{3}}{4c(2+3\epsilon)^{2}})$&$(\dfrac{(2+2\epsilon)^{3}}{4c(2+2\epsilon)^{2}},\dfrac{(2+2\epsilon)^{3}}{4c(2+2\epsilon)^{2}})$& $NB\sim3+\sim +4$ \\
(3,2) & $(\dfrac{(2+2\epsilon)^{3}}{4c(2+2\epsilon)^{2}},\dfrac{(2+2\epsilon)^{3}}{4c(2+2\epsilon)^{2}})$ &$(\dfrac{(2+2\epsilon)^{3}}{4c(2+2\epsilon)^{2}},\dfrac{(2+2\epsilon)^{3}}{4c(2+3\epsilon)^{2}})$&$(\dfrac{(3+2\epsilon)^{2}(2+3\epsilon)}{c(5+5\epsilon)^{2}},\dfrac{(3+2\epsilon)(2+3\epsilon)^{3}}{c(5+5\epsilon)^{2}})$& $+4\succ3+\sim NB$ \\
(4,3) & $(\dfrac{(2+2\epsilon)^{3}}{4c(2+2\epsilon)^{2}},\dfrac{(2+2\epsilon)^{3}}{4c(2+2\epsilon)^{2}})$ &$(\dfrac{(2+2\epsilon)^{3}}{4c(2+2\epsilon)^{2}},\dfrac{(2+2\epsilon)^{3}}{4c(2+3\epsilon)^{2}})$&$(\dfrac{(2+3\epsilon)^{2}(3+2\epsilon)}{c(5+5\epsilon)^{2}},\dfrac{(2+3\epsilon)(3+2\epsilon)^{3}}{c(5+5\epsilon)^{2}})$& $+4\succ3+\sim NB$ \\
(5,4), (6,5), (7,6) & $ (\dfrac{(2+2\epsilon)^{3}}{4c(2+2\epsilon)^{2}},\dfrac{(2+2\epsilon)^{3}}{4c(2+2\epsilon)^{2}})$ & $(\dfrac{(2+2\epsilon)^{3}}{4c(2+2\epsilon)^{2}},\dfrac{(2+2\epsilon)^{3}}{4c(2+3\epsilon)^{2}})$&$(\dfrac{(2+2\epsilon)^{3}}{4c(2+2\epsilon)^{2}},\dfrac{(2+2\epsilon)^{3}}{4c(2+2\epsilon)^{2}})$& $NB\sim3+\sim +4$ \\
(2,0)&$(0,0)$&$(\dfrac{(1+\epsilon)^{3}}{4c(1+1\epsilon)^{2}},\dfrac{(1+\epsilon)^{3}}{4c(1+\epsilon)^{2}})$&$(\dfrac{\epsilon^{2}}{c(1+\epsilon)^{2}},\dfrac{\epsilon}{c(1+\epsilon)^{2}})$& $3+\succ+4\succ NB$ \\
(3,1)&$(0,0)$&$(\dfrac{(1+\epsilon)^{3}}{4c(1+1\epsilon)^{2}},\dfrac{(1+\epsilon)^{3}}{4c(1+\epsilon)^{2}})$&$(\dfrac{(1+\epsilon)^{3}}{4c(1+1\epsilon)^{2}},\dfrac{(1+\epsilon)^{3}}{4c(1+\epsilon)^{2}})$& $3+\sim+4\succ NB$\\
(4,2) &$(0,0)$&$(\dfrac{(1+\epsilon)^{3}}{4c(1+1\epsilon)^{2}},\dfrac{(1+\epsilon)^{3}}{4c(1+\epsilon)^{2}})$&$(\dfrac{\epsilon^{2}}{c(1+\epsilon)^{2}},\dfrac{\epsilon}{c(1+\epsilon)^{2}})$& $3+\succ+4\succ NB$ \\
(5,3) &$(0,0)$&$(\dfrac{(1+\epsilon)^{3}}{4c(1+1\epsilon)^{2}},\dfrac{(1+\epsilon)^{3}}{4c(1+\epsilon)^{2}})$&$(\dfrac{8\epsilon^{2}}{4c(1+\epsilon)^{2}},\dfrac{8\epsilon}{4c(1+\epsilon)^{2}})$& $3+\succ+4\succ NB$ \\
(6,4), (7,5) &$(0,0)$&$(\dfrac{(1+\epsilon)^{3}}{4c(1+1\epsilon)^{2}},\dfrac{(1+\epsilon)^{3}}{4c(1+\epsilon)^{2}})$&$(\dfrac{\epsilon^{2}}{c(1+\epsilon)^{2}},\dfrac{\epsilon}{c(1+\epsilon)^{2}})$& $3+\succ+4\succ NB$ \\
(3,0)&$(0,0)$&$(\dfrac{\epsilon}{c(1+\epsilon)^{2}},\dfrac{\epsilon^{2}}{c(1+\epsilon)^{2}})$&$(\dfrac{\epsilon}{c(1+\epsilon)^{2}},\dfrac{\epsilon^{2}}{c(1+\epsilon)^{2}})$& $3+\sim+4\succ NB$ \\
(4,1), (5,2) &$(0,0)$&$(\dfrac{\epsilon}{c(1+\epsilon)^{2}},\dfrac{\epsilon^{2}}{c(1+\epsilon)^{2}})$&$ (0,0)$& $3+\succ+4\sim NB$ \\
(6,3) &$(0,0)$&$(\dfrac{\epsilon}{c(1+\epsilon)^{2}},\dfrac{\epsilon^{2}}{c(1+\epsilon)^{2}})$&$(\dfrac{\epsilon}{c(1+\epsilon)^{2}},\dfrac{\epsilon^{2}}{c(1+\epsilon)^{2}})$& $3+\sim+4\succ NB$ \\
(7,4) &$(0,0)$&$(\dfrac{\epsilon}{c(1+\epsilon)^{2}},\dfrac{\epsilon^{2}}{c(1+\epsilon)^{2}})$&$ (0,0)$& $3+\succ+4\sim NB$ \\
(4,0), (5,1), (6,2) &$(0,0)$&$(0,0)$ &$ (0,0)$& $3+\sim+4\sim NB$ \\
(7,3) &$(0,0)$&$(0,0)$ &$(\dfrac{\epsilon}{c(1+\epsilon)^{2}},\dfrac{\epsilon^{2}}{c(1+\epsilon)^{2}})$& $+4\succ3+\sim NB$ \\
(5,0), (6,1), (7,2) &$(0,0)$&$(0,0)$ &$ (0,0)$& $3+\sim+4\sim NB$ \\
(6,0), (7,1)&$(0,0)$&$(0,0)$ &$ (0,0)$& $3+\sim+4\sim NB$ \\
(7,0)&$(0,0)$&$(0,0)$ &$ (0,0)$& $3+\sim+4\sim NB$ \\
\hline
\end{tabular}}
\end{flushleft}
\caption{Comparison of point systems}
\end{table}
\newpage
\section{The Dynamic Model}
In this section we model a rugby game following the argumentation line in Mass\'o - Neme (1996). We conceive it as a dynamic game in which the feasible and equilibrium payoffs of the teams under the three point systems can be compared.
In this setting, we first find the minimax feasible payoffs in every point system.\footnote{The smallest payoff which the other team can force a team to receive. Formally: $\bar{v}_i = \min_{s_{-i}} \max_{s_i} \mathbf{U}(s_i, s_{-i})$.} This minimax payoff defines a region of equilibrium payoffs. We consider the Nash equilibria used to reach these minimax payoffs and take the average joint efforts in each system. Again, we want to find which point system makes the teams exert more effort in order to attain the equilibrium payoffs.\\
Formally, let us define a dynamic game as $G=(\{A,B\}, (W,(0,0)),E^{\ast},T)$, where:
\begin{enumerate}
\item There are again two teams, $A$ and $B$. A generic team will be denoted by $i$.
\item We restrict the choices of actions to a finite set of joint actions $E^{\ast}=\{(e_{A},e_{B})\in\mathbb{R}^{2}_+: \mbox{each } e_{i} \mbox{ was used in a Nash equilibrium of the static game}\}$.
\item A finite set of events $W$, each of which represents a class of equivalent pairs of scores of the two teams.
\begin{itemize}
\item $(a,b)\sim(7,1)$ if $a>7$ and $b=1$\\
$(a,b)\sim(7,2)$ if $a>7$ and $b=2$\\
$(a,b)\sim(7,3)$ if $a>7$ and $b=3$\\
$(a,b)\sim(7,4)$ if $a-b\geq3$ and $b\geq4$\\
$(a,b)\sim(7,5)$ if $a-b\geq2$ and $b\geq5$\\
$(a,b)\sim(7,6)$ if $a-b\geq1$ and $b\geq6$\\
$(a,b)\sim(7,7)$ if $a=b$ and $b\geq7$\\
\end{itemize}
In each case, we say that two scores belong to the same event if the two teams get the same payoffs in both cases in a finite {\bf instantaneous game} in normal form defined as
$$(a,b)=(\{A,B\},E^{\ast},((^{(a,b)}\!U_{i}^{S})_{i\in \{A,B\}} )$$
\noindent where $S = NB, +4$ or $3+$ and $^{(a,b)}\!U_{i}^{S}$ represents the utility function of team $i$ used in the static model in the instantaneous game in the event $(a,b)$, with the point system $S$.
\item All the point systems have the same initial event, namely $(0,0)$.
\item A transition function $T$, which specifies the new event as a function of the current event and the joint actions taken by both teams.
Therefore
$$T:W\times E^{\ast}\longrightarrow W.$$
The transition function has only three possible outcomes (we use a representative element, i.e. a pair of scores, for any event in $W$):
\begin{enumerate}
\item $T((a,b),(e_{A},e_{B}))=(a,b)$
\item $T((a,b),(e_{A},e_{B}))=(a+1,b)$
\item $T((a,b),(e_{A},e_{B}))=(a,b+1)$
\end{enumerate}
These outcomes represent the fact that, upon a choice of joint efforts, either no team scores, $A$ scores or $B$ scores, respectively.
\end{enumerate}
Some further definitions will be useful in the rest of this work:
\begin{defi} For every $t\in \mathbb{N}$, define $H^{t}$ as $\overbrace{E^{\ast} \times \ldots \times E^{\ast}}^{t \ \mbox{times}}$ i.e. an element $h\in H^{t}$ is a history of joint efforts of length $t$. We denote by $H^{0}=\{e\}$ the set of histories of length $0$, with {\em e} standing for the empty history.
Let $H=\cup^{\infty}_{t=0}H^{t}$ be the set of all possible histories in $G$.
\end{defi}
We define recursively a sequence of $t +1$ steps of events starting with $(a,b)$, namely $\{(a,b)^j\}_{j=0}^t$, where $(a,b)^0 = (a,b)$, $\ldots$, $(a,b)^{t-1}= T((a,b)^{t-2}, h_{t-1} \setminus h_{t-2})$, $(a,b)^t=T((a,b)^{t-1}, h_{t} \setminus h_{t-1})$, where $(h_{0},\ldots,h_{t})\in H$ is such that for each $j=0, \ldots, t$, $h_{j-1}$ is the initial segment of $h_j$ and $h_j \setminus h_{j-1}$ is the joint action exerted at the $j$-th step.
\begin{defi}
A strategy of team $i\in\{A,B\}$ in the game $G$ is a function $f_{i}:H\rightarrow E^{\ast}_i$ such that for each $h_{t-1}$, the ensuing $h_t$ is the sequence $(h_{t-1}, (f_A(h_{t-1}), f_B(h_{t-1})))$.\footnote{This assumes {\em perfect monitoring}. That is, that teams decide their actions knowing all the previous actions in the play of the game.} We will denote by $F_{i}$ the set of all these functions for team $i$ and, by extension, we define $F=F_{A}\times F_{B}$.
\end{defi}
Thus, any $f\in F$ defines recursively a sequence of consecutive histories. We also have that $f$ defines a sequence of instantaneous games for each scoring system $S$ defined as $(a,b)^S(f)=\{(a,b)^{S}_{t}(f)\}_{t=0}^{\infty}$, where each game corresponds to an event $(a,b)^S_j$: $(a,b)^S_{0}=(a,b)$ and for every $t\geq 1$, $(a,b)^S_{t} =T((a,b)^S_{t-1}, f(h_t))$.
\begin{defi} A joint strategy $f = (f_A, f_B) \in F$ is {\em stationary} if for every $h,h^{\prime}\in H$ such that $h = h_t$ and $h^{\prime} = h_{t^{\prime}}$ and the event generated by both is the same, namely $(\bar{a},\bar{b})$, we have that $f(h)=f(h^{\prime})$.
\end{defi}
That is, a stationary strategy only depends on the event at which it is applied, and not on how the event was reached.\\
We note the set of stationary strategies as $\mathcal{S} \subseteq F$ and in what follows we only consider strategies drawn from this set. In other words, we assume that teams act disregarding how a stage of the game was reached and play only according to the current state of affairs. For example, if the match at $t$ is tied $(3,3)$, teams $A$ and $B$ will play in the same way, irrespectively of whether the score before was $(3,0)$ or $(0,3)$.\\
It can be argued that the assumption of stationarity does not seem to hold in some real-world cases since the way a given score is reached may take an emotional toll on teams. If, say, $A$ is winning at $(4,0)$, and suddenly the score becomes $(4,4)$, the evidence shows that $A$'s players will feel disappointed and anxious, changing the incentives under which they act (Cresswell \& Eklund, 2006).\\
But the theoretical assumption of stationarity is clearly applicable to the case of matches between high performance teams. For instance, consider the first round of the Rugby World Cup 2015, when the All Blacks (New Zealand's national team), the best team in the world, played against Los Pumas (Argentina's team), an irregular team. At the start of the second half Los Pumas were 4 points ahead. The All Blacks, arguably the best rugby team in the world (and one of the best in any sport (Conkey, 2017)), instead of losing their temper kept playing in a ``relaxed'' mode. This ensured that they ended up winning the game 26 to 16 (Cleary, 2015). So, the assumption of stationarity seems acceptable for high performance teams, reflecting their mental strength.\\
We have that,
\begin{lemi}[Mass\'o-Neme (1996)]
Let $s\in \mathcal{S}$. There exist two natural numbers $M,R\in \mathbb{N}$ such that $(a,b)^{t+R}(s)=(a,b)^{t+R+M}(s)$ for every $t\geq 1$. That is, a stationary strategy ({\em every} strategy in our framework) produces a finite cycle of instantaneous games of length $M$ after $R$ periods.
\end{lemi}
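The following short Python sketch illustrates the content of this lemma: iterating any deterministic event-update rule (a toy stand-in for a stationary strategy together with the scoring outcomes it induces, not the model's actual effort or scoring rules) from the initial event must eventually enter a cycle, and the pre-cycle segment and the cycle can be extracted by recording the first visit to each event.
\begin{verbatim}
# Sketch of Lemma 1: the orbit of the initial event under a stationary
# strategy is eventually periodic.  The update rule below is a toy stand-in,
# not the model's actual effort or scoring rule.

def transient_and_cycle(next_event, start):
    """Return (transient segment, cycle) of the orbit of start."""
    first_visit, orbit, event = {}, [], start
    while event not in first_visit:
        first_visit[event] = len(orbit)
        orbit.append(event)
        event = next_event(event)
    r = first_visit[event]          # R: where the cycle begins
    return orbit[:r], orbit[r:]

def toy_next(event):
    a, b = event                    # A scores unless it is 2 tries ahead
    if a - b < 2:
        return (min(a + 1, 7), b)
    return (a, min(b + 1, 7))

path, cycle = transient_and_cycle(toy_next, (0, 0))
print(len(path), cycle)             # here the cycle is the fixed point [(7, 6)]
\end{verbatim}
In the notation introduced next, the two returned segments correspond to the initial path and the cycle generated by the strategy.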
For every $s\in \mathcal{S}$ and a scoring system $S$ we define $b^S_{(a,b)}(s)=\{(a,b)^S_{1}(s),\ldots, (a,b)^S_{R}(s)\}$ and $c^S_{(a,b)}(s)=\{(a,b)^S_{R+1}(s),\ldots, (a,b)^S_{R+M}(s)\}$ as the initial path and the cycle of instantaneous games generated by $s$, where $R$ and $M$ are the smallest numbers of Lemma 1. There are many ways in which the games in a cycle can be reached from the outcomes of another one:
\begin{defi}
Consider $s^{l},s^{{l}^{\prime}}\in \mathcal{S}$ and $(a,b)$ an initial event under a point system $S$.
\begin{enumerate}
\item We say that $s^{l}$ and $s^{{l}^{\prime}}$ are directly connected, denoted $s^{l}\sim s^{{l}^{\prime}}$, if \mbox{$c_{(a,b)}^S (s^{l})\cap c_{(a,b)}^S(s^{{l}^{\prime}})\neq \emptyset$}.
\item We say that $s^{l}$ and $s^{{l}^{\prime}}$ are connected, denoted $s^{l}\approx s^{{l}^{\prime}}$, if there exist \mbox{$s^{1},\ldots, s^{m}\in \mathcal{S}$} such that $s^{l}\sim s^{1}\sim \ldots \sim s^{m}\sim s^{{l}^{\prime}}$.\footnote{In words, two strategies are directly connected if from the cycle of instantaneous games corresponding to one of them, teams have direct access to the cycle of instantaneous games of the other and vice versa. If instead, they are (not directly) connected, teams can access from one of the cycles to the instantaneous games of the other one through a sequence of stationary strategies.}
\end{enumerate}
\end{defi}
Then, we have (for simplicity we assume an initial event $(a,b)$ and a scoring system $S$):
\begin{defi}
For every $i\in\{A,B\}$, $\mathbf{U}_{i}(s)=\frac{1}{M}\sum_{r=1}^{M}U_{i}^{j(r)}((e_{A},e_{B})^{R+r}(s))$, where $U^{j(r)}_i$ is $i$'s payoff function in the instantaneous game $(a,b)^{R+r}(s)$ and $(e_{A},e_{B})^{R+r}$ is the profile of choices in that game.
\end{defi}
This means that the payoff of a stationary strategy is obtained as the average of the payoffs of the cycle. To apply this result in our setting, we have to characterize the set of feasible payoffs of $G$:
\begin{defi}
A vector $v\in\mathbb{R}^{2}$ is {\em feasible} if there exists a strategy $s\in \mathcal{S}$ such that $v=(\mathbf{U}_A(s), \mathbf{U}_B(s))$.
\end{defi}
We have that:
\begin{teo}[Mass\'o-Neme (1996)]
A vector $v\in\mathbb{R}^{2}$ is feasible if and only if there exists $\mathcal{S}(v)=\{s^{1},\ldots,s^{\overline{k}}\}\subseteq \mathcal{S}$ such that for every $s^{r}, s^{r'}\in \mathcal{S}(v)$, $s^{r}\approx s^{r'}$ and there exists $(\alpha^{1},\ldots,\alpha^{\overline{k}})\in \overline{\Delta}$ (the $\overline{k}$-dimensional unit simplex) such that
$$v=\sum_{k=1}^{\overline{k}}\alpha^{k} (\mathbf{U}_A(s^k), \mathbf{U}_B(s^k)).$$
\end{teo}
The definition of Nash equilibria in this game is the usual one:
\begin{defi}
A strategy $s^{\ast}\in \mathcal{S}$ is a Nash equilibrium of game $G$ if for all $i\in \{A,B\}$, $\mathbf{U}_{i}(s^{\ast})\geq \mathbf{U}_{i}(s^{\prime})$ for all $s^{\prime} \in \mathcal{S}$ with $s^{\prime}_i \neq s^{\ast}_i$ and $s^{\prime}_{-i} = s^{\ast}_{-i}$.
A vector $v\in\mathbb{R}^{2}$ is an equilibrium payoff of $G$ if there exists a Nash equilibrium of $G$, $s\in \mathcal{S}$, such that $(\mathbf{U}_A(s), \mathbf{U}_B(s)) = v$.
\end{defi}
The following result characterizes the equilibrium payoffs of $G$:
\begin{teo}[Mass\'o-Neme (1996)]
Let $v$ be a feasible payoff of $G$. Then $v$ is an equilibrium payoff if and only if there exist $s^{1},s^{2},s^{3}\in \mathcal{S}$, and $(\alpha^{1},\alpha^{2},\alpha^{3})\in \Delta^{3}$ such that $v=\sum_{k=1}^{3}\alpha^{k}(\mathbf{U}_A(s^k), \mathbf{U}_B(s^k))$ and the payoff $v_{i}$ is greater than or equal to the highest payoff that team $i$ can guarantee by itself through a deviation from the cycles of $s^{1},s^{2},s^{3}$, the connected cycles and the initial paths of those strategies.
\end{teo}
In words: $v$ is an equilibrium payoff if each team gets a payoff at least as high as the one it can secure by deviating.\\
Finally, with all these definitions and theorems at hand, we can analyze the game where two teams that are far apart in the league table face each other, so we set $\epsilon=0$ in the utility functions, disregarding the importance of blocking the attacks of the other team and focusing on scoring tries.
The set of feasible payoffs of this game is given, as said, by the convex combinations of payoffs of stationary strategies. The feasible payoff regions corresponding to each point system are represented in Figures 2 to 4. In each figure we can also see the minimax payoffs for the cycles favoring team $A$.\footnote{Where $c(a,b)$ is the cycle $\{(a,b)\}$ and $c(a,b)(c,d)$ is the cycle $\{(a,b),(c,d)\}$.} Every feasible payoff above and to the right of the minimax payoff is an equilibrium payoff. \\
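For illustration, a feasible region of this kind can be reconstructed numerically as the convex hull of the average payoff vectors of the (connected) cycles, as in the following Python sketch; the payoff vectors used below are placeholders rather than the actual values behind Figures 2 to 4.
\begin{verbatim}
# Sketch: the feasible payoff region as the convex hull of the average
# payoff vectors of the (connected) cycles.  The vectors below are
# placeholders, not the values used to draw Figures 2-4.
import numpy as np
from scipy.spatial import ConvexHull

cycle_payoffs = np.array([
    [2.16, 2.16],    # e.g. a tied cycle such as c(0,0)
    [4.00, 1.00],
    [4.00, 0.00],
    [1.00, 4.00],
    [0.00, 4.00],
])

hull = ConvexHull(cycle_payoffs)
print(cycle_payoffs[hull.vertices])  # extreme points of the feasible region
\end{verbatim}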
Figure 2 shows the results for the $NB$ system and Table 5 shows the cycles that yield those minimax payoffs.
\begin{figure}[H]
\includegraphics[scale=1.2]{NB.png}
\caption{Feasible and Minimax payoffs in $NB$}
\end{figure}
\begin{table}[H]
\centering
\begin{tabular}{|c |c|}
\hline
Minimax payoff & Cycle\\
\hline
$(2.16,2.16)$ & $c(0,0),c(1,1),c(2,2),c(3,3),c(4,4),c(5,5),c(6,6),c(7,7)$\\
$(4,1)$ & $c(1,0),c(2,1),c(3,2),c(4,3),c(5,4),c(5,5),c(7,6),c(7,5)(7,6)$\\
$(4,0.16)$ & $c(2,0),c(3,1),c(4,2),c(5,3),c(6,4),c(7,5),c(7,4)(7,5)$\\
$(4,0)$ & $c(3,0),c(4,1),c(5,2),c(6,3),c(7,4)$\\
&$c(4,0),c(5,1),c(6,2),c(7,3),c(5,0),c(6,1),c(7,2),c(6,0),c(7,1),c(7,0)$\\
$(4,2.16)$ & $c(7,6)(7,7)$\\
\hline
\end{tabular}
\caption{NB System}
\end{table}
Figure 3 and Table 6 show the results for the $3+$ system.
\begin{figure}[H]
\includegraphics[scale=1.2]{3+.png}
\caption{Feasible payoffs and Minimax payoffs in $3+$}
\end{figure}
\begin{table}[H]
\centering
\begin{tabular}{|c |c|}
\hline
Minimax payoff & Cycle\\
\hline
$(2.16,2.16)$ & $c(0,0),c(1,1),c(2,2),c(3,3),c(4,4),c(5,5),c(6,6),c(7,7)$\\
$(4,1.53)$ & $c(1,0),c(2,1),c(3,2),c(4,3),c(5,4),c(5,5),c(7,6),c(7,5)(7,6)$\\
$(4,0.16)$ & $c(2,0),c(3,1),c(4,2),c(5,3),c(6,4),c(7,5)$\\
$(5,0)$ & $c(3,0),c(4,1),c(5,2),c(6,3),c(7,4)$\\
&$c(4,0),c(5,1),c(6,2),c(7,3),c(5,0),c(6,1),c(7,2),c(6,0),c(7,1),c(7,0)$\\
$(4,2.16)$ & $c(7,6)(7,7)$\\
$(5,0.16)$ & $c(7,4)(7,5)$\\
\hline
\end{tabular}
\caption{3+ System}
\end{table}
Finally, Figure 4 and Table 7 do the same for the $+4$ system.
\begin{figure}[H]
\includegraphics[scale=1.2]{+4.png}
\caption{Feasible payoffs and Minimax payoffs in $+4$}
\end{figure}
\begin{table}[H]
\centering
\begin{tabular}{|c |c|}
\hline
Minimax payoff & Cycle\\
\hline
$(2.16,2.16)$ & $c(0,0),c(1,1)$\\
$(2.53,2.53)$ & $c(2,2)$\\
$(3.168,3.168)$ & $c(3,3),c(4,4),c(5,5),c(6,6),c(7,7)$\\
$(4,1.53)$ & $c(1,0),c(2,1)$\\
$(5,2)$ & $c(3,2)$\\
$(5,2.53)$ &$c(4,3),c(5,4),c(6,5),c(7,6),c(7,5)(7,6)$\\
$(4,0.16)$ & $c(2,0)$\\
$(5,0.168)$ & $c(3,1),c(5,2)$\\
$(5,0.535)$ & $c(4,2)$\\
$(5,1.16)$ &$c(5,3),c(6,4),c(7,5),c(7,4)(7,5)$\\
$(5,0)$ &$c(3,0),c(4,1),c(4,0),c(5,1),c(6,2),c(5,0)$\\&$c(6,1),c(7,2),c(6,0),c(7,1),c(7,0)$\\
$(5,1)$ &$c(6,3),c(7,4),c(7,3)$\\
$(5,3.168)$ & $c(7,6)(7,7)$\\
\hline
\end{tabular}
\caption{+4 System}
\end{table}
The fact that some minimax payoffs are outside the feasible region indicates that some cycles do not have equilibrium payoffs, so one or both of the teams have incentives to change strategies and get a better payoff.
When we consider the joint efforts that yield the minimax payoffs we obtain an average joint effort of $(0,0)$ in the $NB$ system, $(0.18,0.18)$ in the $3+$ system and $(0.1776,0.1776)$ in the $+4$ system.
\newpage
\section{Empirical Evidence}
In order to check the empirical soundness of our theoretical analyses we use a database of 473 rugby matches. They were played from 1987 to 2015 in different competitions, including the Rugby World Cup, the Six Nations and club tournaments. We compiled this database drawing data from different sources ([12]-[23]). Each match is represented by a vector with four components, namely the number of tries of the local team, the number of tries of the visiting team, as well as the scores of the winning and the losing team, respectively. \\
We perform a Least Squares analysis to explain the number of tries of each team and the differences in scores in terms of some explanatory variables. We consider as such the scoring system used in each match (our key variable), the nature of each team (a club team or a national team), a time trend and a constant. The selection of this kind of analysis is justified, on the one hand, by its simplicity and, on the other, by the fact that we lack a panel or temporal structure which could provide richer information. Notice also that it is natural to posit a linear model in the presence of categorical variables (e.g. the scoring system in a tournament) (Wooldridge, 2020). \\
We run OLS regressions on different variants of the aforementioned general model, changing the way in which explanatory variables are included or changing the sample of matches to be analyzed. In the latter case we divided the entire sample in terms of the homogeneity or heterogeneity of teams playing in each match. In all cases we had to use robust error estimators to handle the heteroskedasticity of the models. Also, assuming that each tournament is idiosyncratic, we controlled for clustered errors. \\
The general functional form of the model can be stated as:
\begin{equation}
\label{eqn:model}
T_i=\beta_0 + \beta_1 C2_i + \beta_2 C3_i + \gamma X_i + \epsilon_i
\end{equation}
There are many alternative ways of characterizing the dependent variable, which represents the number of tries in a match $i$, i.e. $T_i$. The first and obvious choice is to define it as the total number of tries in a match. But we also analyze variants in which we allow $T_i$ to represent either the number of tries of the local team, of the visitor team, the difference between them, those of the winning team (be it as a local or visiting team) and those of the losing team. \\
With respect to our variable of interest, i.e. the system of bonus points, we specify $+4$ as the categorical base, to compare it to the $3+$ and no bonus systems, represented by means of dummy variables, denoted $C2$ and $C3$ for $NB$ and $3+$ respectively. Both $\gamma$ and $X$ are vectors, containing the control variables and their parameters. We will vary the composition of $X$ in order to check the robustness of the effects of the scoring systems. Finally, $\beta_0$ is the constant, while $\epsilon$ is the error term (specified to account for heteroskedasticity or clustered errors). \\
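As an indication of how such regressions can be implemented, the following Python sketch (using the \texttt{statsmodels} package; the file name, column names and the \texttt{tournament} grouping variable are assumptions, not our actual data files) estimates one specification of \eqref{eqn:model} with heteroskedasticity-robust errors and with errors clustered by tournament.
\begin{verbatim}
# Sketch: one specification of the model in (1) with statsmodels.
# "matches.csv" and its column names (including "tournament") are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("matches.csv")

# code: 1 = +4 (base category), 2 = NB, 3 = 3+
model = smf.ols("TriesTotal ~ C(code) + Club + previous + SR", data=df)

robust = model.fit(cov_type="HC1")              # heteroskedasticity-robust
clustered = model.fit(cov_type="cluster",
                      cov_kwds={"groups": df["tournament"]})  # clustered errors

print(robust.summary())
print(clustered.summary())
\end{verbatim}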
We will first present the descriptive statistics of the database. Then we give the results of the regressions on the different models built by varying both the definition of the dependent and the explanatory variables. Finally, we divide the sample in the classes of matches played by homogeneous or heterogeneous rivals, to compare their results for the same model.
\subsection{Descriptive Statistics}
Figure~\ref{tries} illustrates different aspects of the distribution of the number of tries in the database of matches. Notice that the number of matches is not the same under the three scoring methods: we have $260$ under $+4$, $93$ under $NB$ and $120$ under $3+$. Nevertheless, the evidence indicates that the $3+$ scoring method yields the highest scores, hinting that it is the one that induces more aggressive play. \\
\begin{figure}[hbt!]
\centering
\begin{subfigure}[t]{0.75\textwidth}
\centering
\scriptsize
\includegraphics[width=1\textwidth]{Difference}
\caption{Differences of tries \label{fig:diff}}
\end{subfigure}%
\begin{subfigure}[t]{0.75\textwidth}
\centering
\scriptsize
\includegraphics[width=1\textwidth]{Difference_club}
\caption{Differences for club teams \label{fig:diffclub}}
\end{subfigure}
\caption{Histograms of distributions of differences of tries in each match.}
\label{diff}
\end{figure}
\begin{figure}[hbt!]
\centering
\begin{subfigure}[t]{0.75\textwidth}
\centering
\scriptsize
\includegraphics[width=1\textwidth]{TriesTotal}
\caption{Total tries \label{fig:TriesTotal}}
\end{subfigure}
\begin{subfigure}[t]{0.75\textwidth}
\centering
\scriptsize
\includegraphics[width=1\textwidth]{triestotalh}
\caption{Total tries in homogeneous matches \label{fig:TriesTotalh}}
\end{subfigure}
\caption{Histograms of distributions of tries.}
\label{tries}
\end{figure}
\subsection{Samples and Exploratory Regressions}
We run regressions on different specifications of the general model represented by expression \eqref{eqn:model} in order to make inferences beyond the casual evidence. We use the variable $code$ to represent the scoring system, with the base value $1$ for $+4$, $2$ for $NB$ and $3$ for $3+$, as we expressed above with the variables $C2$ and $C3$ in \eqref{eqn:model}. For $T_i$ we use different specifications, namely $TriesTotal$, $TriesLocal$ and $TriesVis$, representing the number of total tries, tries by the local team and tries of the visiting team, respectively. With respect to the control variables $X$ we use different selections from a set that includes: $SR$, a dummy variable indicating that a match corresponds to a Super Rugby tournament (because Super Rugby is clearly different from the other tournaments analyzed here); $Club$, which indicates whether a match is played by club teams or not; and $previous$, a dichotomous variable taking value $1$ on the older matches in our database, namely those played between 1987 and 1991. Of particular interest are two variables that can be included in $X$. One is $diff= |TriesLocal - TriesVis|$, representing the difference in absolute value between the tries of the local and the visiting team. The other, related, control variable is $diff2 = TriesLocal - TriesVis$, capturing the possible advantage of being the local team. Finally, we include $year$ to capture the possible existence of a temporal trend. \\
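For concreteness, the derived variables described above can be constructed from the four-component match records along the following lines (a Python sketch; the file and column names are assumptions made for illustration).
\begin{verbatim}
# Sketch: building the derived variables from the match records.
# File and column names are assumptions made for illustration.
import pandas as pd

df = pd.read_csv("matches.csv")   # assumed: TriesLocal, TriesVis, code, year

df["TriesTotal"] = df["TriesLocal"] + df["TriesVis"]
df["diff"]  = (df["TriesLocal"] - df["TriesVis"]).abs()
df["diff2"] = df["TriesLocal"] - df["TriesVis"]
df["previous"] = df["year"].between(1987, 1991).astype(int)

# Dummies C2 (NB) and C3 (3+), with +4 (code == 1) as the base category.
df["C2"] = (df["code"] == 2).astype(int)
df["C3"] = (df["code"] == 3).astype(int)
\end{verbatim}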
The results can be seen in Table~\ref{table}.\footnote{All tables of this section can be found at the end of the article.} It can be seen that $3+$ is indeed the scoring method that achieves the highest number of tries, namely between $1$ and over $2$ more than $+4$ (which is our benchmark). $NB$ induces, in general, fewer tries than $+4$, except in the case of the number of tries of the visiting teams. \\
With respect to the control variables, we can see that $SR$ has a negative impact while $Club$ and $previous$ have a positive influence. The time trend is not significant in any of the regressions. \\
\begin{sidewaystable}[hbt!]
\centering
\scriptsize
\caption{General Case}
\label{table}
\begin{tabular}{lcccccccccccc} \hline
& (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) & (12) \\
VARIABLES & TriesLocal & TriesVis & TriesTotal & diff & diff2 & TriesLocal & TriesVis & TriesLocal & TriesVis & TriesLocal & TriesVis & TriesTotal \\ \hline
& & & & & & & & & & & & \\
TriesVis & & & & & & -0.146** & & & & & & \\
& & & & & & (0.067) & & & & & & \\
2.code & -2.339*** & 0.760*** & -1.579*** & -1.576*** & -3.099*** & -2.228*** & 0.624** & & & -1.514*** & 0.760*** & 0.799 \\
& (0.522) & (0.284) & (0.585) & (0.488) & (0.603) & (0.521) & (0.290) & & & (0.395) & (0.284) & (0.590) \\
3.code & 1.475*** & 0.975*** & 2.450*** & 0.339* & 0.006 & 1.617*** & 1.061*** & 1.475*** & 0.975*** & & & 2.159*** \\
& (0.292) & (0.250) & (0.369) & (0.197) & (0.303) & (0.298) & (0.255) & (0.291) & (0.250) & & & (0.466) \\
Club & -1.375*** & 1.088*** & -0.287 & -1.843*** & -2.710*** & -1.216*** & 1.008*** & & & & 1.088*** & \\
& (0.425) & (0.180) & (0.461) & (0.398) & (0.449) & (0.428) & (0.184) & & & & (0.180) & \\
previous & 2.022*** & 0.103 & 2.125*** & 2.056*** & 1.919** & 2.037*** & 0.220 & & & 2.022*** & 0.103 & \\
& (0.632) & (0.401) & (0.654) & (0.575) & (0.832) & (0.621) & (0.410) & & & (0.631) & (0.401) & \\
SR & -0.775*** & -0.033 & -0.808** & & & -0.780*** & -0.078 & & & & & \\
& (0.278) & (0.246) & (0.367) & & & (0.280) & (0.247) & & & & & \\
TriesLocal & & & & & & & -0.058** & & & & & \\
& & & & & & & (0.027) & & & & & \\
Club & & & & & & & & & & & & 1.883*** \\
& & & & & & & & & & & & (0.523) \\
year & & & & & & & & & & & & -0.004 \\
& & & & & & & & & & & & (0.031) \\
Constant & 4.650*** & 1.262*** & 5.912*** & 3.688*** & 3.388*** & 4.834*** & 1.533*** & 2.500*** & 2.317*** & 3.825*** & 1.262*** & 10.969 \\
& (0.391) & (0.111) & (0.398) & (0.380) & (0.414) & (0.407) & (0.169) & (0.222) & (0.200) & (0.192) & (0.111) & (63.039) \\
& & & & & & & & & & & & \\
Observations & 473 & 473 & 473 & 473 & 473 & 473 & 473 & 180 & 180 & 293 & 293 & 245 \\
R-squared & 0.090 & 0.152 & 0.102 & 0.123 & 0.110 & 0.098 & 0.159 & 0.113 & 0.077 & 0.045 & 0.076 & 0.203 \\ \hline
\multicolumn{13}{c}{ Robust standard errors in parentheses} \\
\multicolumn{13}{c}{ *** p$<$0.01, ** p$<$0.05, * p$<$0.1} \\
\end{tabular}
\end{sidewaystable}
\subsection{The Homogeneous Case}
Table~\ref{table2} presents the results of running the aforementioned regressions but only on the class of matches between homogeneous teams.\footnote{National teams are considered homogeneous if they are in the same Tier ([24]), and clubs are considered homogeneous if they belong to the same country.} The dependent variables of the regressions are shown in the first row; the first four columns report robust errors while the next four report errors clustered by tournament. \\
The transition from $+4$ to $NB$ does not make a significant difference under robust errors but it does under errors clustered by tournament, adding a little more than half a try (not for the losing team, for which it does not make any difference). The effect of changing from $+4$ to $3+$ is stronger, adding more than $2$ total tries and more than $1$ for the winning team. \\
On the other hand, any of the scoring systems induces almost $2$ more total tries in club tournaments than in matches between national teams. Finally, neither $year$ nor the constant is significant.
\begin{sidewaystable}[hbt!]
\centering
\caption{Homogeneous case}
\label{table2}
\begin{tabular}{lccccccccc} \hline
& (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) \\
VARIABLES & TriesTotal & TriesWin & TriesLoss & Diff & TriesTotal & TriesWin & TriesLoss & Diff & TriesTotal \\ \hline
& & & & & & & & & \\
2.code & 0.799 & 0.731* & 0.067 & 0.664* & 0.799*** & 0.731*** & 0.067 & 0.664*** & 0.799 \\
& (0.590) & (0.438) & (0.224) & (0.369) & (0.203) & (0.159) & (0.125) & (0.201) & (0.590) \\
3.code & 2.159*** & 1.364*** & 0.873*** & 0.491* & 2.159*** & 1.364*** & 0.873*** & 0.491** & 2.159*** \\
& (0.466) & (0.319) & (0.211) & (0.289) & (0.538) & (0.304) & (0.213) & (0.180) & (0.466) \\
Club & 1.883*** & 1.171*** & 0.635*** & 0.536* & 1.883*** & 1.171*** & 0.635** & 0.536** & 1.883*** \\
& (0.523) & (0.367) & (0.213) & (0.306) & (0.542) & (0.308) & (0.223) & (0.215) & (0.523) \\
year & -0.004 & -0.013 & 0.009 & -0.023 & -0.004 & -0.013 & 0.009 & -0.023 & -0.004 \\
& (0.031) & (0.029) & (0.011) & (0.031) & (0.031) & (0.023) & (0.009) & (0.016) & (0.031) \\
Constant & 10.969 & 29.152 & -17.653 & 46.805 & 10.969 & 29.152 & -17.653 & 46.805 & 10.969 \\
& (63.039) & (58.695) & (21.625) & (62.070) & (61.675) & (45.887) & (17.956) & (32.588) & (63.039) \\
& & & & & & & & & \\
Observations & 245 & 245 & 245 & 245 & 245 & 245 & 245 & 245 & 245 \\
R-squared & 0.203 & 0.138 & 0.213 & 0.030 & 0.203 & 0.138 & 0.213 & 0.030 & 0.203 \\ \hline
\multicolumn{10}{c}{ Robust standard errors in parentheses} \\
\multicolumn{10}{c}{ *** p$<$0.01, ** p$<$0.05, * p$<$0.1} \\
\end{tabular}
\end{sidewaystable}
\subsection{The Non-Homogeneous Case}
This analysis, represented in Table~\ref{table3}, is performed on the same variables and with the same interpretation of errors as the previous case, but including all the matches. \\
We do not find differences between $NB$ and $+4$. $3+$, instead, makes a difference, although with a lower impact than in the homogeneous case. Another relevant difference is that in this case the effect of $Club$ gets reversed. That is, winning teams score fewer tries while losing ones score more, reducing the winner--loser difference by almost $2$ tries relative to matches between national teams. \\
Another interesting feature is that $year$ becomes significant. That is, there exists a trend towards increasing differences over time. \\
\begin{sidewaystable}[hbt!]
\centering
\caption{Non-Homogeneous case}
\label{table3}
\begin{tabular}{lccccccccc} \hline
& (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) \\
VARIABLES & TriesTotal & TriesWin & TriesLoss & Diff & TriesTotal & TriesWin & TriesLoss & Diff & TriesTotal \\ \hline
& & & & & & & & & \\
2.code & -0.440 & -0.463 & 0.023 & -0.486 & -0.440 & -0.463 & 0.023 & -0.486 & 0.799 \\
& (0.526) & (0.487) & (0.147) & (0.490) & (0.712) & (0.683) & (0.080) & (0.663) & (0.590) \\
3.code & 1.907*** & 1.122*** & 0.785*** & 0.336* & 1.907*** & 1.122*** & 0.785*** & 0.336*** & 2.159*** \\
& (0.297) & (0.205) & (0.146) & (0.197) & (0.229) & (0.118) & (0.122) & (0.071) & (0.466) \\
Club & -0.559 & -1.200*** & 0.641*** & -1.841*** & -0.559* & -1.200*** & 0.641*** & -1.841*** & 1.883*** \\
& (0.438) & (0.395) & (0.138) & (0.398) & (0.253) & (0.193) & (0.131) & (0.211) & (0.523) \\
year & 0.003*** & 0.002*** & 0.001*** & 0.002*** & 0.003*** & 0.002*** & 0.001*** & 0.002*** & -0.004 \\
& (0.000) & (0.000) & (0.000) & (0.000) & (0.000) & (0.000) & (0.000) & (0.000) & (0.031) \\
Constant & & & & & & & & & 10.969 \\
& & & & & & & & & (63.039) \\
& & & & & & & & & \\
Observations & 473 & 473 & 473 & 473 & 473 & 473 & 473 & 473 & 245 \\
R-squared & 0.814 & 0.766 & 0.719 & 0.557 & 0.814 & 0.766 & 0.719 & 0.557 & 0.203 \\ \hline
\multicolumn{10}{c}{ Robust standard errors in parentheses} \\
\multicolumn{10}{c}{ *** p$<$0.01, ** p$<$0.05, * p$<$0.1} \\
\end{tabular}
\end{sidewaystable}
\subsection{Final Remarks}
All the results obtained, both in the general case and distinguishing between homogeneous and heterogeneous teams, indicate that the results of our theoretical models seem to hold in the real world. \\
\section{Conclusions}
The results of analyzing rugby games in theoretical and empirical terms are consistent. The $3+$ system induces teams to exert more effort both in the static model and in the empirical analysis. Moreover, in the particular instance of the dynamic model analyzed, with $\epsilon=0$, the result is the same. In all the models, we find that the $3+$ system ranks first, $+4$ second and $NB$ third.\\
While choosing different values of $\epsilon$ in the dynamic model may change the results somewhat, it seems that a sports planner should adopt the $3+$ bonus point system if the goal is to make the game more entertaining.\\
Some possible extensions seem appropriate topics for future research. If we consider $\epsilon$ as a measure of the ``distance'' between teams playing in a league, the choice of the appropriate bonus point system may depend on the teams and the moment of the tournament at which they play. Incentives at the beginning of a tournament are not the same as at the end. The idea of conditioning the design of a tournament on these factors in an optimal way could be of great interest.
Hundreds of millions of people consume sporting content each week, motivated by several factors. These motivations include the fact that the spectator enjoys both the quality of sport on display and the feeling of eustress arising from the possibility of an upset \cite{mumford2013watching, wann1995preliminary}. This suggests that there are two important elements present in sporting competition: a high level of skill among players that provides aesthetic satisfaction for the spectator and also an inherent randomness within the contests due to factors such as weather, injuries, and in particular luck. The desire for consumers to get further value from their spectating of sporting content has resulted in the emergence of \emph{fantasy sports} \cite{lee2013understanding, dwyer2011love, karg2011fantasy, farquhar2007types}, in which the consumers, or \emph{managers} as we shall refer to them throughout this article, begin the season with a virtual budget from which to build a team of \emph{players} who, as a result of partaking in the real physical games, receive points based upon their statistical performances. The relationship between the fantasy game and its physical counterpart raises the question of whether those who take part in the former suffer (or gain) from the same combination of skill and luck that makes their physical counterpart enjoyable.
The emergence of large scale quantities of detailed data describing the dynamics of sporting games has opened up new opportunities for quantitative analysis, both from a team perspective \cite{park2005network,yamamoto2011common,grund2012network,gabel2012random, ribeiro2016advantage, gudmundsson2017spatio, gonccalves2017exploring, buldu2019defining} and also at an individual level~\cite{onody2004complex,saavedra2010mutually, duch2010quantifying, radicchi2011best, mukherjee2012identifying, cintia2013engine, brooks2016developing}. This has resulted in analyses aiming to determine two elements within the individual sports; firstly quantifying the level of skill in comparison to luck in these games \cite{yucesoy2016untangling, ben2013randomness, grund2012network, aoki2017luck, pappalardo2018quantifying} while, secondly, identifying characteristics that suggest a difference in skill levels among the competing athletes \cite{duch2010quantifying,radicchi2012universality}. Such detailed quantitative analysis is not, however, present in the realm of fantasy sports, despite their burgeoning popularity with an estimated 45.9 million players in the United States alone in 2019 \cite{fsgaDemographics}. One notable exception is a recent study~\cite{getty2018luck}, which derived an analytical quantity to determine the role chance plays in many contests including fantasy sports based on American sports, and suggested that skill was a more important factor than luck in the games.
\begin{figure*}[t]
\centering
\includegraphics[width = \textwidth]{final_history_plot}
\caption{\textbf{Relationship between the performance of managers over
seasons of FPL}. (a) The relationship between managers' ranks in the 2018/19 and 2017/18 seasons. Each bin is of width 5,000 with the colour highlighting the number of managers in each bin; note the logarithmic scale in colour. (b) The pairwise Pearson correlation between a manager's points totals over multiple seasons of the game, calculated over all managers who appeared in both seasons.}
\label{fig:history_plots}
\end{figure*}
Motivated by this body of work, we consider a dataset describing the \textit{Fantasy Premier League (FPL)} \cite{fplsite}, which is the online fantasy game based upon the top division of England's football league. This game consists of over seven million \textit{managers}, each of whom builds a virtual team based upon real-life players. Before proceeding, we here introduce a brief summary of the rules underlying the game, to the level required to comprehend the following analysis~\cite{fplRules}. The (physical) Premier League consists of 20 teams, each of whom play each other twice, resulting in a season of 380 fixtures split into 38 unique \textit{gameweeks}, with each gameweek generally containing ten fixtures. A manager in FPL has a virtual budget of \pounds100m at the initiation of the season from which they must build a squad of 15 players from the approximately 600 available. Each player's price is set initially by the game's developers based upon their perceived value to the manager within the game, rather than their real-life transfer value. The squad of 15 players is composed under a highly constrained set of restrictions which are detailed in~\ref{app:rules}.
In each gameweek the manager must choose 11 players from their squad as their team for that week and is awarded a points total from the sum of the performances of these players (see~\ref{tab:points}). The manager also designates a single player of the 11 to be the captain, with the manager receiving double this player's points total in that week. Between consecutive gameweeks the manager may also make one unpenalised change to their team, with additional changes coming as a deduction in their points total. The price of a given player then fluctuates as a result of the supply-and-demand dynamic arising from the transfers across all managers' rosters. The intricate rules present multiple decisions to the manager and also encourage longer-term strategising that factors in team value, player potential, and many other elements.
In Section \ref{subsec:hist_corr} we analyse the historical performance of managers in terms of where they have ranked within the competition alongside their points totals in multiple seasons, in some cases over a time interval of up to thirteen years. We find a consistent level of correlation between managers' performances over seasons, suggesting a persistent level of skill over an extended temporal scale.
Taking this as our starting point, in Section \ref{subsec:specific_season} we aim to understand the decisions taken by managers which are indicative of this skill level over the shorter temporal period of the 38 gameweeks making up the 2018/19 season by analysing the entire dataset of actions taken by the majority of the top one million managers\footnote{Due to data availability issues at the time of collection such as managers not taking part in the entire season, the final number of managers identified was actually 901,912. We will however, for the sake of brevity, refer to these as the top 1 million managers over the course of this article. It is also important to note that data from previous seasons is unattainable, which is why we restrict this detailed study to the 2018/19 season.} over the course of the season. Even at this shorter scale we find consistent tiers of managers who, on a persistent basis, outperform those at a lower tier.
With the aim of identifying why these differences occur, we present (Section \ref{subsec:decisions}) evidence of consistently good decision making with regard to team selection and strategy. This would be consistent with some common form of information providing these skilled managers with an `edge' of sorts, for example in the US it has been suggested that 30\% of fantasy sports participants take advantage of further websites when building their teams~\cite{burke2016exploring}. Arguably most interesting of all, in Section~\ref{subsec:Template} we demonstrate how at points throughout the season there occurs temporary herding behaviour in the sense that managers appear to converge to consensus on a \textit{template team}. However,
the consensus does not persist in time, with managers subsequently differentiating themselves from the others. We consider possible reasons and mechanisms for the emergence of these template teams.
\section{Results}
\begin{figure*}[t]
\centering
\includegraphics[width = \textwidth]{final_points_plot_v5}
\caption{\textbf{Summary of points obtained by managers over the course
of the 2018/19 season.} (a) The mean number of points over all managers for each GW. The shaded regions denote the 95\% percentiles of the points' distribution. (b) The difference between the average number of points for four disjoint tiers
of manager, the top $10^3, 10^4, 10^5,\text{ and } 10^6$, and the overall average points as per panel (a). Note that managers are considered to be in only one tier so, for example, the top-$10^4$ tier contains managers ranked from 1001 to $10^4$.}
\label{fig:ave_points_class}
\end{figure*}
\subsection{Historical Performance of Players}\label{subsec:hist_corr}
We consider two measures of a manager's performance in a given season of FPL: the total number of points their team has obtained over the season and also their resulting rank based on this points total in comparison to all other managers. A strong relationship between the managers' performances over multiple seasons of the game is observed. For example, in panel (a) of Fig.~\ref{fig:history_plots} we compare the ranks of managers who competed in both the 2018-19 and 2017-18 seasons. The density near the diagonal of this plot suggests a correlation between performances in consecutive seasons. Furthermore, we highlight specifically the bottom left corner which indicates that those managers who are among the most highly ranked appear to perform well in both seasons. Importantly, if we consider the top left corner of this plot it can be readily seen that the highest performing managers in the 2017-18 season, in a considerable number of cases, did not finish within the lowest positions in the following season as demonstrated by the speckled bins with no observations.
This is further corroborated in panel (b), in which we show the pairwise Pearson correlation between the total points obtained by managers from seasons over a period of 12 years. While the number of managers who partook in two seasons tends to decrease with time, a considerable number are present in each comparison. Between the two seasons shown in Fig.~\ref{fig:history_plots}(a), for example, we observe results for approximately three million managers and find a correlation of 0.42 among their points totals. Full results from 13 consecutive seasons, including the number of managers present in each pair and the corresponding Pearson correlation coefficients, are given in \ref{table:correlations}.
Using a linear regression fit to the total points scored in the 2018/19 season as a function of the number of previous seasons in which the manager has played (\ref{fig:points_history}) we find that each additional year of experience is worth on average 22.1 ($\text{R}^2 = 0.082$) additional points (the overall winner in this season obtained 2659 points). This analysis suggests that while there are fluctuations present in a manager's performance during each season of the game, there is also some consistency in terms of performance levels, suggesting a combination of luck and skill being present in fantasy sports just as was observed in their physical analogue in \cite{getty2018luck}.
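As an illustration of these two calculations, the following Python sketch (with hypothetical file and column names) computes the season-to-season Pearson correlation of points totals and the regression of 2018/19 points on the number of previous seasons played.
\begin{verbatim}
# Sketch (hypothetical file and column names): season-to-season correlation
# and the regression of 2018/19 points on the number of previous seasons.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

hist = pd.read_csv("manager_history.csv")  # manager_id, season, total_points

s1 = hist[hist.season == "2017/18"].set_index("manager_id").total_points
s2 = hist[hist.season == "2018/19"].set_index("manager_id").total_points
both = pd.concat([s1, s2], axis=1, join="inner", keys=["p1718", "p1819"])
r, _ = stats.pearsonr(both.p1718, both.p1819)
print(f"Pearson correlation over {len(both)} managers: {r:.2f}")

pts = hist[hist.season == "2018/19"].set_index("manager_id")[["total_points"]]
prev = (hist[hist.season != "2018/19"]
        .groupby("manager_id").size().rename("n_previous"))
reg = smf.ols("total_points ~ n_previous",
              data=pts.join(prev).fillna({"n_previous": 0})).fit()
print(reg.params)  # slope: extra points per additional season of experience
\end{verbatim}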
\subsection{Focus on Season 2018-19}\label{subsec:specific_season}
In Sec.~\ref{subsec:hist_corr} we considered, over multiple seasons, the performance of managers at a season level in terms of their cumulative performance over the 38 gameweeks of each season. We now focus at a finer time resolution, to consider the actions of managers at the gameweek level for the single season 2018/19, in order to identify elements of their decision making which determined their overall performance in the game.
The average points earned by all managers throughout the season is shown in Fig.~\ref{fig:ave_points_class}(a) along with the 95 inter-percentile range, i.e., the values between which the managers ranked in quantiles 0.025 to 0.975 appear.
This quantity exhibits more frequent fluctuations about its long-term average (57.05 points per gameweek) in the later stages of the season, suggesting that some elements of this stage of the season cause different behaviour in these gameweeks. There may of course be many reasons for this e.g., difficult fixtures or injuries for generally high-scoring players or even simply a low/high scoring gameweek, which are themselves factors of luck within the sport itself (see \ref{tab:manager_points} for a detailed break down of points per gameweek). However, in Section \ref{subsec:chips} we analyse an important driver of the fluctuations related to strategic decisions of managers in these gameweeks.
In each season some fixtures must be rescheduled due to a number of reasons, e.g., clashing fixtures in European competitions, which results in certain gameweeks that lack some of the complete set of ten fixtures. Such scenarios are known as \textit{blank-gameweeks} (BGW) and their fixtures are rescheduled to another gameweek in which some teams play twice;
these are known as \textit{double-gameweeks} (DGWs). In the case of the 2018/19 season these BGWs took place in GWs 27 (where there were eight fixtures), 31 (five fixtures), and 33 (six fixtures), making it difficult for some managers to have 11 starting players in their team. The DGWs feature some clubs with two games and therefore players in a manager's team who feature in these weeks will have twice the opportunity for points; in the 2018/19 season these took place in GWs 25 (where 11 games were played), 32 (15), 34 (11), and 35 (14). We see that the main swings in the average number of points are actually occurring in these gameweeks (aside from the last peak in GW 36 which we will comment on later in the article). In Section \ref{subsec:chips} we show that the managers' attitude and preparation towards these gameweeks are in fact indicators of their skill and ability as a fantasy manager.
\begin{figure}
\centering
\includegraphics[width = 0.48\textwidth]{decision_plots_v1}
\caption{\textbf{Decisions of managers by tier.}
(a)~Distributions of the total net points earned by managers in the gameweek following a transfer, i.e., the points scored by the player brought in minus that of the player transferred out. The average net points for each tier is also shown below; note the difference between the top three tiers and the bottom tier. (b)~Distribution of the fraction of better transfers a manager could have made based upon points scored in the following gameweek. Faster-decreasing distributions reflect managers in that tier being more successful with their transfers. (c) The distribution of points from captaincy along with the average total for each tier.}
\label{fig:decision_plots}
\end{figure}
To analyse the impact of decision-making upon final ranks, we define \emph{tiers} of managers by rank-ordering them by their final scores and then splitting into the top $10^3$, top $10^4$, top $10^5$, and top $10^6$ positions. These disjoint
tiers of managers, i.e., the top $10^3$ is the managers with ranks between 1 and 1000, the top $10^4$ those with ranks between 1001 and 10,000 and so on, range from the most successful (top $10^3$) to the relatively unsuccessful (top~$10^6$) and so provide a basis for comparison (see \ref{tab:manager_points} and \ref{table:points_summary} for summaries of points obtained by each tier).
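The tier assignment itself is a simple binning of managers by final rank; a minimal Python sketch (with assumed file and column names) is given below.
\begin{verbatim}
# Sketch: assigning managers to the disjoint tiers by final rank
# (file and column names are assumptions).
import pandas as pd

standings = pd.read_csv("final_standings.csv")   # manager_id, total_points
standings["rank"] = standings["total_points"].rank(ascending=False,
                                                   method="first")
standings["tier"] = pd.cut(standings["rank"],
                           bins=[0, 1e3, 1e4, 1e5, 1e6],
                           labels=["top 1k", "top 10k", "top 100k", "top 1M"])
print(standings["tier"].value_counts())
\end{verbatim}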
The average performance of the managers in each tier (relative to the baseline average over the entire dataset) is shown in panel (b) of Fig.~\ref{fig:ave_points_class}. Note that the points for the top~$10^6$ tier are generally close to zero as the calculation of the baseline value is heavily dependent upon this large bulk of managers. A detailed summary of each tier's points total, along with a visualisation of the distribution of points totals, may be found in \ref{tab:points} and \ref{fig:points_summary}. It appears that the top tier managers outperform those in other tiers, not only in specific weeks but consistently throughout the season, which results in places in this top tier becoming more difficult to obtain as the season progresses (\ref{fig:alluvial_plots}). This is particularly noticeable in the first gameweek, where the top $10^3$ managers tended to perform very strongly, suggesting a high level of preparation (in terms of squad-building)
prior to the physical league starting. We also comment that the largest gaps between the best tier and the worst tier occur not only in two of the special gameweeks (DGW 35 and BGW 33) but also in GW 1, which suggests that prior to the start of the season these managers have built a better-prepared team to take advantage of the underlying fixtures. We note however that all tiers show remarkably similar temporal variations in their points totals, in the sense that they all experience simultaneous peaks and troughs during the season. See \ref{tab:manager_points} for a full breakdown of these values alongside their variation for each gameweek.
Having identified both differences and similarities underlying the performance in terms of total points for different tiers of managers we now turn to analysis of the actions that have resulted in these dynamics.
\subsection{Decision-Making}\label{subsec:decisions}
\subsubsection{Transfers}
The performance of a manager over the season may be viewed as the consequence of a sequence of decisions that the manager made at multiple points in time. These decisions include which players in their squad should feature in the starting team, the formation in which they should set up their team, and many more. In the following sections we consider multiple scenarios faced by managers and show that those who finished within a higher tier tended to consistently outperform those in lower tiers.
One decision the manager must make each gameweek is whether to change a player in their team by using a transfer. If the manager wants to make more than one transfer they may also do so but at the cost of a points deduction for each extra transfer. The distribution of total points made from transfers, which we determine by the difference between points attained by the player the manager brought in for the following gameweek compared to the player whom they transferred out, over the entire season for each tier is shown in Fig.~\ref{fig:decision_plots}(a). The average number for each tier is also shown. To further analyse this scenario we calculate, for each gameweek, the number of better transfers the managers could have made with the benefit of perfect foresight, given the player they transferred out. This involves taking all players with a price less than or equal to that of the player transferred out and calculating the fraction of options which were better than the one selected, i.e., those who received more points the following gameweek (see Methods). Figure~\ref{fig:decision_plots}(b) shows the complementary cumulative distribution function (CCDF) of this quantity for each tier, note the steeper decrease of the CCDFs for the higher tiers implies that these managers were more likely to choose a strong candidate when replacing a player.
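A minimal Python sketch of this `fraction of better transfers' measure is given below; the data structure holding player prices and gameweek points is an assumption made for illustration.
\begin{verbatim}
# Sketch of the "fraction of better transfers" measure; the dictionary
# holding player prices and gameweek points is an illustrative assumption.

def fraction_better(players, out_id, in_id, next_gw):
    """players: dict player_id -> {'price': float, 'points': {gw: pts}}"""
    budget = players[out_id]["price"]
    candidates = [p for pid, p in players.items()
                  if pid not in (out_id, in_id) and p["price"] <= budget]
    if not candidates:
        return 0.0
    chosen = players[in_id]["points"].get(next_gw, 0)
    better = sum(p["points"].get(next_gw, 0) > chosen for p in candidates)
    return better / len(candidates)
\end{verbatim}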
\begin{figure}
\centering
\includegraphics[width = 0.48\textwidth]{money_plots_v4}
\caption{\textbf{Analysis of the team value of managers.} (a) The change in average team value from the initial \pounds100M of all managers, along with 95 percentiles; note the general upward trend of team value over the course of the season. (b)~Distributions of team values for each gameweek for those who finished in the top ten thousand positions (i.e., the combination of those in the top $10^3$ and $10^4$ tiers) versus lower-ranked managers. The distribution for those with higher rank is generally to the right of that describing the other managers from an early stage of the season, indicating higher team value being a priority for successful managers. (c) The relationship between a manager's team value at GW 19 versus their final points total, where the heat map indicates the number of managers within a given bin. The black line indicates the fitted linear regression line, showing that an increase in team value by \pounds1M at this point in the season results in an average final points increase of 21.8 points.}
\label{fig:money_plots}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width = \textwidth]{bb_chip}
\caption{\textbf{Summary of use and point returns of the bench boost chip.} The managers are grouped into two groups: those who finished in the top ten-thousand positions (Top 10k) and the remainder (Top Million). (a) Fraction of managers who had used the bench boost chip by each gameweek. We see a clear strategy for use in double gameweek 35, particularly for the top managers, 79.4\% of whom used it at this stage. (b) Distribution of points earned from using this chip along with the average points---23.2 for the Top 10k and 13.8 for the Top Million---shown by the dashed lines.}
\label{fig:bb_usage}
\end{figure*}
A second decision faced by managers in each gameweek is the choice of player to nominate as captain,
which results in the manager receiving double points for this player's actions during the GW. This is, of course, a difficult question to answer as the points received by a player can be a function of both their own actions, i.e., scoring or assisting a goal, and also their team's collective performance (such as a defender's team not conceding a goal).
This is an identification question which may be well suited to further research, making use of the data describing the players and teams together with additional data about the active managers who face the same decision. For example, an analysis of the captaincy choice of managers based upon their social media activity was recently presented in \cite{bhatt2019should} and showed that the \textit{wisdom of crowds} concept performs comparably to that of the game's top managers. Panel (c) of Fig.~\ref{fig:decision_plots} shows the distribution of points obtained by managers in each tier from their captaincy picks. Again we observe that the distribution of points obtained over the season is generally shifted towards larger values for those managers in higher tiers.
\subsubsection{Financial Cognizance}
The financial ecosystem underlying online games
has been a focus of recent research
\cite{papagiannidis2008making, yamaguchi2004analysis}. With this in mind, we consider the importance of managers' financial awareness in impacting their performance. As mentioned previously, each manager is initially given a budget of \pounds100 million to build their team, constrained by the prices of the players which, themselves fluctuate over time. While the dynamics of player price changes occur via an undisclosed mechanism, attempts to understand this process within the community of Fantasy Premier League managers have resulted in numerous tools to help managers predict player price changes during the season, for example see \cite{fplstats}. The resulting algorithms are in general agreement that the driving force behind the changes is the supply and demand levels for players.
These price fluctuations offer an opportunity for the astute manager to `play the market' and achieve a possible edge over their rivals and allow their budget to be more efficiently spent (see \ref{fig:player_points} for a description of player value and their corresponding points totals and \ref{fig:ternary} for an indication of how the managers distribute their budget by player position). At a macro level this phenomenon of price changes is governed by the aforementioned supply and demand, but these forces are themselves governed by a number of factors affecting the player including, but not limited to, injuries, form, and future fixture difficulty. As such, managers who are well-informed on such aspects may profit from trading via what is in essence a fundamental analysis of players' values by having them in their team prior to the price rises \cite{dechow2001short}. Interestingly, we note that the general trend of team value is increasing over time among our managers, as shown in panel (a) of Fig.~\ref{fig:money_plots} along with corresponding 95 percentiles of the distribution, although there is an indicative decrease between gameweeks towards the season's end (GWs 31-35), suggesting that team value becomes less important to the managers towards the game's conclusion. Equivalent plots for each tier are shown in \ref{fig:tv_class}.
Probing further into the relationship between finance and the managers' rank, we show in Fig.~\ref{fig:money_plots}(b) the distribution of team values for the top two tiers (top $10^3$ and top $10^4$), compared with that for the bottom two tiers (top $10^5$ and top $10^6$).
There is a clear divergence between the two groups from an early point in the season, indicating an immediate importance being placed upon the value of their team. A manager who has a rising team value is at an advantage relative to one who does not due to their increased purchasing power in the future transfer market. This can be seen in panel (c) of Fig.~\ref{fig:money_plots} which shows the change in team value for managers at gameweek 19, the halfway point of the season, versus their final points total. A positive relationship appears to exist and this is validated by fitting an OLS linear regression with a slope of 21.8 ($R^2 = 0.1689$), i.e., an increase of team value by \pounds1M at the halfway point is worth, on average, an additional 21.8 points by the end of the game (for the same analysis in other gameweeks see \ref{tab:regression_coefficients}). The rather small $R^2$ value suggests, however, that the variation in a manager's final performance is not entirely explained by their team value and as such we proceed to analyse further factors which can play a part in their final ranking.
\subsubsection{Chip Usage}\label{subsec:chips}
A further nuance to the rules of FPL is the presence of four \textit{game-chips}, which are single use `tricks' that may be used by a manager in any GW to increase their team's performance, by providing additional opportunities to obtain points. The time at which these chips are played and the corresponding points obtained are one observable element of a managers' strategy.
A detailed description for each of the chips and analysis of the approach taken by the managers in using them is given in \ref{sm:chips}.
For the sake of brevity we focus here only on one specific chip, the \textit{bench boost}. When this chip is played, the manager receives points from all fifteen players in their squad in that GW, rather than only the starting eleven as is customary. This clearly offers the potential for a large upswing in points if this chip is played in an efficient manner, and as such it should ideally be used in GWs where the manager may otherwise struggle to earn points with their current team or weeks in which many of their players have a good opportunity of returning large point scores. The double and blank GWs might naively appear to be optimal times to deploy this chip; however, when the managers' actions are analysed we see differing approaches (and corresponding returns).
Figure~\ref{fig:bb_usage} shows the proportion of managers who had used the bench boost chip by each GW alongside the corresponding distribution of points the manager received from this choice, where we have grouped the two higher tiers into one group and the remaining managers in another for visualization purposes (see \ref{fig:chip_usage} \& \ref{fig:wc_use} and \ref{tab:BB}-\ref{tab:WC} for a breakdown of use and point returns by each tier). It is clear that the majority of better performing managers generally focused on using these chips during the double and blank GWs with 79.4\% choosing to play their BB chip during DGW35 in comparison to only 28.9\% of those in the rest of the dataset. We also observe the difference in point returns as a result of playing the chip, with the distribution for the top managers being centred around considerable higher values, demonstrating that their squads were better prepared to take advantage of this chip. The fact that the managers were willing to wait until one of the final gameweeks is also indicative of the long-term planning that separates them from those lower ranked. Similar results can be observed for the other game-chips (\ref{tab:FH}-\ref{tab:WC}). We also highlight that a large proportion of managers made use of other chips in GW36, which was the later gameweek in which there was a large fluctuation from the average shown in Fig.~\ref{fig:ave_points_class}.
\begin{figure*}[]
\centering
\includegraphics[width=0.9\textwidth]{schematic_clustering}
\caption{\textbf{Schematic representation of the approaches taken to identify similarity between the composition of managers' teams in each GW. }
We view the connections between managers and players as a bipartite network such that an edge exists if the player is in the managers' team. To determine the relationship between players' levels of popularity we use the co-occurrence matrix which has entries corresponding to the number of teams in which two players co-appear. Using this matrix we perform hierarchical clustering techniques to identify groups of players who are similarly popular within the game, where the number of clusters is determined by analysing the within-cluster sum of squared errors. The similarity between the teams of two managers is determined by calculating the Jaccard similarity, which is determined by the number of players that appear in both teams.}
\label{fig:clustering_scheme}
\end{figure*}
Finally, we comment on the fact that some managers did not employ their chips by the game's conclusion, which suggests that either they were not aware of them or, more likely, the managers in question had simply lost interest in the game at this point. As such, the quantity of managers who had not used their chip gives us a naive estimation of the retention rate for active managers in Fantasy Premier League ($85.05\%$ of managers in our dataset). We note that this is a biased estimate in the sense that our dataset is only considering the top tiers of managers, or at least those who finished in the top tiers, and one would expect the drop-out rate to be in fact much higher in lower bands.
\subsection{Template Team}\label{subsec:Template}
While the preceding analysis proposes reasons for the differences between points obtained by tiers shown in Fig.~\ref{fig:ave_points_class}, the question remains as to why the managers'
gameweek points totals show similar temporal dynamics.
In order to understand this we consider here the underlying structure of the managers' teams. We show that a majority of teams feature a core group of players, resulting in a large proportion of teams having a similar make-up. We call this phenomenon the \textit{template team}, which appears to emerge at different points in the season; this type of collective behaviour has been observed in such social settings previously, see, for example \cite{ross2014social, aleta2019dynamics}. We identify the template team by using the network structure describing the teams of all managers, which is described by the adjacency matrix $A_{ij}^G$: if players $i$ and $j$ appear together in $n$ teams in a given gameweek $G$, the corresponding entry is $A_{ij}^G = n$. This matrix is similar in nature to the co-citation matrix used within the field of bibliometrics \cite{newman2018networks}, see Fig.~\ref{fig:clustering_scheme} for a representation of the process.
With these structures in place we proceed to perform hierarchical clustering on the matrices in order to identify groups of players constituting the common building blocks of the managers' teams. By performing the algorithm with $k = 4$ clusters we find that three clusters contain only a small number of the 624 players, suggesting that most teams include this small group of core players (see \ref{table:first_cluster} for the identities of those in the first cluster each gameweek). Figure~\ref{fig:team_similarity}(a) shows the size of these first three clusters over all managers for each gameweek of the season (\ref{fig:clusters_all} shows the equivalent values for each tier). To understand this result further, consider that at their largest these three clusters only consist of 5.13\% (32/624) of the available players in the game, highlighting that the teams are congregated around a small group of players. For an example representation of this matrix alongside its constituent clusters we show the structure in panel (b) of Fig.~\ref{fig:team_similarity} for gameweek 38, which was the point in time at which the three clusters were largest.
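The construction sketched in Fig.~\ref{fig:clustering_scheme} can be illustrated with the following Python snippet, in which the manager--player incidence matrix is randomly generated for demonstration purposes and the clustering choices (Ward linkage on the rows of the co-occurrence matrix) are one possible implementation rather than a description of our exact pipeline.
\begin{verbatim}
# Sketch of the pipeline in Fig. 6: co-occurrence matrix from the
# manager-player incidence matrix, then hierarchical clustering with k = 4.
# The incidence matrix is randomly generated here purely for demonstration.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
B = (rng.random((1000, 60)) < 0.2).astype(int)  # B[m, p] = 1 if player p is
                                                # in manager m's team
A = B.T @ B                  # A[i, j] = number of teams containing i and j
ownership = np.diag(A) / B.shape[0]             # diagonal: ownership fraction

Z = linkage(A, method="ward")                   # cluster co-occurrence profiles
clusters = fcluster(Z, t=4, criterion="maxclust")
print([int(np.sum(clusters == k)) for k in range(1, 5)])   # cluster sizes
\end{verbatim}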
To further examine the closeness between managers' decisions we consider the Jaccard similarity between sets of teams, which is a distance measure that considers both the overlap and also total size of the sets for comparison (see Methods for details). Figure~\ref{fig:team_similarity}(c) shows the average of this measure over pairwise combinations of managers from all tiers and also between pairs of managers who are in the same tier.
Fluctuations in the level of similarity over the course of the season can be seen among all tiers, indicating times at which teams become closer to a template, followed by periods in which managers appear to differentiate themselves more from their peers. Also note that the level of similarity between tiers increases with rank, suggesting that as we start to consider higher-performing managers, their teams are more like one another not only at certain parts of the season but, on average, over its entirety (see \ref{fig:jaccard_all} for corresponding plots for each tier individually). The high level of similarity between the better managers' teams in the first gameweek (and the corresponding large points totals seen in \ref{fig:points_summary}) is particularly interesting given that this is before they have observed a physical game being played in the actual season. This suggests a similar approach by the more skilled managers in identifying players based purely upon their historical performance and corresponding value.
\begin{figure}
\centering
\includegraphics[width = 0.48\textwidth]{team_similarity_plots_v2}
\caption{\textbf{Analysis of team similarity of managers.} (a) Size of each of the first three identified clusters over all managers for each gameweek. Note that the first cluster is generally of size one, simply containing the most-owned player in the game. (b)~An example of the network structure of these three clusters for gameweek 38, where we can see the ownership level decreasing in the larger clusters. The diagonal elements of this structure are the fraction of teams in which the player is present. (c)~The Jaccard similarity between the tiers of managers and also over all managers; note that the higher-performing managers tend to be more like one another than those in lower tiers; note also the fluctuations in similarity over the course of the season, indicating that a template team emerges at different time points.
}
\label{fig:team_similarity}
\end{figure}
\section{Discussion}
The increasing popularity of fantasy sports in recent years \cite{fsgaDemographics} enables the quantitative analysis of managers' decision-making through the study of their digital traces.
The analysis we present in this article considers the game of Fantasy Premier League, which is played by approximately seven million managers. We observe a consistent level of skill among managers in the sense that there exists a considerable correlation between their performance over multiple seasons of the game, in some cases over thirteen years. This result is particularly striking given the stochastic nature of the underlying game upon which it is based.
Encouraged by these findings, we proceeded to conduct a deeper analysis
of the actions taken by a large proportion of the top one million managers from the 2018-19 season of the game. This allowed each decision made by these managers to be analysed using a variety of statistical and graphical tools.
We divided the managers into tiers based upon their final position in the game and observed that the managers in the upper echelons consistently outperformed those in lower ones, suggesting that their skill levels are present throughout the season and that their corresponding rank is not dependent on just a small number of events. The skill-based decisions were apparent in all facets of the game, including making good use of transfers, strong financial awareness, and taking advantage of short- and long-term strategic opportunities, such as their choice of captaincy and use of the chips mechanic, see Section~\ref{subsec:chips}.
Arguably the most remarkable observation presented in this article is, however, the emergence of what we coin a \textit{template team} that suggests a form of common collective behaviour occurring between managers. We show that most teams feature a common core group of constituent players at multiple time points in the season. This occurs despite the wide range of possible options for each decision,
suggesting that the managers are acting similarly, and particularly so for the top-tier managers, as evidenced by their higher similarity metrics. Such coordinated behaviour by managers suggests an occurrence of the so-called `superstar effect' within fantasy sports just as in their physical equivalent \cite{lucifora2003superstar}, whereby managers independently arrive at a common conclusion on a core group of players who are viewed as crucial to optimal play. A further dimension is added by the fact that the similarity between the teams of better managers is evident even prior to the first event of the season, i.e., they had apparently all made similar (good) decisions even `before a ball was kicked'.
In this article we have focussed on the behaviour of the managers and their decision-making that constitutes their skill levels. The availability of such detailed data offers the potential for further research from a wide range of areas within the field of computational social science. For example, analysis of the complex financial dynamics taking place within the game as a result of the changing player values and the buying/selling decisions made by the managers would be interesting. A second complementary area of research would be the development of algorithms that consider the range of possible options available to managers and give advice on optimizing point returns.
Initial analysis has recently been conducted \cite{bhatt2019should} in this area, including the optimal captaincy choice in a given gameweek, and has demonstrated promising results.
In summary, we believe the results presented here offer an insight into the behaviour of top fantasy sport managers that is indicative of both long-term planning and collective behaviour within their peer group, demonstrating the intrinsic level of skill required to remain among the top positions over several seasons, as observed in this study. We are, however, aware that the correlations between decisions and corresponding points demonstrated are not perfect, which is in some sense to be expected due to the non-deterministic nature that makes the sport upon which the game is based so interesting to the millions of individuals who enjoy it each week. These outcomes suggest a combination of skill and luck being present in fantasy sports, just as in their physical equivalents.
\section{Methods}
\subsection{Data Collection}
We obtained the data used in this study by accessing approximately 50 million unique URLs through the Fantasy Premier League API. The rankings at the end of the 2018/19 season were obtained through \url{https://fantasy.premierleague.com/api/leagues-classic/{league-id}/standings/} from which we could obtain the entry IDs of the top 1 million ranked managers. Using these IDs we then proceeded to obtain the team selections along with other manager quantities for each gameweek of this season that were used in the study through \url{https://fantasy.premierleague.com/api/entry/{entry-id}/event/{GW}/picks/}. We then filtered the data to include only managers for whom we had data for the entirety of the season, which resulted in $901,912$ unique managers. The data for individual footballers and their performances were captured via \url{https://fantasy.premierleague.com/api/bootstrap-static/}. Finally, the historical performance data was obtained for 6 million active managers through \url{https://fantasy.premierleague.com/api/entry/{entry-id}/history/}.
\subsection{Calculation of Transfer Quality}
In order to calculate the transfer quality plot shown in Fig.~\ref{fig:decision_plots}(b) we consider the gameweeks in which managers made one transfer and, based upon the value of the player whom they transferred in, determine what fraction of players with the same price or lower the manager could have instead bought for their team. Suppose that in gameweek $G$ the manager transferred out player $x_i$, who had value $q_G(x_i)$, for player $x_j$ who scored $p_G(x_j)$ points in the corresponding gameweek. The calculation involves firstly finding all players the manager could have transferred in, i.e., those with price less than or equal to $q_G(x_i)$
and then determining the fraction $y_G(x_i, x_j)$ of these players who scored more points than the chosen player $x_j$, given the player $x_i$ who was transferred out. This is calculated using
\begin{equation*}
y_G(x_i, x_j) = \frac{\sum_k \mathbbm{1}\left[q_G(x_{k}) \le q_G(x_i)\right] \cdot \mathbbm{1}\left[p_G(x_{k}) > p_G(x_j)\right]}{\sum_{\ell} \mathbbm{1}\left[q_G(x_{\ell}) \le q_G(x_i)\right]},
\end{equation*}
where $\mathbbm{1}$ represents the indicator function.
Using this quantity, we proceed to aggregate over the entire season for each tier of manager, which allows us to obtain the distribution of the measure itself and, finally, the probability of making a better transfer, which is shown in panel (b) of Fig.~\ref{fig:decision_plots}.
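For concreteness, the short Python sketch below implements $y_G(x_i, x_j)$ directly from the indicator-function definition above; the prices and points in the example are hypothetical placeholders rather than values from our dataset.
\begin{verbatim}
# Illustrative sketch of the transfer-quality measure y_G(x_i, x_j);
# the prices and points below are hypothetical placeholders.
import numpy as np

def transfer_quality(price_out, points_in, prices, points):
    """Fraction of affordable alternatives (price <= price_out)
    that scored strictly more points than the player brought in."""
    prices = np.asarray(prices, dtype=float)
    points = np.asarray(points, dtype=float)
    affordable = prices <= price_out   # players the manager could have bought
    better = points > points_in        # ... who outscored the chosen player
    return (affordable & better).sum() / affordable.sum()

# Transferred-out player cost 7.5; transferred-in player scored 4 points.
all_prices = [4.0, 5.5, 7.0, 7.5, 9.0, 11.5]
all_points = [2, 6, 1, 4, 12, 8]
print(transfer_quality(7.5, 4, all_prices, all_points))  # -> 0.25
\end{verbatim}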
\subsection{Team Similarity}\label{subsec:jaccard_calc}
With the aim of identifying levels of similarity between the teams of two managers $i$ and $j$ we make use of the Jaccard similarity which is a measure used to describe the overlap between two sets. Denoting by $T^G_i$ the set of players that appeared in the squad of manager $i$ during gameweek $G$ we consider the Jaccard similarity between the teams of managers $i$ and $j$ for gameweek $G$ given by
\begin{equation*}
J^G(i,j) = \frac{\left|T^G_i \cap T^G_j\right|}{\left|T^G_i \cup T^G_j\right|},
\label{eq:jaccard}
\end{equation*}
where $|\cdot|$ represents the cardinality of the set. We then proceed to calculate this measure for all $n$ managers, which results in an $n \times n$ symmetric matrix $J^G$, the $(i,j)$ element of which is given by the above equation; note that the diagonal elements of this matrix are unity. Calculation of this quantity over all teams is computationally expensive in the sense that one must perform pairwise comparisons of the $n$ teams for each gameweek. As such, we instead calculated an estimate of this quantity by taking random samples without replacement of 100 teams from each tier and calculating the measure both over all teams and within tiers for each gameweek. We repeat this calculation 10,000 times, and the averaged results are those used in the main text and \ref{sm:clusters}.
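A minimal Python sketch of this sampling procedure is given below; the squads are hypothetical sets of player IDs, and the sample size and number of repetitions are reduced for illustration.
\begin{verbatim}
# Sketch of the sampled pairwise Jaccard similarity; team rosters are
# hypothetical sets of player IDs rather than the real FPL data.
import random
from itertools import combinations

def jaccard(team_a, team_b):
    a, b = set(team_a), set(team_b)
    return len(a & b) / len(a | b)

def mean_pairwise_jaccard(teams, sample_size=100, repeats=10000, seed=0):
    rng = random.Random(seed)
    estimates = []
    for _ in range(repeats):
        sample = rng.sample(teams, min(sample_size, len(teams)))
        sims = [jaccard(a, b) for a, b in combinations(sample, 2)]
        estimates.append(sum(sims) / len(sims))
    return sum(estimates) / len(estimates)

# Example with three toy 15-player squads drawn from 30 player IDs.
toy_teams = [set(range(0, 15)), set(range(5, 20)), set(range(10, 25))]
print(mean_pairwise_jaccard(toy_teams, sample_size=3, repeats=10))
\end{verbatim}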
\subsection{Cluster Identification of Player Ownership}
As described in the main text, the calculation of clusters within which groups of players co-appear involves taking advantage of the underlying network structure of all sets of teams. The adjacency matrix describing this network is defined by the matrix $A_{ij}^G$ that has entry $(i,j)$ equal to the number of teams within which players $i$ and $j$ co-appear in gameweek $G$. Note that the diagonal entries of this matrix describe the number of teams in which a given player appears in gameweek $G$. Using this matrix we identify the clusters via a hierarchical clustering approach, with $k = 4$ clusters determined via analysing the within-cluster sum of squared errors of $k$-means for each cluster using the elbow method as shown in \ref{fig:WSS}.
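The following sketch illustrates the procedure on synthetic data: a co-appearance matrix $A^G$ is built from toy squads, the within-cluster sum of squares of $k$-means is inspected for an elbow, and agglomerative (hierarchical) clustering is then run with the chosen $k$; the data and library choices (NumPy, scikit-learn) are assumptions for illustration only.
\begin{verbatim}
# Build a toy co-appearance matrix A^G, pick k via the elbow of the
# k-means within-cluster sum of squares, then cluster hierarchically.
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

rng = np.random.default_rng(0)
teams = [set(rng.choice(50, size=15, replace=False)) for _ in range(200)]

A = np.zeros((50, 50))
for team in teams:            # A[i, j] = number of teams with both i and j
    for i in team:
        for j in team:
            A[i, j] += 1      # diagonal = ownership count of player i

inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(A).inertia_
            for k in range(1, 10)]   # inspect for an "elbow", e.g. k = 4
labels = AgglomerativeClustering(n_clusters=4).fit_predict(A)
print(inertias)
print(np.bincount(labels))    # sizes of the four clusters
\end{verbatim}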
\begin{acknowledgements}
Helpful discussions with Kevin Burke, James Fannon, Peter Grinrod, Stephen Kinsella, Renaud Lambiotte, and Sean McBrearty are gratefully acknowledged. This work was supported by Science Foundation Ireland grant numbers 16/IA/4470, 16/RC/3918, 12/RC/2289 P2 and 18/CRT/6049, co-funded by the European Regional Development Fund.~(J.D.O'B and J.P.G). We acknowledge the DJEI/DES/SFI/HEA Irish Centre for High-End Computing (ICHEC) for the provision of computational facilities and support. The funders had no role in study design, data collection, and analysis, decision to publish, or preparation of
the manuscript.
\end{acknowledgements}
\section{Introduction}
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{materials/ECCV2022_TeaserFigure_v2-min.jpg}
\caption{
Which one is more descriptive for the above professional sport game clip?
Conceptual comparison between NSVA (top box)
and
extant basketball (NBA) video captioning datasets \cite{Yu-cvpr2018,msrvtt} (bottom box). The sentence in blue text describes a passing action, which might not be practically valuable and is not a focus of NSVA. Instead, captions in NSVA target compact information that could enable statistics counting and game analysis.
Moreover, both alternative captioning approaches lack in important detail (e.g., player identities and locations).
}
\label{fig:teaser}
\end{figure}
Recently, there have been many attempts aimed at empowering machines to describe the content presented in a given video~\cite{krishna-iccv2017,DBLP:journals/corr/abs-1806-08854,Yu-cvpr2018,svcdv}. The particular challenge of generating a text from a given video is termed ``video captioning''~\cite{Aafaq-Computing-Surveys-2019}. Sports video captioning is one of the most intriguing video captioning sub-domains, as sports videos usually contain multiple events depicting the interactions between players and objects, e.g., ball, hoop and net. Over recent years, many efforts have addressed the challenge of sports video captioning for soccer, basketball and volleyball games~\cite{svcdv,Yu-cvpr2018,SVN}.
Despite the recent progress seen in sports video captioning, previous efforts share three major limitations. (1) They all require laborious human annotation efforts that limit the scale of data~\cite{svcdv,Yu-cvpr2018,SVN}. (2) Some previous efforts do not release data~\cite{svcdv,Yu-cvpr2018,SVN}, and thereby prevent others from accessing useful data resources. (3) The collected human annotations typically lack the diversity of natural language and related intricacies. Instead, they tend to focus on details that are not interesting to human viewers, e.g., passing or dribbling activities (see Figure~\ref{fig:teaser}), while lacking important information (e.g., identity of performing players). In this regard, a large-scale sports video dataset that is readily accessible to researchers and annotated by professional sport analysts is very much needed. In response, we propose the NBA dataset for Sports Video Analysis (NSVA).
Figure \ref{fig:teaser} shows captions depicting the same sports scene from NSVA, MSR-VTT \cite{msrvtt} and another fine-grained sports video captioning dataset, SVN \cite{Yu-cvpr2018}. Our caption is compact, focuses on key actions (e.g., \textit{made shot}, \textit{miss shot} and \textit{rebound}) and is identity-aware. Consequently, it could be further translated into a box score for keeping player and team statistics. SVN includes more of the less important actions, e.g., \textit{passing}, \textit{dribbling} or \textit{standing}, which are excessively common but of questionable necessity. Its captions neither cover player names nor essential details, e.g., shooting from 26 feet away. This characteristic of NSVA poses a great challenge as it requires models to ignore spatiotemporally dominant, yet unimportant, events and instead focus on key events that are of interest to viewers, even though they might have unremarkable visual presence. Additionally, NSVA also requires the model to identify the players whose actions will be recorded in the box score. This characteristic adds another difficulty to NSVA and distinguishes us from all previous work, where player identification is under-emphasized by only referring to ``a man'', ``some player'', ``offender'', etc.
\textbf{Contributions.} The contributions of this paper are threefold. (1) We propose a new identity-aware NBA dataset for sports video analysis (NSVA), which is built on web data, to fill the vacancy left by previous work whose datasets are neither identity aware nor publicly available. (2) Multiple novel features are devised, especially for modeling captioning as supported by NSVA, and are used for input to a unified transformer framework. Our designed features can be had with minimal annotation expense and provide complementary kinds of information for sports video analysis.
Extensive experiments have been conducted to demonstrate that our overall approach is effective.
(3) In addition to video captioning, NSVA is used to study salient player identification and hierarchical action recognition. We believe this is a meaningful extension to the fine-grained action understanding domain and can help researchers gain more knowledge by investigating their sports analysis models for these new aspects.
\section{Related work}\label{sec:related_work}
\noindent\textbf{Video captioning} aims at generating single or multiple natural language sentences based on the information stored in video clips. Researchers usually tackle this visual data-to-text problem with encoder-decoder frameworks~\cite{memoryrnn,pan2020spatio,aafaq2019spatio,shi2020learning}.
Recent efforts have found object-level visual cues particularly useful for caption generation on regular videos~\cite{pan2020spatio,objectcaption,objectcaption2,objectcaption3} as well as sports videos~\cite{SVN,svcdv}. Our work follows this idea to make use of detected finer visual features together with global information for professional sports video captioning.
\noindent\textbf{Transformers and attention} first achieved great success in the natural language domain~\cite{attention-all-you-need,bert}, and then received much attention in vision research. One of the most influential pioneering works is the vision transformer (ViT)~\cite{vit}, which views an image as a sequence of patches on which a transformer is applied.
Shortly thereafter, many tasks have found improvements using transformers, e.g., object detection~\cite{detr}, semantic segmentation~\cite{segtransformer,segtransformer2} and video understanding~\cite{tqn,swin,videobert,timeSformer}.
Our work is motivated by these advances and uses transformers as building blocks for both feature extraction and video caption generation.
\noindent\textbf{Sports video captioning} is one of several video captioning tasks that emphasizes generation of fine-grained text descriptions for sport events, e.g., chess, football, basketball and volleyball games~\cite{chen-acl2011,msrvtt,chess,svcdv,Yu-cvpr2018,SVN}. One of the biggest limitations in this area is the lack of public benchmarks. Unfortunately, none of the released video captioning datasets have a focus on sport domains.
The most similar efforts to ours have not made their datasets publicly available~\cite{svcdv,Yu-cvpr2018,SVN}, which inspires us to take advantage of webly available data to produce a new benchmark and thereby enable more exploration on this valuable topic.
\noindent\textbf{Identity aware video captioning} is one of the video captioning tasks that requires recognizing person identities~\cite{nba2,nba1,identity-aware-captioning}.
We adopt this setting in NSVA because successfully identifying players in a livestream game is crucial for sports video understanding and potential application to automatic score keeping. Unfortunately, the extant sports video captioning work failed to take player identities into consideration when creating their datasets. Earlier efforts that targeted player identification in professional sport scenes only experimented in highly controlled (i.e., unrealistic) environments, e.g., two teams and ten players, and did not consider incorporating identities in captioning~\cite{nba2,nba1}.
\noindent\textbf{Action recognition} automates identification of actions in videos. Recent work has mostly focused on two sub-divisions: coarse and fine-grained recognition. The coarse level tackles basic action taxonomies, and many challenging datasets are available, e.g., UCF101~\cite{ucf101}, Kinetics~\cite{kinetics} and ActivityNet~\cite{activitynet}. In contrast, fine-grained recognition distinguishes sub-classes of basic actions, with representative datasets including Diving48~\cite{diving48}, FineGym~\cite{finegym}, Breakfast~\cite{breakfast} and Epic-Kitchens~\cite{epickitchen}. Feature representation has advanced rapidly within the deep-learning paradigm (for review, see~\cite{acreview}) from primarily convolutional (e.g.,~\cite{tsn,i3d,s3d,tsm,slowfast}) to attention-based (e.g.,~\cite{timeSformer,swin}). Our study contributes to action understanding by providing a large-scale fine-grained basketball dataset that has three semantic levels as well as a novel attention-based recognition approach.
\section{Data collection}\label{sec:data}
Unlike previous work, we make fuller use of data that is available on the internet. We have written a web scraper to collect NBA play-by-play data from the official website~\cite{nba}, which contains high-resolution (e.g., 720P) video clips along with descriptions, each of which corresponds to a single event that occurred in a game.
We choose 132 games played by 10 teams in the NBA 2018-2019 season, the last season unaffected by COVID and in which teams could still play with full-capacity audiences, for data collection. We have collected 44,649 video clips, each of which has its associated play-by-play information, e.g., description, action and player names. We find that on the NBA website different play-by-play entries sometimes share the same video clip because multiple events take place one-by-one within a short period of time and the NBA simply uses the same video clip for every event occurring in it. To avoid conflicting information in model training, the play-by-play text information sharing the same video clip is combined. We also remove the play-by-play text information that is beyond the scope of a single video clip, e.g., the points a player has scored so far in the game. This entire process is fully automated, so that we can access NBA webly data and associate video clips with captions, actions and players. Overall, our dataset consists of 32,019 video clips for fine-grained video captioning, action recognition and player identification. Additional details on dataset curation are provided in the supplement.
\subsection{Dataset statistics}
\begin{table*}[t]
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{l c c c cc c c c c c c c c c c c}
\toprule
Datasets & & Domain && \texttt{\#}Videos & & \texttt{\#}Sentences & & \texttt{\#}Hours & & Avg. words & & Accessibility & & Scalability & & Multi-task \\
\midrule
SVN~\cite{SVN} & & basketball & & 5,903 & & 9,623 & & 7.7 & & 8.8 & & \ding{55} & & \ding{55}& & \ding{55} \\
SVCDV~\cite{svcdv} & & volleyball & & 4,803 & & 44,436 & & 36.7 & & - & &\ding{55} & & \ding{55} & & \ding{55} \\
NSVA & & basketball & & \textbf{32,019} && \textbf{44,649} & &\textbf{84.8} & & 6.5 & & \ding{51} & & \ding{51} & & \ding{51} \\
\bottomrule
\end{tabular}
}
\caption{The statistics of NSVA and comparison to other fine-grained sports video captioning datasets.
}
\label{tab:stats}
\end{table*}
\input{table_2}
Table~\ref{tab:stats} shows the statistics of NSVA and two other fine-grained sports video captioning datasets. NSVA has the most sentences of the three datasets and five times more videos than both SVN and SVCDV. The biggest strength of NSVA is its public accessibility and scalability. Both the SVN and SVCDV datasets are neither publicly available nor scalable because heavy manual annotation effort is required in their creation. In contrast, NSVA is built on data that already exists on the internet, so everyone who is interested can directly download and use the data by following our guidelines. Indeed, the 132 games that we chose to use account for only 10.7\% of the total games in the NBA 2018-2019 season. More data is being produced every day as NBA teams keep playing and sharing their data. Note that some other datasets also contain basketball videos, e.g., MSR-VTT~\cite{msrvtt} and ActivityNet~\cite{activitynet}. However, they only provide coarse-level captions (see example in Figure~\ref{fig:teaser}) and include very limited numbers of videos, e.g., ActivityNet has 74 videos for basketball, all of which are from amateur play, not professional.
Table~\ref{tab:data_split} shows the data split of NSVA. We hold out 32 of the 132 games to form the validation and test sets, each of which contains 16 games. All clips and texts belonging to a single game are assigned to the same data split. When choosing which data split a game is assigned to, we ensure that every team match-up has been seen at least once in the training set. For example, the Phoenix Suns played four games against the San Antonio Spurs in the NBA 2018-2019 season. We put two of these games in the training set, one in the validation set and one in the test set.
NSVA also supports two additional vision tasks, namely fine-grained action recognition and key player identification. We adopt the same data curation strategy as for captioning and show the number of distinct action or player-name categories in the rightmost two columns of Table~\ref{tab:data_split}. Compared with other fine-grained sport action recognition datasets, e.g., Diving48 (48 categories) and FineGym (530 categories), ours is intermediate (172 categories) in terms of the number of actions and is the largest within the basketball sub-domain.
\begin{figure}[t]
\footnotesize
\centering
\includegraphics[width=1.0\linewidth]{materials/ECCV2020_Figure2_v2_page-0001-min.jpg}
\caption{Pipeline of our proposed approach for versatile sports video understanding. First, raw video clips (left) are processed into two types of finer visual information, namely object detection (including ball, players and basket), and court-line segmentation, all of which are cropped, gridded and channelled into a pre-trained vision transformer model for feature extraction. Second, these heterogeneous features are aggregated and cross-encoded with the global contextual video representation extracted from TimeSformer (middle). Third, a transformer decoder is used with task-specific heads to recursively yield results, be they video captions, action recognition or player identification (right).
}
\label{fig:algorithm}
\end{figure}
\section{Architecture design}\label{sec:methods}
\noindent\textbf{Problem formulation.} We seek to predict the correct sequence of word captions as one-hot vectors, $\{\mathbf{y}\}$, whose length is arbitrary, given the observed input clip $X \in \mathbb{R}^{H \times W \times 3 \times N}$ consisting of $N$ RGB frames of size $H \times W$ sampled from the original video.
\noindent\textbf{Overall structure}. As our approach relies on feature representations extracted from multiple orthogonal perspectives, we adopt the framework of UniVL \cite{UniVL}, a network designed for cross feature interactive modeling, as our base model. It consists of four transformer backbones that are responsible for coarse feature encoding, fine-grained feature encoding, cross attention and decoding, respectively.
In the following, we step-by-step detail our multi-level feature extraction, integrated feature modeling and decoder.
\subsection{Coarse contextual video modeling}
In most video captioning efforts a 3D-CNN has been adopted as the fundamental unit for feature extraction, e.g., S3D \cite{s3d,SVN,svcdv}. More recent work employed a transformer architecture in tandem \cite{UniVL}.
Inspired by TimeSformer \cite{timeSformer}, which is solely built on a transformer block and has shown strong performance on several action recognition datasets, we substitute the S3D part of UniVL with this new model as video feature extractor. Correspondingly, we decompose each frame into $F$ non-overlapping patches, each of size $P \times P$, such that the $F$ patches span the entire frame, i.e., $F = HW/P^2$.
We flatten these patches into vectors and channel them into several blocks comprised of linear-projection, multihead-self-attention and layer-normalization, in both spatial and temporal axes, which we shorten as
\begin{equation}\label{eq:coarse}
\mathbf{F}_c = \text{TimeSformer} \left(X\right),
\end{equation}
where $\mathbf{F}_c \in \mathbb{R}^{N \times d}$, $d$ is the feature dimension and $X$ is an input clip.
Transformer blocks have less strong inductive priors compared to convolutional blocks, so they can more readily model long-range spatiotemporal information with their self-attention mechanism in a large-scale data learning setting. We demonstrate the strong performance of TimeSformer features in Sec.~\ref{sec:experiment}.
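As a shape-level illustration of this decomposition (not the actual TimeSformer implementation), the PyTorch snippet below splits each frame into $F = HW/P^2$ non-overlapping $P \times P$ patches and applies a linear patch embedding; all tensor sizes are assumptions matching the text.
\begin{verbatim}
# Shape-level sketch of the frame-to-patch decomposition; random tensors
# stand in for real frames and no pretrained weights are assumed.
import torch

N, H, W, P, d = 8, 224, 224, 16, 768
clip = torch.randn(N, 3, H, W)                   # sampled RGB frames

patches = clip.unfold(2, P, P).unfold(3, P, P)   # (N, 3, H/P, W/P, P, P)
patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(N, -1, 3 * P * P)
F_patches = patches.shape[1]                     # = H*W / P^2 = 196

proj = torch.nn.Linear(3 * P * P, d)             # linear patch embedding
tokens = proj(patches)                           # (N, F, d), ready for attention
print(tokens.shape)
\end{verbatim}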
\subsection{Fine-grained objects of interest modeling}
One limitation of solely using TimeSformer features is that we might lose important visual details, e.g., ball, players and basket, after resizing $1280\times720$ images to $224\times224$, the size that TimeSformer encoder needs. Such loss can be important because NSVA requires modeling main players' identities and their actions to generate an accurate caption. To remedy this issue, we use an object detector to capture objects of interest that contain rich regional semantic information complementary to the global semantic feature provided by TimeSformer. We extract 1,000 image frames from videos in the training set and annotate bounding boxes for basket and ball and fine-tune on the YOLOv5 model \cite{yolov5} to have a joint ball-basket object detector. This pre-trained model returns ball and basket crops from original images, i.e., $\mathbf{I}_{ball}$ and $\mathbf{I}_{basket}$.
For the player detector, we simply use the YOLOv5 model trained on the MS-COCO dataset~\cite{ms-coco} to retrieve a stack of player crops, $\{ \mathbf{I}_{player}\}$. As our captions are identity-aware, we assume that players who have touched the ball during a single play are more likely to be mentioned in captions. Thus, we only keep the detected players that overlap with a detected ball, i.e., each player crop, $\mathbf{I}_{player}$, is given a confidence score, $C$, of 1 if it overlaps the ball and 0 otherwise; in particular, $\text{if} \; \text{IoU}\left(\mathbf{I}_{player_{i}}, \mathbf{I}_{ball}\right) > 0:C = 1; \; \text{else}: C = 0$. Player crops that have $C = 1$ are selected for later use as $\mathbf{I}_{pb}$. Even though the initially detected players, $\{\mathbf{I}_{player}\}$, are potentially contaminated by non-players (e.g., referees, audience members), our ball-focused confidence scores tend to filter out these distractors.
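A minimal sketch of this ball-overlap filter is shown below; the bounding boxes are hypothetical $(x_1, y_1, x_2, y_2)$ tuples standing in for detector outputs.
\begin{verbatim}
# Sketch of the ball-overlap filter for detected player boxes; the boxes
# are hypothetical (x1, y1, x2, y2) tuples, not real detector outputs.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union > 0 else 0.0

def players_with_ball(player_boxes, ball_box):
    # confidence C = 1 iff the player box overlaps the detected ball
    return [box for box in player_boxes if iou(box, ball_box) > 0]

ball = (300, 200, 330, 230)
players = [(280, 150, 360, 320), (500, 140, 570, 330)]
print(players_with_ball(players, ball))  # -> [(280, 150, 360, 320)]
\end{verbatim}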
After getting bounding boxes of ball, players intersecting with the ball and basket, we crop these objects from images and feed them to a vision transformer, ViT~\cite{vit}, for feature extraction,
\begin{equation}
\mathbf{f}_{ball} = \text{ViT}\left(\mathbf{I}_{ball}\right), \; \mathbf{f}_{basket} = \text{ViT}\left(\mathbf{I}_{basket}\right), \;
\mathbf{f}_{pb} = \text{ViT}\left(\mathbf{I}_{pb}\right),
\end{equation}
where $\mathbf{f}_{ball}$, $\mathbf{f}_{pb}$ and $\mathbf{f}_{basket}$ are features of $d$ dimension extracted from cropped ball image, $\mathbf{I}_{ball}$, player with ball image, $\mathbf{I}_{pb}$, and basket image, $\mathbf{I}_{basket}$, respectively. We re-group features from every second in the correct time order to have $\mathbf{F}_{ball}$, $\mathbf{F}_{basket}$ and $\mathbf{F}_{pb}$, which all are of dimensions $\mathbb{R}^{m \times d}$.
\noindent\textbf{Discussion.} Compared with previous work that either require pixel-level annotation in each frame to segment each player, ball and background~\cite{SVN}, or person-level annotation that needs professional sport knowledge to recognize each player's action such as setting, spiking and blocking~\cite{svcdv}, our annotation scheme is very lightweight. The annotation only took two annotators less than five hours to draw bounding boxes for ball and basket in 1,000 selected image frames from the training set. Compared to the annotation procedure that requires months of work for experts with extensive basketball knowledge~\cite{SVN}, our approach provides a more affordable, replicable and scalable option. Note that these annotations are only for training the detectors; the generation of the dataset per se is completely automated; see Sec.~\ref{sec:data}.
\subsection{Position-aware module}
NSVA supports modeling estimation of the distance from where the main player's actions take place to the basket. As examples, ``Lonnie Walker missed \textbf{2'} cutting layup shot'' and ``Canaan \textbf{26'} 3PT Pullup Jump Shot'', where the numbers in bold denote the distance between the player and basket.
Notably, distance is strongly correlated with action; e.g., players cannot make a 3PT shot at two-foot distance from the basket.
While estimating such distances is important for action recognition and caption generation, it is non-trivial owing to the need to estimate separation between two 3D objects from their 2D image projections.
Instead of explicitly making such prediction directly on raw video frames, we take advantage of prior knowledge that basketball courtlines are indicators of object's location. We use a pix2pix network \cite{pix2pix} trained on synthetic data \cite{reconstructnba} to generate courtline segmentation given images.
We overlay the detected player with ball and basket region, while blacking out other areas. Figure~\ref{fig:algorithm} shows an exemplar image, $\mathbf{I}_{pa}$, after such processing. We feed these processed images to ViT for feature extraction, i.e.,
$\mathbf{F}_{pa}=\text{ViT}\left(\mathbf{I}_{pa}\right),$
where $\mathbf{F}_{pa} \in \mathbb{R}^{m \times d}$ are ViT features extracted from position-aware image $\mathbf{I}_{pa}$.
\subsection{Visual transformer encoder}
After harvesting the video, ball, basket and courtline features, we are ready to feed them into the coarse encoder as well as the finer encoder for self-attention. This step is necessary as the used backbones (i.e., ViT and TimeSformer) only perform attention on frames within one second; there is no communication between different timestamps.
For this purpose, we use one transformer to encode video feature, $\mathbf{F}_{c} \in \mathbb{R}^{N \times d}$ \eqref{eq:coarse}, and another transformer to encode aggregated finer features, $\mathbf{F}_f \in \mathbb{R}^{M \times 2d}$, which is from the concatenation of position-aware feature, $\mathbf{F}_{pa}$, and the summation of object-level features. Empirically, we find summation sufficient, i.e.,
\begin{equation}
\mathbf{F}_{f} = \text{CONCAT}\left(\text{SUM}\left(\mathbf{F}_{ball}, \mathbf{F}_{basket}, \mathbf{F}_{pb}\right), \mathbf{F}_{pa}\right).
\end{equation}
The overall encoding process is given as
\begin{equation}
\mathbf{V}_{c} = \text{Transformer}\left(\mathbf{F}_{c}\right),
\mathbf{V}_{f} = \text{Transformer}\left(\mathbf{F}_{f}\right),
\end{equation}
where $\mathbf{V}_{c} \in \mathbb{R}^{n \times d}$ and $\mathbf{V}_{f} \in \mathbb{R}^{m \times d}$.
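The shape-level PyTorch sketch below mirrors this aggregation: the object features are summed, concatenated with the position-aware feature, projected back to width $d$ (the projection is our own assumption, since the fine encoder outputs a $d$-dimensional representation), and the two streams are encoded by separate transformer encoders.
\begin{verbatim}
# Shape-level sketch of the coarse/fine encoding; random tensors stand in
# for the extracted features and the linear projection is an assumption.
import torch
import torch.nn as nn

N, M, d = 30, 30, 768
F_c = torch.randn(1, N, d)                    # TimeSformer clip feature
F_ball, F_basket, F_pb = (torch.randn(1, M, d) for _ in range(3))
F_pa = torch.randn(1, M, d)                   # position-aware feature

F_f = torch.cat([F_ball + F_basket + F_pb, F_pa], dim=-1)  # (1, M, 2d)
fine_proj = nn.Linear(2 * d, d)               # map back to model width d

layer = nn.TransformerEncoderLayer(d_model=d, nhead=8, batch_first=True)
coarse_enc = nn.TransformerEncoder(layer, num_layers=6)
fine_enc = nn.TransformerEncoder(layer, num_layers=6)

V_c = coarse_enc(F_c)                         # (1, N, d)
V_f = fine_enc(fine_proj(F_f))                # (1, M, d)
print(V_c.shape, V_f.shape)
\end{verbatim}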
\subsection{Cross encoder for feature fusion}
The coarse and fine encoders mainly focus on separate information. To make them fully interact, we follow existing work and adopt a cross encoder \cite{UniVL}, which takes coarse features, $\mathbf{V}_{c}$, and fine features, $\mathbf{V}_{f}$, as input. Specifically, these features are combined along the sequence dimension via concatenation and a transformer is used to generate the joint representation, i.e.,
\begin{equation}
\mathbf{M}=\operatorname{Transformer}(\text {CONCAT}(\mathbf{V_c}, \mathbf{V_f})),
\end{equation}
where $\mathbf{M}$ is the final output of the encoder. To generate a caption, a transformer decoder is used to attend $\mathbf{M}$ and output text autoregressively, cf., \cite{videobert,gpt3,radford2021learning}.
\subsection{Learning and inference}
Finally, we calculate the loss as the sum of negative log likelihood of correct caption at each step according to
\begin{equation}\label{eq:loss}
\mathcal{L}(\theta)=-\sum_{t=1}^{T} \log P_{\theta}\left({y}_{t} \mid {y}_{<t}, \mathbf{M}\right),
\end{equation}
where $\theta$ is the trainable parameters, ${y}_{<t}$ is the ground-truth words sequence before step $t$ and $y_{t}$ is the ground truth word at step $t$.
During inference, the decoder autoregressively operates a beam search algorithm~\cite{beam} to produce results, with beam size set empirically; see Sec.~\ref{subsec:Implementation details}.
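A minimal sketch of the caption loss $\mathcal{L}(\theta)$ is given below: the decoder's per-step distributions are scored against the ground-truth tokens with a negative log likelihood, which is equivalent to a summed cross entropy over the steps; the logits, vocabulary size and token ids are hypothetical.
\begin{verbatim}
# Sketch of the autoregressive caption loss; logits and targets are random
# placeholders for decoder outputs and ground-truth token ids.
import torch
import torch.nn.functional as F

vocab_size, T = 30522, 12
logits = torch.randn(1, T, vocab_size)           # decoder outputs for one clip
targets = torch.randint(0, vocab_size, (1, T))   # ground-truth caption tokens

log_probs = F.log_softmax(logits, dim=-1)
nll = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)  # (1, T)
loss = nll.sum(dim=1).mean()                     # sum over steps, mean over batch

# Equivalent formulation via cross entropy over the flattened steps.
loss_ce = F.cross_entropy(logits.view(-1, vocab_size), targets.view(-1),
                          reduction="sum")
print(loss.item(), loss_ce.item())
\end{verbatim}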
\subsection{Adaption to other tasks}
In NSVA, action and identity also are sequential data. So, we adopt the same model, shown in Figure~\ref{fig:algorithm}, for all three tasks and swap the caption supervision signal in \eqref{eq:loss}, $y_{1:t}$, with either one-hot action labels or player name labels. Similarly, inference operates beam search decoding. Details are in the supplement.
\section{Empirical evaluation}\label{sec:experiment}
\subsection{Implementation details}\label{subsec:Implementation details}
We use a hidden state dimension of 768 for all encoders/decoders. We use the BERT~\cite{bert} vocabulary augmented with 356 action type and player name entries. The transformer encoder, cross-attention and decoder are pre\-trained on a large instructional video dataset, Howto100M~\cite{howto100m}. We keep the pre-trained model and fine-tune it on NSVA, as we found the pre-trained weights speed up model convergence. The maximum number of frames for the encoder and the maximum output length are set to 30. The numbers of layers in the feature encoder, cross encoder and decoder are 6, 3 and 3, respectively. We use the Adam optimizer with an initial learning rate of 3e-5 and employ a linear decay learning rate schedule with a warm-up strategy. We used a batch size of 32 and trained our model on a single Nvidia Tesla T4 GPU for 12 epochs over 6 hours. The hyperparameters were chosen based on the top performer on the validation set.
In testing, we adopt beam search~\cite{beam} with beam size 5. For extraction of the TimeSformer feature, we sample video frames at 8 fps. For extraction of the other features, we sample at 12 fps when the ball is detected in the basket area and at 4 fps otherwise. We record the time when the ball is first detected and keep 100 frames before and after. This step saves about 70\% of storage space compared to sampling
the entire video at 8 fps, but still keeps the most important frames.
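For illustration, the sketch below applies this sampling rule to a sequence of per-frame detection flags; \texttt{ball\_in\_basket\_flags} is a placeholder for the output of the fine-tuned detector, and the native frame rate is an assumed value.
\begin{verbatim}
# Rough sketch of the storage-saving sampling rule: keep frames at 12 fps
# around ball-in-basket detections and at 4 fps elsewhere; the detection
# flags are placeholders for the fine-tuned YOLOv5 detector's output.
def select_frames(num_frames, native_fps, ball_in_basket_flags):
    keep = []
    for idx in range(num_frames):
        rate = 12 if ball_in_basket_flags[idx] else 4
        step = max(1, round(native_fps / rate))
        if idx % step == 0:
            keep.append(idx)
    return keep

flags = [False] * 60 + [True] * 30 + [False] * 60  # ball near basket ~1 s
print(len(select_frames(150, native_fps=30, ball_in_basket_flags=flags)))
\end{verbatim}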
\begin{table}[t]
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{l c l c c c c c c c}
\toprule
Model & & Feature & C & M & B@1 & B@2 & B@3 & B@4 & R\_L \\
\midrule
MP-LSTM~\cite{MP-LSTM} & & S3D & 0.500 & 0.153 & 0.325 & 0.236 & 0.167 & 0.121 & 0.332 \\
TA~\cite{TA} & & S3D & 0.546 & 0.156 & 0.331 & 0.242 & 0.175 & 0.128 & 0.340 \\
Transformer~\cite{Transformer} & & S3D & 0.572 & 0.161 & 0.346 & 0.254 & 0.181 & 0.131 & 0.357 \\
UniVL$^{*}$~\cite{UniVL} & & S3D & 0.717 & 0.192 & 0.441 & 0.309 & 0.226 & 0.169 & 0.401 \\
\midrule
& & T & 0.956 & 0.217 & 0.467 & 0.363 & 0.274 & 0.209 & 0.468 \\
& & S3D+BAL+BAS+PB+PA & 0.986 & 0.227 & 0.479 & 0.371 & 0.281 & 0.216 & 0.466 \\
& & T+BAL & 0.931 & 0.228 & 0.496 & 0.383 & 0.289 & 0.220 & 0.484 \\
& & T+BAS & 1.023 & 0.232 & 0.500 & 0.387 & 0.292 & 0.223 & 0.486 \\
\multicolumn{1}{l}{Our Model} & & T+PB & 1.055 & 0.231 & 0.500 & 0.387 & 0.292 & 0.223 & 0.487 \\
& & T+PA & 1.064 & 0.238 & 0.511 & 0.398 & 0.301 & 0.231 & 0.498 \\
& & T+BAL+BAS & 1.074 & 0.243 & 0.508 & 0.398 & 0.306 & 0.237 & 0.499 \\
& & T+BAL+BAS+PB & 1.096 & 0.242 & 0.519 & 0.408 & 0.312 & 0.242 & 0.506 \\
& & T+BAS+BAL+PB+PA & \textbf{1.139} & \textbf{0.243} & \textbf{0.522} & \textbf{0.410} & \textbf{0.314} & \textbf{0.243} & \textbf{0.508}
\\
\bottomrule
\end{tabular}}
\caption{Performance comparison of our model vs. alternative video captioning models on the NSVA test set. T denotes TimeSformer feature. BAL, BAS and PB denote ViT features for ball, basket and player with ball, respectively. PA is the position-aware feature. $^{*}$As our model adopts the framework of UniVL as its backbone, the results in the UniVL+S3D row equal those of our model using only S3D features.}
\label{tab:caption_rst}
\end{table}
\subsection{Video captioning}
\noindent\textbf{Baseline and evaluation metrics.} The main task of NSVA is video captioning. To assess our proposed approach, we compare our results with four state-of-the-art video captioning systems: MP-LSTM~\cite{MP-LSTM}, TA~\cite{TA}, Transformer~\cite{Transformer} and UniVL~\cite{UniVL} on four widely-used evaluation metrics: CIDEr (C) \cite{cider}, Bleu (B) \cite{bleu}, Meteor (M) \cite{meteor} and Rouge-L (R\_L) \cite{rouge}. Results are shown in Table~\ref{tab:caption_rst}.
To demonstrate the effectiveness of our approach against the alternatives, we train these models on NSVA using existing codebases~\cite{Xmodaler,UniVL}.
\noindent\textbf{Main results.} Comparing results in the first two rows with results in other rows of Table~\ref{tab:caption_rst}, we see that transformer models outperform LSTM models, which confirms the superior capability of a transformer on the video captioning task. Moreover, it is seen that TimeSformer features achieve much better results than S3D in modeling video context. We conjecture that this is due to their ability to model long spatiotemporal dependencies in videos; see the $4^{th}$ and $5^{th}$ rows. This result suggests that TimeSformer features are useful not only for video understanding tasks but also for video captioning. Comparing results in the $4^{th}$ and $6^{th}$ rows, we find that after fusing S3D features with those extracted by our proposed modules (but not the TimeSformer), improvements are seen on all metrics.
A possible explanation is that our features add additional semantic information (e.g., pertaining to ball, player and court) and thereby lead to higher quality text. The best result is achieved by fusing TimeSformer features with our proposed features. These results suggest that (1) TimeSformer features are well suited to video captioning and (2) our proposed features can be fused with a variety of features for video understanding to improve performance further.
\begin{comment}
\noindent\textbf{Meteor results.} It is seen that PA and PB features are individually effective on the Meteor metric, e.g., leading to notable improvements over TimeSformer feature alone. However, a plateau occurs when using PA and PB with others, i.e., T+BAL+BAS+PB+PA. Possible explanations are: (1) The combination of features has saturated the dataset. (2) In its formal definition~\cite{meteor}, the Meteor score is defined as $ F_{mean} \times (1-\text{Penalty})$. The second term, i.e., $(1-\text{Penalty}) \in [0,5, 1)$, therefore discounts the performance improvements. (3) The Meteor score increases most when evaluating captions that have identical stem (e.g., missing vs. missed) as well as synonym (e.g., passing vs hand-over) with the ground truth. The presence of such ambiguities in NSVA is very minor.
\end{comment}
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{qualitative_analysis}
\caption{Qualitative analysis of captions generated by our proposed approach and others. It is seen that captions from our full approach are the closest to the references.
}
\label{fig:qualitative_analysis}
\end{figure}
From the $7^{th}$ to final row of Table~\ref{tab:caption_rst}, we ablate our finer-grained features. It is seen that our model benefits from every proposed finer module, and when combining all modules, we observe the best result; see last row. This documents the effectiveness of our proposed method for the video captioning task on NSVA. More discussions on the empirical results can be found in the supplement.
\noindent\textbf{Qualitative analysis.} Figure~\ref{fig:qualitative_analysis} shows two example outputs generated by four different models, as compared to the ground-truth reference. From the left example output, we see that our full model is able to generate a high-quality caption, albeit with relatively minor mistakes. After replacing the TimeSformer features with S3D features, the model fails to identify the player who gets the rebound and mistakes a jump shot for a hook shot. When using TimeSformer or S3D features alone, the results further deteriorate, with all players misidentified. We also notice that our devised features, i.e., PB+BAL+BAS+PA, greatly help capture a player's position, e.g., with 10' as the reference, models with PB+BAL+BAS+PA features output 11' and 8', compared to 15' and 25' output by TimeSformer only and S3D only.
The right column shows an example where all models except the S3D-only model successfully recognize the action, i.e., jump shot. Our full model can identify most players but still mistakes Jarret Allen for Joel Embiid. As we will discuss in Sec.~\ref{sec:playerID}, player identification is the bottleneck of our model, as it is trained with a very weak supervision signal, which points to future research.
\subsection{Fine-grained basketball action recognition} \label{sec:finegrain_action}
As elaborated in Sec.~\ref{sec:data}, NSVA has a massive number of video clips that cover almost every moment of interest, and these events have been provided by the NBA for the purpose of statistics tracking, which allows fine-grained action recognition. A glimpse of how our action labels are hierarchically organized is shown in Figure~\ref{fig:action_categories}.
\noindent\textbf{Action hierarchy.} NSVA enjoys three levels of granularity in the basketball action domain. (1) On the coarsest level, there exist 14 actions that describe the on-going sport events from a very basic perspective. Some representative examples include: $\{$ \textcolor{green}{\textit{Shot, Foul, Turnover}} $\}$. (2) Further dividing the coarse actions into their finer sub-divisions, we can curate 124 fine-grained actions. Taking the shot category as an example, it has the following sub-categories: $\{$ \textcolor{cyan}{\textit{Shot Dunk}}, \textcolor{cyan}{\emph{Shot Pullup Jumpshot}}, \textcolor{cyan}{\textit{Shot Reverse Layup}}, etc. $\}$. All of these finer actions enrich the coarse ones with informative details (e.g., diverse styles for the same basketball movement). (3) On the finest level, there exist 24 categories that depict the overall action from the event perspective, each of which combines the coarse action name, the fine action style and the overall event result, e.g., $\{$ \textcolor{red}{\textit{Shot-Pullup-Jumpshot-Missed}} $\}$. Thanks to the structured labelling, NSVA can support video action understanding on multiple granularity levels. We demonstrate some preliminary results using our proposed approach in Table~\ref{tab:action_reco}.
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{materials/nba_action_category.png}
\caption{
Visualization of a sub-tree from our fine-grained basketball action space.
There are 172 fine-grained categories that comprise three levels of sport event details: \textbf{Action-C} (coarse), \textbf{Action-F} (fine) and \textbf{Action-E} (event). Some categories have finer descendants (e.g., \textit{Shot}), while others are solitary (e.g., \textit{Jump Ball} and \textit{Block}). The full list of action categories is in the supplement.
}
\label{fig:action_categories}
\end{figure}
\noindent\textbf{Evaluation.} As exemplified in Figure~\ref{fig:teaser}, our action labels do not always assign a single ground-truth label to a clip. In fact, they contain as many actions as happens within the length of a unit clip. The example in Figure~\ref{fig:teaser} shows a video clip that has two consecutive actions, i.e., \textbf{[} \textcolor{red}{\textit{3-pt Jump-Shot Missed}} $\rightarrow$ \textcolor{cyan}{\textit{Defensive Rebound}} \textbf{]}. To properly evaluate our results in this light, we adopt metrics from efforts studying instructional videos \cite{procedureplan1,pp2,pp3}, and report: (1) mean Intersection over Union (mIoU), (2) mean Accuracy (Acc.) and (3) Success Rate (SR). Detailed explanation can be found in the supplement. We provide action recognition results using the same feature design introduced in Sec.~\ref{sec:methods} and provide an ablation study on the used features.
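To make the reported numbers easier to interpret, the sketch below gives one plausible implementation of the three metrics for multi-action clips; the exact definitions follow common usage in the instructional-video literature and may differ in detail from those in our supplement, and the action sequences shown are toy examples.
\begin{verbatim}
# Hedged sketch of sequence-level SR, Acc and mIoU for multi-action clips;
# toy predictions/ground truths; definitions may differ from the supplement.
def success_rate(preds, gts):
    # a clip counts as a success only if the full action sequence matches
    return sum(p == g for p, g in zip(preds, gts)) / len(gts)

def mean_accuracy(preds, gts):
    accs = []
    for p, g in zip(preds, gts):     # element-wise accuracy per clip
        hits = sum(1 for a, b in zip(p, g) if a == b)
        accs.append(hits / max(len(p), len(g)))
    return sum(accs) / len(accs)

def mean_iou(preds, gts):
    # set-level IoU between predicted and ground-truth action labels
    ious = [len(set(p) & set(g)) / len(set(p) | set(g))
            for p, g in zip(preds, gts)]
    return sum(ious) / len(ious)

gts = [["3pt_jump_shot_missed", "defensive_rebound"], ["shot_dunk_made"]]
preds = [["3pt_jump_shot_missed", "offensive_rebound"], ["shot_dunk_made"]]
print(success_rate(preds, gts), mean_accuracy(preds, gts), mean_iou(preds, gts))
\end{verbatim}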
\begin{table}[t]
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{c c c c c c c c c c c c c c c c}
\toprule
& & & & & & & \multicolumn{3}{c}{Action-C} & \multicolumn{3}{c}{Action-F} & \multicolumn{3}{c}{Action-E} \\
\cmidrule(l){8-10}
\cmidrule(l){11-13}
\cmidrule(l){14-16}
Feature-backbone & & PB & BAL & BAS & PA & & SR$\uparrow$ & Acc.$\uparrow$ & mIoU$\uparrow$ & SR$\uparrow$ & Acc.$\uparrow$ & mIoU$\uparrow$ & SR$\uparrow$ & Acc.$\uparrow$ & mIoU$\uparrow$\\
\midrule
TimeSformer & & \ding{51} & \ding{51} &\ding{51} & \ding{51} & & \textbf{60.14} & \textbf{61.20} & \textbf{66.61} & \textbf{46.88} & \textbf{51.25} & 57.08 & \textbf{37.67} & \textbf{42.34} & \textbf{46.45}\\
TimeSformer & & \ding{51} & \ding{51} &\ding{51} & - & & 60.02 & 60.79 & 65.33 & 46.42 & 50.64 & \textbf{57.19} & 36.44 & 42.29 & 42.14\\
TimeSformer & & \ding{51} & \ding{51} & - & - & & 58.06 & 60.31 & 63.71 & 44.31 & 49.01 & 55.78 & 34.53 & 39.34 & 46.45\\
TimeSformer & & \ding{51} & - & - & - & & 57.74 & 58.13 & 60.48 & 44.20 & 50.18 & 55.91 & 34.50 & 39.14 & 42.72 \\
TimeSformer & & - & - & - & - & & 55.83 & 58.01 & 60.19 & 42.55 & 49.66 & 53.81 & 33.63 & 37.50 & 40.84\\
S3D & & - & - & - & - & & 54.46 & 57.91 & 59.91 & 41.92 & 48.81 & 53.77 & 33.09 & 37.11 & 40.77\\
\bottomrule
\end{tabular}
}
\caption{Action recognition accuracy ($\%$) on NSVA at all granularities.}
\label{tab:action_reco}
\end{table}
\noindent\textbf{Results on multiple granularity recognition.} From the results in Table~\ref{tab:action_reco}, we can summarize several observations: (1) Overall, actions in NSVA are quite challenging to recognize, as the best result on the coarsest level only achieves $61.2\%$ accuracy (see columns under Action-C). (2) When the action space is further divided into sub-actions, the performance becomes even weaker (e.g., 51.25$\%$ for Action-F and 42.34$\%$ for Action-E), meaning that subtle and challenging differences can lead to large drops in recognizing our actions. (3) TimeSformer features perform better than their S3D counterparts at all granularity levels, which suggests NSVA benefits from long-term modeling. (4) We observe solid improvements by gradually incorporating our devised finer features, which once again demonstrates the utility of our proposed approach.
\subsection{Player identification}\label{sec:playerID}
We adopt the same training and evaluation strategy as in action recognition to measure the performance of our model on player identification, due to these tasks having the same format, i.e., a sequence of player names involved in the depicted action; Fig.~\ref{tab:identity_reco} has results. Resembling observations in the previous subsection, we find the quality of identified player names increases as we add more features and our full approach (top row) once again is the best performer. It also is seen that the results on all metrics are much worse than those of action recognition, cf., Table~\ref{tab:action_reco}. To explore this discrepancy, we study some failure cases in the images along the top of Fig.~\ref{tab:identity_reco}. It is seen that failure can be mostly attributed to blur, occlusion from unrelated regions and otherwise missing decisive information.
\begin{figure}[h]
\centering
\begin{minipage}[b]{1.0\textwidth}
\centering
\includegraphics[width=0.32\textwidth]{materials/player_identity_hard-min.png}
\includegraphics[width=0.32\textwidth]{materials/player_identity_hard_2-min.png}
\includegraphics[width=0.32\textwidth]{materials/player_identity_hard_3-min.png}
\label{fig:fig15}
\end{minipage}
\begin{minipage}[b]{.6\textwidth}
\centering
\scriptsize
\begin{tabular}{c c c c c c c c c c}
\toprule
Feature-backbone & & PB & BAL & BAS & PA & & SR $\uparrow$ & Acc $\uparrow$ & mIoU $\uparrow$ \\
\midrule
TimeSformer & & \ding{51} & \ding{51} &\ding{51} & \ding{51} & & \textbf{4.63} & \textbf{6.97} & 6.86 \\
TimeSformer & & \ding{51} & \ding{51} &\ding{51} & - & & 4.20 & 6.83 & \textbf{6.89}\\
TimeSformer & & \ding{51} & \ding{51} & - & - & & 4.17 & 6.45 & 6.68 \\
TimeSformer & & \ding{51} & - & - & - & & 3.97 & 6.33 & 6.52 \\
TimeSformer & & - & - & - & - & & 3.66 & 5.98 & 6.07 \\
S3D & & - & - & - & - & & 3.57 & 5.91 & 5.49 \\
\bottomrule
\end{tabular}
\end{minipage}
\caption{(Top) Visual explanations revealing difficulty in player identification. Left: Although our detector captures the ball and player correctly, the face, jersey and size of the key player are barely recognizable due to blur. Middle: The detected player area is crowded and the ball handler is occluded by defenders. Right: A case where the ball is missing; thus, the model cannot find decisive information on the key player. (Bottom) Player identification results in percentage ($\%$) with our full approach and ablations on choice of features.}
\label{tab:identity_reco}
\end{figure}
\section{Conclusion}
In this work, we create a large-scale sports video dataset (NSVA) supporting multiple tasks: video captioning, action recognition and player identification. We propose a unified model to tackle all tasks and outperform the state of the art by a large margin on the video captioning task.
The creation of NSVA relies only on webly data and needs no extra annotation. We believe NSVA can fill the need for a benchmark in fine-grained sports video captioning and potentially stimulate applications in automatic score keeping.
The bottleneck of our model is player identification, which we deem the most challenging task in NSVA. To this end, a better algorithm is needed, e.g., opportunistic player recognition when visibility allows, with subsequent tracking for fuller inference of basketball activities.
There also are two additional directions we will explore: (1) We will investigate more advanced video feature representations (e.g., Video Swin transformer~\cite{video-swin}) on NSVA and compare to TimeSformer. (2) Prefix Multi-task learning~\cite{prefix-learning} has been proposed to learn several tasks in one model. Ideally, a model can benefit from learning to solve all tasks and gain extra performance boost on each task. We will investigate NSVA in the Prefix Multi-task learning setting with our task head.
\noindent\textbf{Acknowledgement.} The authors thank Professor Adriana Kovashka for meaningful discussions in the early stages and Professor Hui Jiang for proofreading and valuable feedback. This research was supported in part by an NSERC grant to Richard P. Wildes and a CFREF VISTA Graduate Scholarship to He Zhao.
\clearpage
\bibliographystyle{splncs04}
\section{Introduction}\label{intro}
Inbound international tourism has been increasingly affecting the Japanese economy \cite[][]{jones2009}. A year-on-year growth rate of 19.3\% was observed in 2017, with \num[group-separator={,}]{28691073} inbound tourists \cite[][]{jnto2003-2019}.
Japan's hospitality has been known historically to be of the highest quality. \textit{Omotenashi}, which describes the spirit of Japanese hospitality, with roots in Japanese history and tea ceremony, is celebrated worldwide \cite[][]{al2015characteristics}. Consequently, it would stand to reason that tourists visiting Japan would have this hospitality as their first and foremost satisfaction factor. However, it is known that customers from different countries and cultures have different expectations \cite[][]{engel1990}. Thus, it could be theorized that their satisfaction factors should be different.
The Japanese tourist market is gradually becoming diverse because of multicultural tourist populations. This diversity means that the expectations when staying at a hotel will be varied. Cultural backgrounds play a decisive role in aspects of satisfaction, in the perceptions of quality \cite[][]{mattila1999role,winsted1997service}, and in behavioral intentions \cite[][]{liu2001relationships}, such as the difference between Westerners and Asians in their willingness to pay more \cite[][]{LEVY2010319}. A difference in cultural background can also heavily influence customers' expectations, as well as their perceptions of quality, and the difference between these two is what expresses itself as satisfaction. This difference in expectations and perceptions of quality can be smaller or larger depending on the culture in reaction to the same service.
For a growing industry with increasing cultural diversity, it is essential to identify the cross-cultural expectations of customers in order to provide the appropriate services, cater to these expectations to ensure and increase customer satisfaction, maintain a good reputation, and generate positive word-of-mouth.
In 2017, Chinese tourists accounted for 25.63\% of the tourist population. On the other hand, Western countries accounted for 11.4\% of the total, and 7.23\% came from countries where English is the official or the de facto national language \cite[][]{jnto2003-2019}. The effect of Chinese tourists on international economies is increasing, along with the number of studies on this phenomenon \cite[][]{sun2017}. Despite this, many tourist-behavior analyses have been performed only involving Western subjects. Yet, it is known that Western and Asian customers are heavily differentiated \cite[][]{LEVY2010319}. As such, a knowledge gap existed until recent decades. Considering the numbers of inbound tourists in Japan and our team's language capabilities, our study focuses on Western and Chinese tourists.
In studies involving Asian populations in the analysis, Chinese-tourist behaviors have been evaluated most commonly \cite[e.g.][]{liu2019, chang2010, dongyang2015}. The few studies reporting comparisons between Asian and Western tourists' behaviors \cite[e.g.][]{choi2000} are typically survey- or interview-based, using small samples. These studies, although valid, can have limitations, namely, the scale and sampling. In the past, survey-based studies have provided a theoretical background for a few specific tourist populations of a single culture or traveling with a single purpose. These studies' limited scope often leads to difficulties in observing cultural and language differences in a single study. This creates a need for large-scale cross-cultural studies for the increasing Asian and Western tourist populations. It could be said that Westerners account for a smaller portion of the tourist population compared to Asians. However, according to \cite{choi2000}, Westerners are known as ``long-haul'' customers, spending more than 45\% of their budget on hotels. In comparison, their Asian counterparts only spend 25\% of their budget on hotels. Therefore, it is essential to study Asian and Western tourist populations, their differences, and the contrast with the existing literature results.
However, with ever-increasing customer populations, this is hard to accomplish without extensive studies of the customer base. There is a need for an automated method for identifying these expectations at a large scale. Our study intends to answer the need for such a methodology utilizing machine learning and natural language processing of large amounts of data. For this, we used a data-driven approach to our analysis, taking advantage of hotel review data. With this methodology, we explore the expectations and needs for the two most differing cultures currently interacting with the hospitality industry in Japan.
Owing to the advent of Web 2.0 and customer review websites, researchers realized the benefits of online reviews for research, sales \cite[][]{ye2009, basuroy2003}, customer consideration \cite[][]{vermeulen2009} and perception of services and products \cite[][]{browning2013}, among other effects of online interactions between customers \cite[e.g.][]{xiang2010, ren2019}. Consequently, information collected online is being used in tourism research for data mining analysis, such as opinion mining \cite[e.g.][]{hu2017436}, predicting hotel demand from online traffic \cite[][]{yang2014}, recommender systems \cite[e.g.][]{loh2003}, and more. Data mining and machine learning technologies can increase the number of manageable samples in a study from hundreds to hundreds of thousands. These technologies can not only help confirm existing theories but also lead to finding new patterns and to knowledge discovery \cite[][]{fayyad1996data}.
In this study, we evaluate the satisfaction factors of two essential tourist populations that are culturally different from Japan: Chinese and Western tourists. We take advantage of the wide availability of online reviews of Japanese hotels by both Mainland Chinese tourists posting on \textit{Ctrip} and Western, English-speaking tourists posting on \textit{TripAdvisor}. Based on these data, we can confirm existing theories regarding the differences in tourists' behavior and discover factors that could have been overlooked in the past. We use machine learning to automatically classify sentences in the online reviews as positive or negative opinions on the hotel. We then perform a statistical extraction of the topics that most concern the customers of each population.
\section{Research objective}\label{research_objective}
With the knowledge that cultural background influences expectations in customers, which is the basis for satisfaction, it becomes important to know the difference in factors influencing satisfaction and dissatisfaction between the most differing and numerous tourist populations in a given area.
This study aims to determine the difference in factors influencing satisfaction and dissatisfaction between Chinese and English-speaking tourists in the context of high-grade hospitality of Japanese hotels across several price ranges. We use machine learning to classify the sentiment in texts and natural language processing to study commonly used word pairings. More importantly, we also intend to measure how hard and soft attributes influence customer groups' satisfaction and dissatisfaction. We define hard attributes as attributes relating to physical and environmental aspects, such as the hotel's facilities, location, infrastructure, and surrounding real estate. In contrast, soft attributes are the hotel's non-physical attributes related to services, staff, or management.
\section{Theoretical background and hypothesis development}\label{theory_hypothesis}
\subsection{Cultural influence in expectation and satisfaction}\label{theory_expectations}
Customer satisfaction in tourism has been analyzed for decades: \cite{hunt1975} defined customer satisfaction as the realization or overcoming of expectations towards the service, \cite{oliver1981} defined it as an emotional response to the provided services in retail and other contexts, and \cite{oh1996} reviewed the psychological processes of customer satisfaction for the hospitality industry. It is generally agreed upon that satisfaction and dissatisfaction stem from the individual expectations of the customer. As such, \cite{engel1990} states that each customer's background influences satisfaction and dissatisfaction. It can also be said that satisfaction stems from the perceptions of quality in comparison to these expectations.
These differences in customers' backgrounds can be summed up in cultural differences as well. In the past, satisfaction and perceived service quality have been found to be influenced by cultural differences \cite[e.g.][]{mattila1999role, winsted1997service}. Service quality perceptions have been studied via measurements such as SERVQUAL \cite[e.g.][]{armstrong1997importance}.
Previous studies on the dimensions of culture that influence differences in expectations have been performed in the past as well \cite[e.g.][]{MATTILA201910, LEVY2010319, donthu1998cultural}, such as comparing individualism vs. collectivism, high context vs. low context, uncertainty avoidance, among other factors. While culture as a concept is difficult to quantify, some researchers have tried to use these and more dimensions to measure cultural differences, such as the six dimensions described by \cite{hofstede1984culture}, or the nine dimensions of the GLOBE model \cite[][]{house1999cultural}.
These cultural dimensions are more differentiated in Western and Asian cultures \cite[][]{LEVY2010319}. Our study being located in Japan, it stands to reason that the differences in expectations between Western tourists and Asian tourists should be understood in order to provide a good service. However, even though they are geographically close, Japanese and Chinese cultures are very different from each other when it comes to customer service. This is why our study focuses on the difference between Chinese and Western customers in Japan. The contrasting cultural backgrounds between Chinese and Western customers will lead to varying expectations of the hotel services, the experiences they want to have while staying at a hotel, and the level of comfort that they will have. In turn, these different expectations will determine the distinct factors of satisfaction and dissatisfaction for each kind of customer and the order in which they prioritize them.
Because of their different origins, expectations, and cultures, it stands to reason that Chinese and Western tourists could have completely different factors from one another, so some factors may not appear in the other group's reviews at all. For example, across cultures, a single word in one language can express a concept that would take more words in the other language. Therefore, we must also measure their differences or similarities on their common ground.
\subsection{Customer satisfaction and dissatisfaction towards individual factors during hotel stay}\label{theory_satisfaction}
We reviewed the importance of expectations in the development of satisfaction and dissatisfaction and the influence that cultural backgrounds have in shaping these expectations. This is true for overall satisfaction for the service as a whole, as well as individual elements that contribute to satisfaction.
In this study, we study not overall customer satisfaction but the satisfaction and dissatisfaction that stem from individual-specific expectations, be they conscious or unconscious. For example, if a customer has a conscious expectation of a comfortable bed and a wide shower, and it is realized during their visit, they will be satisfied with this matter. However, suppose that same customer with a conscious expectation of a comfortable bed experienced loud noises at night. In that case, they can be dissatisfied with a different aspect, regardless of the satisfaction towards the bed. Then, the same customer might have packed their toiletries, thinking that the amenities might not include those. They can then be pleasantly surprised with good quality amenities and toiletries, satisfying an unconscious expectation. This definition of satisfaction does not allow us to examine overall customer satisfaction. However, it will allow us to examine the factors that a hotel can revise individually and how a population perceives them as a whole. In our study, we consider the definitions in \cite{hunt1975} that satisfaction is a realization of an expectation, and we posit that customers can have different expectations towards different service aspects. Therefore, in our study, we define satisfaction as the emotional response to the realization or overcoming of conscious or unconscious expectations towards an individual aspect or factor of a service. On the other hand, dissatisfaction is the emotional response to the lack of a realization or under-performance of these conscious or unconscious expectations towards specific service aspects.
Studies on customer satisfaction \cite[e.g.][]{truong2009, romao2014, wu2009} commonly use the Likert scale \cite[][]{likert1932technique} (e.g. 1 to 5 scale from strongly dissatisfied to strongly satisfied) to perform statistical analysis of which factors relate most to satisfaction on the same dimension as dissatisfaction \cite[e.g.][]{chan201518, choi2000}. The Likert scale's use leads to correlation analyses where one factor can lead to satisfaction, implying that the lack of it can lead to dissatisfaction. However, a binary distinction (satisfied or dissatisfied) could allow us to analyze the factors that correlate to satisfaction and explore factors that are solely linked to dissatisfaction. There are fewer examples of this approach, but studies have done this in the past \cite[e.g.][]{zhou2014}. This method can indeed decrease the extent to which we can analyze degrees of satisfaction or dissatisfaction. However, it has the benefit that it can be applied to a large sample of text data via automatic sentiment detection techniques using artificial intelligence.
\subsection{Japanese hospitality and service: \textit{Omotenashi}}\label{theory_omotenashi}
The spirit of Japanese hospitality, or \textit{Omotenashi}, has roots in the country's history, and to this day, it is regarded as the highest standard \cite[][]{ikeda2013omotenashi, al2015characteristics}. There is a famous phrase in customer service in Japan: \textit{okyaku-sama wa kami-sama desu}, meaning ``The customer is god.'' Some scholars say that \textit{omotenashi} originated from the old Japanese art of the tea ceremony in the 16th century, while others find its origins in the formal banquets of the 7th century \cite[][]{aishima2015origin}. The practice of high standards in hospitality has survived throughout the years. Presently, it permeates all business practices in Japan, from the cheapest convenience stores to the most expensive establishments. Manners, service, and respect towards the customer are taught to workers in their training. High standards are always followed so as not to fall behind the competition. In Japanese businesses, including hotels, staff members are trained to speak in \textit{sonkeigo}, or ``respectful language,'' one of the most formal of the Japanese formality syntaxes. They are also trained to bow differently depending on the situation, where a light bow could be used to say ``Please, allow me to guide you.'' Deep bows are used to apologize for any inconvenience the customer could have faced, followed by a very respectful apology. Although the word \textit{omotenashi} can be translated directly as ``hospitality,'' it includes both the concepts of hospitality and service \cite[][]{Kuboyama2020}. This hospitality culture permeates every type of business with customer interaction in Japan. A simple convenience shop could express all of these hospitality and service standards, which are not exclusive to hotels.
It stands to reason that this cultural aspect of hospitality would positively influence customer satisfaction. However, in many cases, other factors such as proximity to a convenience store, transport availability, or room quality might be more critical to a customer. In this study, we cannot directly determine whether a hotel is practicing the cultural standards of \textit{omotenashi}. Instead, we consider it as a cultural factor that influences all businesses in Japan. We then observe the customers' evaluations regarding service and hospitality factors and compare them to other places and business practices in the world. In summary, we consider the influence of the cultural aspect of \textit{omotenashi} while analyzing the evaluations on service and hospitality factors that are universal to all hotels in any country.
Therefore, we pose the following research question:
\begin{subrsq}
\begin{rsq}
\label{rsq:hospitality}
To what degree are Chinese and Western tourists satisfied with Japanese hospitality factors such as staff behavior or service?
\end{rsq}
However, Japanese hospitality is based on Japanese culture. Different cultures interacting with it could provide a different evaluation of it. Some might be impressed by it, whereas some might consider other factors more important to their stay in a hotel. This point leads us to a derivative of the aforementioned research question:
\begin{rsq}
\label{rsq:hospitality_both}
Do Western and Chinese tourists have a different evaluation of Japanese hospitality factors such as staff behavior or service?
\end{rsq}
\end{subrsq}
\subsection{Customer expectations beyond service and hospitality}\label{theory_soft_hard}
Staff behavior, hospitality and service, and therefore \textit{Omotenashi}, are all soft attributes of a hotel. That is, they are non-physical attributes of the hotel, and as such, they can practically be changed through changes in management. While it is important to know this, it is not known whether the cultural differences between Chinese and Western tourists also influence other expectations and satisfaction factors, such as the hard factors of a hotel.
Hard factors are attributes uncontrollable by the hotel staff, which can play a part in the customers' choice behavior and satisfaction. Examples of these factors include the hotel's surroundings, location, language immersion of the country as a whole, or touristic destinations, and the hotel's integration with tours available nearby, among other factors.
Besides the facilities, many other aspects of the experience, expectation, and perception of the stay in a hotel can contribute to the overall satisfaction, as well as to individual satisfactions and dissatisfactions. However, previous research focuses more on soft attributes, paying little attention to hard attributes beyond the facilities themselves \cite[e.g.][]{shanka2004, choi2001}. Because of this gap in knowledge, we decided to analyze the differences between cultures regarding both soft and hard attributes of a hotel.
This leads to two of our research questions:
\begin{subrsq}
\begin{rsq}
\label{rsq:hard_soft}
To what degree do satisfaction and dissatisfaction stem from hard and soft attributes of the hotel?
\end{rsq}
\begin{rsq}
\label{rsq:hard_soft_diff}
How differently do Chinese and Western customers perceive hard and soft attributes of the hotel?
\end{rsq}
\end{subrsq}
The resulting proportions of hard attributes to soft attributes for each population could measure how much the improvement of management in the hotel can increase future satisfaction in customers.
\subsection{Chinese and Western tourist behavior}\label{theory_zh_en}
In the past, social science and tourism studies focused extensively on Western tourist behavior in other countries. Recently, however, with the rise of Chinese outbound tourism, both academic researchers and businesses have decided to study Chinese tourist behavior, with rapid growth in studies following the year 2007 \cite[][]{sun2017}. However, studies focusing on only the behavior of this subset of tourists are the majority. To this day, studies and analyses specifically comparing Asian and Western tourists are scarce, and even fewer are the number of studies explicitly comparing Chinese and Western tourists. One example is a study by \cite{choi2000}, which found that Western tourists visiting Hong Kong are satisfied more with room quality, while Asians are satisfied with the value for money. Another study by \cite{bauer1993changing} found that Westerners prefer hotel health facilities, while Asian tourists were more inclined to enjoy the Karaoke facilities of hotels. Both groups tend to have high expectations for the overall facilities. Another study done by \cite{kim2000} found American tourists to be individualistic and motivated by novelty, while Japanese tourists were collectivist and motivated by increasing knowledge and escaping routine.
One thing to note about the above Asian vs. Western analyses is that they were performed before 2000 and were not specific to Chinese tourists. Meanwhile, the current Chinese economic boom is increasing the influx of tourists from this nation. The resulting increase in marketing and the creation of guided tours for Chinese tourists could have created a difference in tourists' perceptions and expectations. In turn, if we follow the definition of satisfaction in \cite{hunt1975}, the change in expectations could have influenced their satisfaction factors when traveling. Another note is that these studies were performed with questionnaires in places where it would be easy to locate tourists, i.e., airports. In contrast, our study of online reviews uses data that the hotel customers uploaded themselves. These data make the analysis unique in exploring Chinese tourists' behavior compared with that of Western tourists via factors that are not considered in most other studies. Furthermore, our study is unique in observing the customers in the specific environment of high-level hospitality in Japan.
More recent studies have surfaced as well. A cross-country study \cite[][]{FRANCESCO201924} based on posts from U.S.A. citizens, Italians, and Chinese tourists used a text link analysis to determine that customers from different countries indeed perceive and emphasize a few predefined hotel attributes differently. According to their results, U.S.A. customers perceive cleanliness and quietness most positively, whereas Chinese customers value budget and restaurant above other attributes. Another pair of studies \cite[][]{JIA2020104071, HUANG2017117} analyze differences between Chinese and U.S. tourists using text mining techniques and larger datasets, although in a restaurant context.
These last three studies focus on the U.S.A. culture, whereas our study focuses on the Western culture. Another difference with our study is that of the context of the study. The first study \cite[][]{FRANCESCO201924} was done within the context of tourists from three countries staying in hotels across the world. The second study chose restaurant reviews from the U.S.A. and Chinese tourists eating in three countries in Europe. The third study analyzed restaurants in Beijing.
On the other hand, our study focuses on Western culture as a whole, instead of a single Western country, and on Chinese culture, both interacting specifically with the hospitality environment in Japan. Japan's importance in this analysis comes from the unique environment of high-grade hospitality that the country presents. In this environment, customers could either derive their satisfaction from this hospitality regardless of their culture, or value other factors more depending on their cultural differences. Our study measures this at a large scale across different hotels in Japan.
Other studies have gone further, studying people from many countries in their samples and performing a more universal and holistic (not cross-cultural) analysis. \cite{choi2001} analyzed hotel guest satisfaction determinants in Hong Kong with surveys translated into English, Chinese, and Japanese, with people from many countries in their sample, and found that staff service quality, room quality, and value for money were the top satisfaction determinants. As another example, \cite{Uzama2012} produced a typology for foreigners coming to Japan for tourism, making distinctions not by their culture but by their motivation for traveling in Japan. In another study, \cite{zhou2014} analyzed hotel satisfaction using English and Mandarin online reviews from guests of many nationalities staying in Hangzhou, China. The general satisfaction score was observed to differ among those nationalities; however, a more in-depth cross-cultural analysis of the satisfaction factors was not performed. As a result of their research, \cite{zhou2014} found that customers are universally satisfied by welcome extras, dining environments, and special food services.
Regarding Western tourist behavior, a few examples can tell us what to expect when analyzing our data. \cite{kozak2002} found that British and German tourists' satisfaction determinants while visiting Spain and Turkey were hygiene and cleanliness, hospitality, the availability of facilities and activities, and accommodation services. \cite{shanka2004} found that English-speaking tourists in Perth, Australia were most satisfied with staff friendliness, the efficiency of check-in and check-out, restaurant and bar facilities, and lobby ambiance.
Regarding outbound Chinese tourists, academic studies about Chinese tourists have increased \cite[][]{sun2017}. Different researchers have found that Chinese tourist populations have several specific attributes. According to \cite{ryan2001} and their study of Chinese tourists in New Zealand, Chinese tourists prefer nature, cleanliness, and scenery in contrast to experiences and activities. \cite{dongyang2015} studied Chinese tourists in the Kansai region of Japan and found that Chinese tourists are satisfied mostly with exploring the food culture of their destination, cleanliness, and staff. Studying Chinese tourists in Vietnam, \cite{truong2009} found that Chinese tourists are highly concerned with value for money. According to \cite{liu2019}, Chinese tourists tend to have harsher criticism compared with other international tourists. Moreover, as stated by \cite{gao2017chinese}, who analyzed different generations of Chinese tourists and their connection to nature while traveling, Chinese tourists prefer nature overall. However, the younger generations seem to do so less than their older counterparts.
Although the studies focusing only on Chinese or Western tourists have a narrow view, their theoretical contributions are valuable. We can see that, depending on the study, the design of the questionnaires, and the destinations, the results can vary greatly. Not only that, but while there seems to be some overlap across most studies, some factors are completely ignored in one study but not in another. Since our study uses data mining, each factor's definition is left for hotel customers to decide en masse via their reviews. This means that the factors will be selected through statistical methods alone instead of being defined by the questionnaire. Our method allows us to find factors that we would not have contemplated. It also avoids imposing a factor on the minds of study subjects by presenting them with a question that they did not think of by themselves. This large variety of opinions in a well-sized sample, added to the automatic findings of statistical text analysis methods, gives our study an advantage compared to others with smaller samples. This study analyzes the satisfaction and dissatisfaction factors cross-culturally and compares them with the existing literature.
The previous literature undoubtedly contains other cross-cultural studies of tourist behavior, and contrasting them with our study further highlights its merits. This contrast is shown in Table \ref{tab:lit-rev}. The table shows that older studies were conducted with surveys and had different study topics: changes in demand \cite[][]{bauer1993changing}, tourist motivation \cite[][]{kim2000}, and, closer to our study, satisfaction levels \cite[][]{choi2000}. However, our study topic is not the levels of satisfaction but the factors that drive satisfaction and dissatisfaction, the latter being overlooked in most studies. Newer studies with larger samples and similar methodologies have emerged, although two of these study restaurants instead of hotels \cite[][]{JIA2020104071, HUANG2017117}. One important difference is the geographical focus of their studies. While \cite{FRANCESCO201924}, \cite{JIA2020104071} and \cite{HUANG2017117} have a multi-national focus, we instead focus on Japan. The focus on Japan is important because of its top rank in hospitality across all types of businesses. Our study brings to light the changes, or lack thereof, in different touristic environments where an attribute can be considered excellent. The sample sizes of other text-mining studies are also smaller than ours. Apart from that, every study uses a different text mining method.
\begin{landscape}
\begin{table}[p]
\centering
\caption{Comparison between cross-culture or cross-country previous studies and our study.}
\label{tab:lit-rev}
\resizebox{\linewidth}{!}{%
\begin{tabular}{l|l|l|l|l|l|l|l|}
\cline{2-8}
\textbf{} &
\textbf{Bauer et al. (1993)} &
\textbf{Choi and Chu (2000)} &
\textbf{Kim and Lee (2000)} &
\textbf{Huang (2017)} &
\textbf{Francesco and Roberta (2019)} &
\textbf{Jia (2020)} &
\textbf{Our study} \\ \hline
\multicolumn{1}{|l|}{\textbf{Comparison objects}} &
\begin{tabular}[c]{@{}l@{}}Asians\\ vs\\ Westerners\end{tabular} &
\begin{tabular}[c]{@{}l@{}}Asians\\ vs\\ Westerners\end{tabular} &
\begin{tabular}[c]{@{}l@{}}Anglo-Americans \\ vs \\ Japanese\end{tabular} &
\begin{tabular}[c]{@{}l@{}}Chinese \\ vs \\ English-speakers\end{tabular} &
\begin{tabular}[c]{@{}l@{}}USA \\ vs \\ China \\ vs \\ Italy\end{tabular} &
\begin{tabular}[c]{@{}l@{}}Chinese \\ vs \\ US tourists\end{tabular} &
\begin{tabular}[c]{@{}l@{}}Chinese \\ vs \\ Westerners\end{tabular} \\ \hline
\multicolumn{1}{|l|}{\textbf{Study topic}} &
Changes in demand &
Satisfaction Levels &
Tourist Motivation &
\begin{tabular}[c]{@{}l@{}}Dining experience \\ of Roast Duck\end{tabular} &
\begin{tabular}[c]{@{}l@{}}Perception and \\ Emphasis\end{tabular} &
\begin{tabular}[c]{@{}l@{}}Motivation and \\ Satisfaction\end{tabular} &
\textbf{\begin{tabular}[c]{@{}l@{}}Satisfaction and\\ Dissatisfaction\end{tabular}} \\ \hline
\multicolumn{1}{|l|}{\textbf{Geographical focus}} &
Asia Pacific region &
Hong Kong &
Global &
Beijing &
Multi-national &
Multi-national &
\textbf{Japan} \\ \hline
\multicolumn{1}{|l|}{\textbf{Industry}} &
Hotels &
Hotels &
Tourism &
Restaurant (Beijing Roast Duck) &
Hotels &
Restaurants &
Hotels \\ \hline
\multicolumn{1}{|l|}{\textbf{Study subjects}} &
Hotel managers &
Hotel customers &
\begin{tabular}[c]{@{}l@{}}Tourists arriving \\ in airport\end{tabular} &
\begin{tabular}[c]{@{}l@{}}Diners \\ online reviews\end{tabular} &
\begin{tabular}[c]{@{}l@{}}Hotel customers \\ online reviews\end{tabular} &
\begin{tabular}[c]{@{}l@{}}Diners \\ online reviews\end{tabular} &
\begin{tabular}[c]{@{}l@{}}Hotel customers \\ online reviews\end{tabular} \\ \hline
\multicolumn{1}{|l|}{\textbf{Sample method}} &
surveys &
surveys &
survey &
text mining &
text mining &
text mining &
text mining \\ \hline
\multicolumn{1}{|l|}{\textbf{Number of samples}} &
185 surveys &
540 surveys &
\begin{tabular}[c]{@{}l@{}}165 Anglo-American\\ 209 Japanese\end{tabular} &
\begin{tabular}[c]{@{}l@{}}990 Chinese reviews\\ 398 English reviews\end{tabular} &
9000 reviews (3000 per country) &
\begin{tabular}[c]{@{}l@{}}2448 reviews\\ (1360 Chinese)\\ (1088 English)\end{tabular} &
\textbf{\begin{tabular}[c]{@{}l@{}}89,207 reviews\\ (48,070 Chinese)\\ (41,137 English)\end{tabular}} \\ \hline
\multicolumn{1}{|l|}{\textbf{Study method}} &
statistics &
VARIMAX &
MANOVA &
\begin{tabular}[c]{@{}l@{}}Semantic \\ Network \\ Analysis\end{tabular} &
Text Link Analysis &
\begin{tabular}[c]{@{}l@{}}Topic modeling \\ (LDA)\end{tabular} &
\textbf{\begin{tabular}[c]{@{}l@{}}SVM, \\ Dependency Parsing\\ and POS tagging\end{tabular}} \\ \hline
\multicolumn{1}{|l|}{\textbf{Subject nationality}} &
\begin{tabular}[c]{@{}l@{}}Asians: \\ China,\\ Fiji,\\ Hong Kong,\\ Indonesia,\\ Malaysia,\\ Singapore,\\ Taiwan,\\ Guam,\\ Tahiti,\\ Thailand \\ \\ Westerners: Australia,\\ New Zealand\end{tabular} &
\begin{tabular}[c]{@{}l@{}}Asians:\\ China,\\ Taiwan,\\ Japan,\\ South Korea,\\ South-East Asia\\ \\ Westerners:\\ North America,\\ Europe,\\ Australia,\\ New Zealand\end{tabular} &
USA, Japan &
\begin{tabular}[c]{@{}l@{}}English-speakers: \\ U.K., U.S., Australia,\\ New Zealand, Canada,\\ Ireland\\ \\ Chinese-speakers: China\end{tabular} &
USA, China, Italy &
USA, China &
\begin{tabular}[c]{@{}l@{}}Chinese-speakers:\\ China\\ \\ English-speakers:\\ (U.K., U.S.,\\ Australia,\\ New Zealand,\\ Canada, Ireland)\end{tabular} \\ \hline
\end{tabular}%
}
\end{table}
\end{landscape}
\subsection{Data mining, machine learning, knowledge discovery and sentiment analysis}\label{theory_data}
In the current world, data is presented to us in larger and larger quantities. Today's data sizes were commonly only seen in very specialized large laboratories with supercomputers a couple of decades ago. However, they are now standard for market and managerial studies, independent university students, and any scientist connecting to the Internet. Such quantities of data are available to study now more than ever. Nevertheless, it would be impossible for researchers to parse all of this data by themselves. As \cite{fayyad1996data} summarizes, data by itself is unusable until it goes through a process of selection, preprocessing, transformation, mining, and evaluation. Only then can it be established as knowledge. With the tools available to us in the era of information science, algorithms can be used to detect patterns that would take researchers too long to recognize. These patterns can, later on, be evaluated to generate knowledge. This process is called Knowledge Discovery in Databases.
Now, there are, of course, many sources of numerical data to be explored. However, perhaps what is most available and interesting for managerial purposes is the resource of customers' opinions in text form. Since the introduction of Web 2.0, an unprecedented quantity of valuable information is posted to the Internet at a staggering speed. Text mining was proposed more than a decade ago to utilize this data \cite[e.g.][]{rajman1998text,nahm2002text}. Using Natural Language Processing, one can parse language in a way that translates to numbers so that a computer can analyze it. Since then, text mining techniques have improved over the years. This has been used in the field of hospitality as well for many purposes, including satisfaction analysis from reviews \cite[e.g][]{berezina2016, xu2016, xiang2015, hargreaves2015, balbi2018}, social media's influence on travelers \cite[e.g.][]{xiang2010}, review summarization \cite[e.g.][]{hu2017436}, perceived value of reviews \cite[e.g][]{FANG2016498}, and even predicting hotel demand using web traffic data \cite[e.g][]{yang2014}.
More than only analyzing patterns within the text, researchers have found how to determine the sentiment behind a statement based on speech patterns, statistical patterns, and other methodologies. This method is called sentiment analysis or opinion mining. A precursor of this method was attempted decades ago \cite[][]{stone1966general}. With sentiment analysis, one could use patterns in the text to determine whether a sentence was being said with a positive opinion, or a critical one. This methodology could even determine other ranges of emotions, depending on the thoroughness of the algorithm. Examples of sentiment analysis include ranking products through online reviews \cite[e.g][]{liu2017149, zhang2011}, predicting political poll results through opinions in Twitter \cite[][]{oconnor2010}, and so on. In the hospitality field, it has been used to classify reviewers' opinions of hotels in online reviews \cite[e.g.][]{kim2017362, alsmadi2018}.
Our study used an algorithm for sentiment analysis called a Support Vector Machine (SVM), a supervised machine learning method used for binary classification. Machine learning is a general term for algorithms that, when given data, will automatically use that data to ``learn'' from its patterns and apply them to improve upon a task. Learning machines can be supervised, as in our study, where the algorithm uses manually labeled training data to detect patterns and then applies them to classify other unlabeled data automatically. Machine learning can also be unsupervised, where there is no pre-labeled data; in this case, the machine analyzes the structure and patterns of the data and performs a task based on its conclusions. Our study calls for a supervised machine since text analysis can be intricate: many patterns might occur, but we are only interested in satisfaction and dissatisfaction labels. Consequently, we teach the machine through previously labeled text samples.
Machine learning and data mining are two fields with a significant overlap since they can use each other's methods to achieve the task at hand. Machine learning methods focus on predicting new data based on known properties and patterns of the given data. Data mining, on the other hand, is discovering new information and new properties of the data. Our machine learning approach will learn the sentiment patterns of our sample texts showing satisfaction and dissatisfaction and using these to label the rest of the data. We are not exploring new patterns in the sentiment data. However, we are using sentiment predictions for knowledge discovery in our database. Thus, our study is a data mining experiment based on machine learning.
Because the methodology for finding patterns in the data is automatic and statistical, it is both reliable and unpredictable: reliable in that the algorithm will find a pattern by its nature, and unpredictable in that, with no intervention from the researchers in the form of questionnaires, it can produce results the researchers could not expect. These qualities explain why, similar to actual mining, data mining is mostly exploratory. One can never be sure of finding a specific something, but we can make predictions and estimates about finding knowledge and about what kind of knowledge we can uncover. The exploration of large opinion datasets with these methods is essential because we can discover knowledge that would otherwise be missed by observing a localized sample rather than taking a holistic view of every user's opinion. In other words, a machine algorithm can find needles in the haystack that we did not know were there, instead of us examining small bundles of hay at a time.
\section{Methodology}\label{method}
We extracted a large number of text reviews from the site \textit{Ctrip}, used mostly by mainland Chinese customers, and from the travel site \textit{TripAdvisor}. We then determined the most commonly used words that relate to positive and negative opinions in a review, using Shannon's entropy to extract keywords from their vocabulary. These positive and negative keywords allow us to train an optimized Support Vector Classifier (SVC) to perform a binary emotional classification of the reviews in large quantities, saving time and resources for the researchers. We then applied dependency parsing and Part of Speech (POS) tagging to the reviews to observe the relationship between adjective keywords and the nouns they refer to. We split the dataset into price ranges to observe the differences in keyword usage between lower-class and higher-class hotels, and we observed the frequency of the terms in the dataset to extract the most utilized words in the reviews of each language. We show an overview of this methodology in Figure \ref{fig:method-overview}, which is an updated version of the methodology used by \cite{Aleman2018ICAROB}. Finally, we also observed whether the satisfaction factors were soft or hard attributes of the hotel.
\begin{figure}[bp]
\centering
\includegraphics[width=\textwidth]{emotion-method-overview_V3.png}
\caption{Overview of the methodology to quantitatively rank satisfaction factors.}
\label{fig:method-overview}
\end{figure}
\subsection{Data collection}\label{datacollection}
In the \textit{Ctrip} data collection, reviews from a total of \num[group-separator={,}]{5774} hotels in Japan were collected. From these pages, we extracted a total of \num[group-separator={,}]{245919} reviews, of which \num[group-separator={,}]{211932} were detected to be standard Mandarin Chinese. Since a single review can contain sentences with different sentiments, we separated the sentences using punctuation marks. The Chinese reviews comprised \num[group-separator={,}]{187348} separate sentences.
In the \textit{TripAdvisor} data collection, we collected data from \num[group-separator={,}]{21380} different hotels. In total, we collected \num[group-separator={,}]{295931} reviews, of which \num[group-separator={,}]{295503} were detected to be in English. As with the Chinese data, we then separated these English reviews into \num[group-separator={,}]{2694261} sentences, in this case using the \textit{gensim} Python library. For the language detection in both cases, we used the \textit{langdetect} Python library.
However, to make the data comparisons fair, we filtered both databases to contain only reviews of hotels present in both datasets, matching them by their English names. We also filtered them to cover the same date range. In addition, we selected only the hotels that had pricing information available, and extracted the lowest and highest possible price for one night. The difference in pricing can come from better room settings, such as double or twin rooms or suites, depending on the hotel. Regardless of the reason, we chose the highest-priced room since it can be an indirect indicator of the hotel's class. After filtering, the datasets contained \num[group-separator={,}]{557} hotels in common. The overlapping date range for reviews was from July 2014 to July 2017. Within these hotels, \textit{Ctrip} contributed \num[group-separator={,}]{48070} reviews comprising \num[group-separator={,}]{101963} sentences, and \textit{TripAdvisor} contributed \num[group-separator={,}]{41137} reviews comprising \num[group-separator={,}]{348039} sentences.
The price for a night in these hotels ranges from cheap capsule hotels at 2000 yen per night to high-end hotels at 188,000 yen per night at the far ends of the distribution. Customers' expectations can vary greatly depending on the pricing of the hotel room they stay at. Therefore, we observed the distribution of pricing across the hotels in our database and binned the data into price ranges, chosen by considering the likely objective of stay. We show these distributions in Figure \ref{fig:price_dist}. The structure of the data after division by price is shown in Table \ref{tab:exp_notes}. This table also includes the results of emotional classification after applying our SVC, as explained in Section \ref{sentimentanalysis}. The first three price ranges (0 to 2500 yen, 2500 to 5000 yen, 5000 to 10,000 yen) would correspond to low-class hotels or even hostels on the lower end and cheap business hotels on the higher end. The next range (10,000 to 15,000 yen) corresponds to business hotels. After that, the stays could be at Japanese-style \textit{ryokan} when traveling in groups, high-class business hotels, luxury love hotels, or higher-class hotels (15,000 to 20,000 yen, 20,000 to 30,000 yen). Beyond that, stays are more likely to be at \textit{ryokan}, high-class resorts, or five-star hotels (30,000 to 50,000 yen, 50,000 to 100,000 yen, 100,000 to 200,000 yen). Note that because we chose the highest price per night for each hotel, the cheapest two price ranges (0 to 2500 yen, 2500 to 5000 yen) are empty, despite some rooms being priced at 2000 yen per night. Because of this, other tables will omit these two price ranges.
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{price_range_distribution_50_even_bins.png}
\caption{50 equal-length bins}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{price_range_distribution_bins_ver_2.png}
\caption{Nine manually set price ranges}
\end{subfigure}
\caption{Distribution of prices for one night; blue: lowest price, orange: highest price.}
\label{fig:price_dist}
\end{figure}
\begin{table}[ht]
\centering
\caption{Collected data and structure after price range categorizing.}
\label{tab:exp_notes}
\resizebox{\textwidth}{!}{%
\begin{tabular}{|l|l|r|r|}
\hline
\multicolumn{1}{|c|}{\textbf{Price range}} & \multicolumn{1}{c|}{\textbf{Data collected}} & \textbf{Ctrip database} & \textbf{Tripadvisor database} \\ \hline
\multirow{5}{*}{0: All Prices} & Hotels & 557 & 557 \\
& Reviews & 48,070 & 41,137 \\
& Sentences & 101,963 & 348,039 \\
& Positive sentences & 88,543 & 165,308 \\
& Negative sentences & 13,420 & 182,731 \\ \hline
\multirow{2}{*}{1: 0 to 2500 yen} & Hotels & 0 & 0 \\
& Reviews & 0 & 0 \\ \hline
\multirow{2}{*}{2: 2500 to 5000 yen} & Hotels & 0 & 0 \\
& Reviews & 0 & 0 \\ \hline
\multirow{5}{*}{3: 5000 to 10,000 yen} & Hotels & 22 & 22 \\
& Reviews & 452 & 459 \\
& Sentences & 1,108 & 3,988 \\
& Positive sentences & 924 & 1,875 \\
& Negative sentences & 184 & 2,113 \\ \hline
\multirow{5}{*}{4: 10,000 to 15,000 yen} & Hotels & 112 & 112 \\
& Reviews & 2,176 & 2,865 \\
& Sentences & 4,240 & 24,107 \\
& Positive sentences & 3,566 & 11,619 \\
& Negative sentences & 674 & 12,488 \\ \hline
\multirow{5}{*}{5: 15,000 to 20,000 yen} & Hotels & 138 & 138 \\
& Reviews & 7,043 & 4,384 \\
& Sentences & 14,726 & 37,342 \\
& Positive sentences & 12,775 & 17,449 \\
& Negative sentences & 1,951 & 19,893 \\ \hline
\multirow{5}{*}{6: 20,000 to 30,000 yen} & Hotels & 129 & 129 \\
& Reviews & 11,845 & 13,772 \\
& Sentences & 24,413 & 115,830 \\
& Positive sentences & 21,068 & 55,381 \\
& Negative sentences & 3,345 & 60,449 \\ \hline
\multirow{5}{*}{7: 30,000 to 50,000 yen} & Hotels & 83 & 83 \\
& Reviews & 8,283 & 7,001 \\
& Sentences & 17,939 & 58,409 \\
& Positive sentences & 15,642 & 28,493 \\
& Negative sentences & 2,297 & 29,916 \\ \hline
\multirow{5}{*}{8: 50,000 to 100,000 yen} & Hotels & 59 & 59 \\
& Reviews & 16,670 & 9,646 \\
& Sentences & 36,255 & 81,940 \\
& Positive sentences & 31,638 & 38,217 \\
& Negative sentences & 4,617 & 43,723 \\ \hline
\multirow{5}{*}{9: 100,000 to 200,000 yen} & Hotels & 14 & 14 \\
& Reviews & 1,601 & 3,010 \\
& Sentences & 3,282 & 26,423 \\
& Positive sentences & 2,930 & 12,274 \\
& Negative sentences & 352 & 14,149 \\ \hline
\end{tabular}%
}
\end{table}
\subsection{Text processing}\label{textprocessing}
We needed to analyze the grammatical relationships between words, in both English and Chinese, to understand the connections between adjectives and nouns. For all these processes, we used the Stanford CoreNLP pipeline developed by the Natural Language Processing Group at Stanford University \cite[][]{manning-EtAl:2014:P14-5}. In order to separate Chinese words for analysis, we used the Stanford Word Segmenter \cite[][]{chang2008}. In English texts, however, splitting on spaces alone is not enough to correctly collect concepts, because the English language is full of variations and conjugations of words depending on context and tense. Thus, a better normalization is achieved by using lemmatization, which returns each word's dictionary form. For this purpose, we used the \textit{gensim} library for the English texts.
A dependency parser analyzes the grammatical structure of a sentence, detecting connections between words and describing the type and direction of those connections. We show an example of these dependencies in Figure \ref{fig:depparse}. This study uses the Stanford NLP Dependency Parser, as described by \cite{chen-EMNLP:2014}. A list of dependencies used by this parser is detailed by \cite{marneffe_manning_2016_depparse_manual}; more recent versions use an updated dependency tag list from Universal Dependencies \cite[][]{zeman2018conll}. In our study, this step was necessary to extract adjective modifiers and the nouns they refer to. We did that by parsing the database and extracting instances of a few selected dependency codes. One of these dependency codes is ``amod'', which stands for ``adjectival modifier''. It is used when an adjective modifies a noun directly (e.g., ``a big apple''). The other dependency code we used was ``nsubj'', or nominal subject, the syntactic subject of a clause. We used this one for cases where the adjective modifies the noun indirectly through other words (e.g., ``the apple is big''). This dependency is not limited to combinations of adjectives and nouns; it can also connect copular verbs, nouns, or other adjectives. We therefore saw it necessary to also perform Part of Speech (POS) tagging of these clauses.
\begin{figure}[ht]
\centering
\includegraphics[width=0.6\textwidth]{depparse.png}
\caption{Example of dependency parsing.}
\label{fig:depparse}
\end{figure}
A Part of Speech (POS) tagger is a program that assigns word tokens with tags identifying the part of speech. An example is shown in Figure \ref{fig:postag}. A Part of Speech is a category of lexical items that serve similar grammatical purposes, for example, nouns, adjectives, verbs, or conjunctions. In our study, we used the Stanford NLP POS tagger software, described by \cite{toutanova2000enriching} and \cite{toutanova2003feature}, which uses the Penn Chinese Treebank tags \cite[][]{xia_penntreebank}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.6\textwidth]{postag.png}
\caption{Example of POS tagging with the Penn Treebank tags.}
\label{fig:postag}
\end{figure}
In this study, we were interested in identifying combinations of adjectives, some verbs, and nouns. We also needed to filter out bad combinations introduced by the versatility of nominal subject dependencies. For this purpose, we identified the tags for nouns, verbs, and adjectives in Chinese and English, with the English tags being somewhat more varied. What would be called adjectives in English corresponds more closely to stative verbs in Chinese, so we needed to extract those as well. We show a detailed description of the chosen tags in Table \ref{tab:target_postag}, and of the tags we needed to filter in Table \ref{tab:filter_postag}; the filtered tags were selected heuristically by observing commonly found undesired pairs.
\begin{table}[ht]
\centering
\caption{Target Parts of Speech for extraction and pairing.}
\label{tab:target_postag}
\resizebox{\textwidth}{!}{%
\begin{tabular}{|c|c|l|l|}
\hline
\textbf{Language} & \textbf{POS Tag} & \multicolumn{1}{c|}{\textbf{Part of Speech}} & \multicolumn{1}{c|}{\textbf{Examples}} \\ \hline
\multirow{4}{*}{Chinese target tags} & NN & Noun (general) & \begin{CJK}{UTF8}{gbsn}酒店\end{CJK} (hotel) \\ \cline{2-4}
& VA & Predicative Adjective (verb) & \begin{CJK}{UTF8}{gbsn}干净 的\end{CJK} (clean) \\ \cline{2-4}
& JJ & Noun modifier (adjectives) & \begin{CJK}{UTF8}{gbsn}干净\end{CJK} (clean) \\ \cline{2-4}
& VV & Verb (general) & \begin{CJK}{UTF8}{gbsn}推荐\end{CJK} (recommend) \\ \hline
\multirow{9}{*}{English target tags} & NN & Noun (general) & room \\ \cline{2-4}
& NNS & Noun (plural) & beds \\ \cline{2-4}
& JJ & Adjective & big \\ \cline{2-4}
& JJS & Adjective (superlative) & best \\ \cline{2-4}
& JJR & Adjective (comparative) & larger \\ \cline{2-4}
& VB & Verb (base form) & take \\ \cline{2-4}
& VBP & Verb (single present) & take \\ \cline{2-4}
& VBN & Verb (past participle) & taken \\ \cline{2-4}
& VBG & Verb (gerund / present participle) & taking \\ \hline
\end{tabular}%
}
\end{table}
\begin{table}[ht]
\centering
\caption{Filtered out Parts of Speech to aid pairing.}
\label{tab:filter_postag}
\resizebox{\textwidth}{!}{%
\begin{tabular}{|c|l|l|l|}
\hline
\textbf{Language} &
\multicolumn{1}{c|}{\textbf{POS Tag}} &
\multicolumn{1}{c|}{\textbf{Part of Speech}} &
\multicolumn{1}{c|}{\textbf{Examples}} \\ \hline
\multirow{4}{*}{Commonly filtered tags} & DT & Determiner & a, an \\ \cline{2-4}
& PN & Pronoun & I, you, they \\ \cline{2-4}
& CD & Cardinal Number & 1, 2, 3, 4, 5 \\ \cline{2-4}
& PU & Punctuation & .!? \\ \hline
\multirow{5}{*}{Chinese filtered tags} &
DEV &
Particle &
\begin{CJK}{UTF8}{gbsn}地\end{CJK} (adverbial particle) \\ \cline{2-4}
& NR & Noun (proper noun) & \begin{CJK}{UTF8}{gbsn}日本\end{CJK} (Japan) \\ \cline{2-4}
&
M &
Measure word &
\begin{CJK}{UTF8}{gbsn}个\end{CJK} (general classifier), \begin{CJK}{UTF8}{gbsn}公里\end{CJK} (kilometer) \\ \cline{2-4}
&
SP &
Sentence-final particle &
\begin{CJK}{UTF8}{gbsn}他\end{CJK} (he), \begin{CJK}{UTF8}{gbsn}好\end{CJK} (good) \\ \cline{2-4}
& IJ & Interjection & \begin{CJK}{UTF8}{gbsn}啊\end{CJK} (ah) \\ \hline
\multirow{3}{*}{English filtered tags} & NNP & Noun (proper noun) & Japan \\ \cline{2-4}
& PRP\$ & Possessive Pronoun & My, your, her, his \\ \cline{2-4}
& WP & Wh-pronoun & What, who \\ \hline
\end{tabular}%
}
\end{table}
Once we had these adjective + noun or verb + noun pairs, we could determine what the customers referred to in their reviews and with what frequency they used those pairings positively or negatively.
\subsection{Sentiment analysis using a Support Vector Classifier}\label{sentimentanalysis}
The sentiment analysis was performed using the methodology described by \cite{Aleman2018ICAROB}. Keywords are determined by comparing Shannon's entropy \cite[][]{shannon1948} between the two classes by a factor of \(\alpha\) for one class and \(\alpha'\) for the other; the keywords are then used as features in an SVC \cite[][]{cortes1995}, and the keyword lists are optimized to select the best performing classifier according to the \(F_1\)-measure \cite[][]{powers2011}. The selected SVC keywords then clearly represent the driving factors behind the users' positive and negative emotions. We also performed experiments to choose the best value of the parameter C used in the SVC. C is a constant that affects the optimization process when minimizing the error of the separating hyperplane: low values of C tolerate more misclassified training points in exchange for a wider margin, while high values of C penalize training errors more heavily, at the risk of overfitting the training data. SVC performance results are displayed in Tables \ref{tab:svm_f1_zh} and \ref{tab:svm_f1_en}. Examples of tagged sentences are shown in Table \ref{tab:training_examples}.
\begin{table}[ht]
\centering
\caption{Best performing SVC 5-fold cross-validation Chinese text classifiers.}
\label{tab:svm_f1_zh}
\resizebox{0.65\textwidth}{!}{%
\begin{tabular}{|l|l|l|l|l|}
\hline
\textbf{Keyword List} &
\textbf{\begin{tabular}[c]{@{}l@{}}Classifier\\ emotion\end{tabular}} &
\textbf{C} &
\begin{tabular}[c]{@{}l@{}}\(F_1\)\\ \(\mu\)\end{tabular} &
\begin{tabular}[c]{@{}l@{}}\(F_1\)\\ \(\sigma\)\end{tabular} \\ \hline
\begin{tabular}[c]{@{}l@{}}Satisfaction keywords\\ (\(\alpha = 2.75\))\end{tabular}
& Satisfaction & 2.5 & 0.91 & 0.01 \\ \hline
\begin{tabular}[c]{@{}l@{}}Negative keywords\\ (\(\alpha' = 3.75\))\end{tabular}
& Dissatisfaction & 0.5 & 0.67 & 0.11 \\ \hline
\begin{tabular}[c]{@{}l@{}}\textbf{Combined}\\ (\(\alpha=2.75\), \(\alpha'=3.75\))\end{tabular}
& \textbf{Satisfaction} & \textbf{0.5} & \textbf{0.95} & \textbf{0.01} \\ \hline
\end{tabular}
%
}
\end{table}
\begin{table}[ht]
\centering
\caption{Best performing SVC 10-fold cross-validation English text classifiers.}
\label{tab:svm_f1_en}
\resizebox{0.65\textwidth}{!}{%
\begin{tabular}{|l|l|l|l|l|}
\hline
\textbf{Keyword List} &
\textbf{\begin{tabular}[c]{@{}l@{}}Classifier\\ emotion\end{tabular}} &
\textbf{C} &
\begin{tabular}[c]{@{}l@{}}\(F_1\)\\ \(\mu\)\end{tabular} &
\begin{tabular}[c]{@{}l@{}}\(F_1\)\\ \(\sigma\)\end{tabular} \\ \hline
\begin{tabular}[c]{@{}l@{}}Satisfaction keywords \\ (\(\alpha=1.5\))\end{tabular}
& Satisfaction & 1.75 & 0.82 & 0.02 \\ \hline
\begin{tabular}[c]{@{}l@{}}Dissatisfaction keywords \\ (\(\alpha'=4.25\))\end{tabular}
& Dissatisfaction & 3 & 0.80 & 0.03 \\ \hline
\begin{tabular}[c]{@{}l@{}}\textbf{Combined} \\ (\(\alpha=1.5\), \(\alpha'=4.25\))\end{tabular}
& \textbf{Satisfaction} & \textbf{2} & \textbf{0.83} & \textbf{0.02} \\ \hline
\end{tabular}
%
}
\end{table}
\begin{table}[ht]
\centering
\caption{Examples of positive and negative sentences used for training SVM.}
\label{tab:training_examples}
\resizebox{\textwidth}{!}{%
\begin{tabular}{|c|c|l|}
\hline
\multicolumn{1}{|l|}{\textbf{Language}} &
\multicolumn{1}{l|}{\textbf{Emotion}} &
\textbf{Sentences} \\ \hline
\multirow{4}{*}{Chinese} &
\multirow{2}{*}{Positive} &
\begin{tabular}[c]{@{}l@{}}\begin{CJK}{UTF8}{gbsn}酒店 的 服务 很 好 和 我 住 过 的 所有 日本 酒店 一样 各 种 隐形 服务 非常 厉害\end{CJK}\\ (translated as: "The service of the hotel is very good.\\ All the services of the Japanese hotels I have stayed in are extremely good.")\end{tabular} \\ \cline{3-3}
&
&
\begin{tabular}[c]{@{}l@{}}\begin{CJK}{UTF8}{gbsn}有 一 个 后门 到 地铁站 非常 近 周边 也 算 方便 酒店 服务 和 卫生 都 很 好\end{CJK}\\ (translated as: "There is a back door to the subway station very close to it. \\ The surrounding area is also convenient hotel service and health are very good")\end{tabular} \\ \cline{2-3}
&
\multirow{2}{*}{Negative} &
\begin{tabular}[c]{@{}l@{}}\begin{CJK}{UTF8}{gbsn}酒店 旁边 很 荒凉 连个 便利 店 都 要 走 很远\end{CJK}\\ (translated as: "The hotel is very bleak, \\ and you have to go very far to go to the nearest convenience store.")\end{tabular} \\ \cline{3-3}
&
&
\begin{tabular}[c]{@{}l@{}}\begin{CJK}{UTF8}{gbsn}唯一 不 足 是 价格 太高\end{CJK}\\ (translated as: "The only negative is that the price is too high.")\end{tabular} \\ \hline
\multirow{4}{*}{English} &
\multirow{2}{*}{Positive} &
It was extremely clean, peaceful and the hotel Hosts made us feel super welcome \\ \cline{3-3}
&
&
\begin{tabular}[c]{@{}l@{}}Location is very good, close to a main road with a subway station, a bakery,\\ a 7 eleven and a nice restaurant that is not too expensive but serves good food\end{tabular} \\ \cline{2-3}
&
\multirow{2}{*}{Negative} &
\begin{tabular}[c]{@{}l@{}}The only downside. Our room was labeled 'non-smoking'\\ but our duvet reeked of smoke.\end{tabular} \\ \cline{3-3}
&
&
A bit pricey though \\ \hline
\end{tabular}%
}
\end{table}
Shannon's entropy can be used to observe the probability distribution of each word across the corpus. A word included in many documents will have a high entropy value for that set of documents. Conversely, a word appearing in only one document will have an entropy value of zero.
An SVC is trained to classify data based on previously labeled examples, generalizing their features by defining a separating (p-1)-dimensional hyperplane in a p-dimensional space in which each dimension corresponds to one feature of the data. The separating hyperplane, together with the support vectors, partitions this multi-dimensional space while minimizing the classification error.
Our study used a linear kernel for the SVC, defined by formula (\ref{eq:svm1}) below. Each training sentence is a data point, represented by a feature vector \(x\) whose entries are the counts of each keyword in that particular sentence. The previously known label of each sentence (1 for positive, 0 for negative) is the target value that the decision function \(f(x)\) is trained to reproduce. The weight vector \(w\) accumulates the influence that each training point has had on the orientation of the separating hyperplane, and the bias coefficient \(b\) determines its offset.
During training, every incorrectly classified data point alters the weight vector so that new data are classified correctly. These changes are larger for the features of points close to the separating hyperplane, because those points must be taken into account to keep the classification error minimal. Consequently, the weight vector can be interpreted as a numerical representation of each feature's contribution to the classification of each class. Formula (\ref{eq:svm_weight}) below gives the weight vector \(w\), where each vectorized sentence \(x_i\) in the training data carries the label \(y_i\). Every cycle of the algorithm updates the coefficients \(\alpha_i\) to reduce the number of misclassifications; the equation shows their values after the final cycle.
\begin{equation}\label{eq:svm1}
f(x) = w^\top x + b
\end{equation}
\begin{equation}\label{eq:svm_weight}
w = \sum_{i=1}^N \alpha_i y_i x_i
\end{equation}
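As a minimal sketch of this setup (assuming scikit-learn, with toy keyword-count features that are illustrative rather than the ones actually used in this study), a linear-kernel SVC can be fitted and its weight vector read back per keyword as follows:
\begin{verbatim}
# Minimal sketch: linear-kernel SVC over keyword-count features;
# clf.coef_ plays the role of w in f(x) = w^T x + b, clf.intercept_ of b.
import numpy as np
from sklearn.svm import SVC

keywords = ["clean", "staff", "location", "pricey", "dirty"]
# Rows: sentences; columns: counts of each keyword (toy data).
X = np.array([[2, 1, 0, 0, 0],
              [0, 1, 1, 0, 0],
              [0, 0, 0, 1, 0],
              [0, 0, 0, 1, 1]])
y = np.array([1, 1, 0, 0])          # 1 = positive, 0 = negative

clf = SVC(kernel="linear", C=2.0)   # C is the regularization parameter,
clf.fit(X, y)                       # tuned by cross-validation in the study

for kw, weight in zip(keywords, clf.coef_[0]):
    print(f"{kw:10s} w = {weight:+.3f}")
print("bias b =", clf.intercept_[0])
\end{verbatim}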
We tagged 159 Chinese sentences and \num[group-separator={,}]{2357} English sentences as positive or negative for our training data. The entropy comparison factors \(\alpha\) and \(\alpha'\) were tested from 1.25 to 6 in intervals of 0.25. We then applied this SVC to classify the rest of our data collection; the positive and negative sentence counts shown in Table \ref{tab:exp_notes} result from this classification.
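The parameter sweep can be sketched as follows; the helper \texttt{build\_features}, which would construct keyword-count features from the entropy-selected keyword lists for a given \((\alpha, \alpha')\) pair, is a hypothetical placeholder, and only the cross-validation scaffolding mirrors the procedure described above.
\begin{verbatim}
# Sketch: sweep the entropy comparison factors and keep the settings with
# the best mean 10-fold cross-validation F1 score.  `build_features` is a
# hypothetical placeholder for the entropy-based keyword selection step.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def sweep_entropy_factors(build_features, sentences, labels, c_values):
    best_setting, best_f1 = None, -1.0
    for alpha in np.arange(1.25, 6.01, 0.25):
        for alpha_prime in np.arange(1.25, 6.01, 0.25):
            X = build_features(sentences, alpha, alpha_prime)
            for C in c_values:
                scores = cross_val_score(SVC(kernel="linear", C=C),
                                         X, labels, cv=10, scoring="f1")
                if scores.mean() > best_f1:
                    best_setting = (alpha, alpha_prime, C)
                    best_f1 = scores.mean()
    return best_setting, best_f1
\end{verbatim}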
\section{Data Analysis}\label{dataanalysis}
\subsection{Frequent keywords in differently priced hotels}\label{svmresults}
We examined the top 10 satisfaction and dissatisfaction keywords, i.e., those with the highest frequencies in emotionally positive and negative statements. These keywords provide a quantitative ranking of the needs of Chinese- and English-speaking customers. We show the top 10 positive keywords for each price range, comparing English and Chinese, in Table \ref{tab:freq_res_pos}; the corresponding negative keywords are shown in Table \ref{tab:freq_res_neg}.
We can observe that, within the same language, the most used keywords are similar across price ranges, with only a few changes in their priority. For example, in Chinese, customers praise cleanliness first in cheaper hotels, whereas the size of the room or bed is praised more in higher-class hotels. Another example is that, in negative English reviews, complaints about price appear only for hotels above 10,000 yen per night and climb in rank as the hotel's price increases.
\begin{table}[ht]
\centering
\caption{English and Chinese comparison of the top 10 positive keywords.}
\label{tab:freq_res_pos}
\resizebox{\textwidth}{!}{%
\begin{tabular}{|c|lr|lr|}
\hline
\textbf{Price range} &
\multicolumn{1}{c|}{\textbf{Chinese keyword}} &
\multicolumn{1}{c|}{\textbf{Counts in Ctrip}} &
\multicolumn{1}{c|}{\textbf{English keyword}} &
\multicolumn{1}{c|}{\textbf{Counts in Tripadvisor}} \\ \hline
\multirow{10}{*}{\textbf{0: All Prices}} & \begin{CJK}{UTF8}{gbsn}不错\end{CJK} (not bad) & 12892 & good & 19148 \\
& \begin{CJK}{UTF8}{gbsn}大\end{CJK} (big) & 9844 & staff & 16289 \\
& \begin{CJK}{UTF8}{gbsn}干净\end{CJK} (clean) & 6665 & great & 16127 \\
& \begin{CJK}{UTF8}{gbsn}交通\end{CJK} (traffic) & 6560 & location & 11838 \\
& \begin{CJK}{UTF8}{gbsn}早餐\end{CJK} (breakfast) & 5605 & nice & 11615 \\
& \begin{CJK}{UTF8}{gbsn}近\end{CJK} (near) & 5181 & clean & 9064 \\
& \begin{CJK}{UTF8}{gbsn}地铁\end{CJK} (subway) & 4321 & helpful & 5846 \\
& \begin{CJK}{UTF8}{gbsn}购物\end{CJK} (shopping) & 4101 & excellent & 5661 \\
& \begin{CJK}{UTF8}{gbsn}推荐\end{CJK} (recommend) & 3281 & comfortable & 5625 \\
& \begin{CJK}{UTF8}{gbsn}环境\end{CJK} (environment) & 3258 & friendly & 5606 \\ \hline
\multirow{10}{*}{\textbf{3: 5000 to 10,000 yen}} & \begin{CJK}{UTF8}{gbsn}不错\end{CJK} (not bad) & 139 & good & 206 \\
& \begin{CJK}{UTF8}{gbsn}干净\end{CJK} (clean) & 114 & staff & 181 \\
& \begin{CJK}{UTF8}{gbsn}早餐\end{CJK} (breakfast) & 112 & clean & 174 \\
& \begin{CJK}{UTF8}{gbsn}大\end{CJK} (big) & 76 & nice & 166 \\
& \begin{CJK}{UTF8}{gbsn}交通\end{CJK} (traffic) & 72 & great & 143 \\
& \begin{CJK}{UTF8}{gbsn}地铁\end{CJK} (subway) & 66 & location & 91 \\
& \begin{CJK}{UTF8}{gbsn}近\end{CJK} (near) & 55 & comfortable & 79 \\
& \begin{CJK}{UTF8}{gbsn}地铁站\end{CJK} (subway station) & 51 & helpful & 70 \\
& \begin{CJK}{UTF8}{gbsn}远\end{CJK} (far) & 41 & friendly & 64 \\
& \begin{CJK}{UTF8}{gbsn}附近\end{CJK} (nearby) & 34 & recommend & 59 \\ \hline
\multirow{10}{*}{\textbf{4: 10,000 to 15,000 yen}} & \begin{CJK}{UTF8}{gbsn}不错\end{CJK} (not bad) & 601 & good & 1399 \\
& \begin{CJK}{UTF8}{gbsn}干净\end{CJK} (clean) & 455 & staff & 1165 \\
& \begin{CJK}{UTF8}{gbsn}大\end{CJK} (big) & 348 & great & 961 \\
& \begin{CJK}{UTF8}{gbsn}近\end{CJK} (near) & 323 & nice & 808 \\
& \begin{CJK}{UTF8}{gbsn}早餐\end{CJK} (breakfast) & 270 & location & 800 \\
& \begin{CJK}{UTF8}{gbsn}卫生\end{CJK} (hygiene) & 201 & clean & 656 \\
& \begin{CJK}{UTF8}{gbsn}交通\end{CJK} (traffic) & 196 & excellent & 412 \\
& \begin{CJK}{UTF8}{gbsn}地铁\end{CJK} (subway) & 164 & friendly & 400 \\
& \begin{CJK}{UTF8}{gbsn}远\end{CJK} (far) & 158 & helpful & 393 \\
& \begin{CJK}{UTF8}{gbsn}附近\end{CJK} (nearby) & 150 & comfortable & 391 \\ \hline
\multirow{10}{*}{\textbf{5: 15,000 to 20,000 yen}} & \begin{CJK}{UTF8}{gbsn}不错\end{CJK} (not bad) & 1925 & good & 2242 \\
& \begin{CJK}{UTF8}{gbsn}干净\end{CJK} (clean) & 1348 & staff & 1674 \\
& \begin{CJK}{UTF8}{gbsn}大\end{CJK} (big) & 1277 & great & 1414 \\
& \begin{CJK}{UTF8}{gbsn}交通\end{CJK} (traffic) & 1058 & clean & 1204 \\
& \begin{CJK}{UTF8}{gbsn}近\end{CJK} (near) & 1016 & nice & 1175 \\
& \begin{CJK}{UTF8}{gbsn}地铁\end{CJK} (subway) & 801 & location & 1109 \\
& \begin{CJK}{UTF8}{gbsn}早餐\end{CJK} (breakfast) & 777 & comfortable & 621 \\
& \begin{CJK}{UTF8}{gbsn}地铁站\end{CJK} (subway station) & 639 & friendly & 615 \\
& \begin{CJK}{UTF8}{gbsn}附近\end{CJK} (nearby) & 572 & free & 581 \\
& \begin{CJK}{UTF8}{gbsn}购物\end{CJK} (shopping) & 516 & helpful & 552 \\ \hline
\multirow{10}{*}{\textbf{6: 20,000 to 30,000 yen}} & \begin{CJK}{UTF8}{gbsn}不错\end{CJK} (not bad) & 3110 & good & 6550 \\
& \begin{CJK}{UTF8}{gbsn}大\end{CJK} (big) & 2245 & staff & 5348 \\
& \begin{CJK}{UTF8}{gbsn}交通\end{CJK} (traffic) & 1990 & great & 5074 \\
& \begin{CJK}{UTF8}{gbsn}干净\end{CJK} (clean) & 1940 & location & 4414 \\
& \begin{CJK}{UTF8}{gbsn}近\end{CJK} (near) & 1433 & nice & 3451 \\
& \begin{CJK}{UTF8}{gbsn}地铁\end{CJK} (subway) & 1073 & clean & 3364 \\
& \begin{CJK}{UTF8}{gbsn}早餐\end{CJK} (breakfast) & 1007 & shopping & 1992 \\
& \begin{CJK}{UTF8}{gbsn}购物\end{CJK} (shopping) & 979 & helpful & 1970 \\
& \begin{CJK}{UTF8}{gbsn}周边\end{CJK} (surroundings) & 837 & comfortable & 1941 \\
& \begin{CJK}{UTF8}{gbsn}附近\end{CJK} (nearby) & 825 & friendly & 1915 \\ \hline
\multirow{10}{*}{\textbf{7: 30,000 to 50,000 yen}} & \begin{CJK}{UTF8}{gbsn}不错\end{CJK} (not bad) & 2291 & good & 3407 \\
& \begin{CJK}{UTF8}{gbsn}大\end{CJK} (big) & 1913 & staff & 2867 \\
& \begin{CJK}{UTF8}{gbsn}干净\end{CJK} (clean) & 1159 & great & 2620 \\
& \begin{CJK}{UTF8}{gbsn}交通\end{CJK} (traffic) & 1105 & location & 2186 \\
& \begin{CJK}{UTF8}{gbsn}近\end{CJK} (near) & 935 & nice & 2160 \\
& \begin{CJK}{UTF8}{gbsn}早餐\end{CJK} (breakfast) & 846 & clean & 1750 \\
& \begin{CJK}{UTF8}{gbsn}推荐\end{CJK} (recommend) & 638 & helpful & 1147 \\
& \begin{CJK}{UTF8}{gbsn}购物\end{CJK} (shopping) & 636 & train & 1040 \\
& \begin{CJK}{UTF8}{gbsn}周边\end{CJK} (surroundings) & 552 & subway & 1034 \\
& \begin{CJK}{UTF8}{gbsn}环境\end{CJK} (environment) & 541 & friendly & 1001 \\ \hline
\multirow{10}{*}{\textbf{8: 50,000 to 100,000 yen}} & \begin{CJK}{UTF8}{gbsn}不错\end{CJK} (not bad) & 4451 & great & 4425 \\
& \begin{CJK}{UTF8}{gbsn}大\end{CJK} (big) & 3670 & good & 4350 \\
& \begin{CJK}{UTF8}{gbsn}早餐\end{CJK} (breakfast) & 2422 & staff & 3777 \\
& \begin{CJK}{UTF8}{gbsn}交通\end{CJK} (traffic) & 2012 & nice & 2991 \\
& \begin{CJK}{UTF8}{gbsn}购物\end{CJK} (shopping) & 1764 & location & 2439 \\
& \begin{CJK}{UTF8}{gbsn}新\end{CJK} (new) & 1634 & clean & 1655 \\
& \begin{CJK}{UTF8}{gbsn}棒\end{CJK} (great) & 1626 & excellent & 1555 \\
& \begin{CJK}{UTF8}{gbsn}地铁\end{CJK} (subway) & 1604 & helpful & 1313 \\
& \begin{CJK}{UTF8}{gbsn}干净\end{CJK} (clean) & 1577 & comfortable & 1246 \\
& \begin{CJK}{UTF8}{gbsn}近\end{CJK} (near) & 1354 & friendly & 1238 \\ \hline
\multirow{10}{*}{\textbf{9: 100,000 to 200,000 yen}} & \begin{CJK}{UTF8}{gbsn}不错\end{CJK} (not bad) & 375 & great & 1488 \\
& \begin{CJK}{UTF8}{gbsn}大\end{CJK} (big) & 315 & staff & 1277 \\
& \begin{CJK}{UTF8}{gbsn}棒\end{CJK} (great) & 189 & good & 994 \\
& \begin{CJK}{UTF8}{gbsn}早餐\end{CJK} (breakfast) & 171 & nice & 864 \\
& \begin{CJK}{UTF8}{gbsn}环境\end{CJK} (environment) & 157 & location & 799 \\
& \begin{CJK}{UTF8}{gbsn}交通\end{CJK} (traffic) & 127 & excellent & 631 \\
& \begin{CJK}{UTF8}{gbsn}选择\end{CJK} (select) & 112 & beautiful & 455 \\
& \begin{CJK}{UTF8}{gbsn}推荐\end{CJK} (recommend) & 109 & large & 404 \\
& \begin{CJK}{UTF8}{gbsn}赞\end{CJK} (awesome) & 101 & helpful & 401 \\
& \begin{CJK}{UTF8}{gbsn}购物\end{CJK} (shopping) & 98 & wonderful & 372 \\ \hline
\end{tabular}%
}
\end{table}
\begin{table}[ht]
\centering
\caption{English and Chinese comparison of the top 10 negative keywords.}
\label{tab:freq_res_neg}
\resizebox{\textwidth}{!}{%
\begin{tabular}{|c|lr|lr|}
\hline
\textbf{Price range} &
\multicolumn{1}{c|}{\textbf{Chinese keyword}} &
\multicolumn{1}{c|}{\textbf{Counts in Ctrip}} &
\multicolumn{1}{c|}{\textbf{English keyword}} &
\multicolumn{1}{c|}{\textbf{Counts in Tripadvisor}} \\ \hline
\multirow{10}{*}{\textbf{0: All Prices}} & \begin{CJK}{UTF8}{gbsn}价格\end{CJK} (price) & 1838 & pricey & 462 \\
& \begin{CJK}{UTF8}{gbsn}一般\end{CJK} (general) & 1713 & poor & 460 \\
& \begin{CJK}{UTF8}{gbsn}中文\end{CJK} (Chinese) & 733 & dated & 431 \\
& \begin{CJK}{UTF8}{gbsn}地理\end{CJK} (geography) & 691 & disappointing & 376 \\
& \begin{CJK}{UTF8}{gbsn}距离\end{CJK} (distance) & 434 & worst & 327 \\
& \begin{CJK}{UTF8}{gbsn}陈旧\end{CJK} (obsolete) & 319 & minor & 258 \\
& \begin{CJK}{UTF8}{gbsn}老\end{CJK} (old) & 297 & uncomfortable & 253 \\
& \begin{CJK}{UTF8}{gbsn}华人\end{CJK} (Chinese) & 15 & carpet & 240 \\
& & & annoying & 220 \\
& & & sense & 220 \\ \hline
\multirow{10}{*}{\textbf{3: 5000 to 10,000 yen}} & \begin{CJK}{UTF8}{gbsn}价格\end{CJK} (price) & 31 & worst & 6 \\
& \begin{CJK}{UTF8}{gbsn}一般\end{CJK} (general) & 28 & walkway & 5 \\
& \begin{CJK}{UTF8}{gbsn}距离\end{CJK} (distance) & 11 & unable & 4 \\
& \begin{CJK}{UTF8}{gbsn}地理\end{CJK} (geography) & 10 & worse & 4 \\
& \begin{CJK}{UTF8}{gbsn}中文\end{CJK} (Chinese) & 9 & annoying & 3 \\
& \begin{CJK}{UTF8}{gbsn}老\end{CJK} (old) & 2 & dirty & 3 \\
& & & funny smell & 3 \\
& & & poor & 3 \\
& & & renovation & 3 \\
& & & carpet & 2 \\ \hline
\multirow{10}{*}{\textbf{4: 10,000 to 15,000 yen}} & \begin{CJK}{UTF8}{gbsn}价格\end{CJK} (price) & 98 & dated & 40 \\
& \begin{CJK}{UTF8}{gbsn}一般\end{CJK} (general) & 91 & poor & 29 \\
& \begin{CJK}{UTF8}{gbsn}距离\end{CJK} (distance) & 43 & disappointing & 26 \\
& \begin{CJK}{UTF8}{gbsn}陈旧\end{CJK} (obsolete) & 34 & worst & 24 \\
& \begin{CJK}{UTF8}{gbsn}地理\end{CJK} (geography) & 31 & uncomfortable & 23 \\
& \begin{CJK}{UTF8}{gbsn}老\end{CJK} (old) & 30 & cigarette & 22 \\
& \begin{CJK}{UTF8}{gbsn}中文\end{CJK} (Chinese) & 26 & pricey & 22 \\
& & & minor & 21 \\
& & & paper & 19 \\
& & & unable & 19 \\ \hline
\multirow{10}{*}{\textbf{5: 15,000 to 20,000 yen}} & \begin{CJK}{UTF8}{gbsn}价格\end{CJK} (price) & 296 & poor & 57 \\
& \begin{CJK}{UTF8}{gbsn}一般\end{CJK} (general) & 218 & dated & 41 \\
& \begin{CJK}{UTF8}{gbsn}地理\end{CJK} (geography) & 125 & disappointing & 38 \\
& \begin{CJK}{UTF8}{gbsn}中文\end{CJK} (Chinese) & 93 & annoying & 36 \\
& \begin{CJK}{UTF8}{gbsn}距离\end{CJK} (distance) & 84 & worst & 36 \\
& \begin{CJK}{UTF8}{gbsn}陈旧\end{CJK} (obsolete) & 43 & cigarette & 31 \\
& \begin{CJK}{UTF8}{gbsn}老\end{CJK} (old) & 26 & rude & 28 \\
& \begin{CJK}{UTF8}{gbsn}华人\end{CJK} (Chinese) & 3 & uncomfortable & 26 \\
& & & paper & 25 \\
& & & pricey & 24 \\ \hline
\multirow{10}{*}{\textbf{6: 20,000 to 30,000 yen}} & \begin{CJK}{UTF8}{gbsn}一般\end{CJK} (general) & 504 & poor & 136 \\
& \begin{CJK}{UTF8}{gbsn}价格\end{CJK} (price) & 472 & dated & 131 \\
& \begin{CJK}{UTF8}{gbsn}地理\end{CJK} (geography) & 164 & pricey & 120 \\
& \begin{CJK}{UTF8}{gbsn}中文\end{CJK} (Chinese) & 155 & disappointing & 112 \\
& \begin{CJK}{UTF8}{gbsn}距离\end{CJK} (distance) & 116 & uncomfortable & 103 \\
& \begin{CJK}{UTF8}{gbsn}陈旧\end{CJK} (obsolete) & 75 & minor & 93 \\
& \begin{CJK}{UTF8}{gbsn}老\end{CJK} (old) & 55 & smallest & 88 \\
& \begin{CJK}{UTF8}{gbsn}华人\end{CJK} (Chinese) & 2 & worst & 86 \\
& & & cigarette & 79 \\
& & & annoying & 70 \\ \hline
\multirow{10}{*}{\textbf{7: 30,000 to 50,000 yen}} & \begin{CJK}{UTF8}{gbsn}价格\end{CJK} (price) & 326 & poor & 92 \\
& \begin{CJK}{UTF8}{gbsn}一般\end{CJK} (general) & 311 & pricey & 92 \\
& \begin{CJK}{UTF8}{gbsn}地理\end{CJK} (geography) & 110 & dated & 65 \\
& \begin{CJK}{UTF8}{gbsn}中文\end{CJK} (Chinese) & 94 & worst & 64 \\
& \begin{CJK}{UTF8}{gbsn}陈旧\end{CJK} (obsolete) & 71 & carpet & 55 \\
& \begin{CJK}{UTF8}{gbsn}距离\end{CJK} (distance) & 68 & uncomfortable & 55 \\
& \begin{CJK}{UTF8}{gbsn}老\end{CJK} (old) & 45 & dirty & 51 \\
& \begin{CJK}{UTF8}{gbsn}华人\end{CJK} (Chinese) & 2 & disappointing & 50 \\
& & & cigarette & 46 \\
& & & unable & 43 \\ \hline
\multirow{10}{*}{\textbf{8: 50,000 to 100,000 yen}} &
\begin{CJK}{UTF8}{gbsn}价格\end{CJK} (price) & 561 & pricey & 163 \\
& \begin{CJK}{UTF8}{gbsn}一般\end{CJK} (general) & 510 & dated & 150 \\
& \begin{CJK}{UTF8}{gbsn}中文\end{CJK} (Chinese) & 337 & disappointing & 129 \\
& \begin{CJK}{UTF8}{gbsn}地理\end{CJK} (geography) & 239 & poor & 124 \\
& \begin{CJK}{UTF8}{gbsn}老\end{CJK} (old) & 134 & worst & 98 \\
& \begin{CJK}{UTF8}{gbsn}距离\end{CJK} (distance) & 97 & walkway & 82 \\
& \begin{CJK}{UTF8}{gbsn}陈旧\end{CJK} (obsolete) & 90 & carpet & 71 \\
& \begin{CJK}{UTF8}{gbsn}华人\end{CJK} (Chinese) & 8 & minor & 63 \\
& & & sense & 63 \\
& & & outdated & 58 \\ \hline
\multirow{10}{*}{\textbf{9: 100,000 to 200,000 yen}} & \begin{CJK}{UTF8}{gbsn}价格\end{CJK} (price) & 54 & pricey & 40 \\
& \begin{CJK}{UTF8}{gbsn}一般\end{CJK} (general) & 51 & sense & 34 \\
& \begin{CJK}{UTF8}{gbsn}中文\end{CJK} (Chinese) & 19 & minor & 33 \\
& \begin{CJK}{UTF8}{gbsn}距离\end{CJK} (distance) & 15 & lighting & 20 \\
& \begin{CJK}{UTF8}{gbsn}地理\end{CJK} (geography) & 12 & disappointing & 19 \\
& \begin{CJK}{UTF8}{gbsn}陈旧\end{CJK} (obsolete) & 6 & poor & 19 \\
& \begin{CJK}{UTF8}{gbsn}老\end{CJK} (old) & 5 & annoying & 16 \\
& & & mixed & 15 \\
& & & disappointment & 14 \\
& & & paper & 14 \\ \hline
\end{tabular}%
}
\end{table}
\subsection{Frequently used adjectives and their pairs}\label{adjresults}
Some keywords in these lists are adjectives, such as the word ``\begin{CJK}{UTF8}{gbsn}大\end{CJK} (big)'' mentioned before. To understand them, we performed the dependency parsing and part-of-speech tagging explained in section \ref{textprocessing}. Although many such connections exist, we only considered the top four keyword connections per adjective per price range. We show the most used Chinese adjectives in positive keywords in Table \ref{tab:adj_zh_pos} and in negative keywords in Table \ref{tab:adj_zh_neg}. Similarly, the most common English adjectives used in positive sentences are shown in Table \ref{tab:adj_en_pos}, and those used in negative sentences in Table \ref{tab:adj_en_neg}.
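As an illustration of this pair extraction for English reviews, the following minimal sketch relies on spaCy's part-of-speech tags and dependency parse; the model name and the example sentences are illustrative, and the actual pipeline (including the Chinese side) follows section \ref{textprocessing}.
\begin{verbatim}
# Minimal sketch: count adjective-noun pairs from dependency parses of
# English review sentences with spaCy (model and sentences illustrative).
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")

def adjective_pairs(sentences):
    pairs = Counter()
    for doc in nlp.pipe(sentences):
        for token in doc:
            # e.g. "clean room": the adjective modifies a noun head (amod).
            if token.pos_ == "ADJ" and token.dep_ == "amod" \
                    and token.head.pos_ == "NOUN":
                pairs[(token.lemma_, token.head.lemma_)] += 1
    return pairs

counts = adjective_pairs(["A clean room and friendly staff.",
                          "Great location near the subway station."])
for (adj, noun), n in counts.most_common(4):
    print(adj, noun, n)
\end{verbatim}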
\begin{landscape}
\begin{table}[p]
\centering
\caption{Top 4 words related to the mainly used adjectives in positive Chinese texts.}
\label{tab:adj_zh_pos}
\resizebox{\linewidth}{!}{%
\begin{tabular}{|c|l|l|l|l|l|l|}
\hline
\textbf{Price range} &
\multicolumn{1}{c|}{\textbf{\begin{CJK}{UTF8}{gbsn}不错\end{CJK} (not bad)}} &
\multicolumn{1}{c|}{\textbf{\begin{CJK}{UTF8}{gbsn}大\end{CJK} (big)}} &
\multicolumn{1}{c|}{\textbf{\begin{CJK}{UTF8}{gbsn}干净\end{CJK} (clean)}} &
\multicolumn{1}{c|}{\textbf{\begin{CJK}{UTF8}{gbsn}近\end{CJK} (near)}} &
\multicolumn{1}{c|}{\textbf{\begin{CJK}{UTF8}{gbsn}新\end{CJK} (new)}} &
\multicolumn{1}{c|}{\textbf{\begin{CJK}{UTF8}{gbsn}棒\end{CJK} (great)}} \\ \hline
\multirow{5}{*}{\textbf{0: All Prices}} &
\begin{CJK}{UTF8}{gbsn}不错\end{CJK} (not bad) : 12892 &
\begin{CJK}{UTF8}{gbsn}大\end{CJK} (big) : 9844 &
\begin{CJK}{UTF8}{gbsn}干净\end{CJK} (clean) : 6665 &
\begin{CJK}{UTF8}{gbsn}近\end{CJK} (near) : 5181 &
\begin{CJK}{UTF8}{gbsn}新\end{CJK} (new) : 2775 &
\begin{CJK}{UTF8}{gbsn}棒\end{CJK} (great) : 3028 \\
&
\begin{CJK}{UTF8}{gbsn}不错 酒店\end{CJK} (nice hotel) : 1462 &
\begin{CJK}{UTF8}{gbsn}大 房间\end{CJK} (big room) : 3197 &
\begin{CJK}{UTF8}{gbsn}干净 房间\end{CJK} (clean room) : 1224 &
\begin{CJK}{UTF8}{gbsn}近 酒店\end{CJK} (near hotel) : 453 &
\begin{CJK}{UTF8}{gbsn}新 设施\end{CJK} (new facility) : 363 &
\begin{CJK}{UTF8}{gbsn}棒 酒店\end{CJK} (great hotel) : 463 \\
&
\begin{CJK}{UTF8}{gbsn}不错 位置\end{CJK} (nice location) : 1426 &
\begin{CJK}{UTF8}{gbsn}大 床\end{CJK} (big bed) : 772 &
\begin{CJK}{UTF8}{gbsn}干净 酒店\end{CJK} (clean hotel) : 737 &
\begin{CJK}{UTF8}{gbsn}近 桥\end{CJK} (near bridge) : 144 &
\begin{CJK}{UTF8}{gbsn}新 酒店\end{CJK} (new hotel) : 246 &
\begin{CJK}{UTF8}{gbsn}棒 位置\end{CJK} (great position) : 218 \\
&
\begin{CJK}{UTF8}{gbsn}不错 服务\end{CJK} (nice service) : 869 &
\begin{CJK}{UTF8}{gbsn}大 酒店\end{CJK} (big hotel) : 379 &
\begin{CJK}{UTF8}{gbsn}干净 卫生\end{CJK} (clean and hygienic) : 464 &
\begin{CJK}{UTF8}{gbsn}近 地铁站\end{CJK} (near subway station) : 122 &
\begin{CJK}{UTF8}{gbsn}新 装修\end{CJK} (new decoration) : 116 &
\begin{CJK}{UTF8}{gbsn}棒 服务\end{CJK} (great service) : 168 \\
&
\begin{CJK}{UTF8}{gbsn}不错 环境\end{CJK} (nice environment) : 714 &
\begin{CJK}{UTF8}{gbsn}大 超市\end{CJK} (big supermarket) : 232 &
\begin{CJK}{UTF8}{gbsn}干净 环境\end{CJK} (clean environment) : 61 &
\begin{CJK}{UTF8}{gbsn}近 站\end{CJK} (near station) : 108 &
\begin{CJK}{UTF8}{gbsn}新 房间\end{CJK} (new room) : 53 &
\begin{CJK}{UTF8}{gbsn}棒 早餐\end{CJK} (great breakfast) : 164 \\ \hline
\multirow{5}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}3: 5000 to\\ 10,000 yen\end{tabular}}} &
\begin{CJK}{UTF8}{gbsn}不错\end{CJK} (not bad) : 139 &
\begin{CJK}{UTF8}{gbsn}大\end{CJK} (big) : 76 &
\begin{CJK}{UTF8}{gbsn}干净\end{CJK} (clean) : 114 &
\begin{CJK}{UTF8}{gbsn}近\end{CJK} (near) : 55 &
&
\begin{CJK}{UTF8}{gbsn}棒\end{CJK} (great) : 11 \\
&
\begin{CJK}{UTF8}{gbsn}不错 酒店\end{CJK} (nice hotel) : 17 &
\begin{CJK}{UTF8}{gbsn}大 房间\end{CJK} (big room) : 11 &
\begin{CJK}{UTF8}{gbsn}干净 房间\end{CJK} (clean room) : 21 &
\begin{CJK}{UTF8}{gbsn}近 酒店\end{CJK} (near hotel) : 4 &
&
\begin{CJK}{UTF8}{gbsn}棒 位置\end{CJK} (great position) : 2 \\
&
\begin{CJK}{UTF8}{gbsn}不错 位置\end{CJK} (nice location) : 16 &
\begin{CJK}{UTF8}{gbsn}大 床\end{CJK} (big bed) : 10 &
\begin{CJK}{UTF8}{gbsn}干净 酒店\end{CJK} (clean hotel) : 10 &
\begin{CJK}{UTF8}{gbsn}近 地铁\end{CJK} (near subway) : 2 &
&
\\
&
\begin{CJK}{UTF8}{gbsn}不错 早餐\end{CJK} (nice breakfast) : 12 &
\begin{CJK}{UTF8}{gbsn}大 超市\end{CJK} (big supermarket) : 5 &
\begin{CJK}{UTF8}{gbsn}干净 卫生\end{CJK} (clean and hygienic) : 6 &
&
&
\\
&
\begin{CJK}{UTF8}{gbsn}不错 服务\end{CJK} (nice service) : 8 &
\begin{CJK}{UTF8}{gbsn}大 商场\end{CJK} (big market) : 3 &
\begin{CJK}{UTF8}{gbsn}干净 总体\end{CJK} (clean overall) : 4 &
&
&
\\ \hline
\multirow{5}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}4: 10,000 to\\ 15,000 yen\end{tabular}}} &
\begin{CJK}{UTF8}{gbsn}不错\end{CJK} (not bad) : 601 &
\begin{CJK}{UTF8}{gbsn}大\end{CJK} (big) : 348 &
\begin{CJK}{UTF8}{gbsn}干净\end{CJK} (clean) : 455 &
\begin{CJK}{UTF8}{gbsn}近\end{CJK} (near) : 323 &
\begin{CJK}{UTF8}{gbsn}新\end{CJK} (new) : 37 &
\begin{CJK}{UTF8}{gbsn}棒\end{CJK} (great) : 73 \\
&
\begin{CJK}{UTF8}{gbsn}不错 位置\end{CJK} (nice location) : 72 &
\begin{CJK}{UTF8}{gbsn}大 房间\end{CJK} (big room) : 76 &
\begin{CJK}{UTF8}{gbsn}干净 房间\end{CJK} (clean room) : 66 &
\begin{CJK}{UTF8}{gbsn}近 酒店\end{CJK} (near hotel) : 27 &
\begin{CJK}{UTF8}{gbsn}新 设施\end{CJK} (new facility) : 9 &
\begin{CJK}{UTF8}{gbsn}棒 位置\end{CJK} (great position) : 6 \\
&
\begin{CJK}{UTF8}{gbsn}不错 酒店\end{CJK} (nice hotel) : 37 &
\begin{CJK}{UTF8}{gbsn}大 床\end{CJK} (big bed) : 30 &
\begin{CJK}{UTF8}{gbsn}干净 卫生\end{CJK} (clean and hygienic) : 52 &
\begin{CJK}{UTF8}{gbsn}近 站\end{CJK} (near station) : 14 &
\begin{CJK}{UTF8}{gbsn}新 装修\end{CJK} (new decoration) : 2 &
\begin{CJK}{UTF8}{gbsn}棒 房间\end{CJK} (great room) : 3 \\
&
\begin{CJK}{UTF8}{gbsn}不错 服务\end{CJK} (nice service) : 34 &
\begin{CJK}{UTF8}{gbsn}大 社\end{CJK} (big club) : 26 &
\begin{CJK}{UTF8}{gbsn}干净 酒店\end{CJK} (clean hotel) : 48 &
\begin{CJK}{UTF8}{gbsn}近 地铁\end{CJK} (near subway) : 12 &
\begin{CJK}{UTF8}{gbsn}新 酒店\end{CJK} (new hotel) : 2 &
\begin{CJK}{UTF8}{gbsn}棒 水平\end{CJK} (great level) : 3 \\
&
\begin{CJK}{UTF8}{gbsn}不错 早餐\end{CJK} (nice breakfast) : 26 &
\begin{CJK}{UTF8}{gbsn}大 空间\end{CJK} (big space) : 16 &
\begin{CJK}{UTF8}{gbsn}干净 打扫\end{CJK} (clean up) : 9 &
\begin{CJK}{UTF8}{gbsn}近 车站\end{CJK} (near the station) : 10 &
&
\begin{CJK}{UTF8}{gbsn}棒 温泉\end{CJK} (great hot spring) : 3 \\ \hline
\multirow{5}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}5: 15,000 to\\ 20,000 yen\end{tabular}}} &
\begin{CJK}{UTF8}{gbsn}不错\end{CJK} (not bad) : 1925 &
\begin{CJK}{UTF8}{gbsn}大\end{CJK} (big) : 1277 &
\begin{CJK}{UTF8}{gbsn}干净\end{CJK} (clean) : 1348 &
\begin{CJK}{UTF8}{gbsn}近\end{CJK} (near) : 1016 &
\begin{CJK}{UTF8}{gbsn}新\end{CJK} (new) : 234 &
\begin{CJK}{UTF8}{gbsn}棒\end{CJK} (great) : 241 \\
&
\begin{CJK}{UTF8}{gbsn}不错 位置\end{CJK} (nice location) : 207 &
\begin{CJK}{UTF8}{gbsn}大 房间\end{CJK} (big room) : 316 &
\begin{CJK}{UTF8}{gbsn}干净 房间\end{CJK} (clean room) : 234 &
\begin{CJK}{UTF8}{gbsn}近 酒店\end{CJK} (near hotel) : 82 &
\begin{CJK}{UTF8}{gbsn}新 设施\end{CJK} (new facility) : 47 &
\begin{CJK}{UTF8}{gbsn}棒 位置\end{CJK} (great position) : 33 \\
&
\begin{CJK}{UTF8}{gbsn}不错 酒店\end{CJK} (nice hotel) : 168 &
\begin{CJK}{UTF8}{gbsn}大 床\end{CJK} (big bed) : 140 &
\begin{CJK}{UTF8}{gbsn}干净 酒店\end{CJK} (clean hotel) : 161 &
\begin{CJK}{UTF8}{gbsn}近 站\end{CJK} (near station) : 35 &
\begin{CJK}{UTF8}{gbsn}新 酒店\end{CJK} (new hotel) : 25 &
\begin{CJK}{UTF8}{gbsn}棒 酒店\end{CJK} (great hotel) : 25 \\
&
\begin{CJK}{UTF8}{gbsn}不错 服务\end{CJK} (nice service) : 131 &
\begin{CJK}{UTF8}{gbsn}大 超市\end{CJK} (big supermarket) : 73 &
\begin{CJK}{UTF8}{gbsn}干净 卫生\end{CJK} (clean and hygienic) : 92 &
\begin{CJK}{UTF8}{gbsn}近 地铁站\end{CJK} (near subway station) : 34 &
\begin{CJK}{UTF8}{gbsn}新 装修\end{CJK} (new decoration) : 15 &
\begin{CJK}{UTF8}{gbsn}棒 服务\end{CJK} (great service) : 22 \\
&
\begin{CJK}{UTF8}{gbsn}不错 早餐\end{CJK} (nice breakfast) : 109 &
\begin{CJK}{UTF8}{gbsn}大 酒店\end{CJK} (big hotel) : 49 &
\begin{CJK}{UTF8}{gbsn}干净 设施\end{CJK} (clean facilities) : 19 &
\begin{CJK}{UTF8}{gbsn}近 桥\end{CJK} (near bridge) : 29 &
\begin{CJK}{UTF8}{gbsn}新 房间\end{CJK} (new room) : 10 &
\begin{CJK}{UTF8}{gbsn}棒 早餐\end{CJK} (great breakfast) : 8 \\ \hline
\multirow{5}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}6: 20,000 to\\ 30,000 yen\end{tabular}}} &
\begin{CJK}{UTF8}{gbsn}不错\end{CJK} (not bad) : 3110 &
\begin{CJK}{UTF8}{gbsn}大\end{CJK} (big) : 2245 &
\begin{CJK}{UTF8}{gbsn}干净\end{CJK} (clean) : 1940 &
\begin{CJK}{UTF8}{gbsn}近\end{CJK} (near) : 1433 &
\begin{CJK}{UTF8}{gbsn}新\end{CJK} (new) : 517 &
\begin{CJK}{UTF8}{gbsn}棒\end{CJK} (great) : 440 \\
&
\begin{CJK}{UTF8}{gbsn}不错 位置\end{CJK} (nice location) : 409 &
\begin{CJK}{UTF8}{gbsn}大 房间\end{CJK} (big room) : 680 &
\begin{CJK}{UTF8}{gbsn}干净 房间\end{CJK} (clean room) : 360 &
\begin{CJK}{UTF8}{gbsn}近 酒店\end{CJK} (near hotel) : 164 &
\begin{CJK}{UTF8}{gbsn}新 设施\end{CJK} (new facility) : 89 &
\begin{CJK}{UTF8}{gbsn}棒 酒店\end{CJK} (great hotel) : 51 \\
&
\begin{CJK}{UTF8}{gbsn}不错 酒店\end{CJK} (nice hotel) : 326 &
\begin{CJK}{UTF8}{gbsn}大 床\end{CJK} (big bed) : 198 &
\begin{CJK}{UTF8}{gbsn}干净 酒店\end{CJK} (clean hotel) : 203 &
\begin{CJK}{UTF8}{gbsn}近 地铁\end{CJK} (near subway) : 34 &
\begin{CJK}{UTF8}{gbsn}新 酒店\end{CJK} (new hotel) : 51 &
\begin{CJK}{UTF8}{gbsn}棒 位置\end{CJK} (great position) : 45 \\
&
\begin{CJK}{UTF8}{gbsn}不错 服务\end{CJK} (nice service) : 206 &
\begin{CJK}{UTF8}{gbsn}大 酒店\end{CJK} (big hotel) : 102 &
\begin{CJK}{UTF8}{gbsn}干净 卫生\end{CJK} (clean and hygienic) : 137 &
\begin{CJK}{UTF8}{gbsn}近 地铁站\end{CJK} (near subway station) : 31 &
\begin{CJK}{UTF8}{gbsn}新 装修\end{CJK} (new decoration) : 24 &
\begin{CJK}{UTF8}{gbsn}棒 服务\end{CJK} (great service) : 23 \\
&
\begin{CJK}{UTF8}{gbsn}不错 环境\end{CJK} (nice environment) : 183 &
\begin{CJK}{UTF8}{gbsn}大 空间\end{CJK} (big space) : 64 &
\begin{CJK}{UTF8}{gbsn}干净 环境\end{CJK} (clean environment) : 21 &
\begin{CJK}{UTF8}{gbsn}近 车站\end{CJK} (near the station) : 27 &
\begin{CJK}{UTF8}{gbsn}新 房间\end{CJK} (new room) : 10 &
\begin{CJK}{UTF8}{gbsn}棒 早餐\end{CJK} (great breakfast) : 20 \\ \hline
\multirow{5}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}7: 30,000 to\\ 50,000 yen\end{tabular}}} &
\begin{CJK}{UTF8}{gbsn}不错\end{CJK} (not bad) : 2291 &
\begin{CJK}{UTF8}{gbsn}大\end{CJK} (big) : 1913 &
\begin{CJK}{UTF8}{gbsn}干净\end{CJK} (clean) : 1159 &
\begin{CJK}{UTF8}{gbsn}近\end{CJK} (near) : 935 &
\begin{CJK}{UTF8}{gbsn}新\end{CJK} (new) : 260 &
\begin{CJK}{UTF8}{gbsn}棒\end{CJK} (great) : 448 \\
&
\begin{CJK}{UTF8}{gbsn}不错 位置\end{CJK} (nice location) : 277 &
\begin{CJK}{UTF8}{gbsn}大 房间\end{CJK} (big room) : 643 &
\begin{CJK}{UTF8}{gbsn}干净 房间\end{CJK} (clean room) : 224 &
\begin{CJK}{UTF8}{gbsn}近 酒店\end{CJK} (near hotel) : 80 &
\begin{CJK}{UTF8}{gbsn}新 设施\end{CJK} (new facility) : 63 &
\begin{CJK}{UTF8}{gbsn}棒 酒店\end{CJK} (great hotel) : 68 \\
&
\begin{CJK}{UTF8}{gbsn}不错 酒店\end{CJK} (nice hotel) : 274 &
\begin{CJK}{UTF8}{gbsn}大 床\end{CJK} (big bed) : 141 &
\begin{CJK}{UTF8}{gbsn}干净 酒店\end{CJK} (clean hotel) : 146 &
\begin{CJK}{UTF8}{gbsn}近 站\end{CJK} (near station) : 24 &
\begin{CJK}{UTF8}{gbsn}新 酒店\end{CJK} (new hotel) : 25 &
\begin{CJK}{UTF8}{gbsn}棒 位置\end{CJK} (great position) : 34 \\
&
\begin{CJK}{UTF8}{gbsn}不错 服务\end{CJK} (nice service) : 140 &
\begin{CJK}{UTF8}{gbsn}大 超市\end{CJK} (big supermarket) : 74 &
\begin{CJK}{UTF8}{gbsn}干净 卫生\end{CJK} (clean and hygienic) : 71 &
\begin{CJK}{UTF8}{gbsn}近 桥\end{CJK} (near bridge) : 20 &
\begin{CJK}{UTF8}{gbsn}新 装修\end{CJK} (new decoration) : 15 &
\begin{CJK}{UTF8}{gbsn}棒 服务\end{CJK} (great service) : 24 \\
&
\begin{CJK}{UTF8}{gbsn}不错 环境\end{CJK} (nice environment) : 140 &
\begin{CJK}{UTF8}{gbsn}大 酒店\end{CJK} (big hotel) : 66 &
\begin{CJK}{UTF8}{gbsn}干净 环境\end{CJK} (clean environment) : 16 &
\begin{CJK}{UTF8}{gbsn}近 山\end{CJK} (near mountain) : 12 &
\begin{CJK}{UTF8}{gbsn}新 房间\end{CJK} (new room) : 11 &
\begin{CJK}{UTF8}{gbsn}棒 早餐\end{CJK} (great breakfast) : 14 \\ \hline
\multirow{5}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}8: 50,000 to\\ 100,000 yen\end{tabular}}} &
\begin{CJK}{UTF8}{gbsn}不错\end{CJK} (not bad) : 4451 &
\begin{CJK}{UTF8}{gbsn}大\end{CJK} (big) : 3670 &
\begin{CJK}{UTF8}{gbsn}干净\end{CJK} (clean) : 1577 &
\begin{CJK}{UTF8}{gbsn}近\end{CJK} (near) : 1354 &
\begin{CJK}{UTF8}{gbsn}新\end{CJK} (new) : 1634 &
\begin{CJK}{UTF8}{gbsn}棒\end{CJK} (great) : 1626 \\
&
\begin{CJK}{UTF8}{gbsn}不错 酒店\end{CJK} (nice hotel) : 587 &
\begin{CJK}{UTF8}{gbsn}大 房间\end{CJK} (big room) : 1340 &
\begin{CJK}{UTF8}{gbsn}干净 房间\end{CJK} (clean room) : 310 &
\begin{CJK}{UTF8}{gbsn}近 酒店\end{CJK} (near hotel) : 88 &
\begin{CJK}{UTF8}{gbsn}新 设施\end{CJK} (new facility) : 141 &
\begin{CJK}{UTF8}{gbsn}棒 酒店\end{CJK} (great hotel) : 281 \\
&
\begin{CJK}{UTF8}{gbsn}不错 位置\end{CJK} (nice location) : 415 &
\begin{CJK}{UTF8}{gbsn}大 床\end{CJK} (big bed) : 238 &
\begin{CJK}{UTF8}{gbsn}干净 酒店\end{CJK} (clean hotel) : 161 &
\begin{CJK}{UTF8}{gbsn}近 桥\end{CJK} (near bridge) : 76 &
\begin{CJK}{UTF8}{gbsn}新 酒店\end{CJK} (new hotel) : 123 &
\begin{CJK}{UTF8}{gbsn}棒 早餐\end{CJK} (great breakfast) : 112 \\
&
\begin{CJK}{UTF8}{gbsn}不错 服务\end{CJK} (nice service) : 328 &
\begin{CJK}{UTF8}{gbsn}大 酒店\end{CJK} (big hotel) : 144 &
\begin{CJK}{UTF8}{gbsn}干净 卫生\end{CJK} (clean and hygienic) : 101 &
\begin{CJK}{UTF8}{gbsn}近 地铁站\end{CJK} (near subway station) : 35 &
\begin{CJK}{UTF8}{gbsn}新 装修\end{CJK} (new decoration) : 57 &
\begin{CJK}{UTF8}{gbsn}棒 位置\end{CJK} (great position) : 96 \\
&
\begin{CJK}{UTF8}{gbsn}不错 早餐\end{CJK} (nice breakfast) : 251 &
\begin{CJK}{UTF8}{gbsn}大 商场\end{CJK} (big market) : 88 &
\begin{CJK}{UTF8}{gbsn}干净 服务\end{CJK} (clean service) : 13 &
\begin{CJK}{UTF8}{gbsn}近 铁\end{CJK} (Kintetsu) : 24 &
\begin{CJK}{UTF8}{gbsn}新 斋\end{CJK} (new) : 22 &
\begin{CJK}{UTF8}{gbsn}棒 服务\end{CJK} (great service) : 86 \\ \hline
\multirow{5}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}9: 100,000 to\\ 200,000 yen\end{tabular}}} &
\begin{CJK}{UTF8}{gbsn}不错\end{CJK} (not bad) : 375 &
\begin{CJK}{UTF8}{gbsn}大\end{CJK} (big) : 315 &
\begin{CJK}{UTF8}{gbsn}干净\end{CJK} (clean) : 72 &
\begin{CJK}{UTF8}{gbsn}近\end{CJK} (near) : 65 &
\begin{CJK}{UTF8}{gbsn}新\end{CJK} (new) : 77 &
\begin{CJK}{UTF8}{gbsn}棒\end{CJK} (great) : 189 \\
&
\begin{CJK}{UTF8}{gbsn}不错 酒店\end{CJK} (nice hotel) : 53 &
\begin{CJK}{UTF8}{gbsn}大 房间\end{CJK} (big room) : 131 &
\begin{CJK}{UTF8}{gbsn}干净 房间\end{CJK} (clean room) : 9 &
\begin{CJK}{UTF8}{gbsn}近 酒店\end{CJK} (near hotel) : 8 &
\begin{CJK}{UTF8}{gbsn}新 酒店\end{CJK} (new hotel) : 19 &
\begin{CJK}{UTF8}{gbsn}棒 酒店\end{CJK} (great hotel) : 36 \\
&
\begin{CJK}{UTF8}{gbsn}不错 位置\end{CJK} (nice location) : 30 &
\begin{CJK}{UTF8}{gbsn}大 面积\end{CJK} (large area) : 19 &
\begin{CJK}{UTF8}{gbsn}干净 酒店\end{CJK} (clean hotel) : 8 &
\begin{CJK}{UTF8}{gbsn}近 地铁站\end{CJK} (near subway station) : 3 &
\begin{CJK}{UTF8}{gbsn}新 设施\end{CJK} (new facility) : 13 &
\begin{CJK}{UTF8}{gbsn}棒 体验\end{CJK} (great experience) : 10 \\
&
\begin{CJK}{UTF8}{gbsn}不错 环境\end{CJK} (nice environment) : 27 &
\begin{CJK}{UTF8}{gbsn}大 床\end{CJK} (big bed) : 15 &
\begin{CJK}{UTF8}{gbsn}干净 卫生\end{CJK} (clean and hygienic) : 5 &
\begin{CJK}{UTF8}{gbsn}近 市场\end{CJK} (near market) : 3 &
\begin{CJK}{UTF8}{gbsn}新 装修\end{CJK} (new decoration) : 3 &
\begin{CJK}{UTF8}{gbsn}棒 服务\end{CJK} (great service) : 10 \\
&
\begin{CJK}{UTF8}{gbsn}不错 服务\end{CJK} (nice service) : 22 &
\begin{CJK}{UTF8}{gbsn}大 卫生间\end{CJK} (big toilet) : 13 &
&
&
\begin{CJK}{UTF8}{gbsn}新 位置\end{CJK} (new location) : 2 &
\begin{CJK}{UTF8}{gbsn}棒 早餐\end{CJK} (great breakfast) : 8 \\ \hline
\end{tabular}%
}
\end{table}
\end{landscape}
\begin{landscape}
\begin{table}[p]
\centering
\caption{Top 4 words related to the mainly used adjectives in negative Chinese texts.}
\label{tab:adj_zh_neg}
\resizebox{0.7\linewidth}{!}{%
\begin{tabular}{|c|l|l|l|}
\hline
\multicolumn{1}{|l|}{\textbf{Price range}} &
\multicolumn{1}{c|}{\textbf{\begin{CJK}{UTF8}{gbsn}一般\end{CJK} (general)}} &
\multicolumn{1}{c|}{\textbf{\begin{CJK}{UTF8}{gbsn}陈旧\end{CJK} (obsolete)}} &
\multicolumn{1}{c|}{\textbf{\begin{CJK}{UTF8}{gbsn}老\end{CJK} (old)}} \\ \hline
\multirow{5}{*}{\textbf{0: All Prices}} &
\begin{CJK}{UTF8}{gbsn}一般\end{CJK} (general) : 1713 &
\begin{CJK}{UTF8}{gbsn}陈旧\end{CJK} (obsolete) : 319 &
\begin{CJK}{UTF8}{gbsn}老\end{CJK} (old) : 297 \\
&
\begin{CJK}{UTF8}{gbsn}一般 设施\end{CJK} (general facilities) : 137 &
\begin{CJK}{UTF8}{gbsn}陈旧 设施\end{CJK} (obsolete facilities) : 184 &
\begin{CJK}{UTF8}{gbsn}老 酒店\end{CJK} (old hotel) : 74 \\
&
\begin{CJK}{UTF8}{gbsn}一般 服务\end{CJK} (general service) : 115 &
\begin{CJK}{UTF8}{gbsn}陈旧 设备\end{CJK} (obsolete equipment) : 18 &
\begin{CJK}{UTF8}{gbsn}老 设施\end{CJK} (old facility) : 58 \\
&
\begin{CJK}{UTF8}{gbsn}一般 酒店\end{CJK} (average hotel) : 106 &
\begin{CJK}{UTF8}{gbsn}陈旧 房间\end{CJK} (outdated room) : 10 &
\begin{CJK}{UTF8}{gbsn}老 店\end{CJK} (old shop) : 15 \\
&
\begin{CJK}{UTF8}{gbsn}一般 早餐\end{CJK} (average breakfast) : 97 &
\begin{CJK}{UTF8}{gbsn}陈旧 酒店\end{CJK} (outdated hotel) : 10 &
\begin{CJK}{UTF8}{gbsn}老 装修\end{CJK} (old decoration) : 11 \\ \hline
\multirow{5}{*}{\textbf{3: 5000 to 10,000 yen}} &
\begin{CJK}{UTF8}{gbsn}一般\end{CJK} (general) : 28 &
&
\begin{CJK}{UTF8}{gbsn}老\end{CJK} (old) : 2 \\
&
\begin{CJK}{UTF8}{gbsn}一般 设施\end{CJK} (general facilities) : 5 &
&
\\
&
\begin{CJK}{UTF8}{gbsn}一般 早餐\end{CJK} (average breakfast) : 3 &
&
\\
&
\begin{CJK}{UTF8}{gbsn}一般 味道\end{CJK} (general taste) : 2 &
&
\\
&
\begin{CJK}{UTF8}{gbsn}一般 效果\end{CJK} (general effect) : 2 &
&
\\ \hline
\multirow{5}{*}{\textbf{4: 10,000 to 15,000 yen}} &
\begin{CJK}{UTF8}{gbsn}一般\end{CJK} (general) : 91 &
\begin{CJK}{UTF8}{gbsn}陈旧\end{CJK} (obsolete) : 34 &
\begin{CJK}{UTF8}{gbsn}老\end{CJK} (old) : 30 \\
&
\begin{CJK}{UTF8}{gbsn}一般 设施\end{CJK} (general facilities) : 10 &
\begin{CJK}{UTF8}{gbsn}陈旧 设施\end{CJK} (obsolete facilities) : 17 &
\begin{CJK}{UTF8}{gbsn}老 酒店\end{CJK} (old hotel) : 8 \\
&
\begin{CJK}{UTF8}{gbsn}一般 位置\end{CJK} (general location) : 8 &
\begin{CJK}{UTF8}{gbsn}陈旧 家具\end{CJK} (obsolete furniture) : 2 &
\begin{CJK}{UTF8}{gbsn}老 设施\end{CJK} (old facility) : 7 \\
&
\begin{CJK}{UTF8}{gbsn}一般 酒店\end{CJK} (average hotel) : 6 &
\begin{CJK}{UTF8}{gbsn}陈旧 设备\end{CJK} (obsolete equipment) : 2 &
\begin{CJK}{UTF8}{gbsn}老 建筑\end{CJK} (old building) : 3 \\
&
\begin{CJK}{UTF8}{gbsn}一般 早餐\end{CJK} (average breakfast) : 5 &
&
\\ \hline
\multirow{5}{*}{\textbf{5: 15,000 to 20,000 yen}} &
\begin{CJK}{UTF8}{gbsn}一般\end{CJK} (general) : 218 &
\begin{CJK}{UTF8}{gbsn}陈旧\end{CJK} (obsolete) : 43 &
\begin{CJK}{UTF8}{gbsn}老\end{CJK} (old) : 26 \\
&
\begin{CJK}{UTF8}{gbsn}一般 设施\end{CJK} (general facilities) : 23 &
\begin{CJK}{UTF8}{gbsn}陈旧 设施\end{CJK} (obsolete facilities) : 25 &
\begin{CJK}{UTF8}{gbsn}老 酒店\end{CJK} (old hotel) : 11 \\
&
\begin{CJK}{UTF8}{gbsn}一般 酒店\end{CJK} (average hotel) : 21 &
\begin{CJK}{UTF8}{gbsn}陈旧 设备\end{CJK} (obsolete equipment) : 3 &
\begin{CJK}{UTF8}{gbsn}老 设施\end{CJK} (old facility) : 7 \\
&
\begin{CJK}{UTF8}{gbsn}一般 早餐\end{CJK} (average breakfast) : 14 &
\begin{CJK}{UTF8}{gbsn}陈旧 酒店\end{CJK} (outdated hotel) : 2 &
\begin{CJK}{UTF8}{gbsn}老 外观\end{CJK} (old appearance) : 2 \\
&
\begin{CJK}{UTF8}{gbsn}一般 卫生\end{CJK} (general hygiene) : 8 &
&
\\ \hline
\multirow{5}{*}{\textbf{6: 20,000 to 30,000 yen}} &
\begin{CJK}{UTF8}{gbsn}一般\end{CJK} (general) : 504 &
\begin{CJK}{UTF8}{gbsn}陈旧\end{CJK} (obsolete) : 75 &
\begin{CJK}{UTF8}{gbsn}老\end{CJK} (old) : 55 \\
&
\begin{CJK}{UTF8}{gbsn}一般 设施\end{CJK} (general facilities) : 42 &
\begin{CJK}{UTF8}{gbsn}陈旧 设施\end{CJK} (obsolete facilities) : 42 &
\begin{CJK}{UTF8}{gbsn}老 酒店\end{CJK} (old hotel) : 9 \\
&
\begin{CJK}{UTF8}{gbsn}一般 酒店\end{CJK} (average hotel) : 37 &
\begin{CJK}{UTF8}{gbsn}陈旧 设备\end{CJK} (obsolete equipment) : 7 &
\begin{CJK}{UTF8}{gbsn}老 设施\end{CJK} (old facility) : 8 \\
&
\begin{CJK}{UTF8}{gbsn}一般 服务\end{CJK} (general service) : 34 &
\begin{CJK}{UTF8}{gbsn}陈旧 装修\end{CJK} (old decoration) : 3 &
\begin{CJK}{UTF8}{gbsn}老 店\end{CJK} (old shop) : 3 \\
&
\begin{CJK}{UTF8}{gbsn}一般 早餐\end{CJK} (average breakfast) : 21 &
\begin{CJK}{UTF8}{gbsn}陈旧 酒店\end{CJK} (outdated hotel) : 2 &
\begin{CJK}{UTF8}{gbsn}老 房间\end{CJK} (old room) : 3 \\ \hline
\multirow{5}{*}{\textbf{7: 30,000 to 50,000 yen}} &
\begin{CJK}{UTF8}{gbsn}一般\end{CJK} (general) : 311 &
\begin{CJK}{UTF8}{gbsn}陈旧\end{CJK} (obsolete) : 71 &
\begin{CJK}{UTF8}{gbsn}老\end{CJK} (old) : 45 \\
&
\begin{CJK}{UTF8}{gbsn}一般 设施\end{CJK} (general facilities) : 23 &
\begin{CJK}{UTF8}{gbsn}陈旧 设施\end{CJK} (obsolete facilities) : 43 &
\begin{CJK}{UTF8}{gbsn}老 酒店\end{CJK} (old hotel) : 11 \\
&
\begin{CJK}{UTF8}{gbsn}一般 服务\end{CJK} (general service) : 22 &
\begin{CJK}{UTF8}{gbsn}陈旧 设备\end{CJK} (obsolete equipment) : 5 &
\begin{CJK}{UTF8}{gbsn}老 设施\end{CJK} (old facility) : 7 \\
&
\begin{CJK}{UTF8}{gbsn}一般 早餐\end{CJK} (average breakfast) : 19 &
\begin{CJK}{UTF8}{gbsn}陈旧 房间\end{CJK} (outdated room) : 3 &
\begin{CJK}{UTF8}{gbsn}老 店\end{CJK} (old shop) : 3 \\
&
\begin{CJK}{UTF8}{gbsn}一般 酒店\end{CJK} (average hotel) : 15 &
&
\begin{CJK}{UTF8}{gbsn}老 房间\end{CJK} (old room) : 2 \\ \hline
\multirow{5}{*}{\textbf{8: 50,000 to 100,000 yen}} &
\begin{CJK}{UTF8}{gbsn}一般\end{CJK} (general) : 510 &
\begin{CJK}{UTF8}{gbsn}陈旧\end{CJK} (obsolete) : 90 &
\begin{CJK}{UTF8}{gbsn}老\end{CJK} (old) : 134 \\
&
\begin{CJK}{UTF8}{gbsn}一般 服务\end{CJK} (general service) : 39 &
\begin{CJK}{UTF8}{gbsn}陈旧 设施\end{CJK} (obsolete facilities) : 53 &
\begin{CJK}{UTF8}{gbsn}老 酒店\end{CJK} (old hotel) : 34 \\
&
\begin{CJK}{UTF8}{gbsn}一般 设施\end{CJK} (general facilities) : 32 &
\begin{CJK}{UTF8}{gbsn}陈旧 房间\end{CJK} (outdated room) : 5 &
\begin{CJK}{UTF8}{gbsn}老 设施\end{CJK} (old facility) : 26 \\
&
\begin{CJK}{UTF8}{gbsn}一般 早餐\end{CJK} (average breakfast) : 30 &
\begin{CJK}{UTF8}{gbsn}陈旧 感觉\end{CJK} (Stale feeling) : 2 &
\begin{CJK}{UTF8}{gbsn}老 装修\end{CJK} (old decoration) : 9 \\
&
\begin{CJK}{UTF8}{gbsn}一般 酒店\end{CJK} (average hotel) : 25 &
&
\begin{CJK}{UTF8}{gbsn}老 店\end{CJK} (old shop) : 7 \\ \hline
\multirow{5}{*}{\textbf{9: 100,000 to 200,000 yen}} &
\begin{CJK}{UTF8}{gbsn}一般\end{CJK} (general) : 51 &
\begin{CJK}{UTF8}{gbsn}陈旧\end{CJK} (obsolete) : 6 &
\begin{CJK}{UTF8}{gbsn}老\end{CJK} (old) : 5 \\
&
\begin{CJK}{UTF8}{gbsn}一般 服务\end{CJK} (general service) : 7 &
\begin{CJK}{UTF8}{gbsn}陈旧 设施\end{CJK} (obsolete facilities) : 4 &
\begin{CJK}{UTF8}{gbsn}老 设施\end{CJK} (old facility) : 2 \\
&
\begin{CJK}{UTF8}{gbsn}一般 早餐\end{CJK} (average breakfast) : 5 &
&
\\
&
\begin{CJK}{UTF8}{gbsn}一般 位置\end{CJK} (general location) : 2 &
&
\\
&
\begin{CJK}{UTF8}{gbsn}一般 房间\end{CJK} (average room) : 2 &
&
\\ \hline
\end{tabular}%
}
\end{table}
\end{landscape}
\begin{landscape}
\begin{table}[p]
\centering
\caption{Top 4 words related to the mainly used adjectives in positive English texts.}
\label{tab:adj_en_pos}
\resizebox{\linewidth}{!}{%
\begin{tabular}{|c|l|l|l|l|l|l|l|l|}
\hline
\textbf{Price range} &
\multicolumn{1}{c|}{\textbf{good}} &
\multicolumn{1}{c|}{\textbf{clean}} &
\multicolumn{1}{c|}{\textbf{comfortable}} &
\multicolumn{1}{c|}{\textbf{helpful}} &
\multicolumn{1}{c|}{\textbf{free}} &
\multicolumn{1}{c|}{\textbf{large}} &
\multicolumn{1}{c|}{\textbf{friendly}} &
\multicolumn{1}{c|}{\textbf{great}} \\ \hline
\multirow{5}{*}{\textbf{0: All Prices}} &
good : 19148 &
clean : 9064 &
comfortable : 5625 &
helpful : 5846 &
free : 4318 &
large : 4104 &
friendly : 5606 &
great : 16127 \\
&
good location : 1985 &
clean room : 3596 &
comfortable bed : 1919 &
helpful staff : 2927 &
free wifi : 773 &
large room : 1256 &
friendly staff : 3819 &
great location : 2313 \\
&
good service : 1042 &
clean hotel : 969 &
comfortable room : 1098 &
helpful concierge : 304 &
free shuttle : 286 &
large hotel : 268 &
friendly service : 169 &
great view : 1099 \\
&
good breakfast : 942 &
clean bathroom : 282 &
comfortable stay : 272 &
helpful desk : 110 &
free drink : 234 &
large bathroom : 202 &
friendly hotel : 73 &
great service : 841 \\
&
good hotel : 874 &
clean everything : 200 &
comfortable hotel : 238 &
helpful service : 74 &
free bus : 225 &
larger room : 192 &
friendly person : 63 &
great hotel : 802 \\ \hline
\multirow{5}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}3: 5000 to\\ 10,000 yen\end{tabular}}} &
good : 206 &
clean : 174 &
comfortable : 79 &
helpful : 70 &
free : 35 &
large : 31 &
friendly : 64 &
great : 143 \\
&
good location : 30 &
clean room : 55 &
comfortable bed : 21 &
helpful staff : 36 &
free wifi : 10 &
large room : 7 &
friendly staff : 53 &
great location : 21 \\
&
good value : 19 &
clean bathroom : 14 &
comfortable room : 9 &
&
free tea : 4 &
large area : 2 &
friendly everyone : 2 &
great view : 14 \\
&
good english : 10 &
clean place : 12 &
comfortable futon : 8 &
&
free raman : 2 &
large size : 2 &
friendly service : 2 &
great place : 13 \\
&
good place : 7 &
clean hotel : 6 &
comfortable stay : 3 &
&
free toothbrush : 2 &
&
&
great experience : 5 \\ \hline
\multirow{5}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}4: 10,000 to\\ 15,000 yen\end{tabular}}} &
good : 1399 &
clean : 656 &
comfortable : 391 &
helpful : 393 &
free : 271 &
large : 250 &
friendly : 400 &
great : 961 \\
&
good location : 159 &
clean room : 247 &
comfortable bed : 123 &
helpful staff : 206 &
free wifi : 53 &
large room : 84 &
friendly staff : 292 &
great location : 158 \\
&
good breakfast : 87 &
clean hotel : 74 &
comfortable room : 90 &
helpful concierge : 20 &
free breakfast : 15 &
large bathroom : 20 &
friendly service : 15 &
great service : 51 \\
&
good hotel : 71 &
clean bathroom : 20 &
comfortable hotel : 26 &
helpful desk : 10 &
free service : 12 &
larger room : 12 &
friendly hotel : 7 &
great hotel : 43 \\
&
good service : 67 &
clean everything : 14 &
comfortable stay : 20 &
helpful service : 4 &
free drink : 11 &
large hotel : 10 &
friendly person : 6 &
great place : 35 \\ \hline
\multirow{5}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}5: 15,000 to\\ 20,000 yen\end{tabular}}} &
good : 2242 &
clean : 1204 &
comfortable : 621 &
helpful : 552 &
free : 581 &
large : 349 &
friendly : 615 &
great : 1414 \\
&
good location : 242 &
clean room : 440 &
comfortable bed : 219 &
helpful staff : 301 &
free wifi : 109 &
large room : 85 &
friendly staff : 444 &
great location : 199 \\
&
good hotel : 116 &
clean hotel : 133 &
comfortable room : 99 &
helpful desk : 11 &
free shuttle : 35 &
large suitcase : 18 &
friendly hotel : 12 &
great view : 81 \\
&
good breakfast : 113 &
clean bathroom : 38 &
comfortable stay : 30 &
helpful concierge : 9 &
free bus : 30 &
larger room : 18 &
friendly service : 8 &
great hotel : 68 \\
&
good service : 108 &
clean everything : 26 &
comfortable hotel : 20 &
helpful reception : 5 &
free breakfast : 27 &
large hotel : 17 &
friendly most : 7 &
great place : 61 \\ \hline
\multirow{5}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}6: 20,000 to\\ 30,000 yen\end{tabular}}} &
good : 6550 &
clean : 3364 &
comfortable : 1941 &
helpful : 1970 &
free : 1186 &
large : 1257 &
friendly : 1915 &
great : 5074 \\
&
good location : 703 &
clean room : 1379 &
comfortable bed : 658 &
helpful staff : 1019 &
free wifi : 269 &
large room : 329 &
friendly staff : 1311 &
great location : 881 \\
&
good service : 331 &
clean hotel : 379 &
comfortable room : 359 &
helpful concierge : 79 &
free breakfast : 68 &
large hotel : 87 &
friendly service : 51 &
great service : 249 \\
&
good english : 304 &
clean bathroom : 95 &
comfortable stay : 100 &
helpful desk : 42 &
free coffee : 57 &
larger room : 81 &
friendly person : 21 &
great hotel : 232 \\
&
good breakfast : 303 &
clean everything : 77 &
comfortable hotel : 82 &
helpful receptionist : 17 &
free drink : 38 &
large bed : 43 &
friendly hotel : 19 &
great view : 220 \\ \hline
\multirow{5}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}7: 30,000 to\\ 50,000 yen\end{tabular}}} &
good : 3407 &
clean : 1750 &
comfortable : 1000 &
helpful : 1147 &
free : 933 &
large : 580 &
friendly : 1001 &
great : 2620 \\
&
good location : 380 &
clean room : 725 &
comfortable bed : 345 &
helpful staff : 607 &
free drink : 145 &
large room : 174 &
friendly staff : 715 &
great location : 393 \\
&
good breakfast : 191 &
clean hotel : 197 &
comfortable room : 193 &
helpful concierge : 53 &
free wifi : 129 &
larger room : 32 &
friendly service : 24 &
great view : 162 \\
&
good service : 182 &
clean bathroom : 61 &
comfortable hotel : 49 &
helpful service : 20 &
free coffee : 45 &
large hotel : 30 &
friendly hotel : 13 &
great hotel : 134 \\
&
good english : 155 &
clean everything : 36 &
comfortable stay : 47 &
helpful desk : 17 &
free bus : 38 &
large bed : 28 &
friendly person : 13 &
great service : 114 \\ \hline
\multirow{5}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}8: 50,000 to\\ 100,000 yen\end{tabular}}} &
good : 4350 &
clean : 1655 &
comfortable : 1246 &
helpful : 1313 &
free : 1072 &
large : 1233 &
friendly : 1238 &
great : 4425 \\
&
good location : 406 &
clean room : 648 &
comfortable bed : 425 &
helpful staff : 589 &
free shuttle : 181 &
large room : 442 &
friendly staff : 810 &
great location : 506 \\
&
good service : 296 &
clean hotel : 156 &
comfortable room : 266 &
helpful concierge : 108 &
free wifi : 172 &
large hotel : 109 &
friendly service : 51 &
great view : 436 \\
&
good hotel : 196 &
clean bathroom : 48 &
comfortable stay : 56 &
helpful service : 28 &
free bus : 127 &
large bathroom : 58 &
friendly hotel : 20 &
great service : 267 \\
&
good breakfast : 191 &
cleanliness : 40 &
comfortable hotel : 51 &
helpful desk : 26 &
free service : 65 &
larger room : 38 &
friendly person : 12 &
great hotel : 241 \\ \hline
\multirow{5}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}9: 100,000 to\\ 200,000 yen\end{tabular}}} &
good : 994 &
clean : 261 &
comfortable : 347 &
helpful : 401 &
free : 240 &
large : 404 &
friendly : 370 &
great : 1488 \\
&
good location : 65 &
clean room : 102 &
comfortable bed : 128 &
helpful staff : 169 &
free wifi : 31 &
large room : 135 &
friendly staff : 194 &
great location : 155 \\
&
good service : 56 &
clean hotel : 24 &
comfortable room : 82 &
helpful concierge : 35 &
free breakfast : 19 &
large bathroom : 38 &
friendly service : 18 &
great view : 155 \\
&
good breakfast : 53 &
cleanliness : 8 &
comfortable stay : 16 &
helpful everyone : 7 &
free drink : 16 &
large hotel : 15 &
friendly everyone : 7 &
great service : 101 \\
&
good hotel : 40 &
clean place : 7 &
comfortable hotel : 10 &
helpful team : 5 &
free bus : 14 &
large bed : 12 &
friendly person : 4 &
great hotel : 80 \\ \hline
\end{tabular}%
}
\end{table}
\end{landscape}
\begin{landscape}
\begin{table}[p]
\centering
\caption{Top 4 words related to the mainly used adjectives in negative English texts.}
\label{tab:adj_en_neg}
\resizebox{0.8\linewidth}{!}{%
\begin{tabular}{|c|l|l|l|l|l|}
\hline
\textbf{Price range} &
\multicolumn{1}{c|}{\textbf{poor}} &
\multicolumn{1}{c|}{\textbf{dated}} &
\multicolumn{1}{c|}{\textbf{worst}} &
\multicolumn{1}{c|}{\textbf{dirty}} &
\multicolumn{1}{c|}{\textbf{uncomfortable}} \\ \hline
\multirow{5}{*}{\textbf{0: All Prices}} &
poor : 460 &
dated : 431 &
worst : 327 &
dirty : 188 &
uncomfortable : 253 \\
&
poor service : 55 &
outdated : 128 &
worst hotel : 43 &
dirty carpet : 34 &
uncomfortable bed : 63 \\
&
poor breakfast : 41 &
outdated room : 20 &
worst experience : 18 &
dirty room : 23 &
uncomfortable pillow : 20 \\
&
poor quality : 27 &
outdated hotel : 10 &
worst part : 15 &
not dirty : 7 &
uncomfortable mattress : 8 \\
&
poor english : 24 &
outdated bathroom : 7 &
worst service : 10 &
dirty bathroom : 6 &
uncomfortable night : 8 \\ \hline
\multirow{5}{*}{\textbf{3: 5000 to 10,000 yen}} &
poor : 3 &
&
worst : 6 &
dirty : 3 &
uncomfortable : 2 \\
&
&
&
worst room : 2 &
&
\\
&
&
&
&
&
\\
&
&
&
&
&
\\
&
&
&
&
&
\\ \hline
\multirow{5}{*}{\textbf{4: 10,000 to 15,000 yen}} &
poor : 29 &
dated : 40 &
worst : 24 &
dirty : 11 &
uncomfortable : 23 \\
&
poor breakfast : 3 &
outdated : 11 &
worst hotel : 4 &
dirty floor : 2 &
uncomfortable bed : 4 \\
&
poor service : 3 &
outdated decor : 2 &
worst experience : 2 &
&
not uncomfortable : 2 \\
&
poor conditioning : 2 &
outdated room : 2 &
&
&
uncomfortable night : 2 \\
&
poor view : 2 &
&
&
&
uncomfortable pillow : 2 \\ \hline
\multirow{5}{*}{\textbf{5: 15,000 to 20,000 yen}} &
poor : 57 &
dated : 41 &
worst : 36 &
dirty : 14 &
uncomfortable : 26 \\
&
poor service : 10 &
outdated : 8 &
worst hotel : 8 &
dirty room : 2 &
uncomfortable bed : 7 \\
&
poor breakfast : 6 &
&
worst experience : 3 &
&
uncomfortable pillow : 2 \\
&
poor hotel : 5 &
&
worst part : 2 &
&
\\
&
poor experience : 3 &
&
worst service : 2 &
&
\\ \hline
\multirow{5}{*}{\textbf{6: 20,000 to 30,000 yen}} &
poor : 136 &
dated : 131 &
worst : 86 &
dirty : 67 &
uncomfortable : 103 \\
&
poor breakfast : 15 &
outdated : 31 &
worst hotel : 11 &
dirty room : 10 &
uncomfortable bed : 24 \\
&
poor service : 14 &
outdated room : 6 &
worst part : 7 &
dirty carpet : 8 &
uncomfortable pillow : 11 \\
&
poor english : 9 &
outdated hotel : 2 &
worst breakfast : 5 &
dirty bathroom : 3 &
uncomfortable night : 4 \\
&
poor quality : 9 &
&
worst experience : 5 &
dirty chair : 2 &
uncomfortable experience : 3 \\ \hline
\multirow{5}{*}{\textbf{7: 30,000 to 50,000 yen}} &
poor : 92 &
dated : 65 &
worst : 64 &
dirty : 51 &
uncomfortable : 55 \\
&
poor service : 8 &
outdated : 17 &
worst hotel : 10 &
dirty carpet : 11 &
uncomfortable bed : 20 \\
&
poor breakfast : 7 &
outdated hotel : 4 &
worst room : 3 &
dirty room : 7 &
uncomfortable mattress : 6 \\
&
poor english : 7 &
outdated bathroom : 2 &
worst service : 3 &
dirty clothe : 2 &
uncomfortable pillow : 5 \\
&
poor connection : 5 &
outdated decor : 2 &
worst part : 2 &
dirty luggage : 2 &
uncomfortable room : 5 \\ \hline
\multirow{5}{*}{\textbf{8: 50,000 to 100,000 yen}} &
poor : 124 &
dated : 150 &
worst : 98 &
dirty : 36 &
uncomfortable : 33 \\
&
poor service : 16 &
outdated : 58 &
worst hotel : 9 &
dirty carpet : 12 &
uncomfortable bed : 7 \\
&
poor breakfast : 9 &
outdated room : 9 &
worst experience : 5 &
dirty room : 3 &
\\
&
poor quality : 9 &
outdated furniture : 6 &
worst part : 3 &
dirty cup : 2 &
\\
&
poor english : 6 &
outdated hotel : 4 &
&
dirty rug : 2 &
\\ \hline
\multirow{5}{*}{\textbf{9: 100,000 to 200,000 yen}} &
poor : 19 &
dated : 3 &
worst : 12 &
dirty : 6 &
uncomfortable : 8 \\
&
poor service : 4 &
outdated : 2 &
worst experience : 2 &
&
little uncomfortable : 2 \\
&
poor choice : 2 &
&
&
&
\\
&
poor experience : 2 &
&
&
&
\\
&
&
&
&
&
\\ \hline
\end{tabular}%
}
\end{table}
\end{landscape}
\subsection{Determining hard and soft attribute usage}\label{det_hard_soft}
To further understand the differences in satisfaction and dissatisfaction between Chinese and Western customers of Japanese hotels, we classified these factors as either hard or soft attributes of a hotel. We define hard attributes as matters regarding the hotel's physical or environmental aspects, such as facilities, location, or infrastructure. Some of these aspects, such as the hotel's surroundings and location, would be impractical to change; others, such as those requiring construction, are possible to change but would require significant infrastructure investment. On the other hand, soft attributes are the non-physical attributes of the hotel service and staff behavior that are practical to change through management, for example the hotel's services or the cleanliness of the rooms. For our purposes, amenities, clean or good-quality bed sheets or curtains, and other physical items that are part of the service rather than the hotel's physical structure are also considered soft attributes. Thus, we can observe the top 10 satisfaction and dissatisfaction keywords and determine whether they refer to soft or hard attributes.
We manually labeled each language's top keywords as either hard or soft by considering how the word would be used when writing a review. If the word described physical factors that the staff or management cannot change, we considered it hard; if it implied an issue that could be solved or managed by the hotel staff or management, we considered it soft. For adjectives, we looked at the top four adjective-noun pairings used in the entire dataset and counted the usage percentage in each context; if the category was not clear from the word or the pairing alone, we declared it undefined. We then added the counts of these words in each category. A single word with no pairing is always assigned 100\% to the category it corresponds to, whereas for an adjective that appears in various contexts we add the partial percentages of each category, as sketched below. The interpretation of these keywords is shown in Tables \ref{tab:zh_hard_soft_keywords} and \ref{tab:en_hard_soft_keywords}. The summarized hard and soft percentages of positive and negative Chinese keywords are shown in Figure \ref{fig:hard_soft_zh}; for the English keywords, see Figure \ref{fig:hard_soft_en}.
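The aggregation itself can be sketched as follows; the dictionaries below contain only a small illustrative subset of the splits from Table \ref{tab:en_hard_soft_keywords} and the counts from Table \ref{tab:freq_res_pos}, not the full lists.
\begin{verbatim}
# Sketch: aggregate keyword counts into hard / soft / undefined shares.
# The splits and counts are an illustrative subset, not the full data.
keyword_split = {                      # keyword -> (hard, soft, undefined)
    "location": (1.00, 0.00, 0.00),
    "staff":    (0.00, 1.00, 0.00),
    "good":     (0.25, 0.50, 0.25),
}
keyword_counts = {"location": 11838, "staff": 16289, "good": 19148}

totals = {"hard": 0.0, "soft": 0.0, "undefined": 0.0}
for kw, n in keyword_counts.items():
    hard, soft, undefined = keyword_split[kw]
    totals["hard"] += n * hard
    totals["soft"] += n * soft
    totals["undefined"] += n * undefined

grand_total = sum(totals.values())
for category, value in totals.items():
    print(f"{category:9s} {100 * value / grand_total:5.1f}%")
\end{verbatim}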
\begin{table}[ht]
\centering
\caption{Determination of hard and soft attributes for Chinese keywords. }
\label{tab:zh_hard_soft_keywords}
\resizebox{0.7\textwidth}{!}{%
\begin{tabular}{|c|l|l|}
\hline
\textbf{Keyword Emotion} & \multicolumn{1}{c|}{\textbf{Keyword}} & \multicolumn{1}{c|}{\textbf{Attribute Category}} \\ \hline
\multirow{19}{*}{\textbf{Positive Keywords}} & \begin{CJK}{UTF8}{gbsn}不错\end{CJK} & 50\% hard, 25\% soft, 25\% undefined \\ \cline{2-3}
& \begin{CJK}{UTF8}{gbsn}大\end{CJK} & 100\% hard \\ \cline{2-3}
& \begin{CJK}{UTF8}{gbsn}干净\end{CJK} & 25\% hard, 75\% soft \\ \cline{2-3}
& \begin{CJK}{UTF8}{gbsn}早餐\end{CJK} & 100\% soft \\ \cline{2-3}
& \begin{CJK}{UTF8}{gbsn}交通\end{CJK} & 100\% hard \\ \cline{2-3}
& \begin{CJK}{UTF8}{gbsn}棒\end{CJK} & 25\% hard, 50\% soft, 25\% undefined \\ \cline{2-3}
& \begin{CJK}{UTF8}{gbsn}近\end{CJK} & 100\% hard \\ \cline{2-3}
& \begin{CJK}{UTF8}{gbsn}购物\end{CJK} & 100\% hard \\ \cline{2-3}
& \begin{CJK}{UTF8}{gbsn}环境\end{CJK} & 100\% hard \\ \cline{2-3}
& \begin{CJK}{UTF8}{gbsn}地铁\end{CJK} & 100\% hard \\ \cline{2-3}
& \begin{CJK}{UTF8}{gbsn}卫生\end{CJK} & 100\% soft \\ \cline{2-3}
& \begin{CJK}{UTF8}{gbsn}新\end{CJK} & 50\% hard, 25\% soft, 25\% undefined \\ \cline{2-3}
& \begin{CJK}{UTF8}{gbsn}推荐\end{CJK} & 100\% undefined \\ \cline{2-3}
& \begin{CJK}{UTF8}{gbsn}选择\end{CJK} & 100\% undefined \\ \cline{2-3}
& \begin{CJK}{UTF8}{gbsn}地铁站\end{CJK} & 100\% hard \\ \cline{2-3}
& \begin{CJK}{UTF8}{gbsn}远\end{CJK} & 100\% hard \\ \cline{2-3}
& \begin{CJK}{UTF8}{gbsn}附近\end{CJK} & 100\% hard \\ \cline{2-3}
& \begin{CJK}{UTF8}{gbsn}周边\end{CJK} & 100\% hard \\ \cline{2-3}
& \begin{CJK}{UTF8}{gbsn}赞\end{CJK} & 100\% undefined \\ \hline
\multirow{8}{*}{\textbf{Negative Keywords}} & \begin{CJK}{UTF8}{gbsn}价格\end{CJK} & 100\% soft \\ \cline{2-3}
& \begin{CJK}{UTF8}{gbsn}一般\end{CJK} & 50\% hard, 50\% soft \\ \cline{2-3}
& \begin{CJK}{UTF8}{gbsn}中文\end{CJK} & 100\% soft \\ \cline{2-3}
& \begin{CJK}{UTF8}{gbsn}距离\end{CJK} & 100\% hard \\ \cline{2-3}
& \begin{CJK}{UTF8}{gbsn}地理\end{CJK} & 100\% hard \\ \cline{2-3}
& \begin{CJK}{UTF8}{gbsn}陈旧\end{CJK} & 100\% hard \\ \cline{2-3}
& \begin{CJK}{UTF8}{gbsn}老\end{CJK} & 75\% hard, 25\% soft \\ \cline{2-3}
& \begin{CJK}{UTF8}{gbsn}华人\end{CJK} & 100\% soft \\ \hline
\end{tabular}%
}
\end{table}
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{hard_soft_attr_zh_pos.png}
\caption{Positive keywords}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{hard_soft_attr_zh_neg.png}
\caption{Negative keywords}
\end{subfigure}
\caption{Hard and soft attributes from the top Chinese keywords for all price ranges}
\label{fig:hard_soft_zh}
\end{figure}
\begin{table}[ht]
\centering
\caption{Determination of hard and soft attributes for English keywords. }
\label{tab:en_hard_soft_keywords}
\resizebox{0.7\textwidth}{!}{%
\begin{tabular}{|c|l|l|}
\hline
\textbf{Keyword Emotion} & \multicolumn{1}{c|}{\textbf{Keyword}} & \multicolumn{1}{c|}{\textbf{Attribute Category}} \\ \hline
\multirow{18}{*}{\textbf{Positive Keywords}} & good & 25\% hard, 50\% soft, 25\% undefined \\ \cline{2-3}
& great & 50\% hard, 25\% soft, 25\% undefined \\ \cline{2-3}
& staff & 100\% soft \\ \cline{2-3}
& clean & 100\% soft \\ \cline{2-3}
& location & 100\% hard \\ \cline{2-3}
& nice & 50\% hard, 25\% soft, 25\% undefined \\ \cline{2-3}
& excellent & 25\% hard, 50\% soft, 25\% undefined \\ \cline{2-3}
& helpful & 100\% soft \\ \cline{2-3}
& comfortable & 25\% hard, 50\% soft, 25\% undefined \\ \cline{2-3}
& shopping & 100\% hard \\ \cline{2-3}
& beautiful & 25\% hard, 75\% soft \\ \cline{2-3}
& friendly & 100\% soft \\ \cline{2-3}
& train & 100\% hard \\ \cline{2-3}
& large & 100\% hard \\ \cline{2-3}
& free & 100\% soft \\ \cline{2-3}
& subway & 100\% hard \\ \cline{2-3}
& recommend & 100\% undefined \\ \cline{2-3}
& wonderful & 50\% soft, 50\% undefined \\ \hline
\multirow{24}{*}{\textbf{Negative Keywords}} & pricey & 100\% soft \\ \cline{2-3}
& worst & 25\% hard, 50\% soft, 25\% undefined \\ \cline{2-3}
& dated & 75\% hard, 25\% undefined \\ \cline{2-3}
& poor & 100\% soft \\ \cline{2-3}
& walkway & 100\% hard \\ \cline{2-3}
& sense & 100\% undefined \\ \cline{2-3}
& unable & 100\% soft \\ \cline{2-3}
& disappointing & 50\% soft, 50\% undefined \\ \cline{2-3}
& minor & 100\% undefined \\ \cline{2-3}
& worse & 100\% undefined \\ \cline{2-3}
& annoying & 75\% hard, 25\% undefined \\ \cline{2-3}
& lighting & 100\% soft \\ \cline{2-3}
& uncomfortable & 100\% soft \\ \cline{2-3}
& carpet & 100\% soft \\ \cline{2-3}
& dirty & 75\% soft, 25\% undefined \\ \cline{2-3}
& cigarette & 100\% soft \\ \cline{2-3}
& funny smell & 100\% soft \\ \cline{2-3}
& rude & 100\% soft \\ \cline{2-3}
& smallest & 75\% hard, 25\% undefined \\ \cline{2-3}
& mixed & 100\% undefined \\ \cline{2-3}
& renovation & 100\% hard \\ \cline{2-3}
& paper & 100\% undefined \\ \cline{2-3}
& disappointment & 100\% undefined \\ \cline{2-3}
& outdated & 75\% hard, 25\% undefined \\ \hline
\end{tabular}%
}
\end{table}
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{hard_soft_attr_en_pos.png}
\caption{Positive keywords}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{hard_soft_attr_en_neg.png}
\caption{Negative keywords}
\end{subfigure}
\caption{Hard and soft attributes from the top English keywords for all price ranges}
\label{fig:hard_soft_en}
\end{figure}
\section{Results}\label{results}
\subsection{Experimental results and answers to research questions}
Our research questions were related to two issues. Based on research questions \ref{rsq:hospitality} and \ref{rsq:hospitality_both}, the objective of this study was to determine the differences in how Chinese and Western tourists perceive Japanese hotels, whose hospitality and service are influenced by the \textit{omotenashi} culture.
Observing the top-ranking positive keywords in Chinese reviews, as shown in Tables \ref{tab:freq_res_pos} and \ref{tab:adj_zh_pos}, it was revealed that, while service, cleanliness, and breakfast were praised in most hotels, the location emerged as more important when observing the pairings. Hard attributes were abundant lower on the lists. The negative keywords in Table \ref{tab:freq_res_neg} indicate that customers perceived a lack of a Chinese-friendly environment, although there were more complaints about hard attributes such as the building's age and the distance from other convenient spots. However, most complaints were about the hotel's price, and this held across all price ranges; therefore, price was the primary concern for Chinese customers with different travel purposes.
On the other hand, the word ``staff'' ranks second or third among the satisfaction factors in English-written reviews across all price ranges. It is followed by a few other keywords lower in the top 10 list, such as ``helpful'' or ``friendly''. When we look at the pairings of the top-ranked keyword ``good'' in Table \ref{tab:adj_en_pos}, we find that customers mostly praise the location, service, breakfast, or English availability. When we look at the negative keyword ``poor'' and its pairings in Table \ref{tab:adj_en_neg}, we see that it is also service-related concepts that Western tourists are disappointed with.
We can also observe some keywords that are not considered by their counterparts. For example, English-speaking customers mentioned tobacco smell in many reviews. However, it was not statistically identified as a problem for their Chinese counterparts. On the other hand, although they appear in both English and Chinese lists, references to ``\begin{CJK}{UTF8}{gbsn}购物\end{CJK} (shopping)'' are more common in the Chinese lists across hotels of 15,000 yen to 200,000 yen per night. Meanwhile, the term ``shopping'' appeared solely in the top 10 positive keywords list for English speakers who stayed in rooms priced 20,000–30,000 yen per night.
With these results, we can observe that Chinese and English-speaking tourists in Japan have different priorities. However, both populations consider the hotel's location and the availability of nearby transport (subways and trains) as secondary but still essential points in their satisfaction with a hotel. Chinese customers are primarily satisfied with the room quality in terms of spaciousness and cleanliness and with the breakfast service.
For research questions \ref{rsq:hard_soft} and \ref{rsq:hard_soft_diff}, we considered how customers of both cultural backgrounds evaluated the hard and soft attributes of hotels. Our study discovered that Chinese tourists react positively mostly to the hotel's hard attributes, although their negative evaluations are more evenly distributed than their positive ones, with a tendency of 53 \% towards hard attributes. On the other hand, English-speaking tourists were more responsive to soft attributes, both positively and negatively. In the case of negative keywords, they were more concerned about the hotel's soft attributes.
One factor that both populations had in common is that, when perceiving the hotel negatively, the ``\begin{CJK}{UTF8}{gbsn}老\end{CJK} (old),'' ``dated,'' ``outdated,'' or ``\begin{CJK}{UTF8}{gbsn}陈旧\end{CJK} (obsolete)'' aspects of the room or the hotel were surprisingly criticized across most price ranges. However, this is a hard attribute and is unlikely to change for most hotels.
\subsection{Chinese tourists: A big and clean space}\label{disc:zh}
We found that mainland Chinese tourists were mainly satisfied by big and clean spaces in Japanese hotels. The adjectival pairings extracted with dependency parsing and POS tagging (Table \ref{tab:adj_zh_pos}) imply big and clean rooms. Other mentions included big markets nearby or a big bed. Across different price ranges, the usage of the word ``\begin{CJK}{UTF8}{gbsn}大\end{CJK} (big)'' increased with the increasing price of the hotel. When inspecting closer by taking random samples of the pairs of ``\begin{CJK}{UTF8}{gbsn}大 空间\end{CJK} (big space)'' or ``\begin{CJK}{UTF8}{gbsn}大 面积\end{CJK} (large area),'' we notice that there were also many references to the public bathing facilities in the hotel. Such references were also implied by a word pairing ``\begin{CJK}{UTF8}{gbsn}棒 温泉\end{CJK} (great hot spring).''
In Japan, there are the so-called ``\begin{CJK}{UTF8}{min}銭湯\end{CJK} (\textit{sent\=o}),'' which are artificially constructed public bathing facilities, including saunas and baths with unique qualities. On the other hand, there are natural hot springs, called ``\begin{CJK}{UTF8}{gbsn}温泉\end{CJK} (\textit{onsen}).'' However, they are interchangeable if natural hot spring water is used in artificially made tiled bath facilities. It is a Japanese custom that all customers first clean themselves in a shower and afterward use the baths nude. It could be a cultural shock for many tourists but a fundamental attraction for many others.
Chinese customers are satisfied with the size of the room or bed; however, it is not trivial to change this. In contrast, cleanliness is mostly related to soft attributes when we observe its adjectival pairings. We can observe pairs such as ``\begin{CJK}{UTF8}{gbsn}干净 房间\end{CJK} (clean room)'' at the top rank of all price ranges and thereupon ``\begin{CJK}{UTF8}{gbsn}干净 酒店\end{CJK} (clean hotel),'' ``\begin{CJK}{UTF8}{gbsn}干净 总体\end{CJK} (clean overall),'' ``\begin{CJK}{UTF8}{gbsn}干净 环境\end{CJK} (clean environment),'' and ``\begin{CJK}{UTF8}{gbsn}干净 设施\end{CJK} (clean facilities),'' among other examples. In negative reviews, there was a mention of criticizing the ``\begin{CJK}{UTF8}{gbsn}一般 卫生\end{CJK} (general hygiene)'' of the hotel, although it was an uncommon pair. Therefore, we can assert that cleanliness was an important soft attribute for Chinese customers, and they were mostly pleased when their expectations were fulfilled.
A key soft satisfaction factor was the inclusion of breakfast within the hotel. While other food-related words were extracted, most of them were general, such as ``food'' or ``eating,'' and were lower-ranking. In contrast, the word ``\begin{CJK}{UTF8}{gbsn}早餐\end{CJK} (breakfast),'' referring to the hotel commodities, was frequently used in positive texts compared to other food-related words across all price ranges, albeit at different priorities in each of them. For this reason, we regard it as an important factor. From the word pairs of the positive Chinese keywords in Table \ref{tab:adj_zh_pos}, we can also note that ``\begin{CJK}{UTF8}{gbsn}不错\end{CJK} (not bad)'' is paired with ``\begin{CJK}{UTF8}{gbsn}不错 早餐\end{CJK} (nice breakfast)'' in four of the seven price ranges with reviews available as part of the top four pairings. It is only slightly lower in other categories, although it is not depicted in the table. Thus, we consider that a recommended strategy for hotel management is to invest in the inclusion or improvement of hotel breakfast to increase good reviews.
\subsection{Western tourists: A friendly face and absolutely clean}\label{disc:en}
From the satisfaction factors of English-speaking tourists, we observed that at least three words in the general database were related to staff friendliness and service: ``staff,'' ``helpful,'' and ``friendly.'' The word ``staff'' is the highest-ranked of these three, ranking second for satisfied customers across most price ranges and only third in one of them. The word ``good'' mainly refers to the location, service, breakfast, or English availability in Table \ref{tab:adj_en_pos}. Judging by their satisfaction keyword pairings, Western customers, like their Chinese counterparts, also seemed to enjoy the included breakfasts. However, the relevant word does not appear directly in the top 10 list, in contrast to their Chinese counterparts. The words ``helpful'' and ``friendly'' are mostly paired with ``staff,'' ``concierge,'' ``desk,'' and ``service.'' By considering the negative keyword ``poor'' and its pairings in Table \ref{tab:adj_en_neg}, we realized once again that Western tourists were disappointed with service-related concepts and reacted negatively.
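To make the pairing procedure concrete, the following is a minimal sketch of how such adjective--noun pairs can be extracted from English review text with a dependency parser. It uses spaCy with the \texttt{en\_core\_web\_sm} model and restricts itself to attributive (\texttt{amod}) and simple predicative (\texttt{acomp}/\texttt{nsubj}) constructions; the model choice, the example sentences, and this restriction are illustrative assumptions rather than our exact pipeline configuration.
\begin{verbatim}
# Minimal sketch: adjective-noun pairs from English reviews with spaCy.
# Assumes the small English model is installed via
#   python -m spacy download en_core_web_sm
import spacy
from collections import Counter

nlp = spacy.load("en_core_web_sm")

def adjective_noun_pairs(text):
    """Return (adjective, noun) pairs from attributive and simple
    predicative constructions, e.g. 'helpful staff' or
    'the staff was helpful'."""
    pairs = []
    for token in nlp(text):
        # attributive: 'friendly staff' -> amod(staff, friendly)
        if token.dep_ == "amod" and token.head.pos_ in ("NOUN", "PROPN"):
            pairs.append((token.lemma_.lower(), token.head.lemma_.lower()))
        # predicative: 'the staff was helpful'
        #   -> acomp(was, helpful), nsubj(was, staff)
        elif token.dep_ == "acomp":
            for child in token.head.children:
                if child.dep_ == "nsubj":
                    pairs.append((token.lemma_.lower(), child.lemma_.lower()))
    return pairs

reviews = [
    "The staff was extremely helpful and the location is great.",
    "Clean room, friendly staff, but the carpet was dirty.",
]
counts = Counter(p for r in reviews for p in adjective_noun_pairs(r))
print(counts.most_common(5))
\end{verbatim}
Counting such pairs separately per sentiment class and price range yields frequency tables of the kind discussed above.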
Another soft attribute that is high on the list for most of the price ranges is the word ``clean'', so we examined its word pairings. Customers largely praised ``clean rooms'' and ``clean bathrooms'' and also referred to the hotel in general. When observing the negative keyword frequencies for English speakers, we can find words such as ``dirty'' and ``carpet'' as well as word pairings such as ``dirty carpet,'' ``dirty room,'' and ``dirty bathroom.'' Along with complaints about off-putting smells, we could conclude that Western tourists had high expectations about cleanliness when traveling in Japan.
An interesting detail of the keyword ranking is that the word ``comfortable'' was high on the satisfaction factors, and ``uncomfortable'' was high on the dissatisfaction factors. The words were paired with nouns such as ``bed,'' ``room,'' ``pillow,'' and ``mattress,'' when they generally referred to their sleep conditions in the hotel.
It seems that Western tourists were particularly sensitive about the hotels' comfort levels and whether they reached their expectations. The ranking for the negative keyword ``uncomfortable'' is similar across most price ranges except the two most expensive ones, where this keyword disappears from the top 10 list.
Albeit lower in priority, the price range of 15,000 to 20,000 yen hotels also includes ``free'' as one of the top 10 positive keywords, mainly paired with ``Wi-Fi.'' This price range corresponds to business hotels, where users would expect this feature the most.
\subsection{Tobacco, an unpleasant smell in the room}\label{disc:tobacco}
A concern for Western tourists was uncleanliness and the smell of cigarettes in their room, which can be regarded as soft attributes. Cigarette smell was an issue even in the middle- and high-class hotels, of which the rooms were priced at more than 30,000 yen per night. For hotels with rooms priced above 50,000 yen per night, however, this problem seemed to disappear from the list of top 10 concerns. Tobacco was referenced singularly as ``cigarette'', but also in word pairs in Table \ref{tab:adj_en_neg} as ``funny smell.'' By manually inspecting a sample of reviews with this keyword, we noticed that the room was often advertised as non-smoking; however, the smell permeated the room and curtains. Another common complaint was that there were no nonsmoking facilities available. The smell of smoke can completely ruin some customers' stay, leading to bad reviews, thereby lowering the number of future customers.
In contrast, Chinese customers seemed not to be bothered by this. Previous research has stated that 49–60 \% of Chinese men (and 2.0–2.8 \% of women) currently smoke or smoked in the past. This was derived from a sample of 170,000 Chinese adults in 2013–2014, which is high compared to many English-speaking countries \cite[][]{zhang2019tobacco,who2015tobacco}.
Japan has a polarized view on the topic of smoking. Although it has one of the world's largest tobacco markets, tobacco use has decreased in recent years. Smoking in public spaces is prohibited in some wards of Tokyo (namely Chiyoda, Shinjuku, and Shibuya). However, smoking restrictions in restaurants, bars, hotels, and public areas are generally only recommended and not mandatory. Many places have designated smoking rooms to keep the smoke in an enclosed area and avoid bothering others.
Nevertheless, businesses, especially those that cater to certain customers, are generally reluctant to impose smoking restrictions if they want to maintain their clientele. To cater to all kinds of customers, including Western and Asian, Japanese hotels must provide spaces without tobacco smell. Even if the smoke does not bother some customers, the absence of such a smell makes the space appropriate for all customers.
\subsection{Location, location, location}\label{disc:location}
The hotel's location, closeness to the subway and public transportation, and availability of nearby shops proved to be of importance to both Chinese and English-speaking tourists. In positive word pairings in Tables \ref{tab:adj_zh_pos} and \ref{tab:adj_en_pos}, we can find pairs such as ``\begin{CJK}{UTF8}{gbsn}不错 位置\end{CJK} (nice location),'' ``\begin{CJK}{UTF8}{gbsn}近 地铁站\end{CJK} (near subway station),'' ``\begin{CJK}{UTF8}{gbsn}近 地铁\end{CJK} (near subway)'' in Chinese texts and ``good location,'' ``great location,'' and ``great view'' as well as single keywords ``location'' and ``shopping'' for English speakers, and ``\begin{CJK}{UTF8}{gbsn}交通\end{CJK} (traffic),'' ``\begin{CJK}{UTF8}{gbsn}购物\end{CJK} (shopping),'' ``\begin{CJK}{UTF8}{gbsn}地铁\end{CJK} (subway),'' and ``\begin{CJK}{UTF8}{gbsn}环境\end{CJK} (environment or surroundings)'' for Chinese speakers. All of these keywords and their location in each population's priorities across the price ranges signify that the hotel's location was a secondary but still important point for their satisfaction. However, since this is a hard attribute, it is not often considered in the literature. By examining examples from the data, we recognized that most customers were satisfied if the hotel was near at least two of the following facilities: subway, train, and convenience stores.
Japan is a country with a peculiar public transportation system. During rush hour, the subway is crowded with commuters, and trains and subway stations create a confusing public transportation map for a visitor in Tokyo. Buses are also available, albeit less used than rail systems in metropolitan cities. These three means of transportation are usually affordable in price. There are more expensive means, such as the bullet train \textit{shinkansen} for traveling across the country and taxis. The latter is a luxury in Japan compared to other countries. In Japan, taxis provide a high-quality experience with a matching price. Therefore, for tourists on a budget, subway availability, maps or GPS applications, and a plan for traveling around the city are of utmost necessity, with taxis used only as a last resort.
Japanese convenience stores are also famous worldwide because they offer a wide range of services and products, from drinks and snacks to full meals, copy and scanning machines, alcohol, cleaning supplies, personal hygiene items, underwear, towels, and international ATMs. If some trouble occurs, or a traveler forgot to pack a particular item, it is most certain that they can find it.
Therefore, considering that both transportation systems and nearby shops are points of interest for Chinese and Western tourists, offering guide maps and information about these as an appeal point could result in greater satisfaction.
\section{Discussion}\label{discussion}
\subsection{Western and Chinese tourists in the Japanese hospitality environment}\label{disc:omotenashi}
To date, scholars have been correcting our historical bias towards the West. Studies have determined that different cultural backgrounds lead to different expectations, which influence tourists' satisfaction. In other words, tourists of a particular culture have different leading satisfaction factors across different destinations. However, Japan presents a particular environment: the spirit of hospitality and service, \textit{omotenashi}, which is considered to be of the highest standard in the world. Our study explores whether such an environment affects different cultures equally or whether it is attractive only to certain cultures.
Our results indicate that Western tourists are more satisfied with soft attributes than Chinese tourists. As explained earlier in this paper, Japan is well known for its customer service. Respectful language and bowing are not exclusive to high-priced hotels or businesses; these are met in convenience stores as well. Even in the cheapest convenience store, the level of hospitality is starkly different from Western culture and perhaps unexpected. In higher-priced hotels, the adjectives used to praise the service ranged from normal descriptors like ``good'' to higher levels of praise like ``wonderful staff,'' ``wonderful experience,'' ``excellent service,'' and ``excellent staff.'' Furthermore, \cite{kozak2002} and \cite{shanka2004} have also proven that hospitality and staff friendliness are two determinants of Western tourists' satisfaction.
However, the negative English keywords indicate that a large part of the dissatisfaction with Japanese hotels stemmed from a lack of hygiene and room cleanliness. Although Chinese customers had solely positive keywords about cleanliness, English-speaking customers deemed many places unacceptable to their standards, particularly hotels with rooms priced below 50,000 yen per night. The most common complaint regarding cleanliness was about the carpet, followed by complaints about cigarette smell and lack of general hygiene. \cite{kozak2002} also proved that hygiene and cleanliness were essential satisfaction determinants for Western tourists. However, in the previous literature, this was linked merely to satisfaction. In contrast, our research revealed that words related to cleanliness were mostly linked to dissatisfaction. We could assert that Westerners had a high standard of room cleanliness compared to their Chinese counterparts.
According to previous research, Western tourists are already inclined to appreciate hospitality for their satisfaction. When presented with Japanese hospitality, this expectation is met and exceeded. In contrast, according to our results, Chinese tourists were more concerned about room quality than about hospitality, staff, or service. However, when analyzing the word pairs for ``\begin{CJK}{UTF8}{gbsn}不错\end{CJK} (not bad)'' and ``\begin{CJK}{UTF8}{gbsn}棒\end{CJK} (great),'' we can see that they praise staff, service, and breakfast. By observing the percentage of hard to soft attributes in Figure \ref{fig:hard_soft_zh}, however, we discover that Chinese customers were more satisfied with hard attributes compared to Western tourists, whose expectations appeared to be more than met.
It could be considered that Chinese culture does not expect high-level service initially. When an expectation that was not held is met, the satisfaction derived is less than if it had been expected. In contrast, some tourists report a ``nice surprise'': when an unknown need is unexpectedly met, there is more satisfaction. It is necessary to note the difference between these two reactions. The ``nice surprise'' reaction fulfills a need unexpectedly. Perhaps the grade of hospitality in Japan does not fulfill a sufficiently strong need for the Chinese population, thereby resulting in less satisfaction. For greater satisfaction, a need must be met. However, the word ``not bad'' is at the top of the list in most price ranges, and one of its uses is related to service. Thus, we cannot conclude that they were not satisfied with the service. Instead, they held other factors at a higher priority; thus, the keyword frequency was higher for other pairings.
Another possibility occurs when we observe the Chinese tourists' dissatisfaction factors. Chinese tourists may have expectations about their treatment that are not being met, even in this high-standard hospitality environment. This could be because Japan is monolingual and has a relatively large language barrier to tourists \cite[][]{heinrich2012making,coulmas2002japan}. While the Japanese effort to accommodate English speakers is slowly developing, efforts for Chinese accommodations can be lagging. Chinese language pamphlets and Chinese texts on instructions for the hotel room and its appliances and features (e.g., T.V. channels, Wi-Fi setup, etc.), or the treatment towards Chinese people, could be examples of these accommodations. \cite{ryan2001} also found that communication difficulty was one of the main reasons Chinese customers would state for not visiting again. However, this issue is not exclusive to Japan.
Our initial question was whether the environment of high-grade hospitality would affect both cultures equally. This study attempted to determine the answer. It is possible that Chinese customers received high-grade hospitality and were just as satisfied as Westerners. In that case, it appears that the difference in perception stems from a psychological source: expectation leads to satisfaction, and a lack of expectation results in lesser satisfaction. There is also a possibility that Chinese customers are not receiving the highest grade of hospitality because of cultural friction between Japan and China.
It is unclear which of these two is most likely from our results. However, competing in hospitality and service includes language services, especially in the international tourism industry. Better multilingual support can only improve the hospitality standard in Japan. Considering that most of the tourists in Japan come from other countries in Asia, multilingual support is beneficial. Proposals for this endeavor include hiring Chinese-speaking staff, preparing pamphlets in Chinese, or having a translator application readily available with staff trained in interacting through an electronic translator.
\subsection{Hard vs. soft satisfaction factors}\label{disc:hard_soft}
As stated in section \ref{theory_satisfaction}, previous research has mostly focused on the hotel's soft attributes and their influence on customer satisfaction \cite[e.g.,][]{shanka2004,choi2001}. Examples of soft attributes include staff behavior, commodities, amenities, and appliances that can be improved within the hotel. However, hard attributes are not usually analyzed in satisfaction studies. It is important to consider both kinds of attributes. If the satisfaction was based on soft attributes, a hotel can improve its services to attract more customers in the future. Otherwise, if the satisfaction was related more to hard attributes overall, hotels should be built considering the location while minimizing other costs. Because the satisfaction factors were decided statistically in our study via customers' online reviews, we can see the importance of the hard or soft attributes in their priorities.
Figure \ref{fig:hard_soft_zh} shows that, in regard to Chinese customer satisfaction, in general, 68 \% of the top 10 keywords are hard factors; in contrast, only 20 \% are soft factors. The rates are similar for most price ranges except the highest-priced hotels. However, these two soft attributes are concentrated at the top of the list (``\begin{CJK}{UTF8}{gbsn}不错\end{CJK} (not bad),'' ``\begin{CJK}{UTF8}{gbsn}干净\end{CJK} (clean)''), and the adjective pairs related to soft attributes of ``\begin{CJK}{UTF8}{gbsn}不错\end{CJK} (not bad)'' are also at the top in most price ranges. Chinese tourists may expect spaciousness and cleanliness when coming to Japan. The expectation may be due to reputation, previous experiences, or cultural backgrounds. We can compare these results with previous literature, where traveling Chinese tourists choose their destination based on several factors, including cleanliness, nature, architecture, and scenery \cite[][]{ryan2001}. These factors found in previous literature could be linked to the keyword ``\begin{CJK}{UTF8}{gbsn}环境\end{CJK} (environment or surroundings)'' as well. This keyword was found for hotels priced at more than 20,000 yen per night.
In contrast, English speakers are mostly satisfied with the hotels' soft attributes. Figure \ref{fig:hard_soft_en} shows that soft attributes are above 48 \% in all price ranges, the highest being 65 \% in the price range of 15,000 to 20,000 yen per night, which corresponds to, for example, affordable business hotels. The exception to this is the hard attribute that is the hotels' location, which is consistently around the middle of the top 10 lists for all price ranges.
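As an illustration of how the aggregate shares in Figures \ref{fig:hard_soft_zh} and \ref{fig:hard_soft_en} can be obtained from per-keyword judgments such as those in Table \ref{tab:en_hard_soft_keywords}, the following sketch averages hypothetical annotator labels over a top-keyword list. The number of annotators, the label sets, and the averaging scheme are assumptions made for illustration and do not necessarily match our exact aggregation.
\begin{verbatim}
# Sketch: aggregating per-keyword hard/soft judgments into overall shares.
# The judgments below are hypothetical placeholders; each keyword is
# assumed to have been labeled independently by four annotators.
from collections import Counter

judgments = {
    "location": ["hard", "hard", "hard", "hard"],
    "staff":    ["soft", "soft", "soft", "soft"],
    "good":     ["hard", "soft", "soft", "undefined"],
}

def keyword_shares(labels):
    """Fraction of hard/soft/undefined votes for a single keyword."""
    votes = Counter(labels)
    return {k: votes[k] / len(labels) for k in ("hard", "soft", "undefined")}

def aggregate(top_keywords):
    """Average the per-keyword shares over a top-keyword list."""
    total = Counter()
    for kw in top_keywords:
        for label, share in keyword_shares(judgments[kw]).items():
            total[label] += share / len(top_keywords)
    return dict(total)

print(aggregate(["location", "staff", "good"]))
# -> approximately {'hard': 0.42, 'soft': 0.50, 'undefined': 0.08}
\end{verbatim}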
For both customer groups, the main reason for dissatisfaction was pricing, which can be interpreted as a concern about value for money. However, English-speaking customers complained less about the price in lower-priced hotels. In contrast, Chinese customers consistently had ``\begin{CJK}{UTF8}{gbsn}价格\end{CJK} (price)'' as the first or second-most concern across all price ranges. A study on Chinese tourists found that they had this concern \cite[][]{truong2009}. However, our results indicate that this has more to do with the pricing of hotels in Japan than with Chinese culture. In general, Japan is an expensive place to visit, thereby impacting this placement in the ranking. Space is scarce in Japan, and capsule hotels with cramped spaces of 2 x 1 meters cost around 3000 to 6000 yen per night. Bigger business hotel rooms are relatively expensive, ranging from 5000 to 12,000 yen per night. For comparison, hotels in the USA with a similar quality can charge half the price.
Around half of the dissatisfaction factors for both Chinese and Western customers are caused by issues that could be improved; this is true for all price ranges. The improvements could be staff training (perhaps in language), hiring professional cleaning services for rooms with cigarette smoke smells, or improving the bedding; however, these measures can be costly. Moreover, once the hotel's location and construction are set, only a few changes can be made to satisfy Chinese customers further. As mentioned previously, Chinese language availability is a soft attribute that can be improved with staff and training investment.
Western tourists are mainly dissatisfied with soft attributes. This is revealed by soft-attribute shares among the negative keywords ranging from a low of 35 \% in the highest price range, where undefined factors form the majority, to a maximum of 78 \% in the price range of 30,000 to 50,000 yen per night. The scope for improvement for Western tourists is thus more extensive than that for their Chinese counterparts. As such, it presents a larger investment opportunity.
\subsection{Satisfaction across different price ranges}\label{disc:price}
In previous sections of this paper, we mentioned the differences reflected in hotel price ranges. The most visible change across differently priced hotels is the change in voice when describing satisfaction. We noticed this by observing the adjective-noun pairs and finding pairs with different adjectives for the same nouns. For example, in English, words describing nouns such as ``location'' or ``hotel'' are ``good'' or ``nice'' in lower-priced hotels. In contrast, the adjectives that pair with the same nouns for higher-priced hotels are ``wonderful'' and ``excellent.'' In Chinese, the change ranges from ``\begin{CJK}{UTF8}{gbsn}不错\end{CJK} (not bad)'' to ``\begin{CJK}{UTF8}{gbsn}棒\end{CJK} (great)'' or ``\begin{CJK}{UTF8}{gbsn}赞\end{CJK} (awesome).'' We can infer that the level of satisfaction is high and influences how customers write their reviews. Regarding the negative keywords, however, the change ranges from ``annoying'' or ``disappointing'' to ``worst.''
In this paper, we follow the definition of satisfaction by \cite{hunt1975}, where meeting or exceeding expectations produces satisfaction. Conversely, the failure to meet expectations causes dissatisfaction. We can assume that a customer who pays more for a higher-class experience has higher expectations. For example, in a highly priced hotel, any lack of cleanliness can lead to disappointment. In the case of English-speaking customers in the 30,000–50,000 yen per night price range, cigarette smell is particularly disappointing. However, we consistently see customers with high expectations for high-class hotels reacting even more positively when satisfied. In the positive case, expectations appear to be exceeded in most cases, judging from their reactions.
We argue that these are two different kinds of expectations: logical and emotional. In the first case, customers are determined that the service must not fall below a specific standard; for example, they can be disappointed with unhygienic rooms or cigarette smell. In contrast, in the second case, customers have a vague idea of having a positive experience but do not measure it against any standard. For example, they expect a pleasant customer service experience or a hospitable treatment by the staff at a high-class hotel. Regardless of their knowledge in advance, positive emotions offer them a perception of exceeded expectations and high satisfaction. Thus, hospitality and service enhance the experience of the customers.
There are interesting differences between Chinese and English-speaking tourists in their satisfaction with differently priced hotels. For example, Chinese tourists have ``\begin{CJK}{UTF8}{gbsn}购物\end{CJK} (shopping)'' as a top keyword in all the price ranges. In contrast, English-speaking tourists mention it as a top keyword only in the 20,000–30,000 yen price range. It is widely known in Japan that many Chinese tourists visit Japan for shopping. \cite{tsujimoto2017purchasing} analyzed the souvenir purchasing behavior of Chinese tourists in Japan and showed that common products besides food and drink are electronics, cameras, cosmetics, and medicine, alongside \textit{souvenir} items representative of the culture or places that they visit \cite{japan2014consumption}. Furthermore, Chinese tourists' choice to shop in Japan is related more to the quality of the items than to their relation to the tourist attractions. Our results suggested that Western tourists were engaging more in tourist attractions rather than shopping activities compared to Chinese tourists.
Another interesting difference is that English-speaking tourists start using negative keywords about the hotel's price only if it concerns hotels of 15,000 yen or more; thereafter, the more expensive the hotel, the higher the ranking. In contrast, for Chinese customers, this keyword is the top keyword across all price ranges. Previous research suggests that value for money is a key concern for Chinese and Asian tourists \cite[][]{choi2000,choi2001,truong2009}, whereas Western customers are more concerned about hospitality \cite[][]{kozak2002}.
While some attributes' value changes depending on the hotel's price range, other attributes remain constant for each culture's customers. For example, appreciation for staff from English-speaking tourists is ranked close to the top satisfaction factor in all the price ranges. Satisfaction with cleanliness by both cultures constantly remains part of the top 10 keywords, except for the most expensive range, where other keywords replace those related to satisfaction or cleanliness in the ranking; however, they still remain high on the list. Chinese tourists have a high ranking for the word ``\begin{CJK}{UTF8}{gbsn}早餐\end{CJK} (breakfast)'' across all price ranges as well. As discussed in section \ref{disc:location}, transportation and location are also important for hotels of all classes and prices. While the ranking of attributes might differ between price ranges, hard and soft attribute proportions also appear to be constant within a 13 \% margin of error per attribute. This suggests that, from a cultural aspect, customers have a particular bias to consider some attributes more than others.
\subsection{Cross-culture analysis of expectations and satisfaction}\label{disc:culture}
The basic premise of this study is that different cultures lead to different expectations and satisfaction factors. This premise also plays a role in the differentiation between the preferences of hard or soft attributes.
In \cite{donthu1998cultural}, subjects from 10 different countries were compared with respect to their expectations of service quality and analyzed based on Hofstede's typology of culture \cite[][]{hofstede1984culture}. The previous study states that, although culture has no specific index, five dimensions of culture can be used to analyze or categorize a country in comparison to others. These are \textit{power distance}, \textit{uncertainty avoidance}, \textit{individualism–collectivism}, \textit{masculinity–femininity}, and \textit{long-term–short-term orientation}. In each of these dimensions, at least one element of service expectations was found to be significantly different for countries grouped under contrasting attributes (e.g., individualistic countries vs. collectivist countries, high uncertainty avoidance countries vs. low uncertainty avoidance countries).
However, Hofstede's typology has received criticism from academics, particularly for the fifth dimension that Hofstede proposed, which was later added under the alternative name \textit{Confucian dynamic}. Academics with a Chinese background criticized Hofstede for being misinformed on the philosophical aspects of Confucianism as well as for proposing a dimension that is difficult to measure \cite[][]{fang2003critique}. Other models, such as the GLOBE model, also consider some of Hofstede's dimensions and replace them with others, making a total of nine dimensions \cite[][]{house1999cultural}. The \textit{masculinity–femininity} dimension, for example, is proposed to be replaced by two dimensions: \textit{gender egalitarianism} and \textit{assertiveness}. This addition of dimensions avoids assuming that assertiveness is either masculine or feminine, which stems from outdated gender stereotypes. Such gender stereotypes have also been the subject of critique of Hofstede's model \cite[][]{jeknic2014gender}. We agree with these critiques and thus avoid considering such stereotypes in our discussion.
For our purposes of contrasting Western vs. Chinese satisfaction stemming from expectations, these dimensions could explain why Chinese customers are generally satisfied more often with hard factors while Westerners are satisfied or dissatisfied with soft factors.
The backgrounds of collectivism in China and individualism in Western countries have been studied previously \cite[][]{gao2017chinese, kim2000}. These backgrounds as well as the differences in these cultural dimensions could be the underlying cause for differences in expectations. Regardless of the cause, however, measures in the past have proven that such differences exist \cite[][]{armstrong1997importance}.
The cultural background of Chinese tourists emphasizes their surroundings and their place in nature and the environment. Chinese historical backgrounds of Confucianism, Taoism, and Buddhism permeate the thought processes of Chinese populations. However, scholars argue that generational change and China's recent economic history have led to less importance being attached to these concepts in daily life \cite[][]{gao2017chinese}. Nevertheless, one could argue that a Chinese cultural attribute emphasizes that the environment and the location affect satisfaction rather than the treatment they receive.
A more anthropocentric and individualistic Western culture could correlate more of their expectations and priorities to the treatment in social circumstances rather than the environment. According to \cite{donthu1998cultural}, highly individualistic customers, in contrast to collectivist customers, have a higher expectation of empathy and assurance from the provider, which are aspects of service, a soft attribute of a hotel.
Among other dimensions in both models, we can consider uncertainty avoidance. Customers of high uncertainty avoidance carefully plan their travel and thus have higher expectations towards service. In contrast, customers of lower uncertainty avoidance do not treat their decisions as risks and thus face less disappointment when expectations are not met. However, according to \cite{xiumei2011cultural}, the difference between China and the USA in uncertainty avoidance is not clear when measuring with the Hofstede typology and the GLOBE typology. While the USA is not representative of Western society, uncertainty avoidance may not cause the difference in hard-soft attribute satisfaction between Chinese and Western cultures. Differences in another factor, power distance, were also noted when using Hofstede's method compared to the GLOBE method; therefore, power distance was not considered for comparison.
\subsection{Implications for hotel managers}\label{disc:implications}
Our study reached two important conclusions: one about hospitality and cultural differences and another about managerial decisions towards two different populations. Overall, Chinese tourists did not attach much importance to hospitality and service factors. Instead, they focused on the hard attributes of a hotel. In particular, either they were not satisfied with hospitality as much as Western tourists were, or they felt that basic language and communication needs were not met and were therefore less satisfied. Western tourists were highly satisfied with Japanese hospitality and preferred soft attributes to hard ones.
The other conclusion is that managerial decisions could mostly benefit Western tourists, although language improvements and breakfast inclusion could satisfy both groups. As mentioned earlier in this paper, Westerners are ``long-haul'' customers, spending more of their budget on lodging than Asian tourists \cite[][]{choi2000}. With bigger returns on managerial improvements, we recommend investing in improving attributes that dissatisfy Western customers, such as cleanliness and removing tobacco smell. In addition, breaking the language barrier is one of the few strategies to satisfy both groups. Recently, Japan has been facing an increase in Chinese students as well as students from Western universities. Hiring students as part-time workers could increase the language services of a hotel.
To satisfy both customer types, hotel managers need to invest in cleanliness, deodorizing, and making hotel rooms tobacco-free. It could also be recommended to invest in breakfast inclusion, multilingual services, and staff preparedness to deal with Chinese and English speakers. Western tourists were also observed to have high comfort standards, which could be managerially improved for better reviews. Perhaps it could be suggested to perform surveys of the bedding that is most comfortable for Western tourists. However, not all hotels can invest in all of these factors simultaneously. Our results suggest that satisfying cleanliness needs could satisfy both customer types. We suggest investing in making the facilities tobacco-free. Our results are also divided by price ranges; thus, a hotel manager could consider which analysis suits their hotel the most. Hard attributes are difficult to change; however, improvements in service can be made to accompany these attributes. For example, transportation guides for foreigners who might not know the area could increase satisfaction.
The managers must consider their business model when implementing the next strategy. One option could be attracting more Chinese customers with their observed low budgeting. Another could be attracting more big-budget Western customers. For example, investing more in cleanliness could improve the satisfaction of Western customers looking for high-quality lodging, even at an increased price per night. On the other hand, hotels might be deemed costly by Chinese customers wherever such an investment is made.
\section{Limitations and Future Work}\label{limitations}
In this study, we analyzed keywords based on whether they appeared on satisfied reviews or dissatisfied ones. Following that, we attempted to understand these words' context by using a dependency parser and observing the related nouns. However, a limitation is that it analyzed solely the words directly related to each keyword and did not search for further connections. This means that if the words were used in combination with other keywords, we did not trace the effects of multiple contradicting statements. For example, in the sentence ``The room is good, but the food is lacking,'' we extracted ``good room'' and ``lacking food'' but did not consider the fact that both occurred in the same sentence.
This study analyzed the differences in customers' expectations at different levels of hospitality and service factors by dividing our data into price ranges. However, in the same price range, for example, the highest one, we can find both a Western-style five-star resort and a high-end Japanese style \textit{ryokan}. Services offered in these hotels are of high quality, albeit very different. Nevertheless, most of our database was focused on the middle range priced hotels, the services of which are comparably less varied.
An essential aspect of this study is that we focused on the satisfaction and dissatisfaction towards the expectations of individual aspects of the hotels. This gave us insight into the factors that hotel managers can consider. However, each customer's overall satisfaction was not measured since it would require methods that are out of the scope of this paper. Another limitation is that further typology analysis could not be made because of the nature of the data collected (for example, Chinese men and women of different ages or their Westerner counterparts).
In future work, we plan to investigate these topics further. We plan to extend our data to research different trends and regions of Japan, different kinds of hotels, and customers traveling alone or in groups, whether for fun or for work. Another point of interest in this study's future work is to use word clusters with similar meanings instead of single words.
\section{Conclusion}\label{conclusion}
In this study, we analyzed the differences in satisfaction and dissatisfaction between Chinese and English-speaking customers of Japanese hotels, particularly in the context of Japanese hospitality, \textit{omotenashi}. We extracted keywords from their online reviews on \textit{Ctrip} and \textit{TripAdvisor} using Shannon's entropy calculations. We used these keywords for sentiment classification via an SVC. We then used dependency parsing and part of speech tagging to extract common pairs of adjectives and nouns as well as single words. We divided these data by sentiment and hotel price range (most expensive room/night).
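As a rough illustration of the sentiment-classification step mentioned above, the following sketch trains a linear support vector classifier on TF-IDF features using scikit-learn. The feature representation, hyperparameters, and toy data are assumptions chosen for illustration; they do not reproduce our exact experimental configuration.
\begin{verbatim}
# Illustrative sketch of sentiment classification with a linear SVC on
# TF-IDF features (scikit-learn). The training data below are toy
# placeholders, not reviews from our dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

train_texts = [
    "Great location and very helpful staff.",
    "The carpet was dirty and the room smelled of cigarettes.",
    "Excellent breakfast, clean and comfortable room.",
    "Poor service and outdated facilities.",
]
train_labels = ["positive", "negative", "positive", "negative"]

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LinearSVC(C=1.0),
)
clf.fit(train_texts, train_labels)

print(clf.predict(["The staff was rude but the subway is close."]))
\end{verbatim}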
We found that Western tourists were most satisfied with staff behavior, cleanliness, and other soft attributes. However, Chinese customers had other concerns for their satisfaction; they were more inclined to praise the room, location, and hotel's convenience. We found that the two cultures had different reactions to the hospitality environment and the prices. Thus, we discussed two possible theories on why Chinese tourists responded differently from Westerners in the environment of \textit{omotenashi}. One theory is that, although they were treated well, their experience was diminished by language or culture barriers. The second possible theory is that they reacted to hospitality differently since they did not have the same expectations. We theorized that a lack of expectations could result in less satisfaction than the same service would produce if it were expected. On the other hand, even when they held high expectations in a high-priced hotel, Japanese hospitality exceeded Western tourists' expectations, judging by the vocabulary they used to express their satisfaction. We considered that Western tourists were more reactive to hospitality and service factors than Chinese tourists.
Lastly, we measured the satisfaction and dissatisfaction factors, that is, a hotel's hard and soft attributes. Hard attributes are physical and environmental elements, and as such, are impractical elements to change. In contrast, soft attributes can be changed via management and staff by an improvement in services or amenities. We found that, for satisfaction, Western tourists favored soft attributes in contrast to Chinese tourists, who were more interested in the hard attributes of hotels across all the price ranges consistently. For dissatisfaction, Western tourists were also highly inclined to criticize soft attributes, such as cleanliness or cigarette smell in rooms. In contrast, Chinese tourists' dissatisfaction derived from both hard and soft attributes evenly.
One approach for hotel managers is to work to satisfy Chinese tourists more, who dedicate a lower percentage of their budget to hotels but are more numerous. They are less satisfied with soft attributes, but satisfaction can be improved in identifiable ways by lessening language barriers and providing a satisfactory breakfast. Another approach focuses on the cleanliness, comfort, and tobacco-free space expected by Western tourists. This strategy favors ``long-haul'' Western tourists, who spend almost half of their budget on hotels. Although Westerners are fewer in number than Chinese tourists, this strategy may offer more substantial returns, because Chinese customers also favor cleanliness as a satisfaction factor, so both populations could be pleased.
\begin{acknowledgements}
During our research, we received the commentary and discussion by our dear colleagues necessary to understand particular cultural aspects that could influence the data's interpretation. We would like to show gratitude to Mr. Liangyuan Zhou, Ms. Min Fan, and Ms. Eerdengqiqige for this.
We would also like to show gratitude to Ms. Aleksandra Jajus, from whom we also received notes on the editing and commentary on the content of our manuscript.
Funding: This work was supported by the Japan Construction Information Center Foundation (JACIC).
Conflict of interest: none
\end{acknowledgements}
\label{sec:1}
Since the arrival of deep neural networks (DNNs), state-of-the-art DNN-based human pose estimation systems
have made huge progress in detection performance and precision on benchmark datasets
\cite{Wei2016, Andriluka2014, Chu2017, Yang2017, Newell2016}. Recently, these
research systems have been extended, adapted and re-trained to fit the application domain of specific
sports \cite{Zecha2017, Einfalt2018}.
Soon they will disrupt current performance analyses in all kinds of sport as the amount of
available pose data will explode due to automation. So far, pose detection and analysis of top-class
athletes has been very time-consuming manual work. It was scarcely performed by the national professional
sports associations for them and almost never for athletes below that level. The forthcoming availability
of automatic pose detection systems will make plenty of noisy pose data available from videos recorded at
a much more regular and frequent basis. Despite this imminent change in data quantity at the cost of
probably higher noise in the pose data, very little research has been devoted to exploring the opportunities
of extracting informative and performance-relevant information from these pose detection results
through data mining. This work is focusing on this question and presents a set of unsupervised pose
mining algorithms that extract or enable extraction of important information about athletes and how
they compare to their peers. We will use world-class swimmers in the swimming channels as an example
of a sport with dominant cyclical motion and long jumping as an example of a sport with clear
chronologically sequential phases.
In this work, pose data denotes the noisy poses produced by some image or video-based pose detection system, either with or without customized post-processing to identify and clean out errors by interpolation and/or smoothing. Our pose data is based on the image-based pose detection system presented in \cite{Einfalt2018} and \cite{Wei2016}. Examples are depicted in Figure~\ref{fig:pose_examples}.
\begin{figure}[tbh]
\centering
\includegraphics[width=0.96\columnwidth]{Figure1}
\caption{Detected poses of a swimmer and a long jumper.}
\label{fig:pose_examples}
\end{figure}
\textbf{Contributions:} (1) Our research work is among the first that does not mine manually annotated
poses with little noise (because of manual annotations by professional coaches and support staff),
but rather focuses on the noisy output of a DNN-based pose detection system lacking
\textbf{any} pose annotations. (2) All manual annotations are typically confined to a few key poses
during the relevant actions (i.e., they are temporally sparse), and so are the derived key performance
parameters. We, however, exploit that pose detection systems can process every frame, producing a
temporally dense output by robustly estimating the performance parameters time-continuously at every
frame. (3) Some sports are dominated by cyclical motion, some by clear chronologically sequential phases.
We present our mining algorithms to extract or to enable extraction of key performance parameters by
picking swimming as a representative of a cyclical kind of sport and long jumping as one of the second
type of sport.
\section{Related Work}
\label{sec:2}
Human pose based semantic data mining research is dominated by works on motion segmentation and
clustering, key-pose identification and action recognition. While dimensionality and representation of
poses may differ across recent works, the goal often is to allow for retrieval and indexing of
human pose/motion in large video databases or classification of motion sequences at different abstraction levels.
\textbf{Human pose mining:} Both works in \cite{Ren2011} and \cite{Voegele2014}
cluster 3D motion capture data and determine algorithmically similar motion sequences for
database retrieval, while \cite{Sedmidubsky2013} develops a similarity algorithm for comparing key-poses,
subsequently allowing for indexing motion features in human motion databases. For the task of
action recognition, \cite{Lv2007} and \cite{Baysal2010} perform clustering on shape based representations
of 2d human poses and learn weights to favor distinctive key-poses. Both show that temporal context
is superfluous if human poses with high discriminative power are used for action recognition.
Data mining for action recognition based solely on joint location estimates is still scarce.
\cite{Wang2013} propose spatial-part-sets obtained from clustering parts of the human pose to obtain
distinctive, co-occurring spatial configurations of body parts. They show that these sets improve the
task of action recognition and additionally the initial pose estimates.
\textbf{Pose mining in sports:} In the field of sport footage analysis, the task of action
recognition often translates to the identification of specific motion sequences within a sport activity.
\cite{DeSouza2016} use latent-dynamic conditional random fields on RGB-d skeleton estimates of Taekwondo
fighters to identify specific kicks and punches in a fight sequence. Long jump video indexing has been
researched by \cite{Wu2002}, who perform motion estimation and segmentation of camera and athlete motion
velocity to extract and classify semantic sequences of long jump athletes. \cite{Li2010} build a similar
system for high diving athletes. They also derive human pose from shape and train a Hidden Markov Model
to classify a partial motion of jumps.
The extraction of kinematic parameters of athletes from video footage, specifically stroke rates of
swimmers, was recently researched by \cite{Victor2017}, who perform stroke frequency detection on
athletes in a generic swimming pool. \cite{Zecha2017} derive additional kinematic parameters from
swimmers in a swimming channel by determining inner-cyclic interval lengths and frequencies through
key-pose retrieval. Compared to other approaches that rely on the concept of identifying key-poses,
their approach lets a human expert define what a discriminative key-pose should be.
\textbf{Our work:} While our work is influenced by the related work above, the major difference is
that we only use raw joint estimates from a human pose estimator while previous work heavily relies
either on correctly annotated ground truth data to train models or recordings from motion capture
RGB-d systems. Additionally, our work connects data mining on human pose estimates with the extraction
of kinematic parameters of top athletes.
\section{Measuring Pose Similarity}
\label{sec:3}
In computer vision, the human pose at a given time is defined by a set of locations of important
key points on a human such as joint locations. The number of key points varies based on the application
domain. In the analysis of top-level athletes, the pose is the basis of many key performance indicators
and may also include points on the device(s) the athlete is using. Since the pose is so central to most
sports-related performance indicators, we need to be able to reliably evaluate the similarity or
distance between poses. This section develops our metric pose distance measure that is invariant to
translation, scale and rotation in the image plane. It will be used in all algorithms discussed in
Sections \ref{sec:4} to \ref{sec:5}.
Throughout the paper, we assume that all video recordings have been processed by some
pose detection system. In our case, we use the system from \cite{Einfalt2018} for swimming
and \cite{Wei2016} for long jump. We do not expect to have a pose for all frames.
Through some parts of a video, the athlete might not be completely in the picture, if present at all.
Or the detection conditions are so difficult that the detection system does not detect any pose.
Our mining algorithms have to deal with that. However, we discard all poses that are only partially
detected to make mining simpler.
\subsection{Pose}
\label{sec:3.1pose}
Mathematically, a 2D pose $p$ is nothing but a sequence of $N$ two-dimensional points,
where each 2D point by convention specifies the coordinates of the center of a joint location
or of some other reference location on the human or object(s) under investigation:
\begin{equation}
\label{eq:pose_def}
p = \left\{ \left( x_k, y_k \right) \right\}^{N}_{k=1} \equiv \begin{pmatrix} x_1 & \dotsb & x_N \\ y_1 & \dotsb & y_N \end{pmatrix}
\end{equation}
Our human pose model consists of $N=14$ joints. Throughout the paper, a \textit{pose clip} and a \textit{pose sequence} denote a temporal sequence $\mathbf{p}_{t1:t2}$ of poses $[ p_{t1}, p_{t1+1}, \dotsc, p_{t2-1}, p_{t2} ]$. The term \textit{pose clip} hints at a short temporal pose sequence (e.g., $\frac{1}{2}$ to $2$ seconds), while \textit{pose sequence} often refers to much longer durations -- up to the complete video duration (e.g., $30$ seconds and longer). Video time and time intervals are usually expressed using sequential frame numbers as we assume recordings at a constant frame rate.
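For concreteness, the following sketch fixes the in-memory layout that we will assume for poses and pose clips in the rest of this section; the use of NumPy arrays and NaN entries for missing detections is an implementation assumption, not a requirement of the method.
\begin{verbatim}
# Sketch of the pose data layout (NumPy assumed): a pose is an (N, 2)
# array of joint coordinates, a pose clip/sequence stacks T consecutive
# poses into a (T, N, 2) array. Missing detections are marked with NaN
# and such poses are discarded before mining.
import numpy as np

N_JOINTS = 14

def make_pose(joints):
    """joints: iterable of (x, y) tuples, one per joint."""
    p = np.asarray(joints, dtype=np.float64)
    assert p.shape == (N_JOINTS, 2)
    return p

def make_clip(poses):
    """poses: list of (N, 2) arrays for consecutive frames t1..t2."""
    return np.stack(poses, axis=0)            # shape (T, N, 2)

def is_complete(pose):
    """A pose is used for mining only if all joints were detected."""
    return not np.isnan(pose).any()
\end{verbatim}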
\subsection{Aligning Two Poses}
\label{sec:3.2aligning}
Before we can define our pose distance measure, we need to specify how we align a pose $p$ to a given reference pose $p_r$ by finding the scaling factor $s$, rotation angle $\theta$ and translation $t=(t_x, t_y)$ which, applied to each joint of $p$, result in the transformed pose $p^\prime$ that minimizes the mean square error (MSE) between $p^\prime$ and the reference pose $p_r$ \cite{Rowley1998}:
\begin{equation}
\label{eq:mse}
MSE(p_r, p) := MSE(p_r, p^\prime) = \frac{1}{2N} \lVert p_{r, reshaped} - p^\prime_{reshaped} \rVert^{2}_{2}
\end{equation}
with
\begin{equation}
\label{eq:ttrans}
t_{trans} = (a, b, t_x, t_y)^T
\end{equation}
and
\begin{equation}
\label{eq:p_matrix}
p^\prime_{reshaped} := \begin{pmatrix} x^\prime_1 \\ y^\prime_1 \\ x^\prime_2 \\ y^\prime_2 \\ \vdots \end{pmatrix} = \begin{pmatrix} x_1 & -y_1 & 1 & 0 \\ y_1 & x_1 & 0 & 1 \\ x_2 & -y_2 & 1 & 0 \\ y_2 & x_2 & 0 & 1 \\ \vdots & \vdots & \vdots & \vdots \\ \end{pmatrix} \begin{pmatrix} a \\ b \\ t_x \\ t_y \end{pmatrix} =: A \cdot t_{trans}
\end{equation}
Note that the $N \times 2$ matrix $p^\prime$ is reshaped to a $2N \times 1$ vector $p^\prime_{reshaped}$. The pseudo-inverse $t^{opt}_{trans} = (A^TA)^{-1}A^Tp_{r, reshaped}$ gives us in closed form the transformation of pose $p$ that minimizes the mean squared error between the joints of reference pose $p_r$ and transformed pose $p^\prime$. Each joint $(x,y)$ of $p$ is mapped to
\begin{equation}
\label{eq:pose_transform}
\begin{pmatrix} x^{\prime} \\ y^{\prime} \end{pmatrix} = \begin{pmatrix} s \cos \theta & -s \sin \theta \\ s \sin \theta & s \cos \theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} t_x \\ t_y \end{pmatrix} = \begin{pmatrix} a & -b & t_x \\ b & a & t_y \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}
\end{equation}
using the optimal transformation $t^{opt}_{trans}$. The associated $MSE$ value indicates how well a pose fits a reference pose. Thus, given a set of poses, their associated $MSE$ values can be used to rank these poses according to their fitness to the reference pose. However, two peculiarities about $MSE(p_r, p)$ need to be emphasized:
\begin{enumerate}
\item It is not symmetric, i.e., generally $MSE(p_r, p) \neq MSE(p, p_r)$. The reason for this is that the pose is always scaled to the size of the reference pose. Thus, if their two scales are very different, so will be $MSE(p_r, p)$ and $MSE(p, p_r)$.
\item Its magnitude depends on the scale of the reference pose. Doubling the reference pose's scale will quadruple the $MSE$ value. Thus, if a pose is compared against various reference poses, the scale of the reference poses matters.
\end{enumerate}
Both peculiarities of the $MSE(p_r, p)$ value suggest that we need to normalize the poses we are comparing to get universally comparable MSE values and thus a universally applicable distance measure between two poses.
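For illustration, the following minimal NumPy sketch computes the closed-form alignment of Equations~(\ref{eq:mse})--(\ref{eq:pose_transform}); poses are given as $N \times 2$ arrays, and all function and variable names are ours.
\begin{verbatim}
import numpy as np

def align_to_reference(p_r, p):
    """Align pose p (N x 2) to reference p_r; return (p_aligned, MSE(p_r, p))."""
    n = p.shape[0]
    # Design matrix with two rows per joint; the unknowns are (a, b, tx, ty).
    A = np.zeros((2 * n, 4))
    A[0::2, 0] = p[:, 0];  A[0::2, 1] = -p[:, 1];  A[0::2, 2] = 1.0
    A[1::2, 0] = p[:, 1];  A[1::2, 1] =  p[:, 0];  A[1::2, 3] = 1.0
    target = p_r.reshape(-1)                  # p_r reshaped to (x1, y1, x2, y2, ...)
    t_opt, *_ = np.linalg.lstsq(A, target, rcond=None)   # optimal (a, b, tx, ty)
    p_aligned = (A @ t_opt).reshape(-1, 2)    # transformed joints p'
    mse = np.mean((target - A @ t_opt) ** 2)  # (1 / 2N) * squared residual norm
    return p_aligned, mse
\end{verbatim}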
\subsection{Pose Distance Measure}
\label{sec:3.3pose_distance_measure}
It is common in pose detection evaluation to scale a reference pose by assigning a fixed size either to the distance between two characteristic points of the pose or to the head. While using a single rectangle or two reference points may be fine in case of ground truth annotations, it is statistically not advisable for noisy detection results. We need a normalization that is based on more joints to reduce noise. Hence the scale $s_p$ of pose $p$ is defined as the average distance of all joints of a pose to its center of mass $c_p = (c_{p,x}, c_{p,y})^T$:
\begin{equation}
\label{eq:pose_scale}
s_p = \frac{1}{N} \sum_{k=1}^{N} \begin{Vmatrix} \begin{pmatrix} x_k \\ y_k \end{pmatrix} - \begin{pmatrix} c_{p,x} \\ c_{p,y} \end{pmatrix} \end{Vmatrix}_2
\end{equation}
with
\begin{equation}
\label{eq:center_of_mass}
\begin{pmatrix} c_{p,x} \\ c_{p,y} \end{pmatrix} = \frac{1}{N} \sum_{k=1}^{N} \begin{pmatrix} x_k \\ y_k \end{pmatrix}
\end{equation}
Given an arbitrary reference scale $s_{ref}$, we define our symmetric translation, rotation and scale invariant distance measure between two poses as
\begin{equation}
\label{eq:mse_norm}
MSE_{norm}(p_1, p_2) = \frac{s^2_{ref}}{2s^2_{p_1}} MSE(p_1, p_2) + \frac{s^2_{ref}}{2s^2_{p_2}} MSE(p_2, p_1)
\end{equation}
It enables us to judge pose similarity between poses derived from videos recorded by different cameras, at different locations and distances to the athletes.
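The complete normalized pose distance of Equations~(\ref{eq:pose_scale})--(\ref{eq:mse_norm}) can be sketched as follows; the reference scale of $100$ follows the text, while the compact helper for the aligned MSE mirrors the previous sketch and the remaining names are illustrative.
\begin{verbatim}
import numpy as np

def mse(p_r, p):
    """Aligned MSE (same closed-form fit as in the previous sketch)."""
    n = p.shape[0]
    A = np.zeros((2 * n, 4))
    A[0::2, 0] = p[:, 0]; A[0::2, 1] = -p[:, 1]; A[0::2, 2] = 1.0
    A[1::2, 0] = p[:, 1]; A[1::2, 1] =  p[:, 0]; A[1::2, 3] = 1.0
    t_opt, *_ = np.linalg.lstsq(A, p_r.reshape(-1), rcond=None)
    return np.mean((p_r.reshape(-1) - A @ t_opt) ** 2)

def pose_scale(p):
    """Average joint distance to the center of mass (the pose scale s_p)."""
    return np.mean(np.linalg.norm(p - p.mean(axis=0), axis=1))

def mse_norm(p1, p2, s_ref=100.0):
    """Symmetric translation-, rotation- and scale-invariant pose distance."""
    return (s_ref ** 2 / (2 * pose_scale(p1) ** 2)) * mse(p1, p2) \
         + (s_ref ** 2 / (2 * pose_scale(p2) ** 2)) * mse(p2, p1)
\end{verbatim}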
\section{Mining Pose Data of Swimmers}
\label{sec:4}
Cyclical motions play a decisive and dominant role in numerous sports disciplines, e.g., in cycling, rowing, running, and swimming. In this section, we use swimming as an example to explore what kind of automated mining we can perform on the detected noisy poses. We use the pose data derived from world class swimmers recorded in a swimming channel. A single athlete jumps into the flowing water against the flow (from the right in Figure~\ref{fig:pose_examples} left), swims to the middle in any manner (e.g., by an extended set of underwater kicks or by freestyle on the water surface) and then starts the cyclic stroke under test. The video recording can start any time between the dive and the action of interest (= swimming a stroke) and stops shortly after it ended. During most of the recording time the athlete executes the cyclic motion under test.
\subsection{Time-Continuous Cycle Speeds}
\label{sec:4.1cycle_speeds}
For all types of sport with dominant cyclical motions, the change in cycle speed over time is a very indicative performance parameter. It can be derived through data mining without providing any knowledge to the system other than the automatically detected joint locations for each pose throughout a video sequence. Given a pose at time $t$, the \textit{cycle speed} at time $t$ is defined as $1$ over the time needed to arrive at this pose from the same pose one cycle before. In the case of a swimmer, the desired cycle speed information is strokes per minute, which can be derived from the stroke length in frames given the video sampling rate in frames per second by
\begin{equation}
\label{eq:stroke_rate}
\frac{\text{\# strokes}}{\text{minute}} = \left(\frac{\text{\# frames}}{\text{stroke}}\right)^{-1} \cdot \frac{\text{\# frames}}{\text{seconds}} \cdot \frac{60 \text{ seconds}}{\text{minute}}
\end{equation}
The \textit{stroke length} is measured by the number of frames passed from the same pose one cycle before to the current pose.
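As a small worked example of this conversion (assuming a recording rate of $50$ fps, as in our videos):
\begin{verbatim}
def strokes_per_minute(stroke_length_frames, fps=50.0):
    """Convert a stroke length in frames into a cycle speed in strokes per minute."""
    return (1.0 / stroke_length_frames) * fps * 60.0

# e.g. a stroke length of 67 frames at 50 fps gives about 44.8 strokes per minute
\end{verbatim}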
In the following, we describe the individual steps of our statistically robust algorithm to extract time-continuous cycle speeds by first stating the characteristic property of cyclic motion we exploit, followed by an explanation how we exploit it. The adjective \textit{time-continuous} denotes that we will estimate the \textbf{cycle speed for every frame} of a video in which the cyclic motion is performed:
\begin{enumerate}
\item \textbf{Input:} A sequence $P$ of poses $p$ for a video: $P=\{(f_p, p)\}_{f_p}$.\\
It contains pairs consisting of a detected pose $p$ and a frame number $f_p$ in which it was detected. The subscript $f$ in $\{(f,\dotsc)\}$ indicates that the elements in the set $\{\dotsc\}$ are ordered and indexed by frame number $f$. Note that we might not have a pose for every video frame.
\item \textbf{Property:} Different phases of a cycle and their associated poses are run through regularly. As a consequence, a pose $p$ from a cycle should match periodically at cycle speed with poses in $P$. These matching poses $p^\prime$ to a given pose $p$ identify themselves visually as minima in the graph plotting the frame number of each pose $p^\prime$ against its normalized distance to the given pose $p$. Therefore, we compare every pose $p$ in a video against every other pose $p^\prime$ and keep for each pose $p$ a list $L_p$ of matches:
\begin{equation}
\label{eq:match_list}
L_p = \left\{ \left( f_{p^\prime}, p^\prime, MSE_{norm}\left(p, p^\prime\right) \right) \right\}_{f_{p^\prime}} \;\;\; \forall p \in P
\end{equation}
Poses match if their normalized $MSE$ value is below a given threshold. For a target scale of $s_{ref}=100$ we use a threshold of $49$ (on avg. $7$ pixels in each direction for each joint).
\item \textbf{Property:} Not every pose is temporally striking.\\
An athlete might stay for some time even during a cycle in a very similar pose, e.g., in streamline position in breaststroke after bringing the arms forward. However, at one point this specific pose will end to enter the next phase of the cycle. Thus, from step 2, we sometimes not only get the correct matches, but also nearby close matches. We consolidate our raw matches in $L_p$ by first temporally clustering poses $p^\prime$. A new cluster is started if a gap of more than a few frames lies between two chronologically consecutive poses in $L_p$. Each temporal cluster is then consolidated to the pose $p_c$ with minimal normalized $MSE$ to the pose $p$. The cluster is also attributed with its \textit{temporal spread}, i.e., the maximal temporal distance of a pose in the cluster from the frame with the consolidated pose $p_c$, leading us to the \textit{reoccurrence sequences} $L^\prime_p$ with
\begin{equation}
\label{eq:reoccurence_seq}
L^\prime_p = \left\{ \left( f_{p_c}, p_c, spread \right) \right\}_{f_{p}} \;\;\; \forall p \in P
\end{equation}
and for the complete video to $L_{video}=\left\{ \left( f_p, p, L^\prime_p \right) \right\}_{f_{p}}$.
\item \textbf{Property:} Temporally non-striking poses are unsuitable to identify cyclic motion. Therefore, all clusters with a temporal spread larger than a given threshold are deleted.\\
In our experiments we set this value to $10$ frames, resulting in
\begin{equation}
\label{eq:striking_poses}
L^{\prime \prime}_p = \left \{ \left( f_{p_c}, p_c, spread \right) \middle \vert spread < 10 \right \}_{f_p} \;\;\; \forall p \in P.
\end{equation}
\item \textbf{Property:} Most of the time the video shows the athlete executing the cyclical motion under test. Consequently, poses from the cyclic motion should most often be found. \\
Hence, we create a histogram over the lengths of the reoccurrence sequences $(\equiv \vert L^{\prime \prime}_p \vert)$ for the various poses $p$. We decided to keep only those reoccurrence sequences $L^{\prime \prime}_p$ which belong to the $50\%$ longest ones:
\begin{equation}
\label{eq:longest_sequences}
L^{\prime}_{video} = \left\{ \left( f_p, p, L^{\prime \prime}_p \right) \middle \vert \left \vert L^{\prime \prime}_p \right \vert \geq \med_{p \in P} \left( \left \vert L^{\prime \prime}_p \right \vert \right) \right\}_{f_p}
\end{equation}
\item \textbf{Property:} The observed difference of the frame numbers in each reoccurrence sequence in $L^{\prime}_{video}$ between two chronologically consecutive matches should most frequently reflect the actual stroke length.\\
Figure~\ref{fig:reoccurence_sequences} shows two sample plots. On the x-axis, we have the minuend of the difference and the difference value on the y-axis. The blue and yellow dots display all observed difference values from $L^{\prime}_{video}$. From them we derive our final robust estimate by local median filtering in two steps: (1) We take each frame number $f$ with at least one difference value and determine the median of the observed stroke lengths (= difference values) in a window of $\pm 2$ seconds (approx. $2$ to $4$ stroke cycles). We remove all difference values at frame number $f$, which deviate more than $10\%$ from the median. E.g., @$50$ fps a median stroke length of $60$ frames results in keeping only difference values in $[54,66]$. The deleted difference values are shown in yellow in Figure~\ref{fig:reoccurence_sequences}, while the remaining ones are shown in blue. (2) We piecewise approximate the remaining data points with a polynomial of degree $5$ over roughly $3$ cycles while simultaneously enforcing a smoothness condition at the piecewise boundaries.
\begin{figure}[tb]
\centering
\includegraphics[width=0.98\columnwidth]{Brust1_strokeLength_blindReview}\\
\includegraphics[width=0.98\columnwidth]{Kraul9_strokeLength_blindReview}
\caption{Examples showing the frame differences between chronologically consecutive matches in all reoccurrence sequences, plotted against frame number. The red line visualizes the time-continuous estimate of the stroke cycle length, with black lines indicating the $\pm 10\%$ corridor.}
\label{fig:reoccurence_sequences}
\end{figure}
\end{enumerate}
This approximation gives us our time-continuous estimates of the stroke cycle length over the interval in the video throughout which the stroke was performed. As a side effect it also automatically identifies the temporal range in the video during which the stroke was performed by the frame number ranges for which we have cycle speeds. The same technique is applicable to determine the kicks per minute for freestyle and backstroke by restricting the pose to joints from the hip downwards.
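To make steps 2 to 4 concrete, the following sketch builds the match list $L_p$ for one pose and consolidates it into a reoccurrence sequence; poses are stored in a dict mapping frame numbers to poses, $MSE_{norm}$ is the distance of Section~\ref{sec:3.3pose_distance_measure}, the matching threshold of $49$ and the spread threshold of $10$ follow the text, and the cluster gap of $3$ frames is an assumption of ours.
\begin{verbatim}
def build_match_list(poses, p_frame, mse_norm, th_match=49.0):
    """Step 2: frames whose pose matches the pose at p_frame (the list L_p)."""
    p, matches = poses[p_frame], []
    for f, q in sorted(poses.items()):
        if f == p_frame:
            continue
        d = mse_norm(p, q)
        if d < th_match:
            matches.append((f, d))
    return matches

def consolidate(matches, max_gap=3, max_spread=10):
    """Steps 3-4: temporally cluster the matches, keep the best pose of each
    cluster, and drop clusters whose temporal spread is too large (L''_p)."""
    clusters, current = [], []
    for f, d in matches:
        if current and f - current[-1][0] > max_gap:
            clusters.append(current)
            current = []
        current.append((f, d))
    if current:
        clusters.append(current)
    consolidated = []
    for cluster in clusters:
        best_f, _ = min(cluster, key=lambda fd: fd[1])     # minimal MSE_norm
        spread = max(abs(f - best_f) for f, _ in cluster)  # temporal spread
        if spread < max_spread:
            consolidated.append((best_f, spread))
    return consolidated
\end{verbatim}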
\subsection{Temporally Striking Poses}
\label{sec:4.2temp_striking_poses}
During a cyclical motion some poses are more striking than others with respect to a given criterion. One such highly relevant criterion is how well a repeating pose can be localized temporally, i.e., how unique and salient it is with respect to its temporally nearby poses. The temporally most striking poses can be used, e.g., to align multiple cycles of the same swimmer for visual comparison.
Commonly, local salience is measured by comparing the local reference to its surrounding. In our case the local reference is a pose $p_r$ at frame $r$ or a short sequence of poses $p_{r-\triangle w_l}, \dotsc, p_r, \dotsc, p_{r+\triangle w_l}$ centered around that pose, and we compare the sequence to the temporally nearby poses. Thus, we can compute saliency by:
\begin{equation*}
\label{eq:saliency}
saliency \left( p_r \right) = \sum_{\triangle w_s = -w_s}^{w_s} \sum_{\triangle w_l= -w_l}^{w_l}
\frac{MSE \left( p_{r + \triangle w_l}, p_{r + \triangle w_l + \triangle w_s} \right)} {\left( 2 w_s + 1 \right) \left( 2 w_l + 1 \right)}
\end{equation*}
Experimentally, the saliency measure was insensitive with respect to the choices of $w_l$ and $w_s$. Both were arbitrarily set to $4$.
The salience values for each pose during the cyclic motion of a video can be exploited to extract the $K$ most salient poses of a cycle. Hereto, we take the top $N$ most salient poses ($N \gg K$) and cluster them with affinity propagation (AP) \cite{Frey2007}. Salient poses due to pose errors will be in small clusters, while our most representative poses are the representative poses of the $K$ largest clusters.
For determining the most salient pose of an athlete's stroke, it is sufficient to pick the top $20$ most salient poses, cluster them with AP and retrieve the cluster representative with the most poses assigned. Figure~\ref{fig:striking_poses} shows one example for each stroke. Note that the most salient pose is another mean to determine the cycle speed reliably cycle by cycle, as this pose is most reliably localized in time. However, we only get one cycle speed value per cycle.
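A possible sketch of the saliency computation and of the subsequent clustering with affinity propagation (here via scikit-learn with a precomputed similarity matrix); the window sizes and the top-$20$ selection follow the text, while the remaining names are ours.
\begin{verbatim}
import numpy as np
from sklearn.cluster import AffinityPropagation

def saliency(poses, r, mse, w_l=4, w_s=4):
    """Temporal saliency of the pose at frame r (poses is a list of N x 2 arrays)."""
    total = 0.0
    for dl in range(-w_l, w_l + 1):
        for ds in range(-w_s, w_s + 1):
            total += mse(poses[r + dl], poses[r + dl + ds])
    return total / ((2 * w_s + 1) * (2 * w_l + 1))

def most_striking_pose(poses, mse, top_n=20, w_l=4, w_s=4):
    """Pick the top-N most salient poses, cluster them with affinity propagation
    and return the frame of the exemplar of the largest cluster."""
    margin = w_l + w_s
    scores = [(r, saliency(poses, r, mse, w_l, w_s))
              for r in range(margin, len(poses) - margin)]
    top = sorted(scores, key=lambda rs: rs[1], reverse=True)[:top_n]
    frames = [r for r, _ in top]
    sim = np.array([[-mse(poses[a], poses[b]) for b in frames] for a in frames])
    ap = AffinityPropagation(affinity="precomputed", random_state=0).fit(sim)
    largest = np.bincount(ap.labels_).argmax()        # assuming convergence
    return frames[ap.cluster_centers_indices_[largest]]
\end{verbatim}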
\begin{figure}[tb]
\centering
\includegraphics[width=0.98\columnwidth]{Figure3}
\caption{Examples of temporally striking poses; top left to bottom right: fly, breast, back and free.}
\label{fig:striking_poses}
\end{figure}
\subsection{Cycle Stability}
\label{sec:4.3cycle_stability}
A common and decisive feature among winning top athletes is their trait to show off a very stable stroke pattern over time, under increasing fatigue and at different pace. One way to measure stroke cycle stability is to select a reference pose clip of one complete cycle and match this reference pose clip repeatedly over the complete pose sequence of the same video or a set of pose sequences derived from a set of videos recordings of some performance test (e.g., the $5 \times 200m$ step test after Pansold \cite{Pyne2001, Pansold1985}). Given all these clip matches and their associated matching scores, an average score of matching can be computed and taken as an indicator of stroke cycle stability: The better the average matching score, the more stable the stroke of the athlete. Alternatively, the matching score may be plotted versus time in order to analyze, how much the stroke changes from the desired one over (race) time. A reference pose cycle may automatically be chosen by selecting a clip between two contiguous occurrences of a temporally striking pose or by specifying a desired/ideal stroke cycle.
\textbf{Levenshtein distance:} With regards to that goal, we first turn our attention to the task of how to match a pose clip to a longer pose sequence and compute matching scores. We phrase the task to solve in terms of the well-studied problem of approximate substring matching: The task of finding all matches of a substring $pat$ in a longer document $text$, while allowing up to some specified level of discrepancies. In our application, a pose represents a character and a clip/sequence of poses our substring/document. The difference between `characters' is measured by a $[0,1]$-bounded distance function derived from the normalized $MSE$ between two poses:
\begin{align*}
\label{eq:lev_pose_distance}
& dist\_fct \left( p_1, p_2 \right) = \nonumber \\
& = \begin{cases}
0& \text{ if } MSE_n\left( p_1, p_2 \right) \leq th_{same}\\
1& \text{ if } MSE_n\left( p_1, p_2 \right) \geq th_{diff} \\
\frac{MSE_n\left( p_1, p_2 \right) - th_{same}}{th_{diff} - th_{same}}& \text{ else.} \\
\end{cases}
\end{align*}
The cost of transforming one pose into another is $0$ for poses which are considered the same ($MSE_n( p_1, p_2 ) \leq th_{same}$) and 1 for poses which are considered different ($MSE_n( p_1, p_2 ) \geq th_{diff}$). Between these two extremes, the transformation cost is linearly scaled based on the $MSE_n$ value.
Any algorithm to compute the Levenshtein distance \cite{Levenshtein1966, Meyers1994} and its generalization called edit distance is suitable to perform matching and compute a matching score between a search pattern $pat$ and a longer document $text$ at every possible end point location of a match within $text$. It results in a matrix $d$ of matching costs of size $len(pat) \times len(text)$, where $d[i,j]$ is the cost of matching the first $i$ characters of $pat$ up to end point $j$ in $text$.
We use our custom distance function not only for transformations, but also for insertions and deletions. We deliberately made this choice as it better fits the characteristics of swimming: The absolute duration of a stroke cycle, i.e., the number of poses in a sequence, depends on the pace of the swimmer. However, the better the athlete, the more consistently he/she executes the pose successions across different paces. We therefore do not want to see an additional cost if, e.g., a swimmer stays longer/shorter in a perfect streamline position or if he/she goes slower/faster through the recovery phase of a stroke cycle than the reference clip. Pace is already captured by the cycle speed. Here we only want to focus on the stability of the stroke pattern, no matter how fast the stroke is executed. Note that swimmers with less than perfect swimming technique typically modify their poses when changing pace.
\textbf{Match extraction:} The matching distances $d[len(pat),j]$ of the complete search pattern $pat$ computed by the edit distance at end point $j$ in $text$ are normalized by the virtual matching length, i.e., by the number of transformations, deletions and insertions needed for that match. We call this $len(text)$-dimensional vector of normalized matching scores over all possible end points in $text$ the matching score $score_{match}(pat, text)$. All clear minima in it identify the end points of all matches of the pose clip to the sequence together with the associated matching distances. Since our pose clips are highly specific in matching, our minima search does not require any non-maximum suppression. The matching sequence is derived by backtracking from this end point to the beginning of the match by using $d[i,j]$. Figure~\ref{fig:pose_alignment} shows one example of matched poses of two different stroke cycles.
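A compact dynamic-programming sketch of this approximate substring matching: row $0$ of the cost matrix is kept at zero so that a match may start anywhere in $text$, the per-pose cost $dist\_fct$ is used for substitutions, insertions and deletions alike, and the end-point scores are normalized by the number of edit operations along the optimal path. The threshold values below are illustrative placeholders only.
\begin{verbatim}
def dist_fct(mse_n, th_same=25.0, th_diff=100.0):
    """[0,1]-bounded per-pose cost derived from the normalized MSE."""
    if mse_n <= th_same:
        return 0.0
    if mse_n >= th_diff:
        return 1.0
    return (mse_n - th_same) / (th_diff - th_same)

def score_match(pat, text, mse_norm):
    """Normalized matching score of the clip `pat` at every end point of `text`."""
    m, n = len(pat), len(text)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]   # matching cost
    ops = [[0] * (n + 1) for _ in range(m + 1)]   # virtual matching length
    for i in range(1, m + 1):                     # matching against an empty prefix
        d[i][0], ops[i][0] = d[i - 1][0] + 1.0, ops[i - 1][0] + 1
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            c = dist_fct(mse_norm(pat[i - 1], text[j - 1]))
            d[i][j], ops[i][j] = min(
                (d[i - 1][j - 1] + c, ops[i - 1][j - 1] + 1),   # substitution
                (d[i - 1][j] + c, ops[i - 1][j] + 1),           # deletion
                (d[i][j - 1] + c, ops[i][j - 1] + 1))           # insertion
    return [d[m][j] / ops[m][j] for j in range(1, n + 1)]
\end{verbatim}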
\begin{figure}[tb]
\centering
\includegraphics[width=0.7\columnwidth]{Alignment_example_cut}
\caption{Alignment example of the same swimmer at different stroke cycles. Joints of the reference/matching pose are shown in red/green.}
\label{fig:pose_alignment}
\end{figure}
\textbf{Athlete Recognition:} While we were matching a given pose clip to all videos in our video database, we accidentally discovered that $score_{match}$ is also a perfect tool to automatically recognize a specific athlete. Usually, when matching a pose clip to the pose sequence of a different male or female swimmer, $score_{match}$ is 4 to 8 times higher in comparison to the score computed against the video the pose clip was taken from. However, in this case the matching score was as low as when matching against the same video, despite being a recording from a different test in a different swimming channel. Thus, $score_{match}$ can be used to identify a swimmer.
\subsection{Experimental Results}
\label{sec:4.4experimental_results}
We tested our mining algorithms on a set of $233$ videos (see Table~\ref{tab:swimmer_results}), showing over $130$ different athletes swimming in two structurally different swimming channels. Videos were recorded either at 720$\times$576@50i or at 1280$\times$720@50p. The videos cover different swimmers (in age, gender, physique, body size and posture) swimming in a swimming channel at different velocities between $1ms^{-1}$ and $1.75ms^{-1}$ and very different stroke rates. All mining was performed before any ground truth annotations were created.
\begin{table}
\centering
\caption{Swimming test video DB with mining results}
\label{tab:swimmer_results}
\begin{tabu}{llcccc}
\tabucline[1.5pt]{-}
\multicolumn{2}{l}{Stroke} & \multicolumn{1}{l}{Fly} & \multicolumn{1}{l}{Back} & \multicolumn{1}{l}{Breast} & \multicolumn{1}{l}{Free} \\
\tabucline[1.5pt]{-}
\multicolumn{2}{l}{\# videos} & 80 & 28 & 79 & 46 \\
\hline
\multirow{3}{*}{length~$[$s$]$} & min & 18.3 & 15.8 & 19.3 & 17.2 \\
& median & 35.0 & 31.2 & 35.5 & 33.9 \\
& max & 72.7 & 49.7 & 85.7 & 83.8 \\
\hline
\multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}GT stroke length \\$[$\# frames$]$\\\end{tabular}} & min & 51 & 58 & 48 & 52 \\
& median & 67 & 69 & 69 & 67 \\
& max & 101 & 85 & 119 & 108 \\
\hline
\multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}stroke length error\\$[$\# frames$]$~~ \end{tabular}} & avg & 0.53 & 0.32 & 0.39 & 0.39 \\
& \# $>$ 2 & 6 & 0 & 1 & 0 \\
& \# not det. & 0 & 1 & 0 & 1 \\
\hline
\multicolumn{2}{l}{\# w/o det. stroke range} & 0 & 2 & 0 & 0 \\
\hline
\multicolumn{2}{l}{\begin{tabular}[c]{@{}l@{}}\% of detected cyclic~\\stroke range \end{tabular}} & 96.0 & 84.5 & 91.1 & 82.8 \\
\hline
\multicolumn{2}{l}{\begin{tabular}[c]{@{}l@{}}\% of erroneously detected~\\non-cyclic stroke range \end{tabular}} & 1.8 & 3.2 & 6.0 & 0.3 \\
\tabucline[1.5pt]{-}
\end{tabu}
\end{table}
\textbf{Time-Continuous Cycle Speeds:} The precision of the time-continuous cycle speeds expressed by the number of frames per cycle was estimated by randomly picking one frame from each video and annotating it manually with the actual stroke length. In $2$ video sequences, our mining system did not determine a cycle speed at the frame of the ground truth. For another $6$ sequences the error in frames was larger than $2$, while for the remaining $225$ sequences the average deviation in frames from the ground truth was $0.43$ frames overall, and $0.53$, $0.32$, $0.39$ and $0.39$ frames for fly, back, breast, and freestyle, respectively (see Table~\ref{tab:swimmer_results}). This exceptional quantitative performance can intuitively be grasped by a human observer from the stroke length graphs in Figure~\ref{fig:reoccurence_sequences}. In these graphs it is also visually striking if something has gone wrong, which was the case for $6$ videos. Figure~\ref{fig:stroke_rate_failure} depicts one of the few videos where the stroke length was incorrectly estimated twice as high as it actually was due to difficulties in detecting the joints reliably.
\textbf{Identify Cyclic Motion:} We annotated all $233$ videos roughly with the start and end time of the stroke. This sounds like an unambiguous task, but it was not: When the swimmer was starting the stroke out of the break-out from the dive, the starting point is fluent over some range. We decided to be more inclusive and marked the point early. However, it was extremely difficult to specify when the athlete stopped the stroke. Many athletes were drifting partially out of the image while still swimming when getting tired due to fast water velocities. This violated the assumption of our pose detection system that the swimmer has to be completely visible. We decided to mark the end of the stroke range when a swimmer was out of the picture from the knees downwards. This choice, however, did not fit breast stroke well: During a cycle the swimmer pulls the heels towards the buttocks, bringing the feet back into the image, providing the system suddenly with a complete pose. We can see this effect in Table~\ref{tab:swimmer_results}, where our algorithm over-detects up to $6\%$ of the stroke range according to our early cut-off ground truth. This over-detection is primarily an artifact of how we determined the ground truth range of the stroke, but no real error. Our mining algorithm detected overall $89.5\%$ of all ground truth stroke ranges, while only detecting $3.1\%$ additionally outside. This performance is more than sufficient in practice. Moreover, the length of the detected cyclic motion range(s) per video was an excellent indicator to identify unstable and/or erroneous pose detection results. A cyclic motion range of less than $10$ seconds indicated that our automatic pose detection system had difficulties to detect the human joints due to strong reflections, water splashes, spray and/or air bubbles in the water. For these sequences determining the stroke cycle stability based on the identified temporally striking poses of the athlete does not make sense. Hence, in the subsequent experiments, only cyclic motion sequences of $10$ seconds or longer were used. This reduced the number of videos from $233$ down to $213$.
\begin{figure}[tb]
\centering
\includegraphics[width=0.98\columnwidth]{Arti_strokeLength_blindReview}
\caption{One of the 6 videos where the stroke length was incorrectly estimated twice as high as it actually was.}
\label{fig:stroke_rate_failure}
\end{figure}
\textbf{Temporally Striking Poses:} Poses which are temporally salient and unambiguously easy to determine by humans typically focus on one or two characteristic angles. An example is when the upper arm is vertical in freestyle (in the water) or backstroke (outside the water). Everything else of the pose is ignored. This is not how our temporally striking pose is defined: a pose which is easy to localize temporally by our system. Due to this mismatch between what a human is good at judging and what our system measures, we only evaluate the temporally striking poses indirectly via their use to capture cycle stability.
\textbf{Cycle stability:} For each video we computed the stroke stability indicator value based on a single reference stroke clip. The reference stroke clip was selected by using the ground truth frame from the time-continuous cycle speed evaluation as the end point and by subtracting our estimated stroke length from that to compute the start frame. For each stroke we sorted the videos based on its stroke cycle stability indicator value and picked randomly one video from the top $20\%$, one from the middle $20\%$ and one from the bottom $20\%$. We then asked a swim coach to sort these three videos based on his assessed stroke cycle stability. We compared the result to the automatically computed ordering. Very similar results were obtained with the temporally striking poses as reference:
\textit{Breast:} There was an agreement in the ordering of the videos ranked 1st and 2nd. The athlete of the first video showed off an exceptionally stable stroke pattern. However, the video ranked 3rd was judged by the coach as being equivalent to the one ranked 2nd. The 3rd video is one of the instances where the swimmer is getting tired, drifting regularly with his lower legs out of the picture during the stretching phase in breast stroke. This explains the discrepancy between the judgement of the coach and our system.
\textit{Fly:} The coach and the system agreed on the ordering. We also noticed that our system was picking up those athletes who were breathing every other stroke and exhibited a strong difference between the cycle with and without the breath. With respect to a two-cycle pattern their stroke was stable. Typically, coaches emphasize that there should be as little difference as possible between a breathing cycle and a non-breathing cycle.
\textit{Back:} The coach and the system agreed on the ordering.
\textit{Free:} The coach was ranking the second video as having a slightly better stroke stability than the first video. They agreed on the video ranked 3rd as the athlete was showing an unsteady and irregular flutter kick. The discrepancy between the first two videos can be explained by peculiarities of the video ranked 2nd: the water flow speed was higher than normal, leading to a slightly higher error frequency in the automatically detected poses.
\section{Mining Long Jump Pose Data}
\label{sec:5}
As a second example for pose data mining, we look at data of long jump athletes recorded at athletics championships and training events. Long jumping is different from swimming in many respects: Firstly, long jump features only semi-cyclic movement patterns. While the run-up is composed of repetitive running motion, the final jump itself is strikingly different and only performed once per trial. Secondly, the action is performed over a complete running track and recorded by a movable camera from varying angles. Thirdly, spectators and other objects in the background along the track are likely to cause regular false detections of body joints. Our data consists of $65$ videos recorded at $200$Hz, where each video shows one athlete during a long jump trial from the side. The camera is mounted on a tripod and panned from left to right to track the athlete. The videos cover various athletes and six different long jump tracks. Figure~\ref{fig:long_jump_example} shows exemplary video frames from one trial. The long jump pose database consists of $45,436$ frames with full-body pose estimates.
\begin{figure*}[tb]
\centering
\includegraphics[width=0.98\textwidth]{reduced_long_jump_example}
\caption{Qualitative comparison of predicted and ground truth long jump phases in one test video. Exemplary video frames and their estimated poses are depicted for each phase.}
\label{fig:long_jump_example}
\end{figure*}
\subsection{Automatic Temporal Classification of Long Jump Pose Sequences}
\label{sec:5.1long_jump_sequence_classification}
Video based performance analysis for long jump athletes involves various time dependent measures like the number of steps until the final jump, the relative joint angles during the run-up, the vertical velocity during the final jump, and the flight phase duration. To obtain such measures automatically, pose information alone does not suffice. Instead, one needs to pick the poses from the right phase of a long jump. Therefore, we present here how to mine the pose data to temporally identify the different phases of a long jump such that the phase specific performance measures can be computed from the detected poses. We partition a long jump action during one trial into a periodic and an aperiodic part. The periodic run-up consists of repeated \textit{jumps} (the rear leg pushes the body upwards), \textit{airtimes} (no contact with the ground) and \textit{landings} (from first contact with the ground till the jump phase). The aperiodic part consists of the \textit{flight phase} and the \textit{final landing} in the sandpit. We annotated the long jump videos with respect to these five phases. Given a long jump video of length $T$ and the extracted pose sequence $p_{1:T}$, our mining task is now to predict the phase class $c_t \in C = \{ \text{jump}, \text{airtime}, \dotsc, \text{final landing} \}$ the athlete is in at each time step $t \in [1,T]$. Figure~\ref{fig:long_jump_example} depicts exemplary frames for each phase.
\textbf{Pose Clustering:} Since the pose space itself is large, finding a direct mapping from the pose space to the possible long jump phases $C$ is difficult. Similar to the cyclic strokes in swimming we expect poses in identical long jump phases to be similar to each other. We expect this to be true even across videos of different athletes and slightly varying camera viewpoints. This leads to assumption 1: \textit{Similar poses often belong to the same phase} (Asm. 1).
Instead of learning a direct mapping from pose to phase, we first partition the space of poses into a fixed number of subspaces. Henceforth, each pose is described by the discrete index of its subspace. As long as the subspace partition preserves similarity, we expect that the distribution of phases in one pose subspace is informative, i.e., non-uniform with respect to the phase class $c_t$. Let $S$ be the set of poses in our database. We perform unsupervised $k$-Medoids clustering on $S$ with our normalized pose similarity measure from Equation~(\ref{eq:mse_norm}) to create our subspace partition. The clustering defines a function $h(p) \mapsto [1,k]$ that maps a pose $p$ to the index of its nearest cluster centroid. With Asm. 1 we define the probability $P(c \vert h(p))$ as the fraction of poses in cluster $h(p)$ labeled with phase $c$:
\begin{equation}
\label{eq:phase_given_pose}
P( c \vert h(p) ) = \frac{\left\vert \left\{ p_i \in S \middle\vert h(p_i) = h(p) \wedge c_i = c \right\} \right\vert}{\left\vert \left\{ p_i \in S \middle\vert h (p_i) = h(p) \right\} \right\vert}
\end{equation}
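The following sketch estimates $P(c \vert h(p))$ from annotated training poses and assigns new poses to their nearest medoid; the medoids themselves are assumed to come from a $k$-Medoids run with the pose distance of Equation~(\ref{eq:mse_norm}), which we do not reproduce here, and all names are illustrative.
\begin{verbatim}
from collections import Counter, defaultdict

def assign_cluster(pose, medoids, mse_norm):
    """h(p): index of the nearest cluster medoid under the pose distance."""
    return min(range(len(medoids)), key=lambda j: mse_norm(pose, medoids[j]))

def phase_distribution(cluster_ids, phase_labels):
    """Estimate P(c | h(p)): cluster_ids[i] = h(p_i), phase_labels[i] = c_i."""
    counts = defaultdict(Counter)
    for h, c in zip(cluster_ids, phase_labels):
        counts[h][c] += 1
    return {h: {c: cnt / sum(ctr.values()) for c, cnt in ctr.items()}
            for h, ctr in counts.items()}
\end{verbatim}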
\textbf{Markov Representation of Long Jump Sequence:} With Equation~(\ref{eq:phase_given_pose}) we could already predict the phase for each pose in a video individually. However, noisy predictions and phase-unspecific poses may render Asm. 1 incorrect for a fraction of the poses. We have to incorporate the complete pose sequence to obtain correct phase predictions even for frames with wrongly estimated or ambiguous poses. With the rigid long jump movement pattern and the chosen phase definition, we can make two more assumptions: \textit{An athlete stays in a phase for some time before entering a different phase. Subsequent poses are likely to belong to the same phase} (Asm. 2). \textit{Also, the possible transitions between long jump phases are limited by a fixed sequential pattern} (Asm. 3).
We can model these assumptions by stating the temporal succession of long jump phases as a state transition graph. Each state corresponds to one possible phase. Asm. 2 and 3 are reflected by self-loops and a small number of outgoing edges at each state, respectively. At each time step $t$ the athlete is in a phase which we cannot directly observe. The pose (or rather its estimate) at time $t$ is observable, however. Combining the graph with emission probabilities $P(h(p) \vert c)$ and transition probabilities $P(c_{t+1} \vert c_t)$ we obtain a classical Hidden Markov Model. The emission probabilities $P(h(p) \vert c)$ can be computed as
\begin{equation}
\label{eq:emission_prob}
P( h (p) \vert c ) = \alpha \cdot P ( c \vert h(p)) \cdot P ( h(p) ),
\end{equation}
where $\alpha$ is a normalization constant. The transition probabilities are obtained similarly by counting the number of observed transitions in the dataset.
Given a new long jump video and the corresponding pose sequence $p_{1:T}$, we first transform the sequence to the clustering-based discrete pose description $h(p_{1:T})$. We then use the Viterbi algorithm to find the most likely phase sequence $c_{1:T}^{*}$ with
\begin{equation}
\label{eq:viterbi_phase_sequence}
c_{1:T}^{*} = \arg \max_{c_{1:T}} P \left( c_{1:T} \middle\vert h(p_{1:T}) \right).
\end{equation}
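A standard Viterbi decoder over the discretized pose observations is sufficient here; the sketch below assumes log-space emission, transition and prior matrices estimated as described above (names and shapes are ours).
\begin{verbatim}
import numpy as np

def viterbi(obs, log_emis, log_trans, log_prior):
    """Most likely phase sequence for discrete observations obs[t] = h(p_t).
    log_emis[c, o] = log P(o | c), log_trans[c, c'] = log P(c' | c)."""
    T, C = len(obs), log_trans.shape[0]
    delta = np.full((T, C), -np.inf)
    back = np.zeros((T, C), dtype=int)
    delta[0] = log_prior + log_emis[:, obs[0]]
    for t in range(1, T):
        for c in range(C):
            scores = delta[t - 1] + log_trans[:, c]
            back[t, c] = int(np.argmax(scores))
            delta[t, c] = scores[back[t, c]] + log_emis[c, obs[t]]
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
\end{verbatim}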
\subsection{Experimental Results}
\label{sec:5.2experimental_results}
\begin{table}
\caption{Results of long jump phase detection (AP) with IoU threshold $\tau=0.5$ (upper part) and the derived length and step count during the long jump run-up (lower part).}
\label{tab:long_jump_results}
\centering
\begin{tabu}{lc|lc}
\tabucline[1.5pt]{-}
Jump & 0.84 & Flight Phase & 0.94 \\
Airtime & 0.91 & Final Landing & 0.97 \\
Landing & 0.80 & \multicolumn{2}{l}{} \\
\hline
mAP & \multicolumn{1}{c}{} & & 0.89 \\
\tabucline[1.5pt]{-}
\multicolumn{2}{l}{\multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}\# videos with given abs.\\error in step count\end{tabular}}} & $\vert error_{steps} \vert = 0$ & 53 \\
\multicolumn{2}{l}{} & $\vert error_{steps} \vert = 1$ & 7 \\
\multicolumn{2}{l}{} & $\vert error_{steps} \vert > 1$ & 0 \\
\hline
\multicolumn{3}{l}{\begin{tabular}[c]{@{}l@{}}Average abs. error in derived\\run-up length [s]\end{tabular}} & 0.06 \\
\tabucline[1.5pt]{-}
\end{tabu}
\end{table}
Although we formulated our problem as a per-frame classification task, the predictions should reflect the sequential phase transitions as well as the length of each annotated phase. Therefore, we evaluate our phase detection mining by the standard protocol of average precision (AP) and mAP for temporal event detection in videos \cite{Gorban2015, Heilbron2015}. For each video we combine sequential timestamps belonging to the same long jump phase $c$ into one \textit{event} $e_j=(t_{j,1}, t_{j,2}, c_j)$ with $t_{j,1}$ and $t_{j,2}$ being the start and stop time of the event. Let $E=\{ e_j \}_{j=1}^{J}$ be the set of sequential events in one video. In the same manner we split the predicted phase sequence $c_{1:T}^{*}$ into disjoint predicted events $e_j^*$. Two events match temporally if their intersection over union (IoU) surpasses a fixed threshold $\tau$. A predicted event $e_j^*$ is correct if there exists a matching ground truth event $e_j \in E$ in the same video with
\begin{equation}
\label{eq:phase_sequence_match}
c_j = c_j^* \wedge \frac{\left\vert \left[ t_{j,1}, t_{j,2} \right] \cap \left[ t_{j,1}^*, t_{j,2}^* \right] \right\vert}{\left\vert \left[ t_{j,1}, t_{j,2} \right] \cup \left[ t_{j,1}^*, t_{j,2}^* \right] \right\vert} > \tau.
\end{equation}
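For completeness, a small sketch of the event extraction and of the IoU matching criterion on frame-indexed label sequences (all names are ours):
\begin{verbatim}
def to_events(labels):
    """Merge consecutive identical per-frame labels into (start, stop, class) events."""
    events, start = [], 0
    for t in range(1, len(labels) + 1):
        if t == len(labels) or labels[t] != labels[start]:
            events.append((start, t - 1, labels[start]))
            start = t
    return events

def is_correct(pred, gt_events, tau=0.5):
    """A predicted event is correct if a ground truth event of the same class
    overlaps with it with IoU greater than tau."""
    s1, e1, c1 = pred
    for s2, e2, c2 in gt_events:
        inter = max(0, min(e1, e2) - max(s1, s2) + 1)
        union = (e1 - s1 + 1) + (e2 - s2 + 1) - inter
        if c1 == c2 and inter / union > tau:
            return True
    return False
\end{verbatim}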
We optimize clustering parameters on a held-out validation set and use the remaining $60$ videos to evaluate our approach using six-fold cross-validation. Table~\ref{tab:long_jump_results} depicts the results at a fixed $\tau=0.5$ IoU threshold. We achieve a mAP of $0.89$ for long jump phase detection. Due to their length and the unique poses observed during the flight and landing in the sandpit, these two phases are recognized very reliably with $0.94$ and $0.97$ AP, respectively. The phases of the periodic part show more uncertainty since each phase is considerably shorter and poses of the jump-airtime-landing cycle are more similar to each other. Figure~\ref{fig:long_jump_example} depicts qualitative results on one test video. Our method is able to reliably divide the cyclic run-up and the final flight phase and landing. Few predictions for the periodic phases are slightly misaligned, but the overall cyclic pattern is preserved. The phase predictions can directly be used to derive further kinematic parameters like the duration of the run-up and the number of steps. The results in Table~\ref{tab:long_jump_results} show that the run-up duration can be derived very accurately with an average deviation of $60$ms. The correct number of steps is recovered in the majority of videos.
\section{Conclusion}
\label{sec:6}
Noisy pose data of individual sport recordings will soon be available in abundance due to DNN-based pose detection systems. This work has presented unsupervised mining algorithms that can extract time-continuous cycle speeds, cycle stability scores and temporal cyclic motion durations from pose sequences of sports dominated by cyclic motion patterns such as swimming. We also showed how to match pose clips across videos and identify temporally striking poses. As it has become apparent from the analysis, results from our mining algorithms can be further improved if automatic pose detection systems focus on dealing with athletes that are not fully visible in the video. We additionally apply our concept of pose similarity to pose estimates in long jump recordings. We model the rigid sequential progression of movement phases as a Markov sequence and combine it with an unsupervised clustering-based pose discretization to automatically divide each video into its characteristic parts. We are even able to identify short intra-cyclic phases reliably. The derived kinematic parameters show a direct application of this approach.
\section*{Acknowledgement}
This research was partially supported by FXPAL during Rainer Lienhart's sabbatical. He thanks the many colleagues from FXPAL (Lynn Wilcox, Mitesh Patel, Andreas Girgensohn, Yan-Ying Chen, Tony Dunnigan, Chidansh Bhatt, Qiong Liu, Matthew Lee and many more) who greatly assisted the research by providing an open-minded and inspiring research environment.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
Skyline computation aims at looking for the set of tuples that are not worse than any other tuples in all dimensions with respect to given criteria from a multidimensional database.
Indeed, the formal concept of {\em skyline} was first proposed in 2001 by extending SQL queries to find interesting tuples with respect to multiple criteria \cite{Borzsony2001Operator}, with the notion of {\em dominance}: we say that a tuple $t$ {\em dominates} another tuple $t^\prime$ if and only if the value of $t$ is not worse than the respective value of $t^\prime$ in every dimension and is strictly better in at least one dimension.
The predicate {\em better} can be defined by any total order, such as {\em less than} or {\em greater than}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.6]{images/skyline.jpg}
\end{center}
\caption{The hotel Skyline on distance and price.}
\label{fig:hotel}
\end{figure}
For instance, Figure \ref{fig:hotel} shows the most mentioned example in skyline related literature where we consider the prices of hotels with respect to their distances to the city center (sometimes to the beach, or to the railway station, etc.).
If we are interested in hotels which are not only cheap but also close to the city center (the smaller the value, the better), those represented by $a$, $d$, $f$, $h$, and $i$ constitute the skyline.
It's obvious that the hotel $d$ dominates the hotel $b$ since $d$ is better than $b$ in both distance and price; however, $a$ does not dominate $b$ because $a$ is better than $b$ in price but $b$ is better than $a$ in distance.
In real-world database and user centric applications, such {\em Price-Distance}-like queries are doubtless interesting and useful, and have been widely recognized.
Since the first proposed {{\sffamily\tt{BNL}}} algorithm \cite{Borzsony2001Operator}, the skyline computation problem has been deeply studied for about two decades and many algorithms have been developed to compute the Skyline, such as {{\sffamily\tt{Bitmap}}}/{{\sffamily\tt{Index}}} \cite{Tan2001IndexBitmap}, {{\sffamily\tt{NN}}} \cite{Kossmann2002NN}, {{\sffamily\tt{BBS}}} \cite{Papadias2005BBS}, {{\sffamily\tt{SFS}}} \cite{Jan2005SFS}, {{\sffamily\tt{LESS}}} \cite{Godfrey2005Less}, {{\sffamily\tt{SaLSa}}} \cite{Bartolini2006SaLSa}, {{\sffamily\tt{SUBSKY}}} \cite{Tao2007Subsky}, {{\sffamily\tt{ZSearch}}} \cite{Lee2007ZSearch}, and {{\sffamily\tt{ZINC}}} \cite{Liu2010ZINC}.
However, the efficiencies brought by existing algorithms often depend on either complex data structures or specific application/data settings.
For instance, {{\sffamily\tt{Bitmap}}} is based on a bitmap representation of dimensional values but is also limited by the cardinality; {{\sffamily\tt{Index}}} is integrated into a B$^+$-tree construction process; {{\sffamily\tt{NN}}} and {{\sffamily\tt{BBS}}} rely on a specific data structure such as the R-tree, and {{\sffamily\tt{NN}}} moreover handles high-dimensional data with difficulty (for instance $d > 4$ \cite{Tan2001IndexBitmap}); {{\sffamily\tt{SUBSKY}}} specifically requires the B-tree and tuning the number of anchors; {{\sffamily\tt{ZSearch}}} and {{\sffamily\tt{ZINC}}} are built on top of the ZB-tree.
On the other hand, besides parallelizing skyline computation \cite{chester2015scalable}, different variants of standard skylines have been defined and studied, such as top-k Skylines \cite{Tao2007Subsky}, streaming data Skylines \cite{lin2005stabbing}, partial ordered Skylines \cite{Liu2010ZINC}, etc., which are not in our scope.
In this paper, we present the {{\sffamily\tt{SDI}}} ({\em Skyline on Dimension Index}) framework that allows efficient skyline computation by indexing dimensional values.
We first introduce the notion of {\em dimensional index}, based on which we prove that in order to determine whether a tuple belongs to the skyline, it is enough to compare it only with the existing skyline tuples that precede it in any one dimensional index instead of comparing it with all existing skyline tuples, which can significantly reduce the total count of dominance comparisons while computing the skyline.
Furthermore, within the context of dimension indexing, we show that in most cases, one comparison instead of two is enough to confirm the dominance relation.
These properties can significantly reduce the total count of dominance comparisons, which is usually the bottleneck of skyline computation.
Different from all existing sorting/indexing based skyline algorithms, the application of dimension indexing allows extending skyline computation to any totally ordered categorical data such as, for instance, user preferences on colors and on forms.
Based on {{\sffamily\tt{SDI}}}, we also prove that any skyline tuple can be used to define a {\em stop line} crossing the dimension indexes to terminate the skyline computation, which is particularly efficient on correlated data.
We therefore develop the algorithm {{\sffamily\tt{SDI-RS}}} (RangeSearch) for efficient skyline computation with dimension indexing.
Our experimental evaluation shows that {{\sffamily\tt{SDI-RS}}} outperforms our baseline algorithms ({{\sffamily\tt{BNL}}}, {{\sffamily\tt{SFS}}}, and {{\sffamily\tt{SaLSa}}}) in general, especially on high-dimensional data.
The remainder of this paper is organized as follows.
Section 2 reviews related skyline computation approaches.
In Section 3, we present our dimension indexing framework and prove several important properties, based on which we propose the algorithm {{\sffamily\tt{SDI-RS}}} in Section 4.
Section 5 reports our experimental evaluation of the performance of {{\sffamily\tt{SDI-RS}}} in comparison with several baseline benchmarks.
Finally, we conclude in Section 6.
\section{Related Work}
In this section, we briefly introduce mainstream skyline computation algorithms.
B\"{o}rzs\"{o}ny et al. \cite{Borzsony2001Operator} first proposed the concept of skyline and several basic computation algorithms, of which Nested Loop ({{\sffamily\tt{NL}}}) is the most straightforward one, comparing each pair of tuples; it always has the same time complexity $O(n^2)$ no matter the distribution of the data.
Built on top of the naive {{\sffamily\tt{NL}}} algorithm, the Block Nested Loop ({{\sffamily\tt{BNL}}}) algorithm employs a memory window to speed up the computation significantly; its best case complexity is reduced to $O(n)$ when no temporary file is generated during the process of {{\sffamily\tt{BNL}}}, however the worst case is still $O(n^2)$, for instance when all tuples in the database are incomparable with each other.
{{\sffamily\tt{Bitmap}}} and {{\sffamily\tt{Index}}} \cite{Tan2001IndexBitmap} are two efficient algorithms for skyline computation.
{{\sffamily\tt{Bitmap}}} based skyline computation is very efficient; however, it is limited to databases with few distinct values in each dimension; it also incurs a high I/O cost and requires a large amount of memory when the database is huge.
{{\sffamily\tt{Index}}} generates the index based on the dimension in which each tuple has its best value.
It is clear that skyline tuples are more likely to be at the top of each index table, so index tables can prune a tuple if its minimum value over all dimensions is larger than the maximal value over all dimensions of another tuple.
Sorted First Skyline ({{\sffamily\tt{SFS}}}) \cite{Jan2005SFS} and Sort and Limit Skyline algorithm ({{\sffamily\tt{SaLSa}}}) \cite{Bartolini2006SaLSa} are another two pre-sort based algorithms.
{{\sffamily\tt{SFS}}} has a similar process as {{\sffamily\tt{BNL}}} but presorts tuples based on the skyline criteria before reading them into the window.
{{\sffamily\tt{SaLSa}}} shares the same idea as {{\sffamily\tt{SFS}}} to presort tuples, but the difference between {{\sffamily\tt{SFS}}} and {{\sffamily\tt{SaLSa}}} is that they use different approaches to optimize the comparison passes: {{\sffamily\tt{SFS}}} uses an entropy function to estimate the probability of a tuple being a skyline tuple and {{\sffamily\tt{SaLSa}}} uses a stop point.
Indeed, {{\sffamily\tt{SaLSa}}} is designed on top of the following observation: if a skyline tuple can dominate all unread tuples, then the skyline computation can be terminated.
Such a special tuple is called the {\em stop point} in {{\sffamily\tt{SaLSa}}}, which can effectively prune irrelevant tuples that cannot be in the skyline.
However, the selection of the stop point depends on dominance comparisons, which is completely different from our notion of stop line, which is determined by the dimensional indexes without any dominance comparison.
The {{\sffamily\tt{SUBSKY}}} algorithm \cite{Tao2007Subsky} converts $d$-dimensional tuples into a 1D value $f(t)$, so all tuples can be sorted based on the $f(t)$ value, which helps to determine whether a tuple is dominated by a skyline tuple.
{{\sffamily\tt{SUBSKY}}} sorts the whole database on the full space but computes the skyline on a subspace based on user criteria.
Nevertheless, the full space index may not be accurate when pruning data, as the index may be computed on unrelated dimensions.
{{\sffamily\tt{SDI}}} also supports computing the skyline on a subspace but without re-sorting tuples.
Moreover, the dimension index can guarantee the best sorting of the subspace and prune more tuples.
Besides sorting based algorithms, there are some algorithms that solve the skyline computation problem using the R-tree structure, such as {{\sffamily\tt{NN}}} (Nearest Neighbors) \cite{Kossmann2002NN} and {{\sffamily\tt{BBS}}} (Branch-and-Bound Skyline) \cite{Papadias2005BBS}.
{{\sffamily\tt{NN}}} discovers the relationships between nearest neighbors and skyline results.
It is observed that the skyline tuple must be close to the coordinate origin: the tuple which stays closest to the coordinate origin must be a part of the skyline.
Using the first skyline tuple, the database can be further split into several regions, and the first skyline tuple becomes the coordinate origin of these regions.
The nearest points of each region are skyline tuples as well, so the whole process iterates until there are no more regions to split.
{{\sffamily\tt{BBS}}} uses the similar idea as {{\sffamily\tt{NN}}}.
The main difference between {{\sffamily\tt{NN}}} and {{\sffamily\tt{BBS}}} is that the {{\sffamily\tt{NN}}} process may include redundant searches while {{\sffamily\tt{BBS}}} only needs one traversal path.
{{\sffamily\tt{NN}}} and {{\sffamily\tt{BBS}}} are both efficient but nevertheless rely on complex data structures, which are not necessary for the {{\sffamily\tt{SDI}}} algorithm.
\section{Dimension Indexing for Skyline Computation}
We present in this section the {{\sffamily\tt{SDI}}} (Skyline on Dimension Index) framework, within which we prove several interesting properties that allow to significantly reduce the total count of dominance comparisons during the skyline computation.
Let $\mathcal D$ be a $d$-dimensional database that contains $n$ tuples, each tuple $t \in \mathcal D$ is a vector of $d$ attributes with $|t| = d$.
We denote $t[i]$, for $1 \leq i \leq d$, the {\em dimensional value} of a tuple $t$ in {\em dimension} $i$ (in the rest of this paper, we consider by default that $i$ satisfies $1 \leq i \leq d$).
Given a total order $\succ_i$ on all values in dimension $i$, we say that the value $t[i]$ of the tuple $t$ is {\em better} than the respective value $t^\prime[i]$ of the tuple $t^\prime$ if and only if $t[i] \succ_i t^\prime[i]$; if $t[i] = t^\prime[i]$, we say that $t[i]$ is {\em equal} to $t^\prime[i]$; accordingly, $t[i]$ is {\em not worse} than $t^\prime[i]$ if and only if $t[i] \succ_i t^\prime[i] \lor t[i] = t^\prime[i]$, denoted by $t[i] \succeq_i t^\prime[i]$.
Besides, that $t[i]$ is {\em not better} than $t^\prime[i]$ is denoted by $t[i] \not\succ t^\prime[i]$.
We have that $t[i] \succ_i t^\prime[i] \Rightarrow t[i] \succeq_i t^\prime[i]$.
Without lose of the generality, we denote by the total order $\succ$ the ensemble of all total orders $\succ_i$ on all dimensions and, without confusion, $\{\succ, \succeq, \not\succ\}$ instead of $\{\succ_i, \succeq_i, \not\succ_i\}$.
\begin{definition}[Dominance]
Given the total order $\succ$ and a database $\mathcal D$, a tuple $t \in \mathcal D$ dominates a tuple $t^\prime \in \mathcal D$ if and only if $t[i] \succeq t^\prime[i]$ on each dimension $i$, and $t[k] \succ t^\prime[k]$ for at least one dimension $k$, denoted by $t \succ t^\prime$.
\end{definition}
Further, we write $t \prec\succ t^\prime \iff t \not\succ t^\prime \land t^\prime \not\succ t$ to denote that the tuples $t$ and $t^\prime$ are {\em incomparable}.
A tuple is a {\em skyline tuple} if and only if no tuple can dominate it.
We therefore formally define {\em skyline} as follows.
\begin{definition}[Skyline]
Given the total order $\succ$ and a database $\mathcal D$, a tuple $t \in \mathcal D$ is a skyline tuple if and only if $\not\exists u \in \mathcal D$ such that $u \succ t$.
The skyline $\mathcal S$ on $\succ$ is the complete set of all skyline tuples, that is, $\mathcal S = \{t \in \mathcal D \mid \not\exists u \in \mathcal D, u \succ t\}$.
\end{definition}
It's easy to see that the skyline $\mathcal S$ of a database $\mathcal D$ is the complete set of all incomparable tuples in $\mathcal D$, that is, $s \prec\succ t$ for any two distinct tuples $s, t \in \mathcal S$.
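For illustration, a straightforward quadratic reference implementation of the dominance test and of the skyline definition; the per-dimension order is passed as a predicate and all names are ours.
\begin{verbatim}
def dominates(t, u, better):
    """t dominates u: not worse in every dimension, strictly better in at least one.
    better(i, a, b) is True if value a is better than value b in dimension i."""
    strictly = False
    for i, (a, b) in enumerate(zip(t, u)):
        if better(i, b, a):          # u is strictly better than t in dimension i
            return False
        if better(i, a, b):
            strictly = True
    return strictly

def naive_skyline(tuples, better):
    """Quadratic baseline: keep the tuples dominated by no other tuple."""
    return [t for t in tuples
            if not any(dominates(u, t, better) for u in tuples if u is not t)]

less_than = lambda i, a, b: a < b   # the `less than' order of the sample database
\end{verbatim}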
\begin{table}[htbp]
\begin{center}
{\scriptsize\begin{tabular}{|r|cccccc|c|}
\hline
ID & $D_1$ & $D_2$ & $D_3$ & $D_4$ & $D_5$ & $D_6$ & Skyline\\
\hline
$t_0$ & 7.5 & 1.3 & 7.5 & 4.5 & 5.3 & 2.1 & {\bf Yes}\\
\hline
$t_1$ & 4.7 & 6.7 & 6.7 & 9.3 & 3.8 & 5.1 & {\bf Yes}\\
\hline
$t_2$ & 8.4 & 9.4 & 5.3 & 5.8 & 6.7 & 7.5 & No\\
\hline
$t_3$ & 5.3 & 6.6 & 6.7 & 6.8 & 5.8 & 9.3 & {\bf Yes}\\
\hline
$t_4$ & 8.4 & 5.2 & 5.1 & 5.5 & 4.1 & 7.5 & {\bf Yes}\\
\hline
$t_5$ & 9.1 & 7.6 & 2.6 & 4.7 & 7.3 & 6.2 & {\bf Yes}\\
\hline
$t_6$ & 5.3 & 7.5 & 1.9 & 5.9 & 3.4 & 1.8 & {\bf Yes}\\
\hline
$t_7$ & 5.3 & 7.5 & 6.7 & 7.2 & 6.3 & 8.8 & No\\
\hline
$t_8$ & 6.7 & 7.3 & 7.6 & 9.7 & 5.3 & 8.7 & No\\
\hline
$t_9$ & 7.5 & 9.6 & 4.8 & 8.9 & 9.5 & 6.5 & No\\
\hline
\end{tabular}}
\end{center}
\caption{A sample database with $d = 6$, $n = 10$, and $m = 6$.}
\label{tab:sample}
\end{table}
Table \ref{tab:sample} shows a sample database of 6 dimensions ($d = 6$) that contains 10 tuples ($n = 10$), of which 6 are skyline tuples ($|\mathcal S| = 6$; we also denote the size of the skyline by $m$, following most of the literature), while the order {\em less than} is applied to all dimensions.
\begin{example}
Among all the 10 tuples $t_0, t_1, \ldots, t_9$ listed in Table \ref{tab:sample}, $t_1 \succ t_8$, $t_4 \succ t_2$, $t_6 \succ t_7$, and $t_6 \succ t_9$; $t_0$, $t_3$, and $t_5$ do not dominate any tuples and are not dominated by any tuples.
The Skyline is therefore $\mathcal S = \{t_0, t_1, t_3, t_4, t_5, t_6\}$.
\qed
\end{example}
The basis of our approach is to build dimensional indexes with respect to the concerned per-dimension total orders, which allow determining the skyline without performing dominance comparisons against all tuples in the database or against all current skyline tuples.
In general, our approach can significantly reduce the total number of dominance comparisons, which plays an essential role in the total processing time of skyline computation.
Furthermore, our approach constructs the Skyline progressively so no delete operation is required.
For each dimension $i$ of the database $\mathcal D$, the total order $\succ_i$ can be considered as a sorting function $f_i : \mathcal D[i] \rightarrow \mathcal I_i$, where $\mathcal I_i$ is an ordered list of all tuple values in the dimension $i$ of the database.
We call such a list $\mathcal I_i$ a {\em dimensional index}.
\begin{definition}[Dimensional Index]
Given a database $\mathcal D$, the dimensional index $\mathcal I_i$ for a dimension $i$ is an ordered list of tuple IDs sorted first by dimensional values with respect to the total order $\succ$, and then, in case of ties, by their lexicographic order.
\end{definition}
In order to avoid unnecessary confusion, we represent a dimensional index $\mathcal I_i$ as a list of entries $\left<t[i]:t.id\right>$ such as those shown in Table \ref{tab:full-di} (where all skyline tuples are in bold).
\begin{table}[htbp]
\begin{center}
{\scriptsize\begin{tabular}{|c|c|c|c|c|c|}
\hline
$\mathcal I_1$ & $\mathcal I_2$ & $\mathcal I_3$ & $\mathcal I_4$ & $\mathcal I_5$ & $\mathcal I_6$\\
\hline
{\bf 4.7:1} & {\bf 1.3:0} & {\bf 1.9:6} & {\bf 4.5:0} & {\bf 3.4:6} & {\bf 1.8:6}\\
{\bf 5.3:3} & {\bf 5.2:4} & {\bf 2.6:5} & {\bf 4.7:5} & {\bf 3.8:1} & {\bf 2.1:0}\\
{\bf 5.3:6} & {\bf 6.6:3} & {4.8:9} & {\bf 5.5:4} & {\bf 4.1:4} & {\bf 5.1:1}\\
{5.3:7} & {\bf 6.7:1} & {\bf 5.1:4} & {5.8:2} & {\bf 5.3:0} & {\bf 6.2:5}\\
{6.7:8} & {7.3:8} & {5.3:2} & {\bf 5.9:6} & {5.3:8} & {6.5:9}\\
{\bf 7.5:0} & {\bf 7.5:6} & {\bf 6.7:1} & {\bf 6.8:3} & {\bf 5.8:3} & {7.5:2}\\
{7.5:9} & {7.5:7} & {\bf 6.7:3} & {7.2:7} & {6.3:7} & {\bf 7.5:4}\\
{8.4:2} & {\bf 7.6:5} & {6.7:7} & {8.9:9} & {6.7:2} & {8.7:8}\\
{\bf 8.4:4} & {9.4:2} & {\bf 7.5:0} & {\bf 9.3:1} & {\bf 7.3:5} & {8.8:7}\\
{\bf 9.1:5} & {9.6:9} & {7.6:8} & {9.7:8} & {9.5:9} & {\bf 9.3:3}\\
\hline
\end{tabular}}
\end{center}
\caption{Dimension indexing of the sample database shown in Table \ref{tab:sample}.}
\label{tab:full-di}
\end{table}
\begin{example}
Table \ref{tab:full-di} shows the 6 dimensional indexes $\mathcal I_1, \mathcal I_2, \ldots, \mathcal I_6$ with respect to all the 6 dimensions $D_1, D_2, \ldots, D_6$ of the sample database shown in Table \ref{tab:sample}.
For instance, in $\mathcal I_1$, the dimensional value 5.3 appears in 3 tuples, so these 3 entries are secondarily sorted by tuple ID, since $3 < 6 < 7$.
\qed
\end{example}
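For illustration, such dimensional indexes can be built by sorting per-dimension (value, tuple ID) pairs, as in the following Python sketch (again a purely illustrative sketch, assuming that smaller values are preferred):
\begin{verbatim}
# Illustrative sketch only: one index per dimension, entries sorted by
# value and, in case of ties, by tuple ID.
def build_dimensional_indexes(database):
    d = len(database[0])
    indexes = []
    for i in range(d):
        entries = [(t[i], tid) for tid, t in enumerate(database)]
        entries.sort()                  # value first, then tuple ID on ties
        indexes.append(entries)
    return indexes

D = [(7.5, 1.3), (4.7, 6.7), (8.4, 9.4), (5.3, 6.6)]
for i, idx in enumerate(build_dimensional_indexes(D), start=1):
    print("I_%d:" % i, idx)
\end{verbatim}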
Now let us consider the dimensional indexes containing distinct dimensional values only, such as $\mathcal I_4$ shown in Table \ref{tab:full-di}.
In such an index $\mathcal I_i$ without duplicate dimensional values, a tuple $t$ can only be dominated by a tuple $s$ that precedes it, that is, such that $o_i(s) < o_i(t)$, where $o_i(t)$ denotes the offset of the entry of $t$ in $\mathcal I_i$ (this implies that $s[i] < t[i]$): for any tuple $u$ such that $o_i(u) > o_i(t)$, we have $t[i] \succ u[i]$, so $u$ cannot dominate $t$.
\begin{lemma}
Given a database $\mathcal D$, let $\mathcal S$ be the skyline of $\mathcal D$, $\mathcal I_i$ be a dimensional index containing only distinct dimensional values, and $t \in \mathcal D$ be a tuple.
Then, $t \in \mathcal S$ if and only if we have $s \not\succ t$ for any skyline tuple $s \in \mathcal S$ such that $o_i(s) < o_i(t)$ on $\mathcal I_i$.
\label{lem:dist}
\end{lemma}
\begin{proof}
If $o_i(t) = 0$, then $t$ is a skyline tuple because no tuple is better than $t$ in the dimension $D_i$ since all dimensional values are distinct.
If $o_i(t) > 0$, let $s \in \mathcal S$ be a skyline tuple such that $o_i(s) < o_i(t)$; then $s[i] \succ t[i]$, so $t \not\succ s$, and by hypothesis $s \not\succ t$, that is, $s \prec\succ t$; now let $s^\prime \in \mathcal S$ be a skyline tuple such that $o_i(t) < o_i(s^\prime)$; then $s^\prime \in \mathcal S \Rightarrow t \not\succ s^\prime$ and, further, $t[i] \succ s^\prime[i] \Rightarrow s^\prime \not\succ t$, so we also have $t \prec\succ s^\prime$.
Thus, $t$ is incomparable to any skyline tuple so $t$ is a skyline tuple, that is, $t \in \mathcal S$.
\end{proof}
With Lemma \ref{lem:dist}, to determine whether a tuple $t$ is a skyline tuple, it is only necessary to compare $t$ with each skyline tuple $s$ in one dimension $i$ such that $o_i(s) < o_i(t)$, instead of comparing $t$ with all skyline tuples.
Furthermore, we recall that {{\sffamily\tt{BNL}}}-like algorithms dynamically update the early skyline set, which requires a second dominance comparison between an incoming tuple $t$ and each early skyline tuple $s$ to determine whether $t \succ s$.
However, with dimensional indexes, Lemma \ref{lem:dist} shows that one dominance comparison $s \not\succ t$ is enough to determine $t \in \mathcal S$, instead of two comparisons.
Lemma \ref{lem:dist} also ensures a {\em progressive} construction of the skyline.
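For illustration, the scan implied by Lemma \ref{lem:dist} over a single dimensional index without duplicate values can be sketched in Python as follows (an illustrative sketch, not the implementation evaluated later):
\begin{verbatim}
# Illustrative sketch of the above lemma: scanning a duplicate-free index
# from best to worst, a tuple is a skyline tuple iff no previously accepted
# skyline tuple dominates it; a single comparison direction suffices.
def dominates(u, t):
    return all(a <= b for a, b in zip(u, t)) and \
           any(a < b for a, b in zip(u, t))

def skyline_from_distinct_index(database, index):
    skyline = []
    for _, tid in index:              # entries (value, tuple ID), sorted
        t = database[tid]
        if not any(dominates(s, t) for s in skyline):
            skyline.append(t)         # progressive: never removed later
    return skyline

D = [(4.5, 7.5), (9.3, 4.7), (5.8, 8.4)]
I1 = sorted((t[0], tid) for tid, t in enumerate(D))
print(skyline_from_distinct_index(D, I1))   # [(4.5, 7.5), (9.3, 4.7)]
\end{verbatim}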
However, in most cases and particularly in real data, there are often duplicate values in the dimensions, in which case Lemma \ref{lem:dist} no longer holds.
As shown in Table \ref{tab:full-di}, most dimensions contain duplicate values; a typical instance is $\mathcal I_1$, in which two different cases must be distinguished:
\begin{enumerate}
\item The dimensional value 5.3 appears in three entries $\left<5.3:3\right>$, $\left<5.3:6\right>$, and $\left<5.3:7\right>$, where $t_3$ and $t_6$ are skyline tuples and $t_7$ is not a skyline tuple.
\item The dimensional value 8.4 appears in the two entries $\left<8.4:2\right>$ and $\left<8.4:4\right>$, where $t_2$ is not a skyline tuple but is indexed before the skyline tuple $t_4$.
\end{enumerate}
In case (1), a simple straightforward scan over these three dimensional index entries can progressively identify that $t_3$ ($t_1 \not\succ t_3$) and $t_6$ ($t_1 \not\succ t_6$ and $t_3 \not\succ t_6$) are skyline tuples and filter out $t_7$ ($t_6 \succ t_7$).
However, in case (2), a straightforward scan cannot progressively identify the skyline tuples: no preceding tuple dominates $t_2$, so $t_2$ is first identified as a skyline tuple; then, since no tuple dominates $t_4$, $t_4$ is also identified as a skyline tuple, but the scan never checks whether $t_4 \succ t_2$, hence the output skyline is wrong.
To resolve such misidentifications of skyline tuples, we propose a simple solution that first divides a dimensional index into logical {\em blocks} of entries, one per distinct dimensional value, and then applies the {{\sffamily\tt{BNL}}} algorithm to each {\em block} containing more than one entry in order to find {\em block skyline tuples}, which restores the applicability of Lemma \ref{lem:dist}.
\begin{definition}[Index Block]
Given a database $\mathcal D$, let $\mathcal I_i$ be the dimensional index of a dimension $i$.
An index block of $\mathcal I_i$ is a set of dimensional index entries that share the same dimensional value sorted by the lexicographical order of tuple IDs.
\end{definition}
If a block contains a single entry, its only tuple is compared with the existing skyline tuples according to Lemma \ref{lem:dist}; otherwise, for any block containing more than one entry, each {\em block skyline tuple} must be compared with the existing skyline tuples according to Lemma \ref{lem:dist}.
We can thus generalize Lemma \ref{lem:dist} from tuples to block skyline tuples, since when a block contains a single entry, its tuple is trivially the block skyline tuple.
\begin{theorem}
Given a database $\mathcal D$, let $\mathcal S$ be the Skyline of $\mathcal D$, $\mathcal I_i$ be a dimensional index, and $t \in \mathcal D$ be a block skyline tuple on $\mathcal I_i$.
Then, $t \in \mathcal S$ if and only if we have $s \not\succ t$ for any skyline tuple $s \in \mathcal S$ such that $o_i(s) < o_i(t)$ on $\mathcal I_i$.
\label{the:sdi}
\end{theorem}
\begin{proof}
With the proof of Lemma \ref{lem:dist} and the statement of block skyline tuples, the proof of Theorem \ref{the:sdi} is immediate.
\end{proof}
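For illustration, the resulting block-based scan of one dimensional index can be sketched in Python as follows (a simplified, purely illustrative sketch; the toy data reproduces the problematic case (2) above):
\begin{verbatim}
# Illustrative sketch of the above theorem: split the index into blocks of
# equal values, compute each block skyline with a BNL-style pass, and compare
# only block skyline tuples against the skyline tuples found so far.
from itertools import groupby

def dominates(u, t):
    return all(a <= b for a, b in zip(u, t)) and \
           any(a < b for a, b in zip(u, t))

def block_skyline(tuples):
    result = []                        # BNL-style local skyline of one block
    for t in tuples:
        if not any(dominates(s, t) for s in result):
            result = [s for s in result if not dominates(t, s)] + [t]
    return result

def skyline_from_index(database, index):
    skyline = []
    for _, block in groupby(index, key=lambda e: e[0]):
        candidates = block_skyline([database[tid] for _, tid in block])
        for t in candidates:
            if not any(dominates(s, t) for s in skyline):
                skyline.append(t)
    return skyline

# Toy data: both first tuples share value 8.4 in dimension 1 and the second
# dominates the first, as t_4 dominates t_2 in the running example.
D = [(8.4, 9.4), (8.4, 5.2), (5.3, 7.5)]
I1 = sorted((t[0], tid) for tid, t in enumerate(D))
print(skyline_from_index(D, I1))       # [(5.3, 7.5), (8.4, 5.2)]
\end{verbatim}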
\begin{table}[htbp]
\begin{center}
{\scriptsize\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{4}{|l|}{$\mathcal I_1$}\\
\hline
\hline
4.7:1 & \multicolumn{3}{l|}{}\\
\hline
{\bf 5.3:3} & {\bf 5.3:6} & 5.3:7 & ~~~~~~~~\\
\hline
6.7:8 & \multicolumn{3}{l|}{}\\
\hline
{\bf 7.5:0} & {\bf 7.5:9} & \multicolumn{2}{l|}{}\\
\hline
8.4:2 & {\bf 8.4:4} & \multicolumn{2}{l|}{}\\
\hline
9.1:5 & \multicolumn{3}{l|}{}\\
\hline
\end{tabular}}
\end{center}
\caption{A block view of the dimensional index $\mathcal I_1$.}
\label{tab:block}
\end{table}
\begin{example}
As shown in Table \ref{tab:block}, 6 blocks can be located from $\mathcal I_1$ with respect to all 6 distinct values: 4.7, 5.3, 6.7, 7.5, 8.4, and 9.1.
According to Theorem \ref{the:sdi}:
the block 4.7 contains $t_1$, so $t_1$ is a block skyline tuple and is the first skyline tuple;
the block 5.3 contains $t_3$, $t_6$, and $t_7$, where $t_3 \prec\succ t_6$ and $t_6 \succ t_7$, so $t_3$ and $t_6$ are block skyline tuples; since $t_1 \prec\succ t_3$ and $t_1 \prec\succ t_6$, both $t_3$ and $t_6$ are new skyline tuples;
the block 6.7 contains $t_8$, so $t_8$ is a block skyline tuple that is dominated by $t_6$;
the block 7.5 differs from the block 5.3: $t_0 \prec\succ t_9$, so both of them are block skyline tuples, but $t_6 \succ t_9$, so only $t_0$ is a skyline tuple;
the block 8.4 is handled like the block 5.3, and $t_4$ is a skyline tuple;
finally, no skyline tuple dominates $t_5$, so the Skyline is $\{t_0, t_1, t_3, t_4, t_5, t_6\}$.
\qed
\end{example}
It is important to note that Theorem \ref{the:sdi} allows dominance comparisons to be performed on arbitrary dimensional indexes, and the computation stops as soon as the last entry of any index is reached.
Therefore, we see that a dynamic {\em dimension switching} strategy can further improve the efficiency of the Skyline computing based on dimension indexing.
For instance, if we follow a breadth-first strategy over the dimensional indexes shown in Table \ref{tab:full-di}, then when we examine the second entry $\left<2.6:5\right>$ of $\mathcal I_3$, although the current skyline is $\mathcal S = \{t_0, t_1, t_6, t_3, t_4\}$, we only have to compare $t_5$ with $t_6$ rather than with all those skyline tuples; and when we then examine the second entry of $\mathcal I_4$, $t_5$ can be ignored since it is already a skyline tuple.
We also note that duplicate dimensional values severely impact the overall performance of dimensional-index-based Skyline computation; therefore, reasonable dimension selection and sorting heuristics are helpful.
\section{A Range Search Approach to Skyline}
In this section, we first propose the notion of {\em stop line}, which allows the search for skyline tuples to terminate early by pruning irrelevant tuples, and then present the algorithm {{\sffamily\tt{SDI-RS}}} ({{\sffamily\tt{RangeSearch}}}) for skyline computation based on the {{\sffamily\tt{SDI}}} framework.
Notice that the name {{\sffamily\tt{RangeSearch}}} stands for the bounded search range while determining skyline tuples.
\subsection{Stop Line}
Let us consider again the Skyline and the dimensional indexes shown in Table \ref{tab:full-di}.
It is easy to see that all 6 skyline tuples can be found at the first two entries of all dimensional indexes, hence, a realistic question is whether we can stop the Skyline computation before reaching the end of any dimensional index.
\begin{definition}[Stop Line]
Given a database $\mathcal D$, let $p \in \mathcal D$ be a skyline tuple. A stop line established from $p$, denoted by $S_p$, is the set of the dimensional index entries $\left<p[i]:p\right>$ of $p$, one in each dimension.
An index entry $e \in S_p$ is a stop line entry, and an index block containing a stop line entry is a stop line block.
\end{definition}
Let $t$ be a tuple; we denote by $b_i(t)$ the offset of the index block of a dimensional index $\mathcal I_i$ that contains $t$, that is, the position of the index block that contains $t$.
Hence, given a stop line tuple $p$ and a tuple $t$, we say that the stop line $S_p$ {\em covers} the index entry $\left<t[i]:t\right>$ of a dimensional index $\mathcal I_i$ if $b_i(p) < b_i(t)$.
For instance, Table \ref{tab:sl-6} shows the stop line created from the tuple $t_6$, which covers 41 index entries in total; note that it covers neither $\left<5.3:7\right>$ on $\mathcal I_1$ nor $\left<7.5:7\right>$ on $\mathcal I_2$, since these entries belong to the same blocks as the corresponding stop line entries.
\begin{table}[htbp]
\begin{center}
{\scriptsize\begin{tabular}{|c|c|c|c|c|c|}
\hline
$\mathcal I_1$ & $\mathcal I_2$ & $\mathcal I_3$ & $\mathcal I_4$ & $\mathcal I_5$ & $\mathcal I_6$\\
\hline
{\bf 4.7:1} & {\bf 1.3:0} & \underline{\bf 1.9:6} & {\bf 4.5:0} & \underline{\bf 3.4:6} & \underline{\bf 1.8:6}\\
{\bf 5.3:3} & {\bf 5.2:4} & {\bf 2.6:5} & {\bf 4.7:5} & {\bf 3.8:1} & {\bf 2.1:0}\\
\underline{\bf 5.3:6} & {\bf 6.6:3} & {4.8:9} & {\bf 5.5:4} & {\bf 4.1:4} & {\bf 5.1:1}\\
{5.3:7} & {\bf 6.7:1} & {\bf 5.1:4} & {5.8:2} & {\bf 5.3:0} & {\bf 6.2:5}\\
{6.7:8} & {7.3:8} & {5.3:2} & \underline{\bf 5.9:6} & {5.3:8} & {6.5:9}\\
{\bf 7.5:0} & \underline{\bf 7.5:6} & {\bf 6.7:1} & {\bf 6.8:3} & {\bf 5.8:3} & {7.5:2}\\
{7.5:9} & {7.5:7} & {\bf 6.7:3} & {7.2:7} & {6.3:7} & {\bf 7.5:4}\\
{8.4:2} & {\bf 7.6:5} & {6.7:7} & {8.9:9} & {6.7:2} & {8.7:8}\\
{\bf 8.4:4} & {9.4:2} & {\bf 7.5:0} & {\bf 9.3:1} & {\bf 7.3:5} & {8.8:7}\\
{\bf 9.1:5} & {9.6:9} & {7.6:8} & {9.7:8} & {9.5:9} & {\bf 9.3:3}\\
\hline
\end{tabular}}
\end{center}
\caption{The stop line created from tuple $t_6$ covers 41 index entries in total.}
\label{tab:sl-6}
\end{table}
Obviously, if $p$ is a stop line tuple and $t$ is a tuple such that $p \succ t$, then $b_i(p) \leq b_i(t)$ on each dimensional index $\mathcal I_i$ and $b_k(p) < b_k(t)$ on at least one dimensional index $\mathcal I_k$.
\begin{theorem}
Given a database $\mathcal D$, let $S_p$ be the stop line with respect to a skyline tuple $p$.
By following any top-down traversal of all dimensional indexes, if all stop line blocks have been traversed, then the complete set of all skyline tuples has been generated and the skyline computation can stop.
\label{the:stp}
\end{theorem}
\begin{proof}
Let $p$ be a skyline tuple and $t \in \mathcal S \setminus \{p\}$ be another skyline tuple; then either (1) $t \prec\succ p$, or (2) $t$ and $p$ have identical values in all dimensions.
In the first case, $t \prec\succ p \Rightarrow \exists k, t[k] \succ p[k] \Rightarrow b_k(t) < b_k(p)$; that is, before the index traversal passes the stop line block of $S_p$ on $\mathcal I_k$, the tuple $t$ must have been encountered in the dimension $D_k$.
In the second case, we have $b_i(p) = b_i(t)$ for any dimension $D_i$.
In both cases, if all stop line blocks have been processed, then all skyline tuples have been found.
\end{proof}
In principle, any skyline tuple can be chosen to form a stop line, however, different stop lines behave differently in pruning useless tuples.
For instance, as shown in Table \ref{tab:sl-6}, the stop line $S_6$ created from $t_6$ covers 41 index entries in total and the two tuples $\{t_7, t_9\}$ can be pruned; however, as shown in Table \ref{tab:sl-0}, the stop line $S_0$ created from $t_0$ covers only 37 index entries and no tuple can be pruned.
Obviously, a good stop line should cover as many index entries as possible, so we select the stop line tuple by minimizing its offsets over all dimensional indexes, using the score
\begin{align*}
min(p) = \left(\max_{1 \leq i \leq d} o_i(p),\ \frac{1}{d}\sum_{i = 1}^{d} o_i(p)\right),
\end{align*}
where pairs are compared lexicographically.
The score $min(p)$ thus ranks skyline tuples first by their maximum offset and then by their mean offset over all dimensional indexes, so the skyline tuple with the smallest score is the best stop line tuple.
Hence, the stop line tuple $p$ can be dynamically maintained by replacing it with any new skyline tuple $t$ such that $min(t) < min(p)$.
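For illustration, this criterion can be sketched in Python as follows; the offsets used in the example are those of $t_6$ and $t_0$ in the dimensional indexes of the sample database:
\begin{verbatim}
# Illustrative sketch only: a stop line candidate is scored by its maximum
# offset over all dimensional indexes, then by its mean offset; the skyline
# tuple with the smallest score is kept as the stop line tuple.
def stop_line_score(offsets):
    return (max(offsets), sum(offsets) / len(offsets))

def is_better_stop_line(offsets_p, offsets_q):
    return stop_line_score(offsets_p) < stop_line_score(offsets_q)

offsets_t6 = [2, 5, 0, 4, 0, 0]   # offsets of t_6 in I_1, ..., I_6
offsets_t0 = [5, 0, 8, 0, 3, 1]   # offsets of t_0 in I_1, ..., I_6
print(stop_line_score(offsets_t6))                    # (5, 1.833...)
print(stop_line_score(offsets_t0))                    # (8, 2.833...)
print(is_better_stop_line(offsets_t6, offsets_t0))    # True
\end{verbatim}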
\begin{table}[htbp]
\begin{center}
{\scriptsize\begin{tabular}{|c|c|c|c|c|c|}
\hline
$\mathcal I_1$ & $\mathcal I_2$ & $\mathcal I_3$ & $\mathcal I_4$ & $\mathcal I_5$ & $\mathcal I_6$\\
\hline
{\bf 4.7:1} & \underline{\bf 1.3:0} & {\bf 1.9:6} & \underline{\bf 4.5:0} & {\bf 3.4:6} & {\bf 1.8:6}\\
{\bf 5.3:3} & {\bf 5.2:4} & {\bf 2.6:5} & {\bf 4.7:5} & {\bf 3.8:1} & \underline{\bf 2.1:0}\\
{\bf 5.3:6} & {\bf 6.6:3} & {4.8:9} & {\bf 5.5:4} & {\bf 4.1:4} & {\bf 5.1:1}\\
{5.3:7} & {\bf 6.7:1} & {\bf 5.1:4} & {5.8:2} & \underline{\bf 5.3:0} & {\bf 6.2:5}\\
{6.7:8} & {7.3:8} & {5.3:2} & {\bf 5.9:6} & {5.3:8} & {6.5:9}\\
\underline{\bf 7.5:0} & {\bf 7.5:6} & {\bf 6.7:1} & {\bf 6.8:3} & {\bf 5.8:3} & {7.5:2}\\
{7.5:9} & {7.5:7} & {\bf 6.7:3} & {7.2:7} & {6.3:7} & {\bf 7.5:4}\\
{8.4:2} & {\bf 7.6:5} & {6.7:7} & {8.9:9} & {6.7:2} & {8.7:8}\\
{\bf 8.4:4} & {9.4:2} & \underline{\bf 7.5:0} & {\bf 9.3:1} & {\bf 7.3:5} & {8.8:7}\\
{\bf 9.1:5} & {9.6:9} & {7.6:8} & {9.7:8} & {9.5:9} & {\bf 9.3:3}\\
\hline
\end{tabular}}
\end{center}
\caption{The stop line created from tuple $t_0$ covers 37 index entries in total.}
\label{tab:sl-0}
\end{table}
Nevertheless, the use of a stop line requires that the stop line blocks of all dimensions be examined, so it is difficult to predict whether a scan first reaches the end of some dimensional index or first finishes examining all stop line blocks; we can nevertheless state that stop lines effectively help the skyline computation on correlated data.
We also note that the use of stop lines requires all dimensions to be indexed, which is an additional constraint when applying Theorem \ref{the:sdi} and Theorem \ref{the:stp} together, since Theorem \ref{the:sdi} alone does not impose that indexes be constructed for all dimensions.
We therefore propose to consider different strategies for applying Theorem \ref{the:sdi} and Theorem \ref{the:stp}, depending on the use case and data type, in order to accelerate the skyline computation.
\subsection{The RangeSearch Algorithm}
Theorem \ref{the:sdi} allows the number of dominance comparisons to be reduced while computing the skyline.
However, as mentioned in Section 3, duplicate dimensional values severely increase the number of dominance comparisons because {{\sffamily\tt{BNL}}}-based local comparisons must be applied within blocks.
Notice that it is pointless to apply {{\sffamily\tt{SFS}}} or {{\sffamily\tt{SaLSa}}} to such local comparisons, because their global sorting functions would disable one of the most important features of our dimension-indexing-based approach: an individual skyline criterion, including one defined on non-numerical values, can be applied independently to each dimension.
In order to reduce the impact of duplicate dimensional values, we propose a simple solution based on sorting the dimensional indexes by their cardinalities $|\mathcal I_i|$, that is, their numbers of distinct dimensional values.
The computation starts from the best dimensional index, so that the number of {{\sffamily\tt{BNL}}} calls is minimized.
For instance, in Table \ref{tab:full-di}, all dimensional indexes can be sorted as $|\mathcal I_4| > |\mathcal I_2| = |\mathcal I_5| = |\mathcal I_6| > |\mathcal I_3| > |\mathcal I_1|$, where the best dimensional index $\mathcal I_4$ contains no duplicate dimensional values, so Lemma \ref{lem:dist} applies directly and dimension switching can be performed earlier.
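A minimal Python sketch of this ordering step, assuming each dimensional index is a list of (value, tuple ID) entries, could be:
\begin{verbatim}
# Illustrative sketch only: put the dimensional indexes with the most
# distinct values first, so that the scan needs as few per-block BNL
# calls as possible.
def sort_indexes_by_distinct_values(indexes):
    return sorted(indexes,
                  key=lambda idx: len({value for value, _ in idx}),
                  reverse=True)
\end{verbatim}
Algorithm \ref{algo:rs} then simply iterates over the dimensional indexes in this order.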
We then present {{\sffamily\tt{SDI-RS}}} ({{\sffamily\tt{RangeSearch}}}), shown in Algorithm \ref{algo:rs}, an algorithm that applies Theorem \ref{the:sdi} and Theorem \ref{the:stp} to the skyline computation on sorted dimensional indexes by performing dominance comparisons only against a range of skyline tuples instead of all of them.
\begin{algorithm}[htbp]
\SetKw{And}{and}
\SetKw{Break}{break}
\SetKw{Or}{or}
\KwIn{Sorted dimensional indexes $\mathcal I_\mathcal D$}
\KwOut{Complete set $\mathcal S$ of all skyline tuples}
$L \leftarrow$ empty stop line\\
\While{true} {
\ForEach{$\mathcal I_i \in \mathcal I_\mathcal D$} {
\While{$B \leftarrow$ get next block from $\mathcal I_i$}{
\If{$B = null$}{
\Return{$\mathcal S$}\\
}
\ForEach{$t \in B$}{
\If{$t$ has been compared \And $t \not\in \mathcal S$}{
remove $t$ from $B$\\
}
}
$\mathcal S_B \leftarrow$ compute the block Skyline from $B$ by {{\sffamily\tt{BNL}}}\\
\ForEach{$t \in \mathcal S_B$ \And $t \not\in \mathcal S$}{
\If{$\mathcal S_i \not\succ t$}{
$\mathcal S_i \leftarrow \mathcal S_i \cup t$\\
$\mathcal S \leftarrow \mathcal S \cup t$\\
$L_t \leftarrow$ build stop line from $t$\\
\If{$L = \emptyset$ \Or $L_t$ is better than $L$}{
$L \leftarrow L_t$\\
}
}
}
\If{$o_d \geq L[d]$ for each dimension $d$}{
\Return{$\mathcal S$}\\
}
\If{\mbox{\tt [dimension-switching]}}{
\Break\\
}
}
}
}
\caption{{{\sffamily\tt{SDI-RS}}} ({{\sffamily\tt{RangeSearch}}})}
\label{algo:rs}
\end{algorithm}
The algorithm accepts a set of sorted dimensional indexes $\mathcal I_\mathcal D$ of a $d$-dimensional database $\mathcal D$ as input and outputs the complete set $\mathcal S$ of all skyline tuples.
First, we initialize an empty stop line $L$; then we enter a Round-Robin loop that finds the complete set of all skyline tuples with respect to Theorem \ref{the:sdi} and Theorem \ref{the:stp}.
In each dimensional index $\mathcal I_i$ based iteration, we first get the next block $B$ of index entries from $\mathcal I_i$.
According to Theorem \ref{the:sdi}, if $B$ is null, which means that the end of $\mathcal I_i$ is reached, we exit the algorithm by returning $\mathcal S$; otherwise, we treat all index entries block by block to find skyline tuples.
If a tuple $t \in B$ has already been compared and marked as a non-skyline tuple, we ignore it in order to avoid comparing it with other tuples again; however, if $t$ is a skyline tuple, we shall keep it because $t$ may dominate other new tuples in the block-based {{\sffamily\tt{BNL}}} while computing the block Skyline $\mathcal S_B$.
Therefore, for each tuple $t \in \mathcal S_B$ such that $t \not\in \mathcal S$ (again, we do not want to compare a skyline tuple with other skyline tuples), we compare it with all existing skyline tuples $\mathcal S_i$ present in current dimension $D_i$.
Here we introduce a shortcut operator $\mathcal S_i \not\succ t$ at line 12, which means that none of the skyline tuples in $\mathcal S_i$ dominates $t$; according to Theorem \ref{the:sdi}, $t$ must be a skyline tuple in this case and must be added to the dimensional Skyline $\mathcal S_i$ and to the global Skyline $\mathcal S$.
Furthermore, with respect to Theorem \ref{the:stp}, we build a new stop line $L_t$ from each new skyline tuple $t$ and if it is better than current stop line $L$ (or no stop line is defined), we update $L$ by $L_t$.
Once the above dominance comparisons are finished, we compare the current iteration position in each dimension with the latest stop line; if the stop line block has been reached in every dimension, {{\sffamily\tt{RangeSearch}}} stops and returns the complete Skyline $\mathcal S$.
Otherwise, {{\sffamily\tt{RangeSearch}}} switches to the next dimension and repeats the above procedure according to a particular {\tt [dimension-switching]} strategy.
In our approach, we consider {\em breadth-first dimension switching} ({{\sffamily\tt{BFS}}}) and {\em depth-first dimension switching} ({{\sffamily\tt{DFS}}}).
With {{\sffamily\tt{BFS}}}, once a block has been examined and {{\sffamily\tt{SDI-RS}}} must continue to run, the next dimension is taken.
With {{\sffamily\tt{DFS}}}, on the other hand, once a block has been examined and {{\sffamily\tt{SDI-RS}}} must continue to run, {{\sffamily\tt{SDI-RS}}} keeps going ahead in the current dimension as long as the examined blocks contain new skyline tuples, and only switches when it meets a block without any new skyline tuple.
The difference between breadth-first switching and depth-first switching is clear.
{{\sffamily\tt{DFS}}} tries to accelerate the search for skyline tuples within each dimension, and this strategy benefits the most from Theorem \ref{the:sdi}; furthermore, if the best stop line is balanced across the dimensions, then {{\sffamily\tt{DFS}}} quickly reaches the stop line in each dimension, so more tuples can be pruned.
However, {{\sffamily\tt{DFS}}} is not efficient if some dimensions contain a large number of duplicate values, because every block must be examined before switching to the next dimension.
This is where sorting the dimensional indexes mitigates the impact of duplicate dimensional values: since all dimensional indexes are sorted with respect to their cardinalities, {{\sffamily\tt{SDI-RS}}} always starts from the best dimensions, which contain fewer duplicate values, and finds as many skyline tuples as possible by depth-first switching; hence, when switching to the other dimensions, many tuples in their blocks have already been compared or are already skyline tuples, so no further comparisons are performed for them.
In comparison with sorting-based algorithms like {{\sffamily\tt{SFS}}} and {{\sffamily\tt{SaLSa}}}, {{\sffamily\tt{SDI-RS}}} sorts tuples with respect to each dimension separately, which is interesting when different criteria are applied to determine the skyline.
For instance, we can specify the order {\em less than} ($<$) on one dimension and the order {\em greater than} ($>$) on another dimension, without any additional computation to unify or normalize the dimensional values.
For the same reason, {{\sffamily\tt{SDI-RS}}} can directly process categorical data like numerical data: if a total order can be defined on a categorical attribute, for instance a user preference on colors such that {\tt blue} $\succ$ {\tt green} $\succ$ {\tt yellow} $\succ$ {\tt red}, then {{\sffamily\tt{SDI-RS}}} treats such values exactly like ordered numerical values, without any adaptation.
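For illustration, such a categorical order only requires a rank function before indexing; the following Python sketch uses a toy two-attribute dataset that is not part of our evaluation:
\begin{verbatim}
# Illustrative sketch only: encode the chosen total order on a categorical
# attribute as ranks, then build the dimensional index on the ranks exactly
# as for numerical values (smaller rank = preferred).
PREFERENCE = {"blue": 0, "green": 1, "yellow": 2, "red": 3}

def categorical_index(database, dim):
    entries = [(PREFERENCE[t[dim]], tid) for tid, t in enumerate(database)]
    entries.sort()
    return entries

items = [("red", 120), ("blue", 180), ("green", 150)]   # toy data
print(categorical_index(items, 0))      # [(0, 1), (1, 2), (3, 0)]
\end{verbatim}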
With dimensional indexes, {{\sffamily\tt{SDI-RS}}} is efficient in both space and time complexities.
The storage requirement for the dimensional indexes is bounded: for instance, a C/C++ implementation of {{\sffamily\tt{SDI-RS}}} may represent an index entry as a {\tt struct} of tuple ID and dimensional value that requires 16 bytes (64-bit ID and 64-bit value); therefore, if each dimensional index corresponds to a {\tt std::vector} structure, the in-memory storage size of the dimensional indexes is twice the database size: for instance, 16GB of heap memory fits the allocation of 1,000,000,000 ID/value structures, that is, 125,000,000 8-dimensional tuples.
Let $d$ be the dimensionality, $n$ be the cardinality of data, and $m$ be the size of the skyline.
The generation of dimensional indexes requires $\mathcal O(dn\lg{n})$ with respect to a general-purpose sorting algorithm of $\mathcal O(n\lg{n})$ complexity.
For the best case, that is, $m = 1$, {{\sffamily\tt{SDI-RS}}} finishes in $\mathcal O(1)$ since the only skyline tuple is the stop line tuple and the computation stops immediately; for the worst case, where all $n$ tuples are skyline tuples, {{\sffamily\tt{SDI-RS}}} finishes in $$\mathcal O(\dfrac{n(n - 1)}{2})$$ dominance comparisons according to Theorem \ref{the:sdi}, provided that each block contains only one tuple (that is, the case without duplicate dimensional values).
More generally, if the best dimensional index contains $k$ duplicate values, then {{\sffamily\tt{SDI-RS}}} finishes in $$\mathcal O(k^2 + \dfrac{(n - k)(n -k - 1)}{2})$$ since the worst case is that all $k$ duplicate values appear in the same block.
\section{Experimental Evaluation}
In this section, we report our experimental results on performance evaluation of {{\sffamily\tt{SDI-RS}}} that is conducted with both of {{\sffamily\tt{BFS}}} and {{\sffamily\tt{DFS}}} dimension switching, and is compared with three baseline algorithms {{\sffamily\tt{BNL}}}, {{\sffamily\tt{SFS}}}, and {{\sffamily\tt{SaLSa}}} on synthetic and real benchmark datasets.
The {\tt vol} sorting function and the {\tt max} sorting function are respectively applied to {{\sffamily\tt{SFS}}} and {{\sffamily\tt{SaLSa}}} as mentioned in \cite{Bartolini2006SaLSa}.
\input{5.0-Figures.tex}
We generate {\em independent}, {\em correlated}, and {\em anti-correlated} synthetic datasets using the standard Skyline Benchmark Data Generator\footnote{\url{http://pgfoundry.org/projects/randdataset}} \cite{Borzsony2001Operator} with the cardinality $n \in \{100K, 1M\}$ and the dimensionality $d$ in the range of 2 to 24.
Three real datasets {\tt NBA}, {\tt HOUSE}, and {\tt WEATHER} \cite{chester2015scalable} have also been used.
Table \ref{tab:syn} and Table \ref{tab:real} show statistics of all these datasets.
\begin{table}[htbp]
\begin{center}
{\scriptsize\begin{tabular}{r|l|rrrrrr}
\hline
\multicolumn{2}{c|}{Dataset} & $d = 2$ & $d = 4$ & $d = 6$ & $d = 8$ & $d = 16$ & $d = 24$\\
\hline
\hline
& 100K & 12 & 282 & 2534 & 9282 & 82546 & 99629\\
\cline{2-8}
Independent & 1M & 17 & 423 & 6617 & 30114 & 629091 & 981611\\
\hline
& 100K & 3 & 9 & 49 & 135 & 3670 & 13479\\
\cline{2-8}
Correlated & 1M & 1 & 19 & 36 & 208 & 8688 & 58669\\
\hline
& 100K & 56 & 3865 & 26785 & 55969 & 96816 & 99730\\
\cline{2-8}
Anti-correlated & 1M & 64 & 8044 & 99725 & 320138 & 892035 & 984314\\
\hline
\end{tabular}}
\end{center}
\caption{Skyline size of synthetic datasets.}
\label{tab:syn}
\end{table}
\begin{table}[htbp]
\begin{center}
{\scriptsize\begin{tabular}{l|rrr}
\hline
Dataset & Cardinality ($n$) & Dimensionality ($d$) & Skyline Size ($m$)\\
\hline
\hline
{\tt NBA} & 17264 & 8 & 1796\\
\hline
{\tt HOUSE} & 127931 & 6 & 5774\\
\hline
{\tt WEATHER} & 566268 & 15 & 63398\\
\hline
\end{tabular}}
\end{center}
\caption{Statistics of real datasets.}
\label{tab:real}
\end{table}
We implemented {{\sffamily\tt{SDI-RS}}} in C++ with {\tt C++11} standard, where dimensional indexes were implemented by STL {\tt std::vector} and {\tt std::sort()}.
In order to evaluate the overall performance of our {{\sffamily\tt{SDI-RS}}}, the three baseline algorithms {{\sffamily\tt{BNL}}}, {{\sffamily\tt{SFS}}}, and {{\sffamily\tt{SaLSa}}} were also implemented in C++ with the same code-base.
All algorithms are compiled using {\tt LLVM Clang} with {\tt -O3} optimization flag.
All experiments have been performed on a virtual computation node with 16 vCPU and 32GB RAM hosted in a server with 4 Intel Xeon E5-4610 v2 2.30GHz processors and 256GB RAM.
Figure \ref{fig:perf} shows the overall run-time, including loading/indexing data, and the total dominance comparison count of {{\sffamily\tt{SDI-RS}}} and {{\sffamily\tt{BNL}}}/{{\sffamily\tt{SFS}}}/{{\sffamily\tt{SaLSa}}} on the 100K and 1M datasets, where the dimensionality is set to 2, 4, 6, 8, 16, and 24.
We note that on low-dimensional datasets, such as $d \leq 8$, there are no very big differences between these 4 algorithms; however, {{\sffamily\tt{SDI-RS}}} greatly outperforms {{\sffamily\tt{BNL}}}/{{\sffamily\tt{SFS}}}/{{\sffamily\tt{SaLSa}}} on high-dimensional datasets, for instance $d \geq 16$.
Indeed, the run-time of {{\sffamily\tt{SDI-RS}}} is almost linear with respect to the increase of dimensionality, which is quite reasonable since the main cost in skyline computation is dominance comparison and {{\sffamily\tt{SDI-RS}}} allows to significantly reduce the total count of dominance comparisons.
On the other hand, it is surprising that {{\sffamily\tt{SaLSa}}} did not finish computing the skyline of the 24-dimensional datasets shown in Figure \ref{fig:perf} (b) and Figure \ref{fig:perf} (j) after more than 5 hours.
Notice that {{\sffamily\tt{SaLSa}}} outperforms {{\sffamily\tt{BNL}}} and {{\sffamily\tt{SFS}}} on real datasets.
We note that the total run-time of {{\sffamily\tt{SDI-RS}}} on low-dimensional correlated datasets is much longer than that of {{\sffamily\tt{BNL}}}/{{\sffamily\tt{SFS}}}/{{\sffamily\tt{SaLSa}}}, as it also is on independent and anti-correlated datasets, because {{\sffamily\tt{SDI-RS}}} requires building the dimensional indexes.
Table \ref{tab:time} details the skyline searching time $t_S$ (the time elapsed on dominance comparisons and data access in msec) and total run-time $t_T$ (the time elapsed on the whole process, including data loading and sorting/indexing in msec).
It is clear that the construction of the dimensional indexes dominates the total processing time of {{\sffamily\tt{SDI-RS}}} when the skyline search itself is short.
\begin{table}[htbp]
\begin{center}
{\scriptsize\begin{tabular}{r|l|r|r|r|r|r|r|r|r}
\hline
\multicolumn{2}{c|}{} & \multicolumn{2}{c|}{$d = 2$} & \multicolumn{2}{c|}{$d = 4$} & \multicolumn{2}{c|}{$d = 6$} & \multicolumn{2}{c}{$d = 8$}\\
\hline
\multicolumn{2}{c|}{} & $t_S$ & $t_T$ & $t_S$ & $t_T$ & $t_S$ & $t_T$ & $t_S$ & $t_T$\\
\hline
{{\sffamily\tt{SDI-RS}}} & 100K & 0.14 & 271 & 0.38 & 664 & 423 & 1657 & 243 & 1348\\
\cline{2-10}
+{{\sffamily\tt{BFS}}} & 1M & 0.17 & 5161 & 0.64 & 10293 & 1.34 & 23388 & 9896 & 40830\\
\hline
{{\sffamily\tt{SDI-RS}}} & 100K & 0.17 & 238 & 0.25 & 692 & 0.98 & 1263 & 4.75 & 1443\\
\cline{2-10}
+{{\sffamily\tt{DFS}}} & 1M & 0.14 & 5772 & 0.42 & 12069 & 1.32 & 21845 & 5.86 & 33012\\
\hline
{{\sffamily\tt{BNL}}} & 100K & 2.51 & 75 & 5.36 & 141 & 5.33 & 243 & 10.29 & 284\\
\cline{2-10}
& 1M & 25.49 & 744 & 45.13 & 1468 & 62.56 & 2267 & 94.04 & 2901\\
\hline
{{\sffamily\tt{SaLSa}}} & 100K & 2686 & 2784 & 386 & 543 & 26.03 & 278 & 51.67 & 361\\
\cline{2-10}
{\tt max} & 1M & 88.63 & 1067 & 451 & 2117 & 377 & 2987 & 674 & 3829\\
\hline
{{\sffamily\tt{SFS}}} & 100K & 1.49 & 91 & 4.53 & 162 & 4.91 & 263 & 10.99 & 320\\
\cline{2-10}
{\tt vol} & 1M & 13.33 & 931 & 33.78 & 1605 & 43.04 & 2346 & 88.83 & 3128\\
\hline
\end{tabular}}
\end{center}
\caption{Skyline searching time (msec) and total run-time (msec) on correlated datasets.}
\label{tab:time}
\end{table}
Table \ref{tab:perf-real} shows the performance of {{\sffamily\tt{SDI-RS}}} on real datasets.
{{\sffamily\tt{DFS}}} dimension switching outperforms {{\sffamily\tt{BFS}}} dimension switching on both the {\tt NBA} and {\tt HOUSE} datasets; however, {{\sffamily\tt{BFS}}} outperforms {{\sffamily\tt{DFS}}} on the {\tt WEATHER} dataset.
After investigating these datasets, we confirmed that several dimensions of the {\tt WEATHER} dataset contain a large number of duplicate values, which is where the {{\sffamily\tt{BFS}}} dimension switching strategy takes the advantage.
{{\sffamily\tt{BNL}}} outperforms all the other tested algorithms on the {\tt HOUSE} dataset, which is consistent with the results obtained on the synthetic low-dimensional independent datasets.
Furthermore, the number of updates of the best stop line in {{\sffamily\tt{SDI-RS}}} is quite limited with respect to the size of the skylines.
\begin{table}[htbp]
\begin{center}
{\scriptsize\begin{tabular}{l|r|r|r|r|r}
\hline
& {{\sffamily\tt{SDI-RS}}}+{{\sffamily\tt{BFS}}} & {{\sffamily\tt{SDI-RS}}}+{{\sffamily\tt{DFS}}} & {{\sffamily\tt{BNL}}} & {{\sffamily\tt{SaLSa}}} & {{\sffamily\tt{SFS}}}\\
\hline
\hline
Dominance & 680,388 & 662,832 & 8,989,690 & 6,592,178 & 8,989,690\\
\hline
Search Time (msec) & 54 & 38 & 151 & 108 & 147\\
\hline
Total Time (msec) & 172 & 158 & 191 & 152 & 189\\
\hline
Stop Line Update & 15 & 32 & -- & -- & --\\
\hline
\end{tabular}}\\
(a) {\tt NBA} dataset: $d = 8$, $n = 17264$, $m = 1796$.\\
~\\
{\scriptsize\begin{tabular}{l|r|r|r|r|r}
\hline
& {{\sffamily\tt{SDI-RS}}}+{{\sffamily\tt{BFS}}} & {{\sffamily\tt{SDI-RS}}}+{{\sffamily\tt{DFS}}} & {{\sffamily\tt{BNL}}} & {{\sffamily\tt{SaLSa}}} & {{\sffamily\tt{SFS}}}\\
\hline
\hline
Dominance & 4,976,773 & 4,860,060 & 59,386,118 & 51,484,870 & 59,386,118\\
\hline
Search Time (msec) & 962 & 337 & 1,486 & 1,550 & 1,534\\
\hline
Total Time (msec) & 2,663 & 1,918 & 1,716 & 1,800 & 1,768\\
\hline
Stop Line Update & 16 & 18 & -- & -- & --\\
\hline
\end{tabular}}\\
(b) {\tt HOUSE} dataset: $d = 6$, $n = 127931$, $m = 5774$.\\
~\\
{\scriptsize\begin{tabular}{l|r|r}
\hline
& {{\sffamily\tt{SDI-RS}}}+{{\sffamily\tt{BFS}}} & {{\sffamily\tt{SDI-RS}}}+{{\sffamily\tt{DFS}}}\\
\hline
\hline
Dominance & 1,744,428,382 & 1,737,143,260\\
\hline
Search Time (msec) & 48,773 & 58,047\\
\hline
Total Time (msec) & 65,376 & 77,665\\
\hline
Stop Line Update & 14 & 18\\
\hline
\end{tabular}}\\
{\scriptsize\begin{tabular}{l|r|r|r}
\hline
& {{\sffamily\tt{BNL}}} & {{\sffamily\tt{SaLSa}}} & {{\sffamily\tt{SFS}}}\\
\hline
\hline
Dominance & 14,076,080,681 & 7,919,746,895 & 14,076,080,681\\
\hline
Search Time (msec) & 539,100 & 394,995 & 545,263\\
\hline
Total Time (msec) & 541,820 & 397,650 & 547,914\\
\hline
Stop Line Update & -- & -- & --\\
\hline
\end{tabular}}\\
(c) {\tt WEATHER} dataset: $d = 15$, $n = 566268$, $m = 63398$.\\
~\\
\end{center}
\caption{Performance evaluation on real datasets.}
\label{tab:perf-real}
\end{table}
We did not directly compare {{\sffamily\tt{SDI-RS}}} with all existing skyline algorithms; however, since most of the literature compares proposed algorithms with {{\sffamily\tt{BNL}}}, {{\sffamily\tt{SFS}}}, or {{\sffamily\tt{SaLSa}}}, the comparative results obtained in our experimental evaluation indicate that {{\sffamily\tt{SDI-RS}}} outperforms most existing skyline algorithms.
\section{Conclusion}
In this paper, we present a novel efficient skyline computation approach.
We proved that, in multidimensional databases, skyline computation can be conducted on an arbitrary dimensional index constructed with respect to the predefined total order that determines the skyline; we therefore proposed {{\sffamily\tt{SDI}}}, a general skyline computation framework based on dimension indexing.
We further showed that any skyline tuple can be used to stop the computation process while still outputting the complete skyline.
Based on our analysis, we developed a new progressive skyline algorithm {{\sffamily\tt{SDI-RS}}} that first builds sorted dimensional indexes then efficiently finds skyline tuples by dimension switching in order to minimize the count of dominance comparisons.
Our experimental evaluation shows that {{\sffamily\tt{SDI-RS}}} outperforms most of the existing skyline algorithms.
Our future research direction includes the further development of the {{\sffamily\tt{SDI}}} framework as well as adapting the {{\sffamily\tt{SDI}}} framework to the context of Big Data, for instance with the Map-Reduce programming model.
The annual revenue of the global sports market was estimated at $\$90.9$ billion in 2017~\cite{Statista.market.sports}. From this large amount, $\$28.7$ billion came from the European soccer market~\cite{Statista.market.sports.european}, more than half of which ($\$15.6$ billion) was generated by the Big Five European soccer leagues (EPL, Ligue 1, Bundesliga, Serie A and La Liga)~\cite{Statista.market.sports.european.bigfive,Statista.market.sports.european.top}. The main interest of sports broadcasts is entertainment, but sports videos are also used by professionals for strategy analysis, player scouting or statistics generation. These statistics are traditionally gathered by professional analysts watching a lot of videos and identifying the events occurring within a game. For football, this annotation task takes over 8 hours to provide up to 2000 annotations per game, according to Matteo Campodonico, \textsc{CEO} of Wyscout, a company specialized in soccer analytics~\cite{wyscout}.
To assist sports annotators in this task, several automated computer vision methods can be devised to address many of the challenges in sports video understanding: field and lines localization~\cite{Farin2003RobustCC,Homayounfar2017SportsFL,Jiang2019OptimizingTL}, ball position~\cite{Kamble2017BallTI,Sarkar2019GenerationOB,Theagarajan2018SoccerWH} and camera motion~\cite{Lu2019PantiltzoomSF,Yao2016RobustMC} tracking, as well as detection of players~\cite{Cioppa2019ARTHuSAR,Komorowski2019FootAndBallIP,Huda2018EstimatingTN}, of their moves~\cite{Felsen2017WhatWH,Manafifard2017ASO,Thinh2019AVT} and poses~\cite{Bridgeman2019MultiPerson3P,Zecha2019RefiningJL}, and of the team they are playing for~\cite{Istasse2019AssociativeEF}. Detecting key actions in soccer videos remains a difficult challenge since these events are sparse within the videos, which makes machine learning on massive datasets difficult. Some works have nevertheless achieved significant results in that direction~\cite{Cioppa2019ACL,Giancola_2018_CVPR_Workshops}.
In this paper, we focus on action spotting and classification in soccer videos. This task has been defined as finding the anchors of human-induced soccer events in a video~\cite{Giancola_2018_CVPR_Workshops} as well as naming the action categories. Several issues arise when dealing with this task. Important actions often have no clear start and end frames in the video, they are temporally discontinuous (i.e. adjacent frames may have different annotations), and they are rather rare. To improve action spotting performance, we propose to use both audio and video input streams while previous work did only use video. Different audio-visual neural network architectures are compared. Our intuition leads us to believe that some categories of actions trigger particular reactions on the part of the public present in the stadium. For example, when a goal is scored, fans shout out. Similarly, a red card can cause discontent. Audio signals should hence provide useful information in such key cases, for instance to distinguish real scored goals from goal attempts. This is what we will show in the paper.
\paragraph{Contributions.} \textbf{(i)} We carried out an initial analysis of the possibilities offered by adding audio as an input in a soccer action spotting and classification context. \textbf{(ii)} Our best approach improved the performance of action classification on SoccerNet~\cite{Giancola_2018_CVPR_Workshops} by $7.43\%$ absolute with the addition of audio, compared to the video-only baseline. \textbf{(iii)} We also increased the performance of action spotting on the same dataset by $4.19\%$ absolute.
\section{Related Work}
\paragraph{Sports Analytics and Related Applications.} Computer vision methods have been developed to help understand sport broadcasts, carry out analytics within a game~\cite{Corscadden2018DevelopingAT,DOrazio2010ARO,Thomas2017ComputerVF}, or even assist in broadcast production. Interesting use cases include the automatic summarization of games~\cite{Ekin2003AutomaticSV,10.1145/3347318.3355524,10.1145/3347318.3355526}, the identification of salient game actions~\cite{Feichtenhofer2016SpatiotemporalRN,Martnez2019ActionRW,Yaparla2019ANF} or the reporting of commentaries of live game video streams~\cite{Yu2018FineGrainedVC}.
Early work used camera shot segmentation and classification to summarize games~\cite{Ekin2003AutomaticSV} or focused on identifying video production patterns in order to detect salient actions of the game~\cite{Ren2005FootballVS}. Later, Bayesian networks have been used to detect goals, penalties, corner kicks and cards events~\cite{Huang2006SemanticAO} or to summarize games~\cite{Tavassolipour2014EventDA}.
More recently, deep learning approaches have been applied. Long Short-Term Memory (\textsc{LSTM}) networks~\cite{10.1162/neco.1997.9.8.1735} enabled to temporally traverse soccer videos to identify the salient actions by temporally aggregating particular features~\cite{Tsunoda2017FootballAR}. These features can be local descriptors, extracted by a Bag-of-Words (\textsc{BOW}) approach, or global descriptors, extracted by using Convolutional Neural Networks (\textsc{CNN}). Besides features, semantic information, such as player localization~\cite{Khan2018SoccerED}, as well as pixel information~\cite{Cioppa2018ABA}, are also used to train attention models to extract relevant frame features. Besides, a loss function for action spotting was proposed to tackle the issue of unclear action temporal location, by better handling the temporal context around the actions during training~\cite{Cioppa2019ACL}.
Some of the most recent works propose to identify kicks and goals in soccer games by using automatic multi-camera-based systems~\cite{Zhang2019AnAM}. Another work uses logical rules to define complex events in soccer videos in order to perform visual reasoning on these events~\cite{Khan2019VisualRO}.
These complex events can be visualized as a succession of different visual observations during the game. For example, a \textit{``corner kick''} occurs when a player of the defending team hits the ball, which passes over the goal line. This complex event is the succession of simple visual observations: the ball is seen near a flag, a player comes near the position of the ball, this player kicks the ball, and the goal post becomes visible in the scene. The logical rules used to define these complex events are Event Calculus (\textsc{EC})~\cite{ec1}, i.e. a logic framework for representing and reasoning about events and their effects. These \textsc{EC} allow to describe a scene with atomic descriptions through First Order Logic (\textsc{FOL}).
\paragraph{Activity Recognition.} Activity recognition is the general problem of detecting and then classifying video segments according to a predefined set of activity or action classes in order to understand the videos. Most methods use temporal segments~\cite{Buch2017SSTST,Gao2017TURNTT,Yang2019ExploringFS} that need to be pruned and classified~\cite{Girdhar2017ActionVLADLS,Tang2019VideoSC}.
A common way to detect activities is to aggregate and pool these temporal segments, which allows to search for a consensus~\cite{Agethen2019DeepMC,Tran2019VideoCW}. Naive methods use average or maximum pooling, which require no learning. More complex ones aim to find a structure in a feature set by clustering and pooling these features while improving discrimination. These work use learnable pooling like \textsc{BOW}~\cite{Arandjelovic2013AllAV,Jgou2010AggregatingLD}, Fisher Vector~\cite{Darczy2013FisherKF,Nagel2015MetaFV,Pan2019ForegroundFV} or \textsc{VLAD}~\cite{Arandjelovic2013AllAV}. Some works improve these techniques by respectively extending them by the incorporation of the following Deep Neural Network (\textsc{DNN}) architectures: Net\textsc{FV}~\cite{Tang2019DeepFF}, Soft\textsc{DBOW}~\cite{Philbin2008LostIQ} or Net\textsc{VLAD}~\cite{Girdhar2017ActionVLADLS}.
Instead of pooling features, some works try to identify which features might be the more useful given the video context. Some of these approaches represent and harness information in both temporal and/or spatial neighborhoods~\cite{Dai2017TemporalCN,Liu2019MultiScaleBC}, while other ones focus on attention models~\cite{Nguyen2015STAPSA,Wang2018FastAA} as a way to better leverage the surrounding information by learning adaptive confidence scores. For instance, the evidence of objects and scenes within a video can be exploited by a semantic encoder for improving activity detection~\cite{Heilbron2017SCCSC}. Moreover, coupling recognition with context-gating allows the learnable pooling methods to produce state-of-the-art recognition performance on very large benchmarks~\cite{DBLP:journals/corr/MiechLS17}.
Advanced methods for temporal integration use particular neural network architectures, such as Convolution Neural Network (\textsc{CNN})~\cite{Shou2016TemporalAL} or Recurrent Neural Network (\textsc{RNN})~\cite{Pei2016TemporalAM}. More particularly, \textsc{LSTM} architectures are often chosen for motion-aware sequence learning tasks, which is beneficial for activity recognition~\cite{Agethen2019DeepMC,Baccouche2010ActionCI}. Attention models are also harnessed to better integrate spatio-temporal information. Within this category of approaches, recent work uses a 2-models-based attention mechanism~\cite{Peng2019TwoStreamCL}. The first one consists of a spatial-level attention model, which determines the important regions in a frame, and the second one concerns the temporal-level attention, which is used to harness the discriminative frames in a video. Another work proposes a convolutional \textsc{LSTM} network supporting multiple convolutional kernels and layers coupled with an attention-based mechanism~\cite{Agethen2019DeepMC}.
\paragraph{Multimodal approaches.} Using several different and complementary input modalities can improve model performance in both action classification and action spotting tasks, since this leverages more information about the video. Earlier work uses textual sources~\cite{Oskouie2012MultimodalFE}, such as the game logs manually encoded by operators.
Recently, research on multimodal models has used, in addition to the RGB video streams, information about the motion within the video sequences: the optical flow~\cite{Ye2019TwoStreamCN,Yudistira2020CorrelationNS} or even player pose sequences~\cite{Cai2018TemporalHA,Vats2019TwoStreamAR} can be used. For golf and tennis tournaments, a multimodal architecture was proposed that uses the reactions (such as high fives or fist pumps) and expressions of the players (aggressive, smiling, etc.), spectators (crowd cheering) and commentator (tone and word analysis), and even game analytics~\cite{Merler2019AutomaticCO}.
Some works use the audio stream of the video, but in a different manner than we do. The audio stream was used for audio-visual classification of sport types~\cite{Gade2015AudioVisualCO}. Also, acoustic information was used to detect tennis events and track the time boundaries of each tennis point in a match~\cite{Baughman2019DetectionOT}.
\section{Methodology}
Our main objective is to set up multimodal architectures and analyze the benefit of the audio stream on the performance of a model for the soccer action spotting and classification tasks. Action spotting is the task that consists in finding the right temporal anchors of events in a video. The closer a candidate spot is to the target event, the better the spotting is considered to be. Reaching perfect spotting is hence particularly complex, and tolerance intervals are typically used. Regarding classification, we will use a typology of different soccer action classes and evaluate how well our systems distinguish those classes.
We use the SoccerNet dataset~\cite{Giancola_2018_CVPR_Workshops}. It uses a typology of 3 soccer event categories: \textit{goals}, \textit{substitutions} and \textit{cards} (both yellow and red cards).
This section starts by explaining how the video and the audio streams are represented with feature vectors to be used as input of the different models. Next, it presents the baseline approach proposed in \cite{Giancola_2018_CVPR_Workshops}. This approach consists in training models for soccer action classification that also include a background class, so that both the classification and the action spotting tasks can be addressed. Since the baseline \cite{Giancola_2018_CVPR_Workshops} uses only the video stream, we finish this section by describing how we also use the audio stream, with different variants for multimodal fusion.
\subsection{Video and Audio Representations}\label{subsec:front-end}
We want to work with both video and audio streams. The volume of data available for training may however be insufficient for training fully end-to-end machine learning architectures. Hence, we will here reuse existing visual and auditory feature extraction architectures, pre-trained on relevant visual and audio reference datasets. As explained in more details later, we used a ResNet~\cite{DBLP:journals/corr/HeZRS15} trained on the ImageNet~\cite{imagenet} data for the visual stream and, for the audio stream, a \textsc{VGG}~\cite{vgg} trained on spectrogram representations of the AudioSet~\cite{audioset} data. Fine-tuning of these models might be considered in the future, but at this stage, we keep their parameters fixed during our training process. In practice, we hence extracted visual and auditory features before running our experiments.
\paragraph{Video streams.} For the video streams, we used the features extracted by~\cite{Giancola_2018_CVPR_Workshops} using ResNet-152~\cite{DBLP:journals/corr/HeZRS15}, a deep convolutional neural network pretrained on the 1000-category ImageNet~\cite{imagenet} dataset. Particularly, they used the output of the \textit{fc1000} layer, which is a 1,000-way fully-connected layer with a softmax function in the end. This layer outputs a 2,048-dimensional feature vector representation for each frame of the video. To extract these features, each video was unified at $25$ frames per second (fps) and trimmed at the start of the game, since the reference time for the event annotations is the game start. Each frame was resized and cropped to a $224 \times 224$ resolution. A TensorFlow~\cite{DBLP:journals/corr/AbadiABBCCCDDDG16} implementation was used to extract the features of the videos every 0.5 second, hence a 2 frames per second sampling rate. Then, Principal Component Analysis (\textsc{PCA}) was applied on the extracted features to reduce their dimension to 512. This still retains $93.9\%$ of their variance\footnote{Although the reference publication does not mention it, we assume the PCA transformation matrix is estimated on the SoccerNet training data.}.
\paragraph{Audio streams.} For the audio streams, we used \textsc{VGG}~\cite{vgg}, a deep convolutional network architecture. Particularly, we used \textsc{VGG}ish, a \textsc{VGG} architecture pretrained on AudioSet~\cite{audioset}. AudioSet is a benchmark dataset containing 632 audio event categories and 1,789,621 labeled 10-seconds audio segments from YouTube videos. To extract audio features, we used a TensorFlow implementation of a pretrained slim version of \textsc{VGG}ish\footnote{\href{https://github.com/DTaoo/VGGish}{https://github.com/DTaoo/VGGish}}. We extracted the output of the last convolutional layer (\textit{conv4/conv4\_2}) to which we applied a global average pooling to get 512-dimensional feature vector. Since this model uses a chunk of the audio stream spectrogram as input and we want to have the same frame rate as the video features, we trimmed the audio streams at the game start and divided them into chunks of 0.5 second.
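As a purely illustrative sketch (the actual features are produced by the pretrained \textsc{VGG}ish model described above, which we do not re-implement here), the temporal alignment of the audio stream with the 2 fps visual features can be summarized as follows, assuming 16 kHz mono audio:
\begin{verbatim}
# Illustrative sketch only: cut the waveform into 0.5-second chunks matching
# the 2 fps visual features, and reduce a (height, width, 512) convolutional
# feature map to a 512-dimensional vector by global average pooling.
import numpy as np

def chunk_waveform(waveform, sample_rate, chunk_seconds=0.5):
    step = int(sample_rate * chunk_seconds)
    n_chunks = len(waveform) // step          # trailing samples are dropped
    return waveform[:n_chunks * step].reshape(n_chunks, step)

def global_average_pool(feature_map):
    return feature_map.mean(axis=(0, 1))      # -> (512,)

audio = np.random.randn(16000 * 10)           # 10 s of audio at 16 kHz
print(chunk_waveform(audio, sample_rate=16000).shape)          # (20, 8000)
print(global_average_pool(np.random.randn(6, 4, 512)).shape)   # (512,)
\end{verbatim}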
\subsection{SoccerNet baseline approach}\label{subsec:baseline}
The baseline approach proposed in~\cite{Giancola_2018_CVPR_Workshops} is divided into two parts: \textbf{(i)} video chunk classification and \textbf{(ii)} action spotting. In order to compare the performance with and without the audio stream, we followed the same approach and used the best performing models as baselines for our approach.
\paragraph{Video chunk classification.} For the classification task, shallow pooling neural networks are used. Each video is chunked into windows of duration $T$ seconds. Since the features are sampled at 2 frames per second, the input matrix to our systems is, for each chunk to be classified, an aggregation of $W=2T$ feature vectors. Therefore, the dimension of the input is $W \times 512$. Although quite rare, some chunks may have multiple labels when several actions are temporally close-by. Our deep learning architectures will hence use a sigmoid activation function at their last layer. For all the classes, a multi-label binary cross-entropy loss is minimized. The Adam optimizer is used. The learning rate follows a step decay and early stopping is applied, based on the validation set performance. The final evaluation metric used is the mAP across the classes defined on SoccerNet~\cite{Giancola_2018_CVPR_Workshops}.
One of the main challenges in designing the neural network architectures for this task was related to the temporal pooling method to be used. Indeed, the selected feature extraction approaches use fixed image based models, while we want to use chunks consisting of several video frames as input to provide the system with longer term information that should be beneficial (or even necessary) to achieve useful performance.
Seven pooling techniques have been tested by~\cite{Giancola_2018_CVPR_Workshops}: \textbf{(i)} mean pooling and \textbf{(ii)} max pooling along the aggregation axis outputting 512-long features, \textbf{(iii)} a custom \textsc{CNN} with kernel dimension of $512 \times 20$ traversing the temporal dimension. At last, the approaches and implementations proposed by~\cite{DBLP:journals/corr/MiechLS17} such as \textbf{(iv)} Soft\textsc{DBOW}, \textbf{(v)} Net\textsc{FV}, \textbf{(vi)} Net\textsc{VLAD} and \textbf{(vii)} Net\textsc{RVLAD} have also been compared. These last pooling methods use clustering-based aggregation techniques to harness context-gating, which is a learnable non-linear unit aiming to model the interdependencies among the network activations. To predict the labels for the input window, a fully connected layer was then stacked after the pooling layer of each model. Dropout is also used during training in order to improve generalization. We use a keep probability of $60\%$.
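For illustration, a simplified numpy-only sketch of such a shallow pooling classifier is given below; the chunk length, the weight initialization and the example label are arbitrary placeholders, not the values used in our experiments:
\begin{verbatim}
# Illustrative sketch only: a chunk of W frame features is pooled along time,
# passed through one fully-connected layer, and each of the 4 classes (goal,
# card, substitution, background) gets an independent sigmoid score.
import numpy as np

rng = np.random.default_rng(0)
W, F, C = 120, 512, 4                   # e.g. a 60 s chunk at 2 fps
weights = rng.normal(scale=0.01, size=(F, C))
bias = np.zeros(C)

def predict(chunk, pooling="mean"):
    pooled = chunk.max(axis=0) if pooling == "max" else chunk.mean(axis=0)
    logits = pooled @ weights + bias
    return 1.0 / (1.0 + np.exp(-logits))          # sigmoid, multi-label

def bce_loss(probs, labels, eps=1e-7):
    probs = np.clip(probs, eps, 1 - eps)
    return -np.mean(labels * np.log(probs)
                    + (1 - labels) * np.log(1 - probs))

chunk = rng.normal(size=(W, F))                   # one video chunk
labels = np.array([1.0, 0.0, 0.0, 0.0])           # e.g. a "goal" chunk
print(bce_loss(predict(chunk), labels))
\end{verbatim}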
\paragraph{Action spotting.} For the spotting task, \cite{Giancola_2018_CVPR_Workshops} reused their best performing model from the classification task. This model is applied on each testing video. In this case, instead of splitting video into consecutive chunks, a sliding window of size $W$ is used through the videos, with a stride of 1 second.
Therefore, for each position of the sliding window, a $W \times 512$ matrix can be obtained from the content currently covered by this window. This matrix is used as input for the model, which computes a probability vector for classifying the video chunk. We thus get, for each game, a series of predictions consisting of probabilities of belonging to each class (including the background no-action class).
To obtain the final spotting candidates from the predictions series, three methods were used in \cite{Giancola_2018_CVPR_Workshops}: \textbf{(i)} a watershed method using the center time within computed segment proposals; \textbf{(ii)} the time index of the maximum value of the watershed segment as the candidate; and \textbf{(iii)} the local maxima along the video and applying non-maximum-suppression (\textsc{NMS}) within the window. Here, a tolerance $\delta$ is added to the mAP as evaluation metric. Therefore, a candidate spot is defined as positive if it lands within a tolerance of $\delta$ seconds around the true temporal anchor of an event. Another metric is the Average-mAP, which is the area under the mAP curve with $\delta$ ranging from 5 to 60 seconds.
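As an illustration of the third method, a simple greedy variant of one-dimensional \textsc{NMS} applied to a per-second probability series can be sketched as follows (a simplified sketch, not the exact implementation):
\begin{verbatim}
# Illustrative sketch only: keep the strongest candidates of a per-second
# probability series and suppress weaker candidates falling within the
# same window (greedy 1-D non-maximum suppression).
def spot_candidates(probs, window):
    order = sorted(range(len(probs)), key=lambda t: probs[t], reverse=True)
    kept = []
    for t in order:
        if all(abs(t - k) > window for k in kept):
            kept.append(t)
    return sorted(kept)

series = [0.1, 0.2, 0.9, 0.8, 0.1, 0.7, 0.95, 0.2]
print(spot_candidates(series, window=2))    # [2, 6]
\end{verbatim}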
In this paper, we use Net\textsc{VLAD} and Net\textsc{RVLAD} as pooling layers, since they are the best approaches in the chosen baseline.
\subsection{Audio Input and Multimodal Fusion}
We need to define the architectures of our audio-visual models in order to study the influence of the audio stream on performance. We decided to use the baseline architecture described in Section~\ref{subsec:baseline} for both the visual and audio streams, the only difference being the feature extraction front-end (cf. Section~\ref{subsec:front-end}).
Next, we need to determine where the two pipelines should be merged in order to get the best results. Both visual and audio processing pipelines can be run until their last layers, with their outputs then combined by a late fusion mechanism, but earlier fusion points are also investigated. The general structure of our multimodal pipeline is illustrated in Figure \ref{fig:pipeline}. We hence train our models with different \textit{merge points}, illustrated by green circles and arrows on the figure. At a merge point, the audio and video stream representation vectors are concatenated, and the concatenated vector is then processed by a single pipeline consisting of the remaining part of the baseline processing pipeline downstream of the fusion point. We distinguish 5 merge points and 7 methods.
The first two methods are the only ones applied after having trained both models independently. The first method multiplies the probabilities estimated for each class, while the second one averages the logits and applies the sigmoid activation function to the resulting logit vector. The third method applies the same process as method one, but the two parallel models are trained jointly with a loss computed from the output of the common sigmoid function, instead of using pre-trained models. Methods 4 and 5 have their merge point respectively before the fully-connected layer and before the dropout layer of the model. Merging before or after the dropout can lead to different results: if we merge after the dropout, we keep for each training sample the same proportion of activations from both streams, while merging before the dropout does not ensure that the information from both streams is kept fairly. Methods 6 and 7 both merge the representation vectors before the pooling layer. The difference between these two methods lies in the size of the pooling layer output (1,024 for method 6 and 512 for method 7).
Every method except the first two requires training, since there are learnable parameters both in the pooling process and in the fully-connected layers.
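The sketch below illustrates, in the same PyTorch style, the two late fusion methods and a mid-level fusion head in the spirit of method 4; the dropout placement and dimensions reflect our reading of the merge points and should not be taken as the exact baseline code.
\begin{verbatim}
import torch
import torch.nn as nn

def late_fusion_product(video_probs, audio_probs):
    """Method 1: multiply the per-class probabilities of two trained models."""
    return video_probs * audio_probs

def late_fusion_logit_average(video_logits, audio_logits):
    """Method 2: average the logits, then apply the sigmoid."""
    return torch.sigmoid(0.5 * (video_logits + audio_logits))

class MidFusionHead(nn.Module):
    """Method 4 (sketch): apply dropout to each pooled stream descriptor,
    concatenate, and classify with a single fully connected layer."""
    def __init__(self, video_dim, audio_dim, num_classes=4, p_drop=0.4):
        super().__init__()
        self.drop_video = nn.Dropout(p_drop)   # keep probability of 60%
        self.drop_audio = nn.Dropout(p_drop)
        self.fc = nn.Linear(video_dim + audio_dim, num_classes)

    def forward(self, video_desc, audio_desc):  # each: (batch, dim)
        fused = torch.cat([self.drop_video(video_desc),
                           self.drop_audio(audio_desc)], dim=1)
        return self.fc(fused)
\end{verbatim}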
\begin{figure*}
\begin{center}
\includegraphics[width=1.0\linewidth]{figures/pipeline.png}
\end{center}
\caption{Overview of our audio-visual pipeline. Green circles and arrows indicate the candidate merge points between the audio and video streams.}
\label{fig:pipeline}
\end{figure*}
\section{Experiments}
\subsection{SoccerNet dataset}
For our experiments, we use the same dataset as our reference baseline: SoccerNet~\cite{Giancola_2018_CVPR_Workshops}. This dataset contains videos for 500 soccer games from the Big Five European leagues (\textsc{EPL}, La Liga, Ligue 1, Bundesliga and Serie A): 300 as training set, 100 as validation set and 100 as testing set. There are 6,637 events referenced for these games, split into 3 classes: \textit{``goals''}, the instant the ball crosses the goal line to enter the net; \textit{``cards''}, the instant a yellow or a red card is shown by the referee; and \textit{``substitutions''}, the instant a new player enters the field to replace another one. Each of these events is annotated with the exact second at which it occurs in the game. For the classification task, a fourth class was added: \textit{``background''}, which corresponds to the absence of the three events.
\subsection{Video chunk classification}\label{subsec:expclf}
We train models with our 7 fusion methods. In the baseline, the best model uses Net\textsc{VLAD} as pooling layer, with $k=512$ clusters. However, such a large number of clusters incurs a larger computational load, which increases linearly with the value of $k$. Therefore, we first compare our merging methods with a smaller number of clusters: $k=64$. According to~\cite{Giancola_2018_CVPR_Workshops}, the best pooling method with $k = 64$ clusters is Net\textsc{RVLAD}. Therefore, we try our merging methods on models having a $64$-cluster Net\textsc{RVLAD} as pooling layer. Our results are presented in Table~\ref{tab:rvladclassification}. The video baseline result is obtained by executing the code provided by Giancola et al.~\cite{Giancola_2018_CVPR_Workshops}, and the audio baseline uses the same code with the audio stream as input. We also compare the performance for chunk sizes of 60 and 30 seconds.
We observe that a chunk of 60 seconds provides better results. This can be explained by the fact that, with 30-second video chunks, the \textit{``background''} class represents $93\%$ of the training data samples, whereas for a 60-second window, it represents $87\%$. Since there are more samples in the \textit{``background''} class, the 30-second models tend to classify more samples with this label, which reduces performance on the other classes.
Regarding multimodal fusion, we can see that using only the audio stream provides worse results than the video-only model. On the other hand, all our methods combining video and audio streams improve over the performance of the mono-modal systems. The best performance is obtained by the fourth merging method, which corresponds to the merge point located before the last fully connected classification layer.
In Table~\ref{tab:scorelabel}, we compare the mAP for each class for the video baseline, the audio baseline and our best fusion method. We can observe that including audio improves the performance on each category, especially for the \textit{``goals''} event, where the relative reduction of the error rate exceeds 50\%. Moreover, although the audio baseline generally performs worse than the video baseline, this is not the case for the \textit{``goals''} class, where audio alone yields better results than video alone. This corroborates the intuitions presented in Section~\ref{sec:introduction}. Indeed, a scored goal, which clearly leads to a strong emotional reaction from the public as well as the commentators, is easier to detect through the audio stream than the video stream, where it could for instance be confused with shots on target. However, the audio stream does not seem to provide sufficient information to efficiently detect cards, leading to a poor result for this category. Finally, audio carries information about the substitutions. This can likely be explained by the fact that the public may applaud or boo the player that comes in or leaves the field, depending on his status or the quality of his play during the game.
\begin{table}
\caption{Classification metric (mAP) for different merging methods and different video chunk sizes using Net\textsc{RVLAD}, with $k=64$ clusters, as pooling layer.}
\label{tab:rvladclassification}
\begin{center}
\begin{tabular}{c|c|c}
\textbf{Models} & \textbf{$T = 60$ sec.} & \textbf{$T = 30$ sec.} \\ \hline
\textbf{Video baseline \cite{Giancola_2018_CVPR_Workshops}} & 66.0 & 58.7 \\
\textbf{Audio baseline} & 50.6 & 43.7 \\
\textbf{Merging method 1} & 68.4 & 63.7 \\
\textbf{Merging method 2} & 72.6 & 67.3 \\
\textbf{Merging method 3} & 73.4 & 69.3 \\
\textbf{Merging method 4} & \underline{73.7} & 68.8 \\
\textbf{Merging method 5} & 72.8 & 68.7 \\
\textbf{Merging method 6} & 64.1 & 59.6 \\
\textbf{Merging method 7} & 64.2 & 58.1
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption{Comparison of the classification metric (mAP) on each label.}
\label{tab:scorelabel}
\begin{center}
\begin{tabular}{c|c|c|c|}
\textbf{\begin{tabular}[c]{@{}c@{}}Labels \\ \end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Video \\ baseline \cite{Giancola_2018_CVPR_Workshops}\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Audio \\ baseline\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Merge \\ method 4\end{tabular}}\\ \hline
\textit{``background''} & 97.6 & 96.7 & 98.0 \\
\textit{``cards''} & 60.5 & 19.2 & 63.9 \\
\textit{``substitutions''} & 69.8 & 55.1 & 72.6 \\
\textit{``goals''} & 67.7 & 77.3 & 84.5
\end{tabular}
\end{center}
\end{table}
Another interesting observation concerns the difference between the confusion matrices of the 60-second and 30-second models. Table~\ref{tab:confusion} presents these confusion matrices for the model trained with the merge point before the fully connected layer (fourth merging method). If we focus only on the samples classified into one of the three events of interest (\textit{``cards''}, \textit{``substitutions''} and \textit{``goals''}), we can see that the proportion of errors is lower in the 30-second version ($2.68\%$ instead of $4.83\%$). This observation can be explained by the fact that a smaller video chunk size reduces the probability of having multiple different events in the same window. Therefore, it becomes easier to discriminate between the three classes of interest. However, as explained earlier, the overall mAP score is worse due to the higher proportion of \textit{``background''} samples.
\begin{table*}
\caption{Confusion matrix for the model using the fourth merging method, with 60-second and 30-second video chunks.}
\label{tab:confusion}
\begin{center}
\begin{tabular}{clcccc}
\multicolumn{6}{l}{\textbf{60-second video chunks}} \\ \cline{3-6}
\multicolumn{1}{l}{} & \multicolumn{1}{l|}{} & \multicolumn{4}{c|}{\textbf{Predicted labels}} \\ \cline{3-6}
\multicolumn{1}{l}{} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\textit{background}} & \multicolumn{1}{l|}{\textit{cards}} & \multicolumn{1}{l|}{\textit{subs}} & \multicolumn{1}{l|}{\textit{goals}} \\ \hline
\multicolumn{1}{|c|}{\multirow{4}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Groundtruth\\ labels\end{tabular}}}} & \multicolumn{1}{l|}{\textit{background}} & \multicolumn{1}{c|}{7673} & \multicolumn{1}{c|}{95} & \multicolumn{1}{c|}{80} & \multicolumn{1}{c|}{42} \\ \cline{2-6}
\multicolumn{1}{|c|}{} & \multicolumn{1}{l|}{\textit{cards}} & \multicolumn{1}{c|}{178} & \multicolumn{1}{c|}{243} & \multicolumn{1}{c|}{13} & \multicolumn{1}{c|}{3} \\ \cline{2-6}
\multicolumn{1}{|c|}{} & \multicolumn{1}{l|}{\textit{subs}} & \multicolumn{1}{c|}{175} & \multicolumn{1}{c|}{9} & \multicolumn{1}{c|}{310} & \multicolumn{1}{c|}{11} \\ \cline{2-6}
\multicolumn{1}{|c|}{} & \multicolumn{1}{l|}{\textit{goals}} & \multicolumn{1}{c|}{91} & \multicolumn{1}{c|}{1} & \multicolumn{1}{c|}{2} & \multicolumn{1}{c|}{215} \\ \hline
\multicolumn{1}{l}{} & & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} \\
\multicolumn{6}{l}{\textbf{30-second video chunks}} \\ \cline{3-6}
\multicolumn{1}{l}{} & \multicolumn{1}{l|}{} & \multicolumn{4}{c|}{\textbf{Predicted labels}} \\ \cline{3-6}
\multicolumn{1}{l}{} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\textit{background}} & \multicolumn{1}{l|}{\textit{cards}} & \multicolumn{1}{l|}{\textit{subs}} & \multicolumn{1}{l|}{\textit{goals}} \\ \hline
\multicolumn{1}{|c|}{\multirow{4}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Groundtruth\\ labels\end{tabular}}}} & \multicolumn{1}{l|}{\textit{background}} & \multicolumn{1}{c|}{16768} & \multicolumn{1}{c|}{97} & \multicolumn{1}{c|}{116} & \multicolumn{1}{c|}{43} \\ \cline{2-6}
\multicolumn{1}{|c|}{} & \multicolumn{1}{l|}{\textit{cards}} & \multicolumn{1}{c|}{224} & \multicolumn{1}{c|}{212} & \multicolumn{1}{c|}{5} & \multicolumn{1}{c|}{0} \\ \cline{2-6}
\multicolumn{1}{|c|}{} & \multicolumn{1}{l|}{\textit{subs}} & \multicolumn{1}{c|}{217} & \multicolumn{1}{c|}{11} & \multicolumn{1}{c|}{305} & \multicolumn{1}{c|}{4} \\ \cline{2-6}
\multicolumn{1}{|c|}{} & \multicolumn{1}{l|}{\textit{goals}} & \multicolumn{1}{c|}{114} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{0} & \multicolumn{1}{c|}{209} \\ \hline
\end{tabular}
\end{center}
\end{table*}
After finding the best merging method, we used the best baseline model from~\cite{Giancola_2018_CVPR_Workshops}, i.e. with a $512$-cluster Net\textsc{VLAD} as pooling layer, and we trained it three times with different input configurations: \textbf{(i)} only the video stream, \textbf{(ii)} only the audio stream, and \textbf{(iii)} both video and audio streams, using our best merging method. For each of these configurations, we used 60-second and 20-second video chunks.
Table~\ref{tab:vladclassification} presents the mAP score for each of these models. As previously, we observe that using the audio stream alone performs worse than the video stream alone, but that the combination of the two streams provides significantly improved performance. Moreover, training on 60-second chunks performs considerably better than training on 20-second chunks, except for the combination of audio and video, where the difference is not significant.
\begin{table}
\caption{Classification metric (mAP) for models using Net\textsc{VLAD}, with $k=512$ clusters, as pooling layer.}
\label{tab:vladclassification}
\begin{center}
\begin{tabular}{c|c|c}
\textbf{Models} & \textbf{$T = 60$ sec.} & \textbf{$T = 20$ sec.} \\ \hline
\textbf{\begin{tabular}[c]{@{}c@{}}Video-based\\ NetVLAD baseline\end{tabular}} & 67.5 & 56.6 \\ \hline
\textbf{\begin{tabular}[c]{@{}c@{}}Audio-based\\ NetVLAD baseline\end{tabular}} & 46.8 & 35.9 \\ \hline
\textbf{\begin{tabular}[c]{@{}c@{}}Audio + Video\\ NetVLAD\end{tabular}} & 75.2 & 75.0
\end{tabular}
\end{center}
\end{table}
The model using Net\textsc{VLAD} as pooling layer and both video and audio streams is the one registering the best results for the classification task, with a mAP score of $75.2\%$. On average, adding the audio stream as input to the models increases the mAP by $7.43\%$ in absolute terms compared to the video-only models.
\subsection{Action spotting}
Following the methodology proposed by \cite{Giancola_2018_CVPR_Workshops}, the action spotting task, as described in Section~\ref{subsec:baseline}, uses the best trained models from the classification task, and the spotting results are obtained using three method variants: segment center, segment maximum and \textsc{NMS}. For each method, we compute the Average-mAP, i.e. the area under the mAP curve as a function of the tolerance $\delta$ on the precise time instant of the detected event, with $\delta$ ranging from 5 to 60 seconds. In order to make comparisons, we applied this spotting process to the 6 trained models using Net\textsc{VLAD} as pooling layer, and to 3 of the models using Net\textsc{RVLAD}: \textbf{(i)} video-only, \textbf{(ii)} audio-only, and \textbf{(iii)} both video and audio with the merge point before the fully connected layer. Table~\ref{tab:averagemap} presents the Average-mAP for each of them.
Similarly to the classification task, using audio alone is not as good as using video alone, but the combination improves performance. What differs from classification is that smaller video chunks lead to better results, regardless of the method used. Our intuition is that shorter chunks make it possible to distinguish and detect actions that are temporally close to each other. The large gap in Average-mAP between models trained on 20-second windows and those trained on 60-second windows is nevertheless striking. It can be explained by the fact that models trained with 60-second video chunks see their performance decrease when the tolerance $\delta$ becomes lower than $60$ seconds, since they were not trained for this regime. Conversely, a model trained on 20-second video chunks that performs well at $\delta=20$ seconds remains effective for higher values of $\delta$. Figure~\ref{fig:tolerance} illustrates this by showing the mAP as a function of the tolerance $\delta$ for both Net\textsc{VLAD}-based models using audio and video streams. We can observe that both models tend to their best performance when $\delta$ is higher than or equal to the corresponding window size.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\linewidth]{figures/tolerance_curve.png}
\end{center}
\caption{mAP score as a function of tolerance $\delta$ for Net\textsc{VLAD}-based models, trained with 20-second and 60-second video chunks, using both audio and video streams.}
\label{fig:tolerance}
\end{figure}
The best model is the one trained on 20-second video chunks, using Net\textsc{VLAD} as pooling layer and both video and audio streams, yielding an Average-mAP of $56\%$. On average, adding the audio stream as input to the models increases the Average-mAP by $4.19\%$ absolute compared to the video-only models.
\setlength{\tabcolsep}{4pt}
\begin{table*}
\caption{Average-mAP for action spotting.}
\label{tab:averagemap}
\begin{center}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c}
& \multicolumn{3}{c|}{\textbf{Video-only}} & \multicolumn{3}{c|}{\textbf{Audio-only}} & \multicolumn{3}{c}{\textbf{Audio + Video}} \\ \hline
\textbf{Models} & \textbf{Seg. max} & \textbf{Seg. center} & \textbf{NMS} & \textbf{Seg. max} & \textbf{Seg. center} & \textbf{NMS} & \multicolumn{1}{c|}{\textbf{Seg. Max}} & \multicolumn{1}{c|}{\textbf{Seg. center}} & \textbf{NMS} \\ \hline
\textbf{NetRVLAD}
& 30.8\% & 41.9\% & 30.2\% & 21.8\% & 30.3\% & 22.1\% & 34.0\% & 47.6\% & 33.4\% \\ \hline
\textbf{\begin{tabular}[c]{@{}c@{}}60-sec. chunks\\ NetVLAD\end{tabular}}
& 29.6\% & 43.4\% & 29.0\% & 19.9\% & 27.1\% & 19.5\% & 32.3\% & 48.7\% & 31.8\% \\ \hline
\textbf{\begin{tabular}[c]{@{}c@{}}20-sec. chunks\\ NetVLAD\end{tabular}}
& 49.2\% & \underline{50.2\%} & 49.4\% & 30.0\% & \underline{31.0\%} & 30.0\% & 54.0\% & \underline{56.0\%} & 53.6\%
\end{tabular}
\end{center}
\end{table*}
\setlength{\tabcolsep}{6pt}
\subsection{Additional observations}
Figure \ref{fig:learningcurves} presents the evolution of performance on both the training set and the validation set during the training process, for our best model. The green line represents the evolution of the mAP classification score on the training set and the blue line is the evolution of the mAP score on the validation set. The horizontal dotted blue line represents the best mAP score reached on the validation set. The vertical black lines indicate the epochs at which a step decay was applied to the learning rate.
We can observe that the mAP score on the training set quickly reaches a very high value, while performance on the validation set always remains much lower. A generalization gap of about $35\%$ is visible between the performance on the training set and on the validation set. Even our best performing model significantly overfits, possibly as a consequence of the still too small size of the training set. Indeed, despite being one of the best benchmarks for the soccer action spotting and classification challenges, SoccerNet contains annotations for 500 games, which represents only 3,965 annotated events available for training.
This represents one of the current limitations of our study, and strategies to either increase the training set size, reduce over-fitting, or increase the generalization capabilities of our models should represent an important avenue for future research.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\linewidth]{figures/learningcurve.png}
\end{center}
\caption{Evolution of the learning curves, i.e. training and validation mAP score curves, through the epochs for our best model.}
\label{fig:learningcurves}
\end{figure}
\section{Future Work}
In future work, we suggest pursuing the exploration of additional types of input streams, like optical flow, or even language streams such as transcriptions of the commentators' speech.
Furthermore, exploring more elaborate fusion mechanisms could be interesting. In order to improve our current models, one could also harness other feature extraction models than ResNet and VGG.
Another aspect that could be analyzed in more depth is the information carried by the audio stream. The audio in SoccerNet videos contains a mix of the commentators' voices and the sound coming from the stadium, including the field and the crowd. Therefore, we still do not know which of those information sources has the most impact on performance.
To address the issue related to the size of the SoccerNet dataset, increasing the number of training samples is a possible solution. This could rely in part on data augmentation strategies, on annotating additional soccer game videos, or on making use of unsupervised learning techniques.
\section{Conclusion}
In this paper, we studied the influence of the audio stream on the soccer action classification and action spotting tasks, with performance evaluations on the SoccerNet baseline. For both tasks, using only the audio stream provides worse results than using only the video stream, except on the \textit{``goals''} class, where audio significantly exceeds video performance. Furthermore, combining both streams yields better results on every category of actions. Combining audio and video streams improves, on average, the performance of action classification on SoccerNet by $7.43\%$ absolute, and the performance of action spotting by $4.19\%$. We also showed that using smaller video chunk sizes performs worse on classification, but improves the results for the action spotting task.
A more in-depth study of the audio stream could lead to a better understanding of what actually provides information that the visual processing model fails to identify. In particular, separating the voices of commentators from the sound ambiance coming from the stadium could definitely help in this study.
{\small
\bibliographystyle{ieee_fullname}
\section{Conclusions}
\label{sec:conclusions}
We conclude that carefully annotated bounding boxes precisely around an action are not needed for action localization. Instead of training on examples defined by expensive bounding box annotations on every frame, we use proposals for training, yielding similar results. To determine which proposals are most suitable for training, we only require cheap point annotations on the action for a fraction of the frames.
Experimental evaluation on the UCF Sports and UCF 101 datasets shows that:
(i) the use of proposals over directly using the ground truth does not lead to a loss in localization performance,
(ii) action localization using points is comparable to using full box supervision, while being significantly faster to annotate,
(iii) our results are competitive to the current state-of-the-art.
Based on our approach and experimental results we furthermore introduce \emph{Hollywood2Tubes}, a new action localization dataset with point annotations for train videos. The point of this paper is that valuable annotation time is better spent on clicking in more videos than on drawing precise bounding boxes.
\section*{Acknowledgements}
This research is supported by the STW STORY project.
\section{Experimental setup}
\label{sec:experiments}
\subsection{Datasets}
\noindent
We perform our evaluation on two action localization datasets that have bounding box annotations both for training and test videos.
\\\\
\textbf{UCF Sports} consists of 150 videos covering 10 action categories \cite{RodriguezCVPR2008}, such as \emph{Diving}, \emph{Kicking}, and \emph{Skateboarding}. The videos are extracted from sport broadcasts and are trimmed to contain a single action. We employ the train and test data split as suggested in~\cite{lan2011discriminative}.
\\
\textbf{UCF 101} has 101 actions categories \cite{soomro2012ucf101} where 24 categories have spatio-temporal action localization annotations. This subset has 3,204 videos, where each video contains a single action category, but might contain multiple instances of the same action. We use the first split of the train and test sets as suggested in~\cite{soomro2012ucf101} with 2,290 videos for training and 914 videos for testing.
\subsection{Implementation details}
\label{sec:details}
\indent
\textbf{Proposals.} Our proposal mining is agnostic to the underlying proposal algorithm. We have performed experiments using proposals from both APT~\cite{vangemert2015apt} and Tubelets~\cite{jain2014action}. We found APT to perform slightly better and report all results using APT.
\\
\textbf{Features.} For each tube we extract Improved Dense Trajectories and compute HOG, HOF, Traj, MBH features~\cite{wang13}. The combined features are reduced to 128 dimensions through PCA and aggregated into a fixed-size representation using Fisher Vectors~\cite{sanchez2013image}.
We construct a codebook of 128 clusters, resulting in a 54,656-dimensional representation per proposal.
\\
\textbf{Training.} We train the proposal mining optimization for 10 iterations for all our evaluations, similar to Cinbis \emph{et al.}\xspace~\cite{cinbis2014multi}. Following further suggestions by~\cite{cinbis2014multi}, we randomly split the training videos into multiple (3) splits to train and select the instances. While training a classifier for one action, we randomly sample 100 proposals of each video from the other actions as negatives. We set the SVM regularization $\lambda$ to 100.
\\
\textbf{Evaluation.} During testing we apply the classifier to all proposals of a test video and maintain the top proposals per video. To evaluate the action localization performance, we compute the Intersection-over-Union (IoU) between proposal $p$ and the box annotations of the corresponding test example $b$ as: $\text{iou}(p, b) = \frac{1}{| \Gamma |} \sum_{f \in \Gamma} IoU_{p,b}(f)$, where $\Gamma$ is the set of frames where at least one of $p,b$ is present~\cite{jain2014action}. The function $IoU$ states the box overlap for a specified frame.
For IoU threshold $t$, a top selected proposal is deemed a positive detection if $\text{iou}(p, b) \geq t$.
After combining the top proposals from all videos, we compute the Average Precision score using their ranked scores and positive/negative detections. For the comparison to the state-of-the-art on UCF Sports, we additionally report AUC (Area under ROC curve) on the scores and detections.
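For clarity, the evaluation overlap can be computed as in the following Python sketch; the box and tube formats are illustrative choices, not the exact data structures of our implementation.
\begin{verbatim}
def frame_iou(box_a, box_b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xb, yb = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, xb - xa) * max(0.0, yb - ya)
    if inter == 0.0:
        return 0.0
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def tube_iou(proposal, truth):
    """Spatio-temporal overlap: average frame IoU over all frames where at
    least one of the two tubes is present (frames covered by only one tube
    contribute zero overlap)."""
    frames = set(proposal) | set(truth)   # dicts: frame index -> box
    total = sum(frame_iou(proposal[f], truth[f])
                for f in frames if f in proposal and f in truth)
    return total / len(frames)
\end{verbatim}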
\section{Introduction}
This paper is about spatio-temporal localization of actions like \emph{Driving a car}, \emph{Kissing}, and \emph{Hugging} in videos. Starting from a sliding window legacy \cite{TianPartCVPR2013}, the common approach these days is to generate tube-like proposals at test time, encode each of them with a feature embedding and select the most relevant one, \emph{e.g.,}\xspace \cite{jain2014action,yuCVPR2015fap,vangemert2015apt,soomroICCV2015actionLocContextWalk}. All these works, be it sliding windows or tube proposals, assume that a carefully annotated training set with boxes per frame is available a priori. In this paper, we challenge this assumption. We propose a simple algorithm that leverages proposals at \emph{training} time, with a minimum amount of supervision, to speedup action location annotation.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{images/3d/overview-v4.pdf}
\caption{\textbf{Overview of our approach} for a \emph{Swinging} and \emph{Standing up} action. First, the video is annotated cheaply using point-supervision. Then, action proposals are extracted and scored using our overlap measure. Finally, our proposal mining aims to discover the single one proposal that best represents the action, given the provided points.}
\label{fig:qual-method}
\end{figure}
We draw inspiration from related work on weakly-supervised object detection, \emph{e.g.,}\xspace \cite{KimNIPS2009,RussakovskyECCV2012,cinbis2014multi}. The goal is to detect an object and its bounding box at test time given only the object class label at train time and no additional supervision. The common tactic in the literature is to model this as a Multiple Instance Learning (MIL) problem \cite{cinbis2014multi,NguyenICCV2009,andrews2002support} where positive images contain at least one positive object proposal and negative images contain only negative proposals. During each iteration of MIL, a detector is trained and applied on the train set to re-identify the object proposal most likely to enclose the object of interest. Upon convergence, the final detector is applied on the test set. Methods typically vary in their choice of initial proposals and the multiple instance learning optimization.
In the domain of action localization a similar MIL tactic easily extends to action proposals as well but results in poor accuracy as our experiments show. Similar to weakly-supervised object detection, we rely on (action) proposals and MIL, but we include a minimum amount of supervision to retain action localization accuracy competitive with full supervision.
Obvious candidates for the supervision are action class labels and bounding boxes, but other forms of supervision, such as tags and line strokes, are also feasible~\cite{XuCVPR2015}. In~\cite{bearmanArXiv15whatsthepoint}, Bearman \emph{et al.}\xspace show that human-provided points on the image are valuable annotations for semantic segmentation of objects. By inclusion of an objectness prior in their loss function they report a better efficiency/effectiveness trade off compared to image-level annotations and free-from squiggles.
We follow their example in the video domain and leverage point-supervision to aid MIL in finding the best action proposals at training time.
We make three contributions in this work. First, we propose to train action localization classifiers using spatio-temporal proposals as positive examples rather than ground truth tubes. While common in object detection, such an approach is as of yet unconventional in action localization. In fact, we show that using proposals instead of ground truth annotations does not lead to a decrease in action localization accuracy. Second, we introduce an MIL algorithm that is able to mine proposals with a good spatio-temporal fit to actions of interest by including point supervision. It extends the traditional MIL objective with an overlap measure that takes into account the affinity between proposals and points. Finally, with the aid of our proposal mining algorithm, we are able to supplement the complete Hollywood2 dataset by Marsza{\l}ek \emph{et al.}\xspace \cite{marszalek09} with action location annotations, resulting in \emph{Hollywood2Tubes}. We summarize our approach in Figure~\ref{fig:qual-method}. Experiments on Hollywood2Tubes, as well as the more traditional UCF Sports and UCF 101 collections support our claims. Before detailing our pointly-supervised approach we present related work.
\section{Strong action localization using cheap annotations}
We start from the hypothesis that an action localization proposal may substitute the ground truth on a training set without a significant loss of classification accuracy. Proposal algorithms yield hundreds to thousands of proposals per video with the hope that at least one proposal matches the action well~\cite{jain2014action,vangemert2015apt,soomroICCV2015actionLocContextWalk,oneata2014spatio,chencorsoICCV2015actiondetectionMotionClustering,marianICCV2015unsupervisedTube}. The problem thus becomes how to mine the best proposal out of a large set of candidate proposals with minimal supervision effort.
\subsection{Cheap annotations: action class labels and pointly-supervision}
A minimum of supervision effort is an action class label for the whole video. For such global video labels, a traditional approach to mining the best proposal is Multiple Instance Learning~\cite{andrews2002support} (MIL). In the context of action localization, each video is interpreted as a bag and the proposals in each video are interpreted as its instances. The goal of MIL is to train a classifier that can be used for proposal mining by using only the global label.
Next to the global action class label we leverage cheap annotations within each video: for a subset of frames we simply point at the action. We refer to such a set of point annotations as \textit{pointly-supervision}. The supervision allows us to easily exclude those proposals that have no overlap with any annotated point. Nevertheless, there are still many proposals that intersect with at least one point. Thus, points do not uniquely identify a single proposal. In the following we will introduce an overlap measure to associate proposals with points. To perform the proposal mining, we will extend MIL's objective to include this measure.
\subsection{Measuring overlap between points and proposals}
To explain how we obtain our overlap measure, let us first introduce the following notation. For a video $V$ of $N$ frames, an action localization proposal $A=\{BB_i \}_{i=f}^{m}$ consists of connected bounding boxes through video frames $(f,...,m)$ where $1 \le f \le m \le N$. We use $\overbar{BB_{i}}$ to indicate the center of a bounding box $i$. The pointly-supervision $C=\{(x_i,y_i) \}_{i=1}^{K}$ provides, for $K \le N$ sub-sampled video frames, a single annotated point $(x_i,y_i)$ per frame. Our overlap measure outputs a score for each proposal depending on how well the proposal matches the points.
Inspired by a mild center-bias in annotators~\cite{tseng2009quantifying}, we introduce a term $M(\cdot)$ to represent how close the center of a bounding box proposal is to an annotated point, relative to the bounding box size. Since large proposals have a higher likelihood to contain any annotated point we use a regularization term $S(\cdot)$ on the proposal size. The center-bias term $M(\cdot)$ normalizes the distance to the bounding box center by the distance to the furthest bounding box side. A point $(x_i,y_i) \in C$ outside a bounding box $BB_i \in A$ scores 0 and a point on the bounding box center $\overbar{BB_{i}}$ scores 1. The score decreases linearly with the distance to the center for the point. It is averaged over all annotated points $K$:
\begin{equation}
M(A, C) = \frac{1}{K} \sum_{i=1}^{K} \max\Big(0,\, 1 - \frac{|| (x_i,y_i) - \overbar{BB_{K_i}} ||_2}{ \max_{(u,v) \in e(BB_{K_i})} || (u,v) - \overbar{BB_{K_i}} ||_2}\Big),
\label{eq:overlap1}
\end{equation}
where $e(BB_{K_i})$ denotes the box edges of box $BB_{K_i}$.
We furthermore add a regularization on the size of the proposals.
The idea behind the regularization is that small spatial proposals can occur anywhere. Large proposals, however, are obstructed by the edges of the video. This biases their middle-point around the center of the video, where the action often happens.
The size regularization term $S(\cdot)$ addresses this bias by penalizing proposals with large bounding boxes $|BB_{i}| \in A$, compared to the size of a video frame $|F_i| \in V$,
\begin{equation}
S(A, V) = \big( \frac{ \sum_{i=f}^m |BB_{i}| }{\sum_{j=1}^N |F_j|} \big) ^{2}.
\end{equation}
Using the center-bias term $M(\cdot)$ regularized by $S(\cdot)$, our overlap measure $O(\cdot)$ is defined as
\begin{equation}
O(A, C, V) = M(A, C) - S(A, V).
\label{eq:overlappoint}
\end{equation}
Recall that $A$ are the proposals, $C$ captures the pointly-supervision and $V$ the video. We use $O(\cdot)$ in an iterative proposal mining algorithm over all annotated videos in search for the best proposals.
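A minimal Python sketch of this overlap measure, assuming boxes are given as $(x_1, y_1, x_2, y_2)$ corner coordinates, reads as follows; it mirrors the definitions of $M(\cdot)$, $S(\cdot)$ and $O(\cdot)$ above but is not the exact implementation.
\begin{verbatim}
import numpy as np

def overlap_measure(proposal, points, frame_area, num_frames):
    """Computes O(A, C, V) = M(A, C) - S(A, V) as defined above.

    proposal: dict mapping frame index -> box (x1, y1, x2, y2)
    points:   dict mapping frame index -> annotated point (x, y)
    """
    # Centre-bias term M: closeness of each point to the box centre,
    # normalised by the distance from the centre to the furthest edge point.
    scores = []
    for f, (px, py) in points.items():
        if f not in proposal:
            scores.append(0.0)
            continue
        x1, y1, x2, y2 = proposal[f]
        if not (x1 <= px <= x2 and y1 <= py <= y2):
            scores.append(0.0)          # points outside the box score 0
            continue
        cx, cy = 0.5 * (x1 + x2), 0.5 * (y1 + y2)
        dist = np.hypot(px - cx, py - cy)
        max_dist = np.hypot(0.5 * (x2 - x1), 0.5 * (y2 - y1))  # centre to corner
        scores.append(max(0.0, 1.0 - dist / max_dist))
    M = float(np.mean(scores))

    # Size regularization S: squared ratio of proposal volume to video volume.
    box_area = sum((x2 - x1) * (y2 - y1) for x1, y1, x2, y2 in proposal.values())
    S = (box_area / (frame_area * num_frames)) ** 2

    return M - S
\end{verbatim}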
\subsection{Mining proposals overlapping with points}
For proposal mining, we start from a set of action videos $\{ \textbf{x}_{i}, t_{i}, y_{i}, C_i\}_{i=1}^{N}$, where $\textbf{x}_{i} \in \mathbb{R}^{A_{i} \times D}$ is the $D$-dimensional feature representation of the $A_{i}$ proposals in video $i$. Variable $t_{i} = \{ \{ BB_{j} \}_{j=f}^{m} \}^{A_{i}}$ denotes the collection of tubes for the $A_{i}$ proposals. Cheap annotations consist of the class label $y_i$ and the points $C_i$.
For proposal mining we insert our overlap measure $O(\cdot)$ in a Multiple Instance Learning scheme to train a classification model that can learn the difference between good and bad proposals. Guided by $O(\cdot)$, the classifier becomes increasingly aware of which proposals are good representatives of an action. We start from a standard MIL-SVM~\cite{cinbis2014multi,andrews2002support} and adapt its objective with the mining score $P(\cdot)$ of each proposal, which incorporates our function $O(\cdot)$ as:
\begin{equation}
\begin{split}
& \min_{\mathbf{w},b,\xi} \frac{1}{2} ||\mathbf{w}||^{2} + \lambda \sum_{i} \xi_{i},\\
\text{s.t.} \quad & \forall_{i} : y_{i} \cdot ( \mathbf{w} \cdot \argmax_{\mathbf{z} \in x_{i}} P(\mathbf{z} | \mathbf{w}, b, t_{i}, C_{i}, V_i) + b) \geq 1 - \xi_{i},\\
&\forall_{i} : \xi_i \geq 0,
\end{split}
\label{eq:milsvm}
\end{equation}
where $(\mathbf{w},b)$ denote the classifier parameters, $\xi_{i}$ denotes the slack variable and $\lambda$ denotes the regularization parameter. The proposal with the highest mining score per video is used to train the classifier.
The objective of Equation~\ref{eq:milsvm} is non-convex due to the joint minimization over the classifier parameters $(\mathbf{w}, b)$ and the maximization over the mined proposals $P(\cdot)$. Therefore, we perform iterative block coordinate descent by alternating between clamping one and optimizing the other. For fixed classifier parameters $(\mathbf{w}, b)$, we mine the proposal with the highest Maximum a Posteriori estimate with the classifier as the likelihood and $O(\cdot)$ as the prior:
\begin{eqnarray}
P(\mathbf{z} | \mathbf{w}, b, t_{i}, C_{i}, V_i) & \propto & \left( \langle \mathbf{w}, \mathbf{z} \rangle + b \right) \cdot O(t_i, C_{i}, V_i).
\label{eq:mining-map}
\end{eqnarray}
After a proposal mining step, we fix $P(\cdot)$ and train the classifier parameters $(\mathbf{w}, b)$ with stochastic gradient descent on the mined proposals. We alternate the mining and classifier optimizations for a fixed amount of iterations. After the iterative optimization, we train a final SVM on the best mined proposals and use that classifier for action localization.
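The alternating optimization can be sketched in Python as follows; a LinearSVC stands in for the SGD-trained SVM of our implementation, negative videos are simplified to a single proposal each (rather than the 100 randomly sampled proposals from other actions), and the regularization value is a placeholder.
\begin{verbatim}
import numpy as np
from sklearn.svm import LinearSVC

def mine_and_train(videos, labels, overlap_scores, iterations=10, C=0.01):
    """Alternates between training a classifier on the currently mined
    proposals and re-mining the proposal with the highest posterior.

    videos:         list of (n_proposals_i, D) feature arrays, one per video
    labels:         array of +1 / -1 video-level labels for the current action
    overlap_scores: list of (n_proposals_i,) arrays with O(.) per proposal
    """
    # Initialise each video with its highest-scoring proposal under O(.).
    mined = [int(np.argmax(o)) for o in overlap_scores]
    clf = None
    for _ in range(iterations):
        # 1) Train the classifier on the mined proposals (P(.) clamped).
        X = np.stack([videos[i][mined[i]] for i in range(len(videos))])
        clf = LinearSVC(C=C).fit(X, labels)
        # 2) Re-mine: pick the proposal maximising classifier score times O(.).
        for i, feats in enumerate(videos):
            posterior = clf.decision_function(feats) * overlap_scores[i]
            mined[i] = int(np.argmax(posterior))
    return clf, mined
\end{verbatim}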
\section{Related work}
\label{sec:relwork}
Action localization is a difficult problem and annotations are avidly used. Single image bounding box annotations allow training a part-based detector~\cite{TianPartCVPR2013,lan2011discriminative} or a per-frame detector where results are aggregated over time~\cite{gkioxari2015finding,weinzaepfelICCV2015learningToTrack}. However, since such detectors first have to be trained themselves, they cannot be used when no bounding box annotations are available. Independent training data can be brought in to automatically detect individual persons for action localization~\cite{yuCVPR2015fap,lucorsoCVPR2015humanAction,wang2014video}.
A person detector, however, will fail to localize contextual actions such as \textit{Driving} or interactions such as \textit{Shaking hands} or \textit{Kissing}. Recent work using unsupervised action proposals based on supervoxels~\cite{jain2014action,soomroICCV2015actionLocContextWalk,oneata2014spatio} or on trajectory clustering
\cite{vangemert2015apt,chencorsoICCV2015actiondetectionMotionClustering,marianICCV2015unsupervisedTube}, have shown good results for action localization. In this paper we rely on action proposals to aid annotation. Proposals give excellent recall without supervision and are thus well-suited for an unlabeled train set.
Large annotated datasets are slowly becoming available in action localization. Open annotations benefit the community, paving the way for new data-driven action localization methods. UCF-Sports~\cite{soomro2014actionInSports}, HOHA~\cite{raptis2012discovering} and MSR-II~\cite{cao2010crossDatasetActionDetectionMSRIIset} have up to a few hundred actions, while UCF101~\cite{soomro2012ucf101}, Penn-Action~\cite{zhangICCV13actemes}, and J-HMDB~\cite{jhuangICCV2013towardsUnderstanding} have 1--3 thousand action clips and 3 to 24 action classes. The problem of scaling up to larger sets is not due to sheer dataset size: there are millions of action videos with hundreds of action classes available~\cite{soomro2012ucf101,gorban2015thumos,karpathy2014largescalevidSports1M,kuehne2011hmdb}. The problem lies with the spatio-temporal annotation effort.
In this paper we show how to ease this annotation effort, exemplified by releasing spatio-temporal annotations for all Hollywood2 \cite{marszalek09} videos.
Several software tools are developed to lighten the annotation burden. The gain can come from a well-designed user interface to annotate videos with bounding boxes~\cite{mihalcik2003ViPER,vondrickIJCV2013crowdsourced} or even polygons~\cite{yuenICCV09labelmeVideo}. We move away from such complex annotations and only require a point. Such point annotations can readily be included in existing annotation tools which would further reduce effort. Other algorithms can reduce annotation effort by intelligently selecting which example to label~\cite{settles2010active}. Active learning~\cite{vondrick2011video} or trained detectors~\cite{biancoCVIU15interactiveAnnotation} can assist the human annotator. The disadvantage of such methods is the bias towards the used recognition method. We do not bias any algorithm to decide where and what to annotate: by only setting points we can quickly annotate all videos.
Weakly supervised methods predict more information than was annotated. Examples from static images include predicting a bounding box while having only class labels \cite{cinbis2014multi,bilenCVPR15weakObjDetConvexClust,OquabCVPR15isObjLocForFree} or even no labels at all~\cite{choCVPR15unsupervised}. In the video domain, the temporal dimension offers more annotation variation. Semi-supervised learning for video object detection is done with a few bounding boxes~\cite{aliCVPR11flowboost,MisraCVPR15semiSupObjeDetfromVid}, a few global frame labels~\cite{wangECCV14videoObject}, only video class labels~\cite{sivaECCV12defenceNegativeMining}, or no labels at all~\cite{kwakICCV15unsupervisedObjectInVid}. For action localization, only the video label is used by~\cite{mosabbeb2014multi,siva2011weakly}, whereas \cite{jain2015objects2action} use no labels. As our experiments show, using no label or just class labels performs well below fully supervised results. Thus, we propose a middle ground: pointing at the action. Compared to annotating full bounding boxes this greatly reduces annotation time while retaining accuracy.
\section{Results}
\begin{figure}[t]
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{images/exp/exp1/exp1-sports-thresholds-bar.pdf}
\caption{UCF Sports.}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{images/exp/exp1/exp1-101-thresholds-bar.pdf}
\caption{UCF 101.}
\end{subfigure}
\caption{\textbf{Training action localization classifiers with proposals} vs ground truth tubes on (a) UCF Sports and (b) UCF 101. Across both datasets and thresholds, the best possible proposal yields similar results to using the ground truth. Also note how well our mined proposal matches the ground truth and best possible proposal we could have selected.}
\label{fig:exp1}
\end{figure}
\subsection{Training without ground truth tubes}
First we evaluate our starting hypothesis of replacing ground truth tubes with proposals for training action localization classifiers. We compare three approaches: 1) train on ground truth annotated bounding boxes; 2) train on the proposal with the highest IoU overlap for each video; 3) train on the proposal mined based on point annotations and our proposal mining. For the points on both datasets, we take the center of each annotated bounding box.
\textbf{Training with the best proposal.} Figure~\ref{fig:exp1} shows that the localization results for the best proposal are similar to the ground truth tube for both datasets and across all IoU overlap thresholds as defined in Section~\ref{sec:details}.
This result shows that proposals are sufficient to train classifiers for action localization. The result is somewhat surprising given that the best proposals used to train the classifiers have a less than perfect fit with the ground truth action. We computed the fit with the ground truth, and on average the IoU score of the best proposals (the ABO score) is 0.642 on UCF Sports and 0.400 on UCF 101. The best proposals are thus quite loosely aligned with the ground truth. Yet, training on such non-perfect proposals is not detrimental to results. This means that a perfect fit with the action is not a necessity during training. An explanation for this result is that the action classifier is now trained on the same type of noisy samples that it will encounter at test time. This better aligns the training with the testing, resulting in slightly improved accuracy.
\textbf{Training with proposal mining from points.} Figure~\ref{fig:exp1} furthermore shows the localization results from training without bounding box annotations using only point annotations. On both data sets, results are competitive to the ground truth tubes across all thresholds. This result shows that when training on proposals, carefully annotated box annotations are not required.
Our proposal mining is able to discover the best proposals from cheap point annotations. The discrepancy between the ground truth and our mined proposal for training is shown in Figure~\ref{fig:exp1-qual} for three videos. For some videos, \emph{e.g.,}\xspace Figure~\ref{fig:exp1-qual-1}, the ground truth and the proposal have a high similarity. This does, however, not hold for all videos, \emph{e.g.,}\xspace Figures~\ref{fig:exp1-qual-2}, where our mined proposal focuses solely on the lifter (\emph{Lifting}), and~\ref{fig:exp1-qual-3}, where our mined proposal includes the horse (\emph{Horse riding}).
\textbf{Analysis.} On UCF 101, where actions are not temporally trimmed, we observe an average temporal overlap of 0.74. The spatial overlap in frames where proposals and ground truth match is 0.38. This result indicates that we are better capable of detecting actions in the temporal domain than the spatial domain. On average, top ranked proposals during testing are 2.67 times larger than their corresponding ground truth. Despite a preference for larger proposals, our results are comparable to the fully supervised method trained on expensive ground truth bounding box tubes. Finally, we observe that most false positives are proposals from positive test videos with an overlap score below the specified threshold. On average, 26.7\% of the top 10 proposals on UCF 101 are proposals below the overlap threshold of 0.2. Regarding false negatives, on UCF 101 at a 0.2 overlap threshold, 37.2\% of the actions are not among the top selected proposals. This is primarily because the proposal algorithm does not provide a single proposal with enough overlap.
From this experiment we conclude that training directly on proposals does not lead to a reduction in action localization accuracy. Furthermore, using cheap point annotations with our proposal mining yields results competitive to using carefully annotated bounding box annotations.
\begin{figure}[t]
\centering
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\textwidth]{images/3d/exp1/walking.png}
\caption{\emph{Walking.}}
\label{fig:exp1-qual-1}
\end{subfigure}
\hspace{0.25cm}
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\textwidth]{images/3d/exp1/lifting.png}
\caption{\emph{Lifting.}}
\label{fig:exp1-qual-2}
\end{subfigure}
\hspace{0.25cm}
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\textwidth]{images/3d/exp1/riding-horse.png}
\caption{\emph{Riding horse.}}
\label{fig:exp1-qual-3}
\end{subfigure}
\caption{\textbf{Training video showing our mined proposal} (blue) and the ground truth (red). (a) Mined proposals might have a high similarity to the ground truth. In (b) our mining focuses solely on the person lifting, while in (c) our mining has learned to include part of the horse. An imperfect fit with the ground truth does not imply a bad proposal.}
\label{fig:exp1-qual}
\end{figure}
\subsection{Must go faster: lowering the annotation frame-rate}
\begin{figure}[t]
\centering
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=0.479\textwidth]{images/exp/exp2/exp2-sports-02.pdf}
\includegraphics[width=0.479\textwidth]{images/exp/exp2/exp2-sports-05.pdf}
\caption{UCF Sports.}
\label{fig:exp2-sports}
\end{subfigure}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=0.479\textwidth]{images/exp/exp2/exp2-101-02.pdf}
\includegraphics[width=0.479\textwidth]{images/exp/exp2/exp2-101-05.pdf}
\caption{UCF 101.}
\label{fig:exp2-101}
\end{subfigure}
\caption{\textbf{The annotation speedup} versus mean Average Precision scores on (a) UCF Sports and (b) UCF 101 for two overlap thresholds using both box and point annotations. The annotation frame-rates are indicated on the lines. Using points remains competitive to boxes with a 10x to 80x annotation speed-up.}
\label{fig:exp2}
\end{figure}
The annotation effort can be significantly reduced by annotating less frames. Here we investigate how a higher annotation frame-rate influences the trade-off between annotation speed-up versus classification performance. We compare higher annotation frame-rates for points and ground-truth bounding boxes.
\textbf{Setup.} For measuring annotation time we randomly selected 100 videos from the UCF Sports and UCF 101 datasets separately and performed the annotations. We manually annotated boxes and points for all evaluated frame-rates $\{1,2,5,10,...\}$. We obtain the points by simply reducing a bounding box annotation to its center. We report the speed-up in annotation time compared to drawing a bounding box on every frame.
Classification results are given for two common IoU overlap thresholds on the test set, namely 0.2 and 0.5.
\textbf{Results.} In Figure~\ref{fig:exp2} we show the localization performance as a function of the annotation speed-up for UCF Sports and UCF 101. Note that when annotating all frames, a point is roughly 10-15 times faster to annotate than a box. The reduction in relative speed-up at higher frame-rates is due to the constant time spent on determining the action label of each video. When analyzing classification performance we note it is not required to annotate all frames. Although the performance generally decreases as fewer frames are annotated, using a frame rate of 10 (\emph{i.e.,}\xspace annotating 10\% of the frames) is generally sufficient for retaining localization performance. We can get competitive classification scores with an annotation speedup of 45 times or more.
The results of Figure~\ref{fig:exp2} show the effectiveness of our proposal mining after the iterative optimization. In Figure~\ref{fig:exp2-qual}, we provide three qualitative training examples, highlighting the mining during the iterations. We show two successful examples, where mining improves the quality of the top proposal, and a failure case, where the proposal mining reverts back to the initially mined proposal.
\begin{figure}[t]
\centering
\begin{subfigure}{0.875\textwidth}
\centering
\includegraphics[width=\textwidth]{images/3d/swing-golf-004/qualitative-good.pdf}
\caption{\emph{Swinging Golf.}}
\end{subfigure}
\vspace{0.05cm}\\
\begin{subfigure}{0.875\textwidth}
\centering
\includegraphics[width=\textwidth]{images/3d/qualitative-good-2.pdf}
\caption{\emph{Running.}}
\end{subfigure}
\vspace{0.05cm}\\
\begin{subfigure}{0.875\textwidth}
\centering
\includegraphics[width=\textwidth]{images/3d/skateboarding-010/qualitative-bad.pdf}
\caption{\emph{Skateboarding.}}
\end{subfigure}
\caption{\textbf{Qualitative examples} of the iterative proposal mining (blue) during training, guided by points (red) on UCF Sports. (a) and (b): the final best proposals have a significantly improved overlap (from 0.194 to 0.627 and from 0.401 to 0.526 IoU). (c): the final best proposal is the same as the initial best proposal, although halfway through the iterations, a better proposal was mined.}
\label{fig:exp2-qual}
\end{figure}
Based on this experiment, we conclude that points are faster to annotate, while they retain localization performance. We recommend that at least 10\% of the frames are annotated with a point to mine the best proposals during training. Doing so results in a 45 times or more annotation time speed-up.
\subsection{Hollywood2Tubes: Action localization for Hollywood2}
Based on the results from the first two experiments, we are able to supplement the complete Hollywood2 dataset by Marsza{\l}ek \emph{et al.}\xspace \cite{marszalek09} with action location annotations, resulting in \emph{Hollywood2Tubes}. The dataset consists of 12 actions, such as \emph{Answer a Phone}, \emph{Driving a Car}, and \emph{Sitting up/down}. In total, there are 823 train videos and 884 test videos, where each video contains at least one action. Each video can furthermore have multiple instances of the same action. Following the results of Experiment 2 we have annotated a point on each action instance for every 10 frames per training video. In total, there are 1,026 action instances in the training set; 29,802 frames have been considered and 16,411 points have been annotated. For the test videos, we are still required to annotate bounding boxes to perform the evaluation. We annotate every 10 frames with a bounding box. On both UCF Sports and UCF 101, using 1 in 10 frames yields practically the same IoU score on the proposals. In total, 31,295 frames have been considered, resulting in 15,835 annotated boxes. The annotations, proposals, and localization results are available at \texttt{\url{http://tinyurl.com/hollywood2tubes}}.
\\\\
\textbf{Results.} Following the experiments on UCF Sports and UCF 101, we apply proposals~\cite{vangemert2015apt} on the videos of the Hollywood2 dataset. In Figure~\ref{fig:hollywood2-recall}, we report the action localization test recalls based on our annotation efforts. Overall, a MABO of 0.47 is achieved.
The recall scores are lowest for actions with a small temporal span, such as \emph{Shaking hands} and \emph{Answer a Phone}. The recall scores are highest for actions such as \emph{Hugging a person} and \emph{Driving a Car}. This is primarily because these actions almost completely fill the frames in the videos and have a long temporal span.
\begin{figure}[t]
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{images/hollywood2/recall-hollywood2-apt-t200.pdf}
\caption{Recalls (MABO: 0.47).}
\label{fig:hollywood2-recall}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{images/hollywood2/map-hollywood2-apt-t200.pdf}
\caption{Average Precisions.}
\label{fig:hollywood2-map}
\end{subfigure}
\caption{\textbf{Hollywood2Tubes}: Localization results for Hollywood2 actions across all overlap thresholds.
The discrepancy between the recall and Average Precision indicates the complexity of the \emph{Hollywood2Tubes} dataset for action localization.}
\label{fig:hollywood2-res}
\end{figure}
In Figure~\ref{fig:hollywood2-map}, we show the Average Precision scores using our proposal mining with point overlap scores. We observe that a high recall for an action does not necessarily yield a high Average Precision score. For example, the action \emph{Sitting up} yields an above average recall curve, but yields the second lowest Average Precision curve. The reverse holds for the action \emph{Fighting a Person}, which is a top performer in Average Precision. These results provide insight into the complexity of jointly recognizing and localizing the individual actions of \emph{Hollywood2Tubes}. The results of Figure~\ref{fig:hollywood2-res} shows that there is a lot of room for improvement.
In Figure~\ref{fig:hardcases}, we highlight difficult cases for action localization that are not present in current localization datasets, adding to the complexity of the dataset. In the Supplementary Materials, we outline additional difficult cases, such as cinematographic effects and switching between cameras within the same scene.
\begin{figure}[t]
\centering
\begin{subfigure}{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{images/hollywood2/hardcase-1-dyadic.pdf}
\caption{Interactions.}
\label{fig:hardcase-1}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{images/hollywood2/hardcase-2-context.pdf}
\caption{Context.}
\label{fig:hardcase-2}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{images/hollywood2/hardcase-3-multilabel.pdf}
\caption{Co-occurrence.}
\label{fig:hardcase-3}
\end{subfigure}
\caption{\textbf{Hard scenarios for action localization} using Hollywood2Tubes, not present in current localization challenges. Highlighted are actions involving two or more people, actions partially defined by context, and co-occurring actions within the same video.}
\label{fig:hardcases}
\end{figure}
\subsection{Comparison to the state-of-the-art}
In the fourth experiment, we compare our results using the point annotations to the current state-of-the-art on action localization using box annotations on the UCF Sports, UCF 101, and Hollywood2Tubes datasets.
In Table~\ref{tab:sota}, we provide a comparison to related work on all datasets.
For the UCF 101 and Hollywood2Tubes datasets, we report results with the mean Average Precision. For UCF Sports, we report results with the Area Under the Curve (AUC) score, as the AUC score is the most used evaluation score on the dataset. All reported scores are for an overlap threshold of 0.2.
We furthermore compare our results to two baselines using other forms of cheap annotations. The first baseline is the method of Jain \emph{et al.}\xspace~\cite{jain2015objects2action}, which performs zero-shot localization, \emph{i.e.,}\xspace no annotation of the action itself is used, only annotations from other actions. The second baseline is the approach of Cinbis \emph{et al.}\xspace~\cite{cinbis2014multi} using global labels, applied to actions.
\textbf{UCF Sports.} For UCF Sports, we observe that our AUC score is competitive with the current state-of-the-art using full box supervision. Our AUC score of 0.545 is, similar to Experiments 1 and 2, nearly identical to the APT score (0.546)~\cite{vangemert2015apt}. The score is furthermore close to the current state-of-the-art score of 0.559~\cite{gkioxari2015finding,weinzaepfelICCV2015learningToTrack}. The AUC scores for the two baselines without box supervision cannot compete with ours. This result shows that points provide a rich enough source of annotations to be exploited by our proposal mining.
\textbf{UCF 101.} For UCF 101, we again observe similar performance to APT~\cite{vangemert2015apt} and an improvement over the baseline annotation method. The method of Weinzaepfel \emph{et al.}\xspace~\cite{weinzaepfelICCV2015learningToTrack} performs better on this dataset. We attribute this to their strong proposals, which are not unsupervised and require additional annotations.
\textbf{Hollywood2Tubes.} For Hollywood2Tubes, we note that approaches using full box supervision can not be applied, due to the lack of box annotations on the training videos. We can still perform our approach and the baseline method of Cinbis \emph{et al.}\xspace~\cite{cinbis2014multi}. First, observe that the mean Average Precision scores on this dataset are lower than on UCF Sports and UCF 101, highlighting the complexity of the dataset. Second, we observe that the baseline approach using global video labels is outperformed by our approach using points, indicating that points provide a richer source of information for proposal mining than the baselines.
From this experiment, we conclude that our proposal mining using point annotations provides a profitable trade-off between annotation effort and performance for action localization.
\begin{table}[t]
\centering
\scalebox{0.95}{
\begin{tabular}{llrrr}
\toprule
\textbf{Method} & \textbf{Supervision} & \hspace{0.05cm} \textbf{UCF Sports} & \hspace{0.05cm} \textbf{UCF 101} & \hspace{0.05cm} \textbf{Hollywood2Tubes}\\
& & AUC & mAP & mAP\\
\hline
Lan \emph{et al.}\xspace~\cite{lan2011discriminative} & box & 0.380 & - & -\\
Tian \emph{et al.}\xspace~\cite{TianPartCVPR2013} & box & 0.420 & - & -\\
Wang \emph{et al.}\xspace~\cite{wang2014video} & box & 0.470 & - & -\\
Jain \emph{et al.}\xspace~\cite{jain2014action} & box & 0.489 & - & -\\
Chen \emph{et al.}\xspace~\cite{chencorsoICCV2015actiondetectionMotionClustering} & box & 0.528 & - & -\\
van Gemert \emph{et al.}\xspace~\cite{vangemert2015apt} & box & 0.546 & 0.345 & -\\
Soomro \emph{et al.}\xspace~\cite{soomroICCV2015actionLocContextWalk} & box & 0.550 & - & -\\
Gkioxari \emph{et al.}\xspace~\cite{gkioxari2015finding} & box & 0.559 & - & -\\
Weinzaepfel \emph{et al.}\xspace~\cite{weinzaepfelICCV2015learningToTrack} & box & 0.559 & 0.468 & -\\
\hline
Jain \emph{et al.}\xspace~\cite{jain2015objects2action} & zero-shot & 0.232 & - & -\\
Cinbis \emph{et al.}\xspace~\cite{cinbis2014multi}$^{\star}$ & video label & 0.278 & 0.136 & 0.009\\
This work & points & 0.545 & 0.348 & 0.143\\
\bottomrule
\end{tabular}}
\caption{\textbf{State-of-the-art localization results} on the UCF Sports, UCF 101, and Hollywood2Tubes for an overlap threshold of 0.2. Here $^{\star}$ indicates that we ran the approach of Cinbis \emph{et al.}\xspace~\cite{cinbis2014multi}, intended for images, on videos. Our approach using point annotations provides a profitable trade-off between annotation effort and performance for action localization.}
\label{tab:sota}
\end{table}
\section*{Supplementary materials}
The supplementary materials for the ECCV paper ``Spot On: Action Localization from Pointly-Supervised Proposals'' contain the following elements regarding \emph{Hollywood2Tubes}:
\begin{itemize}
\item The annotation protocol for the dataset.
\item Annotation statistics for the train and test sets.
\item Visualization of box annotations for each action.
\end{itemize}
\section*{Annotation protocol}
Below, we outline how each action is specifically annotated using a bounding box. The protocol is the same for the point annotations, but only the center of the box is annotated, rather than the complete box.
\begin{itemize}
\item \textbf{AnswerPhone:} A box is drawn around both the head of the person answering the phone and the hand holding the phone (including the phone itself), from the moment the phone is picked up.
\item \textbf{DriveCar:} A box is drawn around the person in the driver's seat, including the upper part of the steering wheel. In the case of a video clip of a driving car in the distance, rather than a close-up of the people in the car, the whole car is annotated, as the driver can hardly be distinguished.
\item \textbf{Eat:} A single box is drawn around the union of the people who are jointly eating.
\item \textbf{FightPerson:} A box is drawn around both people fighting for the duration of the fight. If only a single person is visible, no annotation is made. In case of a chaotic brawl with more than two people, a single box is drawn around the union of the fight.
\item \textbf{GotOutCar:} A box is drawn around the person, starting from the moment that the first body part exits the car until the person is standing completely outside the car, beyond the car door.
\item \textbf{HandShake:} A box is drawn around the complete arms (the area between the union of the shoulders, elbows, and hands) of the people shaking hands.
\item \textbf{HugPerson:} A box is drawn around the heads and upper torso (until the waist, if visible) of both hugging people.
\item \textbf{Kiss:} A box is drawn around the heads of both kissing people.
\item \textbf{Run:} A box is drawn around the running person.
\item \textbf{SitDown:} A box is drawn around the complete person from the moment the person starts moving down until the person is completely seated at rest.
\item \textbf{SitUp:} A box is drawn around the complete person from the moment the person starts to move upwards from a laid-down position until the person no longer moves upwards.
\item \textbf{StandUp:} The reverse of SitDown.
\end{itemize}
\begin{figure}[t]
\centering
\begin{subfigure}{0.45\textwidth}
\includegraphics[width=\textwidth]{images/suppl/point-hollywood2-train.png}
\caption{Points (train).}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\includegraphics[width=\textwidth]{images/suppl/box-hollywood2-test.png}
\caption{Boxes (test).}
\end{subfigure}
\caption{Annotation aggregations for the point and box annotations on \emph{Hollywood2Tubes}. The annotations are overall center-oriented, but we do note a bias towards the rule-of-thirds principle, given the higher number of annotations at $\frac{2}{3}$ of the frame width.}
\label{fig:stats}
\end{figure}
\begin{table}[t]
\centering
\begin{tabular}{l r r}
\toprule
& \hspace{1cm} Training set & \hspace{1cm} Test set\\
\midrule
Number of videos & 823 & 884\\
Number of action instances & 1,026 & 1,086\\
Numbers of frames evaluated & 29,802 & 31,295\\
Number of annotations & 16,411 & 15,835\\
\bottomrule
\end{tabular}
\caption{Annotation statistics for \emph{Hollywood2Tubes}. The large difference between the number of frames evaluated and the number of annotations is because the actions in Hollywood2 are not trimmed.}
\label{tab:stats-all}
\end{table}
\section*{Annotation statistics}
In Figure~\ref{fig:stats}, we show the aggregated point annotations (training set) and box annotations (test set). The aggregation shows that the localizations are center-oriented. The heatmap for the box annotations does show the rule-of-thirds principle, given the higher number of annotations at $\frac{2}{3}$ of the frame width.
In Table~\ref{tab:stats-all}, we show a number of statistics on the annotations performed on the dataset.
\section*{Annotation examples}
In Figure~\ref{fig:h2t-examples} we show an example frame of each of the 12 actions, showing the diversity and complexity of the videos for action localization.
\begin{figure}[h]
\centering
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\textwidth]{images/suppl/h2t-answerphone-00638.pdf}
\caption{Answer Phone.}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\textwidth]{images/suppl/h2t-drivecar-00844.pdf}
\caption{Drive Car.}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\textwidth]{images/suppl/h2t-eat-00674.pdf}
\caption{Eat.}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\textwidth]{images/suppl/h2t-fightperson-00405.pdf}
\caption{Fight Person.}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\textwidth]{images/suppl/h2t-getoutcar-00108.pdf}
\caption{Get out of Car.}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\textwidth]{images/suppl/h2t-handshake-00074.pdf}
\caption{Hand Shake.}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\textwidth]{images/suppl/h2t-hug-00427.pdf}
\caption{Hug.}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\textwidth]{images/suppl/h2t-kiss-00330.pdf}
\caption{Kiss.}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\textwidth]{images/suppl/h2t-run-00120.pdf}
\caption{Run.}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\textwidth]{images/suppl/h2t-sitdown-00812.pdf}
\caption{Sit down.}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\textwidth]{images/suppl/h2t-situp-00719.pdf}
\caption{Sit up.}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\textwidth]{images/suppl/h2t-standup-00668.pdf}
\caption{Stand up.}
\end{subfigure}
\caption{Example box annotations of test videos for \emph{Hollywood2Tubes}.}
\label{fig:h2t-examples}
\end{figure}
\section{Introduction}
\begin{table}[t]
\centering
\begin{small}
\begin{tabular}{|p{7cm}|} \toprule
\multicolumn{1}{|c|}{ \cellcolor[gray]{0.9} \textbf{Meaning Representation} } \\
name{[}Browns Cambridge{]}, eatType{[}coffee shop{]},
food{[}Italian{]},
customerRating[average],
area[riverside], familyFriendly[yes],
near[Crowne Plaza Hotel]
\\ \hline \hline
\multicolumn{1}{|c|}{\cellcolor[gray]{0.9} \textbf{E2E Dataset} }\\
\textit{Browns Cambridge is near Crowne Plaza Hotel. Browns Cambridge has a three star rating. Browns Cambridge is a family coffee shop.}\\ \hline
\multicolumn{1}{|c|}{\cellcolor[gray]{0.9} \textbf{Personality: Conscientious}} \\
\textit{Did you say Browns Cambridge? Well, i see, I think that it is a coffee shop, also it has a decent rating, and it is near Crowne Plaza Hotel kid friendly in riverside and an Italian place.}\\
\hline
\multicolumn{1}{|c|}{\cellcolor[gray]{0.9} \textbf{Personality: Disagreeable}} \\
\textit{Come on, I am not sure. I mean Browns Cambridge is an Italian place, also it has a damn average rating. It is near Crowne Plaza Hotel.} \\ \hline
\hline
\multicolumn{1}{|c|}{\cellcolor[gray]{0.9} \textbf{Personality: Unconscientious}} \\
\textit{Oh God I don't know! Browns Cambridge is a coffee shop, also it is family friendly near Crowne Plaza Hotel, also it is an Italian place in riverside, also it has like, a decent rating. } \\ \hline
\end{tabular}
\vspace{-.1in}
\caption{Sample meaning representation with a realization from the E2E Challenge Dataset and
two stylistic personality realizations.}
\label{table:mr-ref-example}
\end{small}
\vspace{-.2in}
\end{table}
Neural encoder-decoder models were originally developed for machine
translation \cite{sutskever2014sequence, Bahdanau_Cho_Bengio_2014},
but they have also been shown to be successful in related natural
language generation (NLG) tasks such as realizing dialogue system
utterances from meaning representations (MRs) as shown for the
restaurant domain in Table~\ref{table:mr-ref-example}
\cite{Dusek2016}. Recent work in neural NLG has shown that stylistic
control is an important problem in its own right: it is needed to
address a well-known limitation of such models, namely that they
reduce the stylistic variation seen in the input, and thus produce
outputs that tend to be dull and repetitive \cite{li2016persona}.
Here we compare different methods for
directly controlling stylistic variation when generating
from MRs, while simultaneously achieving high semantic accuracy.
Tables~\ref{table:mr-ref-example} and~\ref{table:mr-contrast-example}
illustrate the two stylistic benchmark datasets that form the basis of
our experimental setup. Table~\ref{table:mr-ref-example} shows an
example MR with three surface realizations: the E2E realization does
not target a particular personality, while the other two examples vary
stylistically according to linguistic profiles of personality type
\cite{PennebakerKing99,Furnham90,MairesseWalker11}.
Table~\ref{table:mr-contrast-example} shows an example MR with two
surface realizations that vary stylistically according to whether the
discourse contrast relation is used
\cite{NakatsuWhite06,Howcroftetal13}. Both of these benchmarks
provide parallel data that supports experiments that hold constant the
underlying meaning of an utterance, while varying the style of the
output text. In contrast, other tasks that have been used to explore
methods for stylistic control such as machine translation or
summarization (known as text-to-text generation tasks) do not allow
for such a clean separation of meaning from style because the inputs
are themselves surface forms.
\begin{table}[t]
\centering
\begin{small}
\begin{tabular}{|p{7cm}|} \toprule
\multicolumn{1}{|c|}{\cellcolor[gray]{0.9} \textbf{Meaning Representation}} \\
name{[}Brown's Cambridge{]},
food{[}Italian{]},
customerRating[3 out of 5],
familyFriendly[no],
price[moderate]
\\
\hline \hline
\multicolumn{1}{|c|}{\cellcolor[gray]{0.9} \textbf{With Contrast Relation}} \\
\textit{Browns Cambridge is an Italian restaurant with average customer reviews and \textbf{reasonable prices, but it is not child-friendly.}}\\
\hline
\multicolumn{1}{|c|}{\cellcolor[gray]{0.9} \textbf{Without Contrast Relation}} \\
\textit{Browns Cambridge serves Italian food in moderate price range. It is not kid friendly and the customer rating is 3 out of 5. } \\ \hline
\end{tabular}
\vspace{-.1in}
\caption{A sample meaning representation with contrastive and non-contrastive surface realizations.}
\label{table:mr-contrast-example}
\end{small}
\end{table}
We describe three methods of incorporating stylistic information as
\textit{side constraints} into an RNN encoder-decoder model, and test
each method on both the personality and contrast stylistic benchmarks.
We perform a detailed comparative analysis of the strengths and
weaknesses of each method. We measure both semantic fidelity and
stylistic accuracy and quantify the tradeoffs between them. We show
that putting stylistic conditioning in the decoder, instead of in the encoder as in previous work, and eliminating the
semantic re-ranker used in earlier models results in more than 15
points higher BLEU for Personality, with a reduction of semantic error
to near zero. We also report an improvement from .75 to .81 in
controlling contrast and a reduction in semantic error from 16\% to
2\%. To the best of our knowledge, no prior work has conducted a
systematic comparison of these methods using such robust criteria
specifically geared towards controllable stylistic variation. We
delay a detailed review of prior work to
Section~\ref{sec:related-work} when we can compare it to our own.
\section{Models and Variants}
\label{sec:model}
In the recent E2E NLG Challenge shared task, models were tasked with
generating surface forms from structured meaning
representations\cite{Dušek_Novikova_Rieser_2019}. The top performing
models were all RNN encoder-decoder systems.
Our model also follows a standard RNN Encoder--Decoder model
\cite{sutskever2014sequence,Bahdanau_Cho_Bengio_2014} that maps a
source sequence (the input MR) to a target sequence.
\subsection{Model}
Our model represents an MR as a sequence $x = (x_1, x_2, \ldots x_n)$ of slot-value pairs. The generator is tasked with generating a surface realization, represented as a sequence $y$ of tokens $y_1, y_2, \ldots y_m$. The generation system models the conditional probability $p(y|x)$ of generating the surface realization $y$ from a meaning representation $x$. By predicting one token at a time, this probability can be decomposed into a product over the conditional probabilities of each successive token in the output sequence:
\begin{equation}
p(y|x) = \prod_{t = 1}^{m} p(y_t| y_1, y_2, \ldots y_{t-1}, x) \; .
\end{equation}
We are interested in exercising greater control over the characteristics of the output sequence by incorporating \textit{side constraints} into the model \cite{Sennrich_Haddow_Birch_2016}. The side constraints $\textbf{c}$ act as an additional condition when predicting each token in the sequence. In this case, the conditional probability of the next token in the output sequence is given by:
\begin{equation}
p(y|x, \textbf{c}) = \prod_{t = 1}^{m} p(y_t| y_1, y_2, \ldots y_{t-1}, x, \textbf{c}) \; .
\end{equation}
In Section \ref{sec:side-constraints} we describe three methods
of computing $p(y|x, \textbf{c})$.
\paragraph{Encoder.}
The model reads in an MR as a sequence of slot-value pairs. Separate vocabularies for slot types and slot values are calculated in a pre-processing step. Each slot type and each slot value is encoded as a one-hot vector, which is accessed through a table look-up operation at run-time. Each slot-value pair is encoded by first concatenating the slot type encoding with the encoding of its specified value.
Then the slot-value pair is encoded with an RNN encoder.
We use a multi-layer bidirectional LSTM \cite{hochreiter1997long}
to encode the input sequence of MR slot-value pairs. The hidden
state $\bar{h_i}$ is represented as the concatenation of the forward
state $\overrightarrow{h_i}$ and backward state $\overleftarrow{h_i}$.
Specifically, $\bar{h_i} = (\overrightarrow{h_i},\overleftarrow{h_i})$.
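A minimal PyTorch sketch of this encoder is shown below; the class name, layer sizes, and the use of \texttt{nn.Embedding} to implement the one-hot table look-up are illustrative assumptions rather than the exact configuration used in our experiments.
\begin{verbatim}
import torch
import torch.nn as nn

class MREncoder(nn.Module):
    # Encodes an MR given as parallel sequences of slot-type ids and
    # slot-value ids with a multi-layer bidirectional LSTM.
    def __init__(self, n_slots, n_values, d_emb, d_hid, n_layers=2):
        super().__init__()
        self.slot_emb = nn.Embedding(n_slots, d_emb)    # table look-up
        self.value_emb = nn.Embedding(n_values, d_emb)  # table look-up
        self.rnn = nn.LSTM(2 * d_emb, d_hid, num_layers=n_layers,
                           bidirectional=True, batch_first=True)

    def forward(self, slots, values):
        # slots, values: (batch, mr_len) integer tensors.
        pairs = torch.cat([self.slot_emb(slots),
                           self.value_emb(values)], dim=-1)
        h, _ = self.rnn(pairs)  # (batch, mr_len, 2*d_hid): [forward; backward]
        return h
\end{verbatim}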
\paragraph{Decoder.} The decoder is a uni-directional LSTM.
Attention is implemented as in \cite{luong2015effective}.
We use global attention, where the attention score between two
vectors $a$ and $b$ is calculated as $a^{T} \textbf{W} \, b$,
with $\textbf{W}$ a model parameter learned during training.
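A sketch of this scoring function (Luong-style ``general'' attention) is given below; the function and variable names are ours and serve only to illustrate the computation.
\begin{verbatim}
import torch

def general_attention(dec_state, enc_states, W):
    # dec_state: (batch, d_dec), enc_states: (batch, src_len, d_enc),
    # W: (d_dec, d_enc) learned parameter; score(a, b) = a^T W b.
    scores = torch.einsum('bi,ij,blj->bl', dec_state, W, enc_states)
    alpha = torch.softmax(scores, dim=-1)
    # Attention-weighted average of the encoder states (the context d_t).
    return torch.bmm(alpha.unsqueeze(1), enc_states).squeeze(1)
\end{verbatim}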
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{nn4.png}
\caption{Attentional Encoder-Decoder architecture with each of the three side constraint implementations shown. The output sequence X, Y, Z is being generated from an MR represented as an input sequence of attribute value pairs.}
\label{fig:nn-side-constraints}
\end{figure}
\subsection{Side Constraints}
\label{sec:side-constraints}
Recent work has begun to explore methods for stylistic control in
neural language generation, but there has been no systematic attempt
to contrast different methods on the same benchmark tasks and thereby
gain a deeper understanding of which methods work best and why. Here,
we compare and contrast three alternative methods for implementing
side constraints in a standard encoder-decoder architecture. The
first method involves adding slot-value pairs to the input MR, and the
second involves extending the slot-value encoding through a
concatenation operation. In the third method, side constraints are
incorporated into the model by modifying the decoder inputs. The
three side constraint implementation methods are shown simultaneously
in Figure~\ref{fig:nn-side-constraints}. The orange area refers to Method
1, the yellow areas correspond to Method 2, and the red areas
correspond to Method 3.
\label{sec:method-1}
\paragraph{Method 1: Token Supervision.}
This method provides the simplest way of encoding stylistic
information by
inserting an additional
token that encodes the side constraint into the sequence of tokens
that constitute the MR \cite{Sennrich_Haddow_Birch_2016}. We add a new slot type representing
\texttt{side-constraint} to the vocabulary of slot-types, and
new entries for each of the possible side constraint values to the
vocabulary of slot values.
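As an illustration (with hypothetical slot names and values), Method~1 amounts to nothing more than prepending one extra slot-value pair to the MR before encoding:
\begin{verbatim}
def add_constraint_token(mr_pairs, constraint):
    # Method 1: the side constraint becomes an ordinary slot-value pair.
    return [("side-constraint", constraint)] + list(mr_pairs)

mr = [("name", "Browns Cambridge"), ("eatType", "coffee shop")]
mr_styled = add_constraint_token(mr, "extravert")
\end{verbatim}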
\label{sec:method-2}
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{slot-value-sideconstraint-encoding.png}
\caption{{\small Slot-value encoding extended with constraint.}}
\label{fig:slot-value-constraint}
\end{figure}
\paragraph{Method 2: Token Features.}
This method incorporates side constraints through the use of a slot-value
pair feature. First, we construct a vector representation $c$ that
contains the side constraint information. Normally, the individual
slot-value pair encodings are built by concatenating the slot-type encoding
with the slot-value encoding, as described for the encoder above. We modify
each slot-value pair encoding of the MR by extending it with $c$, as
seen in Figure~\ref{fig:slot-value-constraint}.
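A sketch of this extension is shown below, assuming the encoder sketch above; the constraint vector $c$ is simply broadcast and concatenated onto every slot-value pair encoding before it is passed to the LSTM. Names and shapes are illustrative.
\begin{verbatim}
import torch

def extend_with_constraint(pair_encodings, c):
    # pair_encodings: (batch, mr_len, d_pair); c: (batch, d_c).
    c_rep = c.unsqueeze(1).expand(-1, pair_encodings.size(1), -1)
    return torch.cat([pair_encodings, c_rep], dim=-1)
\end{verbatim}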
\label{sec:method-3}
\paragraph{Method 3:~Decoder Conditioning.}
This method incorporates side constraint information into the
generation process by adding additional inputs to the LSTM decoder.
Traditionally, at the $t$-th time step an LSTM decoder takes two inputs. One
input is the previous ground truth token's embedding $w_{t-1}$, and
the other is a context vector $d_t$ which is an attention-weighted
average of the encoder hidden states. A vector $c$ containing side
constraint information is provided to the decoder as a third
input. Thus at each time step the decoder's hidden state $\Tilde{h}_t$ is
calculated as
\begin{equation}
\Tilde{h}_t = \text{LSTM}([w_{t-1}; d_t; c]) \, .
\end{equation}
\vspace{-.1in}
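A minimal sketch of this decoder step is given below; the class name and dimensions are illustrative assumptions, and the attention context $d_t$ is assumed to be computed as in the attention sketch above.
\begin{verbatim}
import torch
import torch.nn as nn

class ConstrainedDecoderStep(nn.Module):
    # Method 3: the constraint vector c is fed to the decoder LSTM cell
    # at every time step, alongside the previous token embedding and the
    # attention context.
    def __init__(self, d_word, d_ctx, d_c, d_hid):
        super().__init__()
        self.cell = nn.LSTMCell(d_word + d_ctx + d_c, d_hid)

    def forward(self, w_prev, d_t, c, state):
        return self.cell(torch.cat([w_prev, d_t, c], dim=-1), state)
\end{verbatim}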
\begin{table*}[ht]
\centering
\begin{footnotesize}
\begin{tabular}
{ |p{2.4cm}|p{12cm} |} \toprule
\bf Personality & \textbf{Realization} \\ \midrule
Meaning Representation & name[The Eagle], eatType[coffee shop], food[English], priceRange[cheap], customer rating[average], area[riverside], familyFriendly[yes], near[Burger King]
\\ \hline\hline
Agreeable & You want to know more about The Eagle? Yeah, ok it has an average rating, it is a coffee shop and it is an English restaurant in riverside, quite cheap near Burger King and family friendly.
\\ \hline
Disagreeable & Oh god I mean, I thought everybody knew that The Eagle is cheap with an average rating, it's near Burger King, it is an English place, it is a coffee shop and The Eagle is in riverside, also it is family friendly.
\\ \hline
Conscientious & I think that The Eagle is a coffee shop, it has an average rating and it is somewhat cheap in riverside and an English restaurant near Burger King. It is rather kid friendly.
\\ \hline
Unconscientious & Yeah, I don't know. Mmhm ... The Eagle is a coffee shop, The Eagle is cheap, it's kind of in riverside, it is an English place and The Eagle has an average rating. It is kind of near Burger King. \\ \hline
Extravert & The Eagle is a coffee shop, you know, it is an English place, family friendly in riverside and cheap near Burger King and The Eagle has an average rating friend!
\\ \hline
\end{tabular}
\vspace{-.1in}
\caption{Model outputs for each personality style for a fixed Meaning Representation (MR). The model was trained using control Method 3.}
\label{table:personality-realization-examples}
\end{footnotesize}
\end{table*}
\section{Experiments: Varying Personality and Discourse Structure}
\label{sec:personality-experiment}
\label{sec:contrast-experiment}
We perform two sets of experiments using two stylistic benchmark
datasets: one for personality, and one for discourse structure, i.e., contrast. In both cases, our aim is to generate stylized text from
meaning representations (MRs). In the personality experiments, the
generator's goal is to vary the personality style of the output and
accurately realize the MR. The personality type is the side
constraint that conditions model outputs, and is
represented using a 1-hot encoding for the models that use side
constraint Methods 2 and 3. For the sake of comparison, we also train
a model that does not use conditioning ({\sc NoCon}). In the
discourse contrast experiments, the generator's goal is to control
whether the output utterance uses the discourse contrast relation.
The side constraint is a simple boolean: contrast, or no contrast.
The model is tasked with learning 1) which category of items can
potentially be contrasted (e.g., \textit{price} and \textit{rating}
can appear in a contrast relation but \textit{name} can not), and 2)
which values are appropriate to contrast (i.e., items with polar
opposite values).
All models are
implemented using PyTorch
and
OpenNMT-py\footnote{\url{github.com/OpenNMT/OpenNMT-py}}\cite{opennmt}. We use
Dropout \cite{Srivastava_Hinton_Krizhevsky_Sutskever_Salakhutdinov_2014}
of 0.1 between RNN layers. Model parameters are initialized using
Glorot initialization \cite{Glorot_Bengio_2010} and are optimized using stochastic gradient descent
with mini-batches of size 128. Beam search with three beams is used
during inference. We implement multiple models for each experiment using the methods for
stylistic control discussed in Section~\ref{sec:side-constraints}. We
tune model hyper-parameters on a development dataset and select the model of
lowest perplexity to evaluate on a test dataset. All models are trained
using lower-cased and de-lexicalized reference texts. The sample model outputs we
present have been re-capitalized and re-lexicalized using a simple rule-based script.
Further details on model implementation, hyper-parameter tuning, and data processing are provided as supplementary material.
\subsection{Benchmark Datasets and Experiments}
\noindent\textbf{Personality Benchmark.} This dataset provides multiple reference outputs for each MR, where the
style of the output varies by personality type
\cite{Orabyetal18}.\footnote{\url{nlds.soe.ucsc.edu/stylistic-variation-nlg}}
The styles belong to the
Big Five personality traits: agreeable, disagreeable, conscientious,
un-conscientious, and extrovert, each with a stylistically
distinct linguistic profile
\cite{MairesseWalker10,Furnham90}. Example
model outputs for each personality on a fixed MR are
in Table~\ref{table:personality-realization-examples}.
The dataset consists of 88,855 train examples and 1,390 test examples
that are evenly distributed across the five personality types. Each
example consists of a (MR, personality-label, reference-text)
tuple. The dataset was created using the MRs from the E2E Dataset
\cite{e2e_dataset_novikova_duvsek_rieser_2017} and reference texts
synthesized by PERSONAGE \cite{mairesse2010towards}, a statistical
language generator capable of generating utterances that vary in style
according to psycho-linguistic models of personality. The statistical
generator is configured using 36 binary parameters that target
particular linguistic constructions associated with different
personality types. These are split into {\it aggregation operations}
that combine individual propositions into larger sentences,
and {\it pragmatic markers} which typically modify some expression
within a sentence, e.g. {\it tag questions} or {\it in-group markers}.
A subset of these are illustrated in Table~\ref{table:agg-prag}: see
\citet{Orabyetal18} for more detail.
\begin{table}[htb!]
\begin{footnotesize}
\begin{tabular}
{@{} p{1.2in}|p{1.65in} @{}}
\hline
{\bf Attribute} & {\bf Example} \\ \hline\hline
\multicolumn{2}{l}{ \cellcolor[gray]{0.9} {\sc Aggregation Operations}} \\
{\sc "With" cue} & {\it X is in Y, with Z.} \\
{\sc Conjunction} & {\it X is Y and it is Z. \& X is Y, it is Z.} \\
{\sc "Also" cue} & {\it X has Y, also it has Z.} \\
\multicolumn{2}{l}{ \cellcolor[gray]{0.9} {\sc Pragmatic Markers}} \\
{\sc ack\_justification} & \it I see, well \\
{\sc ack\_yeah} & \it yeah\\
{\sc confirmation} &
{\it let's see ....., did you say X? } \\
{\sc down\_kind\_of} & \it kind of \\
{\sc down\_like} & \it like \\
{\sc exclaim} & \it ! \\
{\sc general softener} & \it sort of, somewhat, quite, rather \\
{\sc emphasizer} & \it really, basically, actually, just \\
{\sc tag question} & \it alright?, you see? ok? \\
\hline
\end{tabular}
\end{footnotesize}
\vspace{-.1in}
\centering \caption{\label{table:agg-prag}
Example Aggregation and Pragmatic Operations}
\end{table}
We conduct experiments using two control configurations that differ in
the granularity of control that they provide. We call the first
configuration \textit{coarse-grained} control, and the model is
conditioned using a single constraint: the personality label. The
second configuration, called \textit{fine-grained} control, conditions
the model using the personality label and Personage's 36 binary
control parameters as illustrated by Table~\ref{table:agg-prag}, which
provide fine-grained information on the desired style of the output
text. The stylistic control parameters are not updated during
training. When operating under fine-grained control, for side
constraint Methods 2 and 3, the 1-hot vector that encodes personality
is extended with dimensions for each of the 36 control
parameters. For Method 1 we insert 36 tokens, one for each parameter,
at the beginning of each input sequence, in addition to the single
token that represents the personality label.
\noindent\textbf{Contrast Benchmark.} This dataset provides reference outputs for 1000 MRs, where
the style of the output varies by whether or not it uses the discourse contrast relation.\footnote{\url{nlds.soe.ucsc.edu/sentence-planning-NLG}}
Contrast training set examples are shown in Table~\ref{table:mr-contrast-example}.
The contrast dataset is based on
15,000 examples from the E2E generation challenge, which consists of
2,919 contrastive examples and 12,079 examples without contrast.\footnote{\url{www.macs.hw.ac.uk/InteractionLab/E2E/}} We
split the dataset into train and development subsets using a 90/10
split ratio. The test data is composed of a set of 500 MRs that
contain attributes that can be contrasted, whose reference outputs use
discourse-contrast
\cite{Reed_Oraby_Walker_2018}.
The test set also contains a set of 500 MRs that were selected from
the E2E development set that do not use discourse-contrast. We crowd-sourced human-generated references for the
contrastive test set, and used the references from
the E2E dataset for the noncontrastive test set.\footnote{We will
make our test set and training data partitions available to the
research community if this paper is accepted.}
\subsection{Results}
For both types of stylistic variation, we evaluate model outputs using
automatic metrics targeting semantic quality, diversity of the
outputs, and the type of stylistic variation the
model is attempting to achieve. We also conduct two human evaluations. In the tables and discussion that
follow, we refer to the models that employ each of the side constraint
methods, e.g., Methods 1, 2, and 3, described in
Section~\ref{sec:side-constraints}, using the monikers M\{1,2,3\}. The
model denoted NoCon refers to a model that uses no side constraint
information. Sample model outputs from the personality experiments are
shown in Table~\ref{table:personality-realization-examples}. The outputs are from the M3 model when operating under the fine-grained control setting.
Outputs from model M2 of the contrast experiment are shown in
Table~\ref{table:contrast-model-outputs}.
\subsubsection{Semantic Quality}
\label{sec:sem-quality}
\begin{table}[ht]
\centering
\begin{small}
\begin{tabular}{llllll}
\toprule
Model & BLEU & SER & H & AGG & PRAG \\
\hline
\multicolumn{6}{c}{ \cellcolor[gray]{0.9} \citet{Orabyetal18}} \\
NoCon & 27.74 & - & 7.87 & .56 & .08\\
\textit{coarse} & 34.64 & - & 8.47 & .64 & .48\\
\textit{fine} & 37.66 & - & 8.58 & .71 & .55 \\
\hline
\hline
\multicolumn{6}{c}{ \cellcolor[gray]{0.9} This Work} \\
Train & - & - & 9.34 & - & - \\
NoCon & 38.45 & \textbf{0 } & 7.70 & .44 & .14 \\
\hline
\multicolumn{6}{c}{ \cellcolor[gray]{0.9} \textit{coarse control}} \\
M1 & 49.04 & \underline{0.000} & 8.49 & .57 & .51 \\
M2 & 48.10 & 0.002 & \underline{8.52} &.62 & .50 \\
M3 & \underline{49.06} & 0.009 & 8.50 & .60 & .50 \\
\hline
\multicolumn{6}{c}{ \cellcolor[gray]{0.9} \textit{fine control}} \\
M1 & 55.30 & \underline{0.004} & 8.77 & .82 & .94 \\
M2 & 52.29 & 0.103 & \underline{\textbf{8.80}} & .84 & .93 \\
M3 & \textbf{55.98} & 0.014 & 8.74 & .84 & .93 \\
\bottomrule
\end{tabular}
\vspace{-.1in}
\caption{Automatic evaluation on Personality test set. \textit{coarse} and \textit{fine} refer to the specificity of the control configuration. \label{table:results-personality}}
\end{small}
\end{table}
First, we measure general similarity between model outputs and gold
standard reference texts using BLEU, calculated with the same
evaluation script\footnote{\url{github.com/tuetschek/e2e-metrics}} as
\citet{Orabyetal18}. For the personality experiment, the scores for each conditioning
method and each control granularity are shown in
Table~\ref{table:results-personality}, along with the results reported by \citet{Orabyetal18}. For the contrast experiment, the scores for
each conditioning method are shown in
Table~\ref{table:results-contrast}, where we refer to the model and results of
\citet{Reed_Oraby_Walker_2018} as \textit{M-Reed}. \citet{Reed_Oraby_Walker_2018} do
not report BLEU or Entropy (H) measures.
We first discuss the baselines from previous work on the same benchmarks.
Interestingly, for Personality, our {\sc NoCon} model gets a huge performance improvement
of nearly 11 points in BLEU (27.74 $\rightarrow$ 38.45) over the results reported by \citet{oraby2018neural}. We note that while the underlying architecture behind our experiments
is similar to the baseline described by \citet{oraby2018neural}, we experiment
with different parameters and attention mechanisms.
\citet{Reed_Oraby_Walker_2018} and \citet{Orabyetal18} also use
an LSTM encoder-decoder model with attention,
but they both implement their models using the TGen\footnote{\url{github.com/UFAL-DSG/tgen}}\cite{duvsek2016context} framework
with its default model architecture.
TGen uses an early version of TensorFlow with different initialization
methods, and dropout implementation. Moreover, we use a different
one-hot encoding of slots and their values,
and we implement attention as in \citet{luong2015effective}, whereas
TGen uses \citet{bahdanau2014neural} attention by default.
Side constraints are incorporated into the
TGen models in two ways: 1) using a new dialogue act type
to indicate the side constraints, and 2) a feed-forward layer processes
the constraints and, during decoding, attention
is computed over the encoder hidden states and the hidden
state produced by the feed-forward layer. The TGen
system uses beam-search and an additional output re-ranking module.
We now compare the performance of our own models in Table~\ref{table:results-personality}.
As expected, NoCon has the lowest performance of all our models, with a BLEU of 38.45.
With both coarse control and fine-grained control, M3 and M2 are
the highest and lowest performers, respectively.
For the contrast experiment, M2 and M3 have very similar values for
all rows of Table~\ref{table:results-contrast}. M2 has the highest
BLEU score of 17.32 and M3 has 17.09. M1 is consistently outperformed
by both M2 and M3. All side constraint models outperform NoCon. We
note that the contrast task achieves much lower BLEU scores. This may be due to the relatively small number of
contrast examples in the training set, but it is also possible that
this indicates the large variety of ways that contrast can be
expressed, rather than poor model performance. We show in a human
evaluation in Section~\ref{style-quality} that the contrast examples
are fluent and stylistically interesting.
A comparison of our results versus those
reported by \citet{Orabyetal18} is also shown in
Table~\ref{table:results-personality}.
Note that our models improve BLEU by over
14 points when using coarse control
and by more than 18 points when using fine-grained control. Our models can clearly use the conditioning information more effectively than earlier work.
\begin{table}[ht!]
\begin{small}
\begin{center}
\begin{tabular}{cccc} \toprule
Model & BLEU & SER & H \\
\hline
Train & - & - & 10.68 \\ \hline
\multicolumn{4}{c}{\cellcolor[gray]{0.9} Contrast Data} \\
M-Reed & - & .16 & - \\
NoCon & 15.80 & \textbf{.053} & \textbf{8.09} \\
M1 & 16.58 & .055 & 8.08 \\
M2 & \textbf{17.32} & .058 & 8.03 \\
M3 & 17.09 & .058 & 7.93 \\ \hline
\multicolumn{4}{c}{\cellcolor[gray]{0.9} Non Contrast Data} \\
NoCon & 26.58 & .025 & 7.67 \\
M1 & 26.58 & .023 & 7.56 \\
M2 & 26.35 & .017 & 7.68 \\
M3 & 26.04 & .035 & 7.40 \\ \hline
\end{tabular}
\vspace{-.1in}
\end{center}
\end{small}
\caption{Automatic evaluation on Contrast test set. \label{table:results-contrast}}
\end{table}
\noindent
\textbf{Slot Error Rate.}
While the n-gram overlap metrics measure general similarity between gold references and model outputs, they often do a poor job of measuring semantic accuracy. Slot error rate (SER)~\cite{wen2015semantically,Reed_Oraby_Walker_2018} is a metric similar to word error rate that measures how closely a given realization adheres to its MR.
SER is calculated
using the slot aligner released\footnote{\url{github.com/jjuraska/slug2slug}} by \citet{Juraskaetal18}, which
counts the number of attributes (slots) and their values that correctly (and incorrectly) occur in a given surface realization; a formal definition of SER is provided in the Supplementary Materials, Section \ref{sec:appendix-calc-ser}.
We evaluate each model using SER with results in Tables~\ref{table:results-personality} and \ref{table:results-contrast}. We first note that all the SERs for both tasks are extremely low
and that only M2 under fine control performs noticeably worse, with an SER of .10.
The models are clearly learning to realize the intended MRs.
M1 has the best SER scores in all experiment conditions.
In the contrast experiment, M2 and M3 are practically equivalent.
\begin{table}[h!t]
\centering
\begin{small}
\begin{tabular}{lll} \toprule
Model & Acc & Contrast Attempts \\ \hline
M-Reed & .75 & 422 \\ \hline
M1 & .74 & 437 \\
M2 & .79 & 485 \\
M3 & .81 & 474 \\ \hline
\end{tabular}
\vspace{-.1in}
\end{small}
\caption{Contrast accuracy out of 500 examples.} \label{table:accuracy-results-contrast}
\end{table}
\begin{table*}[th]
\centering
\begin{small}
\begin{tabular}{p{7cm}| p{8cm}} \toprule
\bf Meaning Representation & \textbf{Realization} \\ \midrule
name[Aromi], eatType[restaurant], rating[low], familyFriendly[yes]
& \textit{Aromi is a \textbf{family friendly restaurant but the customer rating is low}.}
\\ \hline
name[Fitzbillies], cuisine[English],
price[more than \$30], eatType[pub], familyFriendly[yes]
& \textit{Fitzbillies is a pub that serves English food,
\textbf{is children friendly, but the price range is more than \$30}.}
\\ \hline
name[Clowns], price[more than \$30], rating[high], familyFriendly[no], near[Clare Hall]
& \textit{Clowns is near Clare Hall. It has a \textbf{high customer rating but is not child friendly}.}
\\ \hline
name[Cotto], cuisine[English], location[riverside], price[high], eatType[coffee shop], rating[5 out of 5], near[The Portland Arms]
& \textit{Cotto is a English coffee shop near The Portland Arms in the riverside. It has \textbf{a high price range but a customer rating of 5 out of 5}.}
\\ \bottomrule
\end{tabular}
\vspace{-.1in}
\caption{Sample outputs from model M2 with contrast relation in bold.}
\label{table:contrast-model-outputs}
\end{small}
\end{table*}
\subsubsection{Quality in Variation}
\label{style-quality}
In the previous section we tested the ability of the side
constraint models to produce semantically accurate outputs. In this
section we evaluate the extent to which the side constraint models
produce stylistically varied texts. We evaluate variation using two
measures: 1) Entropy, and 2) counts on model
outputs for particular stylistic targets.
\noindent\textbf{Entropy.} Our goal is NLG
models that produce stylistically rich, diverse outputs, but we expect that variation in the training
data will be averaged out during model training. We quantify
the amount of variation in the training set, and also in the outputs
generated from the test set MRs, using
Entropy\footnote{A formal definition of our Entropy calculation is provided with the supplementary materials.}, $H$, where a larger
entropy value indicates a larger amount of linguistic variation
preserved in the test outputs.
The results are shown in the $H$ column of
Tables~\ref{table:results-personality} and
\ref{table:results-contrast}. For the personality experiment, the
training corpus has an entropy of 9.34 and none of the models is able to
match its variability. When using coarse control, M2 does the
best with 8.52, but all side constraint models are within 0.03. When
using fine-grained control, M2 has the highest entropy with 8.80.
Our models with fine control outperform
\citet{Orabyetal18} in terms of entropy.
For the contrast experiment,
NoCon has the highest entropy at 8.09, but
the differences are small.
\noindent\textbf{Counts of Stylistic Constructions.} Entropy measures
variation in the corpus as a whole, but we can also examine the
model's ability to vary its outputs in agreement with the stylistic
control parameters.
Contrast accuracy measures the ratio of valid contrast realizations to
the number of contrasts attempted by the
model. We determine valid contrasts by checking for the presence of polar
opposite values in the MR and then inspecting the realization of
those values in the model output.
Table~\ref{table:accuracy-results-contrast} shows the
results. The row labeled M-Reed refers to the results reported by
\citet{Reed_Oraby_Walker_2018}. NoCon rarely attempts contrast because there is no constraint to motivate it to do so, and it therefore produces essentially no contrast.
Contrast attempts are out of 500 and
M2 has the most at 485. In terms of contrast accuracy M3 is the best
with 81\%.
When comparing our model performance to M-Reed, models M\{1,2,3\}
make more contrast attempts. M1 and M-Reed have similar
contrast accuracy with 74\% and 75\%, respectively. The higher
performance of our models is particularly impressive since the M-Reed
models see roughly 7k contrast examples during training, which is
twice the amount that our models see.
For personality, we examine each model's ability to vary its outputs
in agreement with the stylistic control parameters by measuring
correlations between model outputs and test reference texts in the use
of the aggregation operations and pragmatic
markers, two types of linguistic constructions illustrated
in Table~\ref{table:agg-prag}, and associated with each personality
type.
The results for these linguistic constructions over all
personality types are shown in the last two columns (Agg, Prag) of
Table~\ref{table:results-personality}.
The supplementary material
provides details for each personality. Our results demonstrate a
very large increase in the correlation of these markers between model
outputs and reference texts compared to previous work, and also
further demonstrates the benefits of fine-grained control, where we
achieve correlations to the reference texts as high as .94 for
pragmatic markers and as high as .84 for aggregation operations.
\noindent{\bf Methods Comparison.}
The results in Tables~\ref{table:results-personality} and
\ref{table:accuracy-results-contrast} reveal a general trend where
model performance in terms of BLEU and entropy increases as
more information is given to the model as side constraints. At the
same time, the slot error rates are somewhat higher,
indicating the difficulty of simultaneously achieving
both high semantic and stylistic fidelity. Our conclusion is
that Method 3 performs the best at controlling text style, but only when it has access to a large training dataset, and Method 2 performs better in
situations where training data is limited.
\noindent{\bf Human evaluation.}
We perform human evaluation of the quality of outputs for the M3 model with a random sample of 50 surface realizations for each personality, and 50 each for contrast and non-contrast outputs for a total of 350 examples. Three annotators on Mechanical Turk rate each output for both interestingness and fluency (accounting for both grammaticality and naturalness) using a 1-5 Likert scale.
Human evaluation results are shown in Table~\ref{table:human-eval-personality} for the personality experiment and Table~\ref{table:human-eval-contrast} for contrast. The tables show average annotator rating in each category. For the personality outputs, each personality has similar fluency ratings with Conscientious slightly higher. The model outputs for the contrast relation have higher average ratings for Fluency than the non-contrastive realizations. For interestingness, we compare both the personality styles and the contrastive style to the basic style without contrast. The results show that non-contrast (3.07), the vanilla style, is judged as significantly less interesting than the personality styles (ranging from 3.39 to 3.51) or the use of discourse contrast (3.45) (p-values all less than .01).
\begin{table}[ht]
\centering
\begin{small}
\begin{tabular}{lllllll} \toprule
& Con.
& Dis.
& Agr.
& Ext.
& Unc.
& avg \\ \hline
Fluent
& 3.77
& 3.38
& 3.53
& 3.38
& 3.35
& 3.48 \\
Interest
& 3.39
& 3.40
& 3.51
& 3.46
& 3.45
& 3.44 \\
\bottomrule
\end{tabular}
\vspace{-.1in}
\end{small}
\caption{Human evaluation results for personality.} \label{table:human-eval-personality}
\vspace{-.2in}
\end{table}
\begin{table}[ht]
\centering
\begin{small}
\begin{tabular}{lll} \toprule
& Non-contrast
& Contrast \\ \midrule
Fluent
& 4.21
& 4.38 \\
Interest
& 3.07
& 3.45 \\
\bottomrule
\end{tabular}
\vspace{-.1in}
\end{small}
\caption{Human evaluation results for discourse contrast.} \label{table:human-eval-contrast}
\end{table}
\section{Related Work}
\label{sec:related-work}
Stylistic control is important as a way to address a well-known
limitation of vanilla neural NLG models, namely that they reduce the
stylistic variation seen in the input, and thus produce outputs that
tend to be dull and repetitive \cite{li2016persona}. The majority of
other work on stylistic control has been done in a text-to-text
setting where MRs and corpora with fixed meaning and varying style are
not available
\cite{Fan_Grangier_Auli_2017,Iyyer_Wieting_Gimpel_Zettlemoyer_2018,
Wiseman_Shieber_Rush_2018, Ficler_Goldberg_2017}. Sometimes
variation is evaluated in terms of model performance in some other
task, such as machine translation or summarization.
\citet{Herzig17} also control personality in the context of
text-to-text generation in customer care dialogues.
\citet{Kikuchi_Neubig_Sasano_Takamura_Okumura_2016} control output
sequence length by adding a remaining-length encoding as extra input
to the decoder. \citet{Sennrich_Haddow_Birch_2016} control linguistic
honorifics in the target language by adding a special social formality
token to the end of the source
text. \citet{hu_controlled_generation_17} control sentiment and tense
(past, present, future) in text-to-text generation of movie reviews.
\citet{Ficler_Goldberg_2017} describe a conditioned language model
that controls variation in the stylistic properties of generated movie
reviews.
Our work builds directly on the approach and benchmark datasets of
\citet{Reed_Oraby_Walker_2018} and \citet{Orabyetal18}. Here we
compare directly to the results of \citet{Orabyetal18}, who were the
first to show that a sequence-to-sequence model can generate
utterances from MRs that manifest a personality type.
\citet{Reed_Oraby_Walker_2018} also develop a neural model for a
controllable sentence planning task and run an experiment similar to
our contrast experiment. Here, we experiment extensively with
different control methods and present large performance improvements
on both tasks.
\section{Conclusion}
\label{sec:conclusion}
We present three different models for stylistic control of an attentional encoder-decoder model that generates restaurant descriptions from structured semantic representations using two stylistic benchmark datasets: one for personality variation and the other for variation in discourse contrast. We show that the best models can simultaneously control the variation in style while maintaining semantic fidelity to a meaning representation. Our experiments suggest that overall, incorporating style information into the decoder performs best and we report a large performance improvement on both benchmark tasks, over a large range of metrics specifically designed to measure semantic fidelity along with stylistic variation. A human evaluation shows that the outputs of the best models are judged as fluent and coherent and that the stylistically controlled outputs are rated significantly more interesting than more vanilla outputs.
\section{INTRODUCTION}
The ski rental problem ($\mathtt{SR}$) is a dilemma faced by a consumer, who is uncertain about how many days she will ski and has to trade off between buying and renting skis: once she buys the skis, she will enjoy the remaining days rent-free, but before that she must pay the daily renting cost. The literature is interested in investigating the {\em online optimal strategy} of the consumer, that is, a strategy that yields the lowest competitive ratio \emph{without having any information about the future} (as is standard in the literature, the competitive ratio is defined as the ratio between the cost yielded by the consumer's strategy and the cost yielded by the optimal strategy of a prophet, who foresees how many days the trip will last and designs the optimal strategy accordingly). The ski rental problem and its variants constitute an important part of the online algorithm design literature from both theoretical and applied perspectives~\cite{fleischer2001bahncard,karlin2001dynamic,lin2011dynamic,lotker2008rent,lu2012simple,wang2013reserve}.
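To fix ideas with the standard single-shop case (a well-known fact recalled here only for illustration), suppose renting costs $1$ per day and buying costs an integer $B>1$. The deterministic break-even strategy rents for $B-1$ days and buys on day $B$: if the trip lasts $t<B$ days its cost equals the prophet's cost $t$, and otherwise it pays $(B-1)+B=2B-1$ while the prophet pays $B$, so its competitive ratio is
\[
\frac{2B-1}{B} \;=\; 2-\frac{1}{B} \;<\; 2 ,
\]
and randomization is known to improve this guarantee to $\frac{e}{e-1}\approx 1.58$~\cite{karlin1994competitive}. The multi-shop variants considered below ask for analogous guarantees when the \emph{place} of the purchase, and not only its \emph{timing}, must be decided online.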
In this paper, we consider the \emph{multi-shop ski rental problem} ($\texttt{MSR}$), in which the consumer faces multiple shops that offer different renting and buying prices. She must choose one shop immediately after she arrives at the ski field and must rent or buy the skis \emph{in that particular shop} from then on. In other words, once she has chosen a shop, the only decision variable is when to buy the skis. Beyond the basic setting, we also propose three important extensions of \texttt{MSR}, as follows:
\begin{itemize}
\item $\texttt{MSR}$ \emph{with switching cost} ($\texttt{MSR-S}$): The consumer is allowed to switch from one shop to another and each switching costs her some constant amount of money.
\item $\texttt{MSR}$ \emph{with entry fee} ($\texttt{MSR-E}$): Each shop requires some entry fee and the consumer \emph{cannot} switch shops.
\item $\texttt{MSR}$ \emph{with entry fee and switching} ($\texttt{MSR-ES}$): The consumer is able to switch from one shop to another, and she pays the entry fee as long as she enters any shop\footnote{For example, if she switches from shop 1 to shop 2, and then switches back to shop 1, she pays the entry fee of shop 1 twice and the entry fee of shop 2 once.}.
\end{itemize}
In all the settings above, the consumer's objective is to minimize the competitive ratio. In \texttt{MSR} and \texttt{MSR-E}, she has to consider two questions \emph{at the very beginning}: (1) where should she rent or buy the skis (place), and (2) when should she buy the skis (timing)? \texttt{MSR-S} and \texttt{MSR-ES}, in contrast, allow the consumer to switch shops and are thus more fine-grained than the previous two, in the sense that she is able to decide where to rent or buy the skis \emph{at any time}. For example, it is among her options to rent in shop 1 on day 1, switch to shop 2 from day 2, and finally switch to shop 3 and then buy the skis.
The multi-shop ski rental problem naturally extends the ski rental problem and allows heterogeneity in consumer's options, a desirable feature that makes the ski rental problem a more general modeling framework for online algorithm design.
Below, we present a few real-world scenarios that can be modeled with the multi-shop ski rental problem.
\textbf{1. Scheduling in distributed computing:} A file is replicated and stored in different machines in the cluster. Some node undertaking a computing job needs data in the file during the execution. The node can either request the corresponding data block of the file from some selected machine whenever it needs to, which incurs some delay, or it can simply ask that machine to transmit the whole file beforehand, at the cost of a longer delay at the beginning but without any further waiting. When selecting the replicating machine, the scheduling node needs to consider the current bandwidth, read latency, etc. In this application, each replicating machine is considered as a shop, and renting corresponds to requesting the data block on-demand, while buying means fetching the whole file beforehand.
\textbf{2. Cost management in IaaS cloud:} Multiple \texttt{IaaS} cloud vendors, such as Amazon EC2~\cite{EC2}, ElasticHosts~\cite{ElasticHosts} and Microsoft Windows Azure~\cite{Azure}, offer different price options, which can be classified into two commitment levels: users pay for \emph{on-demand} server instances at an hourly rate or make a one-time, upfront payment to host each instance for some duration (e.g., monthly or annually), during which users either use the instances for free~\cite{ElasticHosts}, or enjoy a discount on renting the instance~\cite{EC2}.
Consider an example in Table 1. Table 1 lists the pricing options for the instances with identical configurations offered by Amazon EC2 and ElasticHosts. Each pricing option can be considered as a shop in the multi-shop ski rental problem, where in the 1(3) year(s) term contract in Amazon EC2, the entry fee is the upfront payment and the hourly price is the renting price.
\begin{table}[hbt]
\begin{tabular}{|c|c|c|c|}
\hline
Vendor& Option & Upfront(\$) & Hourly(\$)\\ \hline
& On-Demand & 0& 0.145\\ \cline{2-4}
Amazon & 1 yr Term & 161 & 0.09 \\ \cline{2-4}
& 3 yr Term & 243 & 0.079 \\ \hline
ElasticHosts & 1 mo Term & 97.60 & 0 \\ \cline{2-4}
& 1 yr Term & 976.04 & 0 \\ \hline
\end{tabular}
\caption{Pricing Options of the `same' instance in Amazon EC2 (c1.medium) and ElasticHosts (2400MHz cpu, 1792MHz memory, 350Gb storage and no data transfer).}
\end{table}
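To make the rent-or-buy analogy concrete, compare just the On-Demand option and the 1-year term of Amazon EC2 in Table 1, ignoring the fixed contract length for the sake of illustration. Paying the upfront fee becomes worthwhile once the usage $h$ (in hours) satisfies
\[
161 + 0.09\,h \;<\; 0.145\,h
\quad\Longleftrightarrow\quad
h \;>\; \frac{161}{0.145-0.09} \approx 2927,
\]
i.e., roughly four months of continuous use. Since the user does not know her future usage in advance, choosing among these pricing options is exactly a multi-shop rent-or-buy decision.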
\textbf{3. Purchase decisions}: A company offering high-resolution aerial or satellite map service chooses between Astrium~\cite{Astrium} and DigitalGlobe~\cite{DigitalGlobe}. It can either subscribe to imagery from one company or exclusively occupy the service by `purchasing' one satellite, as Google has done~\cite{News}. Similar applications include a person purchasing a SIM card from one of several telecommunication companies.
\subsection{Related Work}
The ski rental problem was first considered by Karlin \emph{et al.}~\cite{karlin1988competitive}, and then studied in Karlin's seminal paper~\cite{karlin1994competitive}, which proposes a randomized algorithm and gives a competitive ratio of $\frac{e}{e-1}$. Later researchers propose a few variants, including the Bahncard problem~\cite{fleischer2001bahncard} and the TCP acknowledgment problem~\cite{karlin2001dynamic}. A more recent work~\cite{khanafer2013constrained} analyzes the case in which the first or the second moment of the number of skiing days is known and gives an optimal online solution. However, all the aforementioned works deal with the case where a single consumer rents or buys the skis in one single shop. In their problems, the consumer only needs to decide when to buy, whereas in the multi-shop ski rental problem, the consumer has to make a two-fold decision (time and place). Closest to our work is the work by Lotker \emph{et al.}~\cite{lotker2008rent}, which considers the case where the consumer has multiple options in one shop, i.e., the multi-slope problem; their problem can be regarded as a special case of our problem by setting all the buying prices sufficiently large.
Research on ``multiple consumers in one single shop'' has been conducted from applied perspectives~\cite{lin2011dynamic,lu2012simple,wang2013reserve}. Lin \emph{et al.}~\cite{lin2011dynamic} investigate a dynamic `right-sizing' strategy that turns off servers during periods of low load. Their model and lazy capacity provisioning algorithms are closely tied to the ski rental problem. Lu \emph{et al.}~\cite{lu2012simple} derive ``dynamic provisioning techniques'' to turn servers on or off so as to minimize the energy consumption. They dispatch the servers so that each individual server reduces to a standard ski rental problem. Wang \emph{et al.}~\cite{wang2013reserve} propose an online algorithm to serve time-varying demands at minimum cost in an \texttt{IaaS} cloud given one price option, in which the `consumers' (servers) may be related to each other.
Another line of work \cite{bodenstein2011strategic,hong2011dynamic} focusing on minimizing the cost in data centers or other cloud services assumes that the long-term workloads are stationary and thus can be predicted, and Guenter \emph{et al.}~\cite{guenter2011managing} consider the case of short-term predictions.
However, for many real-world applications, future workloads can exhibit non-stationarity \cite{singh2010autonomic}. Other works~\cite{lin2011dynamic,lu2012simple,wang2013data,wang2013reserve} that require \emph{no a priori} knowledge of the future workload minimize the cost \emph{given that one option is selected}. Our paper is orthogonal to theirs since we focus on how to select a price option.
\subsection{Our Contributions}
In this paper, we consider the multi-shop ski rental problem and its extensions, in which there are multiple shops and the consumer must make two-fold decisions (time and place) to minimize the competitive ratio. We model each problem as a zero-sum game played by the consumer and nature. We simplify the strategy space of the consumer via \emph{removal of strictly dominated strategies} and derive the form of the optimal mixed strategy of the consumer. We summarize the key contributions as follows:
\begin{enumerate}
\item[1.] For each of the problems, we prove that under the optimal mixed strategy of the consumer, the consumer only assigns positive buying probability to \emph{exactly one} shop at any time. As the buying time increases, she follows the shop order in which the ratio between buying price and renting price is increasing. This order also holds in \texttt{MSR-E} and \texttt{MSR-ES}, where entry fee is involved.
\item[2.] We derive a novel, easy-to-implement \emph{linear time} algorithm for computing the optimal strategy of the consumer, which drastically reduces the complexity of computing the solution to \texttt{MSR}.
\item[3.] For \texttt{MSR-S}, we prove that under the optimal mixed strategy, the consumer only needs to consider switching to another shop at the buying time, i.e., she will never switch to another shop and continue renting. Moreover, we show that \texttt{MSR-S} can be reduced to an equivalent \texttt{MSR} problem with modified buying prices.
\item[4.] For \texttt{MSR-ES}, we prove that under the optimal mixed strategy, the consumer may switch to another shop either during the renting period or at the buying time, but she only follows some particular order of switching. Moreover, the number of switches is no more than $n$, where $n$ is the number of shops.
\item[5.] We characterize any action of the consumer in \texttt{MSR-ES} by proving that the action can be decoupled into a sequence of operations. We further show that each operation can be viewed as a virtual shop in \texttt{MSR-E} and in total, we create $O(n^2)$ `virtual' shops of \texttt{MSR-E}. Therefore, \texttt{MSR-ES} can be reduced to \texttt{MSR-E} with minor modifications.
\end{enumerate}
\section{BASIC PROBLEM}
In the \emph{multi-shop ski rental problem} (also \texttt{MSR}), a person goes skiing for an unknown time period. There are multiple shops offering skis either for rent or for sale. The person must choose one shop \emph{as soon as} she arrives at the ski field, and she can decide whether or not to buy the skis in that particular shop at any time\footnote{ In this paper, we focus on the continuous time model.}. Note that she cannot change the shop once she chooses one. The objective is to minimize the worst-case ratio between the amount she actually pays and the money she would have paid if she knew the duration of skiing in advance. We assume that there are $n$ shops in total, denoted by $[n]\triangleq\{1,2,3,\cdots,n\}$. Each shop $j$ offers skis at a renting price of $r_j$ dollars per unit time and at a buying price of $b_j$ dollars. This problem is a natural extension of the classic ski rental problem (\texttt{SR}) and it is exactly \texttt{SR} when $n=1$.
In \texttt{MSR}, it is clear that if there is a shop of which the rental and buying prices are both larger than those of another shop, it is always suboptimal to choose this shop. We assume that
\begin{align}
0<&~r_1<r_2<\cdots<r_n \nonumber\\
&~b_1>b_2>\cdots>b_n>0 \nonumber
\end{align}
We apply a game-theoretic approach to solve our problem. For ease of exposition, we assume that how long the consumer skis is determined by a player called \emph{nature}. Therefore, there are two parties in the problem, and we focus on the optimal strategy of the consumer.
In the remainder of this section, we first formulate our problem as a zero-sum game, and simplify the strategy space in Lemma~\ref{lemma:MSRstspace}. Then, we combine Lemmas~\ref{lemma:MSRconstant}--\ref{lemma:AlphaRelation} to fully characterize the optimal strategy of the consumer in Theorem~\ref{theorem:MSR}. We show that, under the optimal strategy, the consumer assigns positive buying probability to \emph{exactly one} shop at any time. Moreover, the possible times at which the consumer buys the skis in a shop constitute a continuous interval.
Thus, we can partition the optimal strategy of the consumer into different sub-intervals which relate to different shops, and the problem is reduced to how to find the optimal breakpoints. Based on Lemma~\ref{lemma:dnconcave} and~\ref{lemma:GConcave}, we develop a linear time algorithm for computing the optimal breakpoints and prove its correctness in Theorem~\ref{theorem:MSRalg}.
\subsection{Formulation}
We first analyze the action set for both players in the game. For the consumer, we denote by $j$ the index of the shop in which she rents or buys the skis. Let $x$ be the time when she chooses to buy the skis, i.e., the consumer will rent the skis before $x$ and buy at $x$ if nature has not yet stopped her. The action of the consumer is thus represented by a pair $(j,x)$. Denote by $\Psi_c$ the action set of the consumer:
\begin{displaymath}
\Psi_c \triangleq \{(j,x): j \in [n], x\in [0,+\infty)\cup\{+\infty\} \}
\end{displaymath}
where $x =+\infty$ means that the consumer always rents and never buys. Next, let $y$ denote the time when nature stops the consumer from skiing. Thus, the action set of nature is
\begin{displaymath}
\Psi_n \triangleq \{y: y \in (0,+\infty)\cup\{+\infty\} \}
\end{displaymath}
where $y =+\infty$ means that nature always allows the consumer to keep skiing.
If $y=x$, we regard it as the case that right after the consumer buys the skis, nature stops her.
Given the strategy profile $\langle(j,x),y\rangle$, let $c_j(x,y) \geq 0$ denote the cost paid by the consumer:
\begin{displaymath}
c_j(x,y) \triangleq
\begin{cases}
r_j y, & y < x\\
r_j x + b_j, & y \geq x
\end{cases}
\end{displaymath}
Now we define the strategy space for the consumer and nature. Let $\mathbf{p} \triangleq (p_1,\cdots,p_n)$ be a mixed strategy represented by a vector of probability density functions. $p_j(x)$ is the density assigned to the strategy $(j,x)$ for any $j = 1,\cdots, n$ and $x\in [0,+\infty)\cup\{+\infty\}$. In this paper, we assume that for each point, either probability density function exists or it is probability mass.\footnote{In fact, our results can be extended to the case where in the strategy space the cumulative distribution function is not absolutely continuous and thus no probability density function exists.} If $p_j(x)$ is probability mass, we regard $p_j(x)$ as $+\infty$ and define $p_{j,x}\triangleq\int_{x^-}^{x} p_j(t)dt$ satisfying $p_{j,x}\in(0,1]$. The strategy space $\calP$ of the consumer is as follows:
\footnote{For convenience, we denote by $\int_{a}^{b}f(x)dx$ ($a<b$) the integral over $(a,b]$, except that when $a=0$, the integral is over $[0,b]$.}
\begin{eqnarray*}
\calP = \Bigg\{\mathbf{p}: && \sum_{j=1}^n \int_{0}^\infty p_j(x) \ud x = 1,\\
&& p_j(x) \ge 0, \forall x\in [0,+\infty)\cup\{+\infty\}, \forall j\in [n] \Bigg\}
\end{eqnarray*}
Similarly, define $q(y)$ to be the probability density of nature choosing $y$ and the strategy space $\calQ$ of nature is given by
\begin{displaymath}
\calQ =\Bigg\{\mathbf{q}: \int_0^\infty q(y) \ud y = 1, q(y) \geq 0, \forall y \in (0,+\infty)\cup\{+\infty\} \Bigg\}
\end{displaymath}
When the consumer chooses the mixed strategy $\mathbf{p}$ and nature chooses the stopping time $y$, the expected cost to the consumer is:
\begin{displaymath}
C(\mathbf{p},y)\triangleq\sum_{j=1}^n C_j(p_j,y)
\end{displaymath}
in which
\begin{eqnarray*}
C_j(p_j,y)&\triangleq&\int_{0}^{\infty}c_j(x,y)p_j(x)\ud x\\
&=&\int_{0}^{y}(r_jx+b_j)p_j(x)\ud x+\int_{y}^{\infty}yr_j p_j(x)\ud x
\end{eqnarray*}
is the expected payment to shop $j$ for all $j\in [n]$. Given the strategy profile $\langle\mathbf{p}, \mathbf{q}\rangle$, the competitive ratio is defined as:
\begin{eqnarray} \label{def:Ratio}
R(\mathbf{p}, \mathbf{q}) &\triangleq& \int_0^\infty \frac{C(\mathbf{p}, y)}{\mathrm{OPT}(y)} q(y) \ud y
\end{eqnarray}
Here $\mathrm{OPT}(y)$ is the optimal offline cost and can be seen to have the following form:
\begin{equation} \label{offlineOpt: MSR}
\mathrm{OPT}(y) =
\begin{cases}
r_1 y, & y \in (0,B]\\
b_n, & y > B
\end{cases}
\end{equation}
where $B$ is defined as $B\triangleq\frac{b_n}{r_1}$.
Note that $B$ is the time at which the minimum renting cost $r_1y$ reaches the minimum buying cost $b_n$. When $y<B$, the offline optimum is always to rent at the first shop, and when $y>B$, the offline optimum is to buy the skis at the last shop. We will show that $B$ determines the effective action sets of the consumer and nature in Section~\ref{sec:simplify}.
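For concreteness, the following Python sketch (ours, purely illustrative) implements the primitives defined so far: the cost $c_j(x,y)$, the boundary $B=b_n/r_1$ and the offline optimum $\mathrm{OPT}(y)=\min\{r_1y,\,b_n\}$. Shops are passed as plain lists of renting and buying prices; by the assumed orderings, $r_1=\min_j r_j$ and $b_n=\min_j b_j$.
\begin{verbatim}
def cost(r_j, b_j, x, y):
    """c_j(x, y): rent at rate r_j until min(x, y); pay b_j only
    if the buy at time x actually happens (i.e., y >= x)."""
    return r_j * y if y < x else r_j * x + b_j

def boundary_and_opt(rents, buys):
    """Return B = b_n / r_1 and OPT(y) = min(r_1 * y, b_n)."""
    r1, bn = min(rents), min(buys)
    return bn / r1, (lambda y: min(r1 * y, bn))
\end{verbatim}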
The objective of the consumer is to minimize the worst-case competitive ratio, i.e., to choose a strategy $\mathbf{p} \in \calP$ that solves the problem
\begin{eqnarray*}
&\mathrm{minimize}& \max_{y>0} \left\{\frac{C(\mathbf{p},y)}{\mathrm{OPT}(y)} \right\} \\
&\textrm{subject to}& \mathbf{p} \in \calP
\end{eqnarray*}
which is equivalent to the following:
\begin{eqnarray}
&\mathrm{minimize}& \lambda \label{problem:MSR1}\\
&\textrm{subject to}& \frac{C(\mathbf{p}, y)}{r_1 y} \leq \lambda \nonumber\\
&& \sum_{j=1}^n \int_{0}^\infty p_j(x) \ud x = 1 \nonumber\\
&& p_j(x) \geq 0 \ \ \forall x\in [0,+\infty)\cup\{+\infty\}\nonumber\\
&& \forall y\in (0,+\infty)\cup\{+\infty\}, \forall j \in [n] \nonumber
\end{eqnarray}
\subsubsection{Simplifying the Zero-sum Game}\label{sec:simplify}
In this section, we show that the game can be greatly simplified and the action set for both the consumer and nature can be reduced. Specifically, nature prefers the strategy $y = +\infty$ to any other strategy $y' > B$. For the consumer, for any $j \in [n]$, she prefers the strategy $(j,B)$ to any other strategy $(j,x')$ where $x' > B$.
\begin{lemma}
\label{lemma:MSRstspace}
For nature, any strategy $y \in [B,+\infty)$ is dominated. While for the consumer, any strategy $(j,x)$ is dominated, in which $x \in (B,+\infty)\cup\{+\infty\}, \forall j \in [n]$.
\end{lemma}
\begin{proof}
Recall the cost $c_j(x,y)$ is defined as follows:
\begin{displaymath}
c_j(x,y) =
\begin{cases}
r_j y, & y < x\\
r_j x + b_j, & y \geq x
\end{cases}
\end{displaymath}
Thus for any fixed $(j,x)$, $c_j(x,y)$ is a non-decreasing function of $y$. Further, from (\ref{offlineOpt: MSR}), we can see that the offline optimal cost is unchanged when $y\geq B$. Thus, for any $y \geq B$ it holds that
\begin{displaymath}
\frac{c_j(x,y)}{b_n} \leq \lim_{y\rightarrow +\infty}\frac{c_j(x,y)}{b_n} = \frac{r_j x + b_j}{b_n}
\end{displaymath}
Therefore, any strategy of nature that includes $y \ge B$ is dominated by the strategy of never stopping the consumer.
Now for the consumer, for any shop $j \in \{1,\cdots, n\}$, and any $x' \in (B,+\infty)\cup\{+\infty\}$, it holds that
\begin{displaymath}
c_j(B,y) - c_j(x',y) \leq 0, \quad \forall y \in (0,B)\cup \{ +\infty\}
\end{displaymath}
Therefore, any strategy of the consumer that includes buying at time $x'$ in any shop is dominated by the strategy of buying at $B$ in the same shop.
\end{proof}
From this lemma, the consumer's buying time is restricted in $[0,B]$. Note that for any $(j,x)$ in which $x \in [0,B]$, it holds that
\begin{displaymath}
\frac{c_j(x,B)}{\mathrm{OPT}(B)} = \frac{c_j(x,+\infty)}{\mathrm{OPT}(+\infty)}
\end{displaymath}
Therefore, the action set of nature $\Psi_n$ can be reduced to $\Psi_n = \{y \in (0,B]\}$.
Similarly, in the strategy space of the consumer $\calP$, that of nature $\calQ$, the expected cost $C(\mathbf{p},y)$ and the competitive ratio $R(\mathbf{p},\mathbf{q})$, we can replace $+\infty$ by $B$.
\emph{Comments on $B$}: recall that the boundary $B$ is defined as $\frac{\min\{b_i\}}{\min\{r_i\}} = \frac{b_n}{r_1}$ in \texttt{MSR}, while this value is $\frac{b_j}{r_j}$ if shop $j$ is the only shop in \texttt{SR}. For instance, if only shop $n$ appears in \texttt{SR}, then the consumer will never consider buying at any time $x > \frac{b_n}{r_n}$. However, in \texttt{MSR}, the consumer may want to assign positive probability to the strategy of buying at time $x>\frac{b_n}{r_n}$ in shop $n$ (since $r_1 < r_n$). The difference between these two cases is due to the fact that in \texttt{MSR}, the consumer has the global information of all the shops and the offline optimum is to rent at shop 1 at the cost of $r_1$ per unit time until the total cost reaches the minimum buying price $b_n$, whereas in \texttt{SR}, the benchmark rents at the cost of $r_n\geq r_1$ per unit time until $b_n$.
With the above results, problem (\ref{problem:MSR1}) can now be reduced to the following:
\begin{align}
\mathrm{minimize}~& ~~~~~~~~~~\lambda \label{problem:MSR2}\\
\textrm{subject to}& ~~~~\frac{C(\mathbf{p}, y)}{r_1 y} \leq \lambda \tag{\ref{problem:MSR2}a}\\
& ~~~~\sum_{j=1}^n \int_{0}^B p_j(x) \ud x = 1 \tag{\ref{problem:MSR2}b}\\
& ~~~~p_j(x) \geq 0 \tag{\ref{problem:MSR2}c}\\
& ~~~~\forall x \in [0,B], \forall y \in (0,B], \forall j \in [n] \tag{\ref{problem:MSR2}d}
\end{align}
We will show that the optimal strategy of the consumer results in exact equality in (\ref{problem:MSR2}a) in the next subsection.
\subsection{Optimal Strategy of the Consumer}
In this subsection, we look into the optimal solution $\mathbf p^*$ for (\ref{problem:MSR2}). In short, $\mathbf p^*$ yields the same expected utility for nature whenever nature chooses to stop. In other words, given $\mathbf p^*$, any pure strategy of nature yields the same utility for both the consumer and nature. Moreover, at any time $x$, the consumer assigns positive buying probability to exactly one of the shops, say shop $j$, and $j$ is decreasing as $x$ increases. Finally, we can see that for any shop $j$ and the time interval in which the consumer chooses to buy at shop $j$, the density function $p_j(x)$ is $\alpha_j e^{r_jx/b_j}$, where $\alpha_j$ is some constant to be specified later.
We now state our first theorem that summarizes the structure of the optimal strategy.
\begin{theorem}
\label{theorem:MSR}
The optimal solution $\mathbf p^*$ satisfies the following properties:
\begin{itemize}
\item[(a)] There exists a constant $\lambda$, such that $\forall y \in (0,B]$,
\begin{displaymath}
\frac{C(\mathbf{p}^*,y)}{r_1 y} = \lambda
\end{displaymath}
\item[(b)] There exist $n+1$ breakpoints: $d_1,d_2,\cdots,d_{n+1}$, such that $B=d_1\ge d_2\ge\cdots\ge d_n\ge d_{n+1}=0$, and $\forall j\in [n]$, we have
\begin{equation*}
p_j^*(x)=
\begin{cases}
\alpha_j e^{r_jx/b_j},&x \in (d_{j+1},d_j)\\
0,& otherwise
\end{cases}
\end{equation*}
in which $\alpha_j$ satisfies that
\begin{displaymath}
\alpha_j b_j e^{r_jd_{j}/b_j} = \alpha_{j-1} b_{j-1} e^{r_{j-1}d_{j}/b_{j-1}} \quad \forall j = 2,\cdots, n
\end{displaymath}
\end{itemize}
\end{theorem}
In the following, we prove property (a) by Lemma~\ref{lemma:MSRconstant} and property (b) by Lemmas~\ref{lemma:MSRfinite}--\ref{lemma:AlphaRelation}. All proof details can be found in Appendix A.
\begin{lemma}
\label{lemma:MSRconstant}
$\forall y \in (0,B]$, $\mathbf{p}^*$ satisfies that
\begin{equation}
\frac{C(\mathbf{p}^*,y)}{r_1 y} = \lambda \label{ratioRelation:MSR2}
\end{equation}
\end{lemma}
From the above lemma, the problem (\ref{problem:MSR2}) is thus equivalent to the following:
\begin{eqnarray}
&\mathrm{minimize}& \lambda \label{problem:MSR3}\\
&\textrm{subject to}& (\ref{ratioRelation:MSR2}),(\ref{problem:MSR2}b),(\ref{problem:MSR2}c),(\ref{problem:MSR2}d) \nonumber
\end{eqnarray}
Here is some intuition for \texttt{MSR}: In the extreme case where the buying time $x$ is sufficiently small, the consumer prefers shop $n$ to any other shop, since $b_n$ is the minimum buying price. As $x$ increases, the renting cost weighs more and the skier gradually chooses shops with lower rent yet higher buying price. In the other extreme case, when the skier decides to buy at a time $x$ close to $B$, shop 1 may be the best place since it has the lowest rent. Thus, in the optimal strategy, the interval $[0,B]$ may be partitioned into several sub-intervals, and in each interval the consumer chooses to buy at one and only one shop. The following two lemmas formally show that the above intuition is indeed the case.
\begin{lemma}
\label{lemma:MSRfinite}
$\forall j \in [n]$, we have $p^*_j(0)<+\infty$, and $\forall x \in (0,B]$, $p^*_j(x)< \frac{2b_1r_1}{b_n^2}$.
\end{lemma}
\begin{lemma}
\label{lemma:moveP MSR}
In the optimal strategy $\mathbf{p^*}$, there exists $n+1$ breakpoints $B=d_{1}\ge d_{2}\ge\cdots\ge d_{n+1} = 0$, which partition $[0,B]$ into $n$ sub-intervals, such that $\forall j=1,\cdots,n$, $\forall x \in (d_{j+1},d_{j})$, $p_j^*(x)>0 $ and $p_i^*(x)=0$ for any $i\neq j$.
\end{lemma}
\begin{proof} (sketch)
It suffices to show that $\forall x\in (0,B)$, $\forall \epsilon>0$, if there exists some $j$ such that $\int_{x-\epsilon}^{x}p_j^*(t)\ud t> 0$, then $\forall j'>j, x'\ge x$, we must have $\int_{x'}^{B} p_{j'}^*(t)\ud t=0$. We use reductio ad absurdum to prove this proposition.
We first show that if there exist some $j'>j, x'>x, \epsilon>0$ such that $\int_{x-\epsilon}^{x}p_j^*(t)\ud t>0$ and $\int_{x'}^{x'+\epsilon}p_{j'}^*(t)\ud t>0$, then there exist two intervals $(x_1,x_1+\epsilon_0)\subseteq(x-\epsilon,x)$ and $(x_2,x_2+\epsilon_0)\subseteq(x',x'+\epsilon)$, such that $$\int_{0}^{\epsilon_0} \min\{p_j^*(x_1+\theta),p_{j'}^*(x_2+\theta)\} \ud \theta>0$$
We next move some suitable buying probability of $p_{j'}^*$ from $(x_2,x_2+\epsilon_0)$ to $(x_1,x_1+\epsilon_0)$ for shop $j'$, and correspondingly move some buying probability of $p_j^*$ from $(x_1,x_1+\epsilon_0)$ to $(x_2,x_2+\epsilon_0)$ for shop $j$. We thus obtain a new strategy $\mathbf{p^1}$. We show that $\forall y\in(0,B]$, $\mathbf{p^1}$ is no worse than $\mathbf{p^*}$, and $\forall y\in(x_1,B]$, $\mathbf{p^1}$ is strictly better than $\mathbf{p^*}$, which yields a contradiction.
\end{proof}
\vspace{-2mm}
The lemma explicitly specifies the order of the shops in the optimal strategy: as $x$ increases, the index of the shop to which the consumer assigns positive density decreases. Based on this lemma, for any $j \in [n], x \in (d_{j+1},d_j)$, multiplying both sides of (\ref{ratioRelation:MSR2}) by $r_1 y$ and differentiating twice with respect to $y$, we have
\begin{equation} \label{Pdiffequation1}
b_j\frac{\ud p_j^*(x)}{\ud x}=r_j p_j^*(x)\quad \forall x \in (d_{j+1},d_{j})
\end{equation}
Solving this differential equation, we obtain the optimal solution as follows\footnote{Because $p_j(x)$ is finite, we set $p_j(d_i) = 0$ for all $i,j\in [n]$, which does not affect the expected cost at all.}:
\begin{equation}\label{PFormSolution}
p_j^*(x)=
\begin{cases}
\alpha_j e^{r_jx/b_j},& x \in (d_{j+1},d_j)\\
0,& otherwise
\end{cases}
\end{equation}
where $\alpha_j$ is some constant. The relationship between $\alpha_j$ and $\alpha_{j-1}$ is described in the following lemma:
\begin{lemma}
\label{lemma:AlphaRelation}
\begin{equation} \label{alphaRelation1}
\alpha_j b_j e^{r_jd_{j}/b_j} = \alpha_{j-1} b_{j-1} e^{r_{j-1}d_{j}/b_{j-1}} \quad \forall j = 2,\cdots, n
\end{equation}
\end{lemma}
\subsection{Computing the Optimal Strategy}\label{sec:computing}
In this section we propose a linear time algorithm to compute the optimal strategy for the consumer. First we show the relationship between the competitive ratio $\lambda$ and $\alpha_1$ by the following lemma:
\begin{lemma}
\label{lemma:MSRconratio}
For any strategy $\mathbf{p}$ which satisfies property (b) in Theorem~\ref{theorem:MSR}, it holds that
\begin{displaymath}
\frac{C(\mathbf p,y)}{r_1 y} = \alpha_1 \frac{b_1}{r_1} e^{\frac{r_1}{b_1}B},\quad\forall y\in(0,B]
\end{displaymath}
\end{lemma}
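As a quick sanity check (not needed for the development), consider $n=1$: then $B=b_1/r_1$, the normalization $\int_0^{B}\alpha_1 e^{r_1x/b_1}\ud x=1$ gives $\alpha_1=\frac{r_1}{b_1(e-1)}$, and the above lemma yields $\lambda=\alpha_1\frac{b_1}{r_1}e=\frac{e}{e-1}$, i.e., we recover the classic randomized bound for \texttt{SR}~\cite{karlin1994competitive}.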
From the above lemma, we know that minimizing $\lambda$ is equivalent to minimizing $\alpha_1$. Therefore, problem (\ref{problem:MSR3}) is now equivalent to the following:
\begin{eqnarray}
&\mathrm{minimize}& \alpha_1 \\
&\textrm{subject to}& \sum_{j=1}^n \alpha_j \frac{b_j}{r_j}\left(e^{\frac{r_j}{b_j}d_{j}}-e^{\frac{r_j}{b_j}d_{j+1}}\right) = 1\label{PalphaNormolise}\\
&& \alpha_j b_j e^{r_j d_{j}/b_j} = \alpha_{j-1} b_{j-1} e^{r_{j-1}d_{j}/b_{j-1}}, \forall j = 2,\cdots, n \nonumber\\
&& \alpha_j > 0,\quad \forall j\in [n]\nonumber\\
&& B=d_1\ge d_2\ge\cdots\ge d_n\ge d_{n+1}=0 \nonumber
\end{eqnarray}
where (\ref{PalphaNormolise}) is computed directly from (\ref{problem:MSR2}b).
In Theorem~\ref{theorem:MSR}, if we know $(d_1,d_2,\cdots,d_{n+1})$, then we can see that $\alpha_j$ is proportional to $\alpha_1$. Therefore, we can get a constant $\Omega_j$ such that $\Omega_j\alpha_1=\int_{d_{j+1}}^{d_j}p_j(x)dx$ since the breakpoints are known. Finally we can get a constant $\Omega=\sum_{j=1}^{n}\Omega_j$ such that $\Omega\alpha_1=\sum_{j=1}^{n}(\int_{0}^{B}p_j(x)dx)$. Using the fact that $\sum_{j=1}^{n}(\int_{0}^{B}p_j(x)dx)=1$, we can easily solve $\alpha_1$, all the $\alpha_j$ and the whole problem.
Therefore, the computation of $\mathbf{p}$ reduces to computing $\{d_1, d_2,\cdots,d_{n+1}\}$. Notice that $d_1\equiv B, d_{n+1}\equiv 0$.
In this case, we treat the problem from another perspective. We first fix $\alpha_1$ to be 1. Then, without considering the constraint (\ref{PalphaNormolise}), we compute the optimal breakpoints $(d_1, d_2,\cdots,d_{n+1})$ so as to maximize $\sum_{j=1}^{n} (\int_{0}^{B} p_j(x)dx)$. Denote the optimal value of this problem by $\Omega$. We then normalize all the probability functions, i.e., reset each $\alpha_j$ to be $\alpha_j/\Omega$. By Lemma~\ref{lemma:MSRconratio}, the ratio $\lambda$ is proportional to $\alpha_1$, which is fixed at first and normalized at last. Hence, maximizing $\Omega$ is equivalent to minimizing $\lambda$. Notice that all the probability functions in the remainder of Section~\ref{sec:computing} are unnormalized, with $\alpha_1=1$.
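The following Python sketch (ours) spells out this normalization step: given the breakpoints $(d_1,\cdots,d_{n+1})$, it fixes $\alpha_1=1$, propagates the $\alpha$-relation of Lemma~\ref{lemma:AlphaRelation}, normalizes by $\Omega$, and reads off $\lambda$ from Lemma~\ref{lemma:MSRconratio}. Arrays are 1-indexed internally, with \texttt{d[1]}$\,=B$ and \texttt{d[n+1]}$\,=0$.
\begin{verbatim}
import math

def strategy_from_breakpoints(rents, buys, d):
    """Recover the normalized alpha_j and the ratio lambda from
    the breakpoints d (1-indexed: d[1] = B, d[n+1] = 0)."""
    n = len(rents)
    r = [0.0] + list(rents)             # 1-indexed, as in the text
    b = [0.0] + list(buys)
    alpha = [0.0] * (n + 1)
    alpha[1] = 1.0                      # fixed first, normalized below
    for j in range(2, n + 1):           # alpha-relation between shops
        alpha[j] = alpha[j-1] * (b[j-1] / b[j]) * math.exp(
            (r[j-1]/b[j-1] - r[j]/b[j]) * d[j])
    omega = sum(alpha[j] * (b[j]/r[j]) *
                (math.exp(r[j]*d[j]/b[j]) - math.exp(r[j]*d[j+1]/b[j]))
                for j in range(1, n + 1))
    alpha = [x / omega for x in alpha]  # total buying probability is 1
    lam = alpha[1] * (b[1]/r[1]) * math.exp(r[1]*d[1]/b[1])
    return alpha, lam
\end{verbatim}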
In the following two subsections (Sections~\ref{sec:compdn} and \ref{sec:compdj}), we present the intuition and ideas behind how our algorithm computes the breakpoints. In Section~\ref{sec:msralg}, we formally state the algorithm and prove its optimality and complexity.
\subsubsection{Computing $d_n$}\label{sec:compdn}
To facilitate further calculations, we denote $P_{j}$ to be the probability sum of shop $j$ to shop $n$, i.e.,
$$P_{j}\triangleq\sum_{\tau=j}^{n} (\int_{0}^{B} p_\tau(x)\ud x)=\sum_{\tau=j}^{n} (\int_{0}^{d_j} p_\tau(x)\ud x)$$
Now we just need to maximize $P_1$ since by definition $P_1=\Omega$. To compute some breakpoint $d_j$, we assume that all the breakpoints $\{d_i:i\ne j\}$ are fixed. Since breakpoints $d_1,d_2,\cdots, d_{j-1}$ are fixed, parameters $\alpha_1,\alpha_2,\cdots,\alpha_{j-1}$ are constants. Therefore, $\sum_{\tau=1}^{j-2} (\int_{0}^{B}p_\tau(x) dx)$, part of the probability sum, is a constant and we just need to maximize the rest of the sum which is $P_{j-1}$.
First we consider how to compute $\arg\max_{d_n} P_{n-1}(d_n)$ when given $d_1,\cdots, d_{n-1}$, where
$$P_{n-1}(d_n)=\alpha_n \int_{0}^{d_n} e^\frac{r_nx}{b_n}dx+\alpha_{n-1}\int_{d_n}^{d_{n-1}} e^\frac{r_{n-1}x}{b_{n-1}}dx$$
Notice that $\alpha_{n-1}$ is a constant but $\alpha_n$ depends on $d_n$. From Lemma~\ref{lemma:AlphaRelation} we know that:
$$\alpha_n=\alpha_{n-1} b_{n-1} e^{(r_{n-1}/b_{n-1}-r_n/b_n)d_n}/b_n$$
The following lemma shows the concavity of $P_{n-1}(d_n)$:
\begin{lemma}
\label{lemma:dnconcave}
$P_{n-1}(d_n)$ is a strictly concave function.
\end{lemma}
Notice that $P'_{n-1}(d_n)>0$ when $d_n=0$. This implies that if $d_n<d_{n-1}$, we must have $P'_{n-1}(d_n)=0$ by concavity. Otherwise, $d_{n-1}=d_n$, which means $\forall x,p_{n-1}(x)=0$, i.e., shop $n-1$ does not exist. Thus we can delete shop $n-1$ and view shop $n-2$ as shop $n-1$. Similarly, if $d_n<d_{n-2}$, $P'_{n-2}(d_n)=0$; otherwise delete shop $n-2$ and treat shop $n-3$ as shop $n-1$. Repeat this procedure until we find some shop $k$ such that $d_n=d_{n-1}=\cdots=d_{k+1}<d_k$. Then $d_n$ is the maximizer, by the concavity established in Lemma~\ref{lemma:dnconcave}, i.e.,
$$d_n=\frac{b_n}{r_n}\ln(\frac{b_{k}r_n-b_nr_{k}}{b_n(r_n-r_{k})})$$
Notice that $d_n$ is always positive.
\subsubsection{Computing $d_j$}\label{sec:compdj}
Notice that $d_n$ is unrelated to $d_{n-1}$ if $d_n<d_{n-1}$. Therefore, we can work out all the breakpoints $d_j$ in descending order of the index $j$. Here we show how to obtain $d_j$ after $d_n,d_{n-1},\cdots,d_{j+1}$ have been obtained.
If $j=n$, we just temporarily take
$$d_n=\frac{b_n}{r_n}\ln(\frac{b_{n-1}r_n-b_nr_{n-1}}{b_n(r_n-r_{n-1})})$$
If $j\ne n$, our target becomes $\arg\max_{d_j} P_{j-1}(d_j)$. According to the definition, we have
$$P_{j-1}(d_j)=\alpha_j (D_j+\int_{0}^{d_j} e^\frac{r_jx}{b_j}dx)+\alpha_{j-1}\int_{d_j}^{d_{j-1}} e^\frac{r_{j-1}x}{b_{j-1}}dx$$
where $$D_j\triangleq -\int_{0}^{d_{j+1}}e^{r_jx/b_j}dx+\sum_{\tau=j+1}^{n}\frac{\alpha_\tau}{\alpha_j}\int_{d_{\tau+1}}^{d_{\tau}}e^{r_{\tau}x/b_{\tau}}dx\ge 0$$
Notice that the breakpoints $d_n,d_{n-1},\cdots,d_{j+1}$ are fixed and we can compute $\alpha_\tau/\alpha_j$ by the following equation, which is derived from Lemma~\ref{lemma:AlphaRelation}:
$$\alpha_\tau=\alpha_{\tau-1} b_{\tau-1} e^{(r_{\tau-1}/b_{\tau-1}-r_\tau/b_\tau)d_\tau}/b_\tau,\forall\tau\in[n]\backslash[j]$$
Therefore, $D_j$ is a constant.
It can be seen that we can compute $D_j$ recursively, i.e.,
$$D_j= \frac{\alpha_{j+1}}{\alpha_{j}}(D_{j+1}+\int_{0}^{d_{j+1}}e^{\frac{r_{j+1}}{b_{j+1}}x}\ud x -\int_{0}^{d_{j+1}}e^{\frac{r_j}{b_j}x}\ud x)$$
Also note that $\alpha_{j-1}$ is a constant but $\alpha_j$ depends on $d_j$:
$$\alpha_j=\alpha_{j-1} b_{j-1} e^{(r_{j-1}/b_{j-1}-r_j/b_j)d_j}/b_j$$
The following lemma shows that $P_{j-1}(d_j)$ is a quasi-concave function:
\begin{lemma}
\label{lemma:GConcave}
If $D_jr_j/b_j\ge 1$, we always have $P'_{j-1}(d_j)<0$; if $D_jr_j/b_j< 1$, $P''_{j-1}(d_j)<0$, i.e., $P_{j-1}(d_j)$ is strictly concave.
\end{lemma}
Similar to the computation of $d_n$, if $D_jr_j/b_j\ge 1$, then we always have $P'_{j-1}(d_j)<0$ and the optimal $d_j$ is $d_{j+1}$. Hence we delete shop $j$ and treat shop $j-1$ as shop $j$, then recompute $d_{j+1}$ and let $d_j=d_{j+1}$; otherwise $P_{j-1}(d_j)$ is concave and we temporarily take the maximizer:
$$d_j=\frac{b_j}{r_j}\ln(\frac{(b_{j-1}r_j-b_jr_{j-1})(1-D_jr_j/b_j)}{b_j(r_j-r_{j-1})})$$
Here if the temporary $d_j$ is no larger than $d_{j+1}$, it means that the optimal solution is $d_{j+1}$ because of the constraints $d_{j+1}\le d_j\le d_{j-1}$. So we have $d_j=d_{j+1}$ which means that $\forall x,p_j(x)=0$. Therefore, we delete shop $j$ and treat shop $j-1$ as shop $j$. Then recompute $d_{j+1}$ and temporarily skip $d_j$. At last we set $d_j=d_{j+1}$.
\subsubsection{A Linear Time Algorithm}\label{sec:msralg}
Now we are ready to show our algorithm for computing the optimal strategy of the consumer.
\begin{theorem}
\label{theorem:MSRalg}
There is an algorithm for computing the unique optimal strategy of the consumer. The time and space complexity of the algorithm are linear.
\end{theorem}
We first show how to construct our algorithm, and analyze the correctness and the complexity of our algorithm later.
Since delete operations may be executed frequently in the algorithm, we use a linked list to store the shop information. Each shop is an element in this linked list, and the shop index decreases as we traverse from the head to the tail, so the head is shop $n$ and the tail is shop $1$. Since the shops are stored in a linked list, we rewrite the equations used in the algorithm accordingly:
\begin{eqnarray}
\label{CD}
D_j&=& \frac{\alpha_{prev[j]}}{\alpha_{j}}(D_{prev[j]}+\int_{0}^{d_{prev[j]}}\exp(\frac{r_{prev[j]}x}{b_{prev[j]}})dx)\nonumber\\ &&-\int_{0}^{d_{prev[j]}}\exp(\frac{r_jx}{b_j})dx \nonumber \\
&=&\frac{\alpha_{prev[j]}}{\alpha_{j}}D_{prev[j]}-\frac{b_j}{r_j}(\exp(\frac{r_j d_{prev[j]}}{b_j})-1) \nonumber\\
&&+\frac{\alpha_{prev[j]}b_{prev[j]}}{\alpha_{j}r_{prev[j]}}(\exp(\frac{r_{prev[j]}d_{prev[j]}}{b_{prev[j]}})-1)
\end{eqnarray}
Here $\frac{\alpha_{prev[j]}}{\alpha_{j}}$ is expressed as follows:
$$\frac{\alpha_{prev[j]}}{\alpha_{j}} =\frac{b_j}{b_{prev[j]}}e^{(\frac{r_j}{b_j} -\frac{r_{prev[j]}}{b_{prev[j]}})d_{prev[j]}}$$
\begin{eqnarray}
\label{BP}
d_j=\frac{b_j}{r_j} \ln(\frac{(b_{next[j]}r_j-b_jr_{next[j]})(1-D_jr_j/b_j)}{b_j(r_j-r_{next[j]})})
\end{eqnarray}
Here is the pseudocode of our algorithm:
\begin{algorithm}[htb]
\caption{MSR Algorithm}
\label{alg:CompNoEF}
\begin{algorithmic}[1]
\STATE $D_n\leftarrow 0$;
\FOR {$j\leftarrow 1 \text{ to } n$}
\STATE $next[j]\leftarrow j-1$;
\STATE $prev[j]\leftarrow j+1$;
\ENDFOR
\FOR {$j\leftarrow n \text{ to } 2$}
\STATE $ComputingBP(j)$;
\ENDFOR
\FOR {$j\leftarrow n \text{ to } 2$}
\IF {$d_j\ne$"decide later"}
\IF {$d_j>B$}
\STATE $d_j\leftarrow B$;
\ENDIF
\ELSE
\STATE $d_j\leftarrow d_{j+1}$;
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[htb]
\caption{Function $ComputingBP(j)$}
\begin{algorithmic}[1]
\IF {$j\ne n$}
\STATE Update $D_j$ according to (\ref{CD});
\ENDIF
\IF {$D_j\ge b_j/r_j$}
\STATE $d_j\leftarrow$"decide later";
\STATE $next[prev[j]]\leftarrow next[j]$;
\STATE $prev[next[j]]\leftarrow prev[j]$;
\STATE $ComputingBP(prev[j])$;
\ELSE
\STATE Compute $d_j$ according to (\ref{BP});
\IF {$d_j\le d_{j+1}$}
\STATE $d_j\leftarrow$"decide later";
\STATE $next[prev[j]]\leftarrow next[j]$;
\STATE $prev[next[j]]\leftarrow prev[j]$;
\STATE $ComputingBP(prev[j])$;
\ENDIF
\ENDIF
\end{algorithmic}
\end{algorithm}
Though the algorithm may revise the breakpoints many times, it still arrives at the exact optimal solution in the end. Since the feasible set of $(d_2,d_3,\cdots,d_n)$ is convex and the functions $P_{j-1}(\cdot)$ are always concave, we have the following properties for the optimal solution:
If $d_{j-1}>d_j$, $P'_{j-1}(d_j)\le 0$;
if $d_{j+1}<d_j$, $P'_{j-1}(d_j)\ge 0$.
So in our computation, we delete a shop when and only when the shop should be deleted in the optimal solution. Notice that the statements in Function $ComputingBP$ that mark $d_j$ as ``decide later'' and splice shop $j$ out of the linked list are what we actually do when we say we delete shop $j$. We say a shop is \emph{alive} if it has not been deleted. Based on the following lemma, we rigorously prove the correctness and complexity of this algorithm.
\begin{lemma}
After an invocation of $ComputingBP(j)$ is completed, the temporary breakpoints whose indexes are at least $j$, $\mathbf{td}=(td_n,td_{n-1},\cdots,td_j)$, are identical to the optimal solution $\mathbf{d^*}=(d_n^*,d_{n-1}^*,\cdots,d_j^*)$ if $d_j^*<d_{next[j]}^*$. Moreover, all the deletions are correct, i.e., once a shop $j$ is deleted in the algorithm, $d_j^*$ must be equal to $d_{j+1}^*$.
\label{lemma:invoc}
\end{lemma}
Here $\mathbf{td}=(td_n,td_{n-1},\cdots,td_j)$ are the temporary values of $d_n,d_{n-1},\cdots,d_j$ just after this invocation, $d_n^*,d_{n-1}^*,\cdots,d_2^*$ are the optimal breakpoints, and $next[\cdot]$ and $prev[\cdot]$ denote the current state of the linked list, not the eventual result.
\begin{proof} (Theorem~\ref{theorem:MSRalg})
We first show the correctness of the algorithm. According to Lemma~\ref{lemma:invoc}, we know that all the deletions are correct. Also, we know that $\forall j_1,j_2$ such that $1<j_1<j_2$, and that shop $j_1$ and shop $j_2$ are alive, $td_{j_1}>td_{j_2}$ when the algorithm terminates. There are 2 cases:
Case 1: $B=d_1^*>d_{prev[1]}^*$, the final solution is the unique optimal solution by Lemma~\ref{lemma:invoc}.
Case 2: $B=d_1^*=d_{prev[1]}^*$. Similar to the proof of Case 1 in Lemma~\ref{lemma:invoc}, the solution of the alive breakpoints $\mathbf{td^*}=(td_n^*,td_{next[n]}^*,\cdots,td_{prev[1]}^*,d_1^*)$, satisfying that $\forall \tau\in[n], td_\tau^*=\min\{td_\tau, d_{1}^*\}$, is the unique optimal solution.
Next, we analyze the complexity of the algorithm. Obviously, the space complexity is $O(n)$. For the time complexity, we just need to show that the algorithm invokes the function $ComputingBP$ $O(n)$ times. The main function invokes $ComputingBP$ $O(n)$ times, and a shop is deleted whenever $ComputingBP$ is invoked recursively by $ComputingBP$. Since at most $n$ shops can be deleted, the total number of invocations is $O(n)$.
\end{proof}
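For completeness, we give a compact Python sketch (ours, purely illustrative) of Algorithm~\ref{alg:CompNoEF} and Function $ComputingBP$. It follows equations (\ref{CD}) and (\ref{BP}) and the deletion rule; the only implementation choice is that the lower-bound test compares $d_j$ with the breakpoint of the alive shop below $j$ (i.e., $d_{prev[j]}$), which is exactly the value that the deferred $d_{j+1}$ eventually takes.
\begin{verbatim}
import math

def msr_breakpoints(rents, buys):
    """Compute the breakpoints d[1..n+1] of the optimal strategy (sketch).
    rents must be increasing and buys decreasing, as assumed in the text."""
    n = len(rents)
    r = [0.0] + list(rents)              # 1-indexed
    b = [0.0] + list(buys)
    B = b[n] / r[1]
    d = [None] * (n + 2)
    d[1], d[n + 1] = B, 0.0
    D = [0.0] * (n + 1)                  # D[n] = 0
    nxt = [j - 1 for j in range(n + 2)]  # shop above (smaller index)
    prv = [j + 1 for j in range(n + 2)]  # shop below (larger index)
    deleted = [False] * (n + 2)

    def computing_bp(j):
        if j != n:                       # update D[j] via equation (CD)
            p = prv[j]
            ratio = (b[j]/b[p]) * math.exp((r[j]/b[j] - r[p]/b[p]) * d[p])
            D[j] = (ratio * D[p]
                    - (b[j]/r[j]) * (math.exp(r[j]*d[p]/b[j]) - 1)
                    + ratio * (b[p]/r[p]) * (math.exp(r[p]*d[p]/b[p]) - 1))
        delete = D[j] * r[j] / b[j] >= 1
        if not delete:                   # interior maximizer, equation (BP)
            k = nxt[j]
            d[j] = (b[j]/r[j]) * math.log(
                (b[k]*r[j] - b[j]*r[k]) * (1 - D[j]*r[j]/b[j])
                / (b[j]*(r[j] - r[k])))
            delete = d[j] <= d[prv[j]]   # violates the lower bound
        if delete:
            deleted[j], d[j] = True, None            # "decide later"
            nxt[prv[j]], prv[nxt[j]] = nxt[j], prv[j]
            computing_bp(prv[j])         # re-optimize the shop below j
    for j in range(n, 1, -1):
        computing_bp(j)
    for j in range(n, 1, -1):            # resolve deferred values, cap at B
        d[j] = d[j + 1] if deleted[j] else min(d[j], B)
    return d
\end{verbatim}
Combined with the normalization sketch given earlier in this section, \texttt{msr\_breakpoints} yields the breakpoints, the normalized $\alpha_j$ and the ratio $\lambda$.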
\subsection{Optimal Strategy of Nature}
Now we briefly show the optimal strategy $\mathbf{q^*}$ of nature in the following theorem:
\begin{theorem}
\label{theorem:nature}
Suppose that the probability for nature of choosing $y=B$ (actually $y=+\infty$) is $q_B^*$. The optimal strategy $q^*$ of nature satisfies the following properties:
\begin{enumerate}
\item[(a)] $\forall j>1$, we have $\int_{d_j^{*-}}^{d_j^{*+}}q^*(y)\ud y=0$.
\item[(b)] $q^*(y)=\beta_j y e^{-\frac{r_j}{b_j}y},\quad\quad y\in(d_{j+1}^*,d_j^*)$.
\item[(c)] $\beta_1=(q_B r_1/B b_1)e^{\frac{r_1}{b_1}B}$.
\item[(d)] $\frac{b_j}{r_j}\beta_j e^{-\frac{r_j}{b_j}d_j^*}=\frac{b_{j-1}}{r_{j-1}} \beta_{j-1} e^{-\frac{r_{j-1}}{b_{j-1}}d_j^*},\quad \quad j=2,3,\ldots,n$.
\end{enumerate}
\end{theorem}
Recall that $\mathbf{d^*}$ is computed by Algorithm~\ref{alg:CompNoEF}. Thus $\forall j\in[n]$, we can compute $\beta_j/q_B$, and we can obtain $1/q_B$ from the normalization $\int_{0}^{B} q(y) \ud y=1$ after computing the sum $\sum_{\tau=1}^{n}\int_{d_{\tau+1}}^{d_\tau}q(y)/q_B \ud y$. It is then not hard to work out $\mathbf{q}^*$ after normalization.
\subsection{Including Switching Cost}
In this subsection, we consider an extension, the \emph{multi-shop ski rental with switching cost problem} (also \texttt{MSR-S}), in which we allow the consumer to switch from one shop to another at the price of an extra fee. If the consumer chooses to switch from shop $i$ to shop $j$ at any time, she has to pay an extra switching cost $c_{ij}\ge 0$. If there exists some $i \neq j$ such that $c_{ij} = +\infty$, then the consumer cannot \emph{directly} switch from shop $i$ to shop $j$.
Consider the following 2 cases:
\begin{itemize}
\item If the consumer is allowed to switch from shop to shop freely, i.e., the switching cost is always 0, she will optimally rent at the shop with the lowest renting price and buy at the shop with the lowest buying price. All she needs to decide is when to buy the skis. Thus, this problem (\texttt{MSR-S}) is reduced to the basic ski rental problem (\texttt{SR}).
\item If the switching cost is always $+\infty$, she will never switch to another shop and the \texttt{MSR-S} becomes \texttt{MSR}.
\end{itemize}
We will prove later that, even in \texttt{MSR-S}, the consumer never switches shops before the buying time.
The settings of \texttt{MSR-S} can be viewed as a directed graph $G=(V,A)$, where $V = \{1,\cdots,n\}$, and $A = \{(i,j): c_{ij}<+\infty\}$. Each arc $(i,j)\in A$ has a cost $c_{ij}$. We define a path $\mathbf{p} \subseteq G$ as a sequence of arcs. Define the cost of $\mathbf{p}$ as the summation of the costs of all arcs on $\mathbf{p}$. Note that if the consumer is allowed to switch from shop $i$ to shop $j$ $(i\neq j)$, there must be a path $\mathbf{p}$ which starts at $i$ and ends at $j$.
It is clear that if the consumer decides to switch from shop $i$ to shop $j$ ($(i,j) \in A$) at any time, she will choose the shortest path from $i$ to $j$ in the graph $G$. Denote the cost of the shortest path from $i$ to $j$ by $c_{ij}^*$. We obtain that $c_{ij}^* \leq c_{ij}$, for any $(i,j) \in A$. Moreover, for any different $i, j , k\in V$ such that $(i,j), (j,k), (i,k) \in A$, we have:
\begin{equation} \label{relation:MSR-S}
c_{ik}^* \leq c_{ij}^*+c_{jk}^*
\end{equation}
Compared to \texttt{MSR}, \texttt{MSR-S} has a much richer action set for the consumer, who is able to choose where to rent and for how long to rent at each shop.
Although we allow the consumer to switch from shop to shop as many times as she wants, the following lemma shows that the consumer will never choose to switch to another shop and continue renting, i.e. the only moment that the consumer will switch is exactly when she buys the skis.
\begin{lemma} \label{lemma:MSR-S}
Any strategy which includes switching from one shop to another shop and continuing to rent the skis is dominated.
\end{lemma}
This lemma significantly reduces the action set of the consumer to the same one as in \texttt{MSR}. Thus, if the consumer considers switching, she must switch to another shop at the buying time. Further, she chooses a shop such that the sum of the buying cost and the switching cost (if any) is minimized. Once the consumer decides to switch for buying, she will switch at most once, since the buying cost only increases otherwise. Therefore, for any strategy, let $s$ be the shop in which the consumer rents the skis right before the buying time; we define the modified buying price of $s$ as follows:
\begin{displaymath}
b_s' = \min\Big\{b_s,\ \min_{j \neq s}\{b_{j}+c^*_{sj}\}\Big\}
\end{displaymath}
Observe that once $s$ is settled, $b_s'$ is settled. As a result, \texttt{MSR-S} is reduced to \texttt{MSR}, in which for any shop $j$, the rent is still $r_j$ per unit time while the buying price $b'_j$ is $\min \{b_j, \min_{i\neq j}\{b_{i}+c^*_{ji}\}\}$.
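A minimal Python sketch (ours) of this reduction, assuming the switching costs are given as a matrix with \texttt{math.inf} for forbidden switches:
\begin{verbatim}
import math

def msr_s_to_msr(buys, switch_cost):
    """Modified buying prices b'_j = min(b_j, min_{i != j} (b_i + c*_{ji}))."""
    n = len(buys)
    c = [row[:] for row in switch_cost]
    for i in range(n):
        c[i][i] = 0.0
    for k in range(n):                   # Floyd-Warshall: shortest costs c*
        for i in range(n):
            for j in range(n):
                c[i][j] = min(c[i][j], c[i][k] + c[k][j])
    return [min(buys[j],
                min((buys[i] + c[j][i] for i in range(n) if i != j),
                    default=math.inf))
            for j in range(n)]
\end{verbatim}
The renting prices are unchanged; as before, shops that become dominated after this modification can be discarded before running Algorithm~\ref{alg:CompNoEF}.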
\section{SKI RENTAL WITH ENTRY FEE}
In this section, we discuss another extension of \texttt{MSR}, the \emph{multi-shop ski rental with entry fee included problem} (\texttt{MSR-E}). In this problem, all the settings are the same as in \texttt{MSR}, except that each shop has an entry fee. Once the consumer enters a shop, she pays the entry fee of this shop and cannot switch to another shop. Our goal is to minimize the worst-case competitive ratio. Notice that \texttt{MSR} can be viewed as a special case of \texttt{MSR-E} in which the entry fee of each shop is zero.
We introduce this problem not only as an extension of \texttt{MSR}, but more importantly, as a necessary step towards solving a more general extension, the \texttt{MSR-ES} problem, in the next section. We will show that \texttt{MSR-ES} can be converted into \texttt{MSR-E} with minor modifications.
\subsection{Single Shop Ski Rental with Entry Fee}
We start by briefly introducing the special case of \texttt{MSR-E} with $n=1$. The entry fee, renting price and buying price are denoted by $a\ge 0$, $r>0$ and $b>0$, respectively. Without loss of generality, we assume that $r=1$.
It can be verified that
\vspace{-1mm}
\begin{enumerate}
\item[(i)] By a dominance argument, the buying time of the consumer can be restricted to $x\in[0,b]$, and the stopping time chosen by nature to $y\in(0,b]$.
\vspace{-2mm}
\item[(ii)] For all $ y\in(0,b]$, the ratio is a constant if the consumer chooses the optimal mixed strategy.
\vspace{-2mm}
\item[(iii)] No probability mass appears in $(0,b]$.
\end{enumerate}
By calculation, we obtain the following optimal mixed strategy:
\vspace{-2mm}
\begin{itemize}
\item The probability that the consumer buys at time $x=0$ is
$p_0=a/((a+b)e-b)$.
\vspace{-2mm}
\item The probability density function that the consumer buys at time $x\in(0,b]$ is
$p(x)=\frac{\exp(x/b)}{b(e-\frac{b}{a+b})}$.
\vspace{-2mm}
\item The competitive ratio is $\frac{e}{e-\frac{b}{a+b}}$.
\end{itemize}
Note that the biggest difference from \texttt{MSR} is that $p_0$ may be a probability mass, which means that the consumer may have non-zero probability of buying at the initial time.
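These closed forms are easy to verify numerically; the sketch below (ours) evaluates them and checks that the expected cost divided by $\mathrm{OPT}(y)=a+y$ is indeed constant in $y$ (for example, $a=2$, $b=1$ gives a ratio of about $1.14$).
\begin{verbatim}
import math

def single_shop_entry_fee(a, b):
    """Optimal strategy for one shop with entry fee a, rent r = 1, buy b."""
    e = math.e
    p0 = a / ((a + b) * e - b)                  # mass of buying at x = 0
    dens = lambda x: math.exp(x / b) / (b * (e - b / (a + b)))
    ratio = e / (e - b / (a + b))
    return p0, dens, ratio

def expected_ratio(a, b, y, steps=100000):
    """Numerically integrate E[cost] / (a + y) for a stopping time y in (0, b]."""
    p0, dens, _ = single_shop_entry_fee(a, b)
    cost = p0 * (a + b)                         # buy immediately at x = 0
    h = b / steps
    for k in range(steps):
        x = (k + 0.5) * h
        cost += (a + x + b if x <= y else a + y) * dens(x) * h
    return cost / (a + y)
\end{verbatim}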
\subsection{Analysis of MSR-E}
In this problem, assume that there are $n$ shops in total. For any shop $j\in [n]$, the entry fee, renting price and buying price of shop $j$ are $a_j\ge 0$, $r_j>0$ and $b_j>0$, respectively. Similar to the procedures in \texttt{MSR}, we use a tuple $(j,x)$ in which $j\in[n],x\in[0,+\infty)\cup\{+\infty\}$ to denote an action for the consumer and a number $y\in(0,+\infty)\cup\{+\infty\}$ for nature.
Without loss of generality, in this problem, we assume that
\begin{itemize}
\item $r_1\le r_2\le\cdots\le r_n$;
\item $\forall i,j,~a_i<a_j+b_j$;
\item $\forall i<j$, $a_i>a_j$ or $a_i+b_i>a_j+b_j$.
\end{itemize}
The second condition holds because shop $i$ is dominated by shop $j$ if $a_i\ge a_j+b_j$. For the third condition, note that $r_i\le r_j$ since $i<j$; hence shop $j$ is dominated by shop $i$ if we also have $a_i\le a_j$ and $a_i+b_i\le a_j+b_j$.
Denote $B$ as follows:
\begin{eqnarray*}
&\mathrm{minimize}& B\\
&\mathrm{subject~to}& \forall i,~a_i+Br_i\ge\min_j(a_j+b_j)
\end{eqnarray*}
Similar to Lemma~\ref{lemma:MSRstspace} in \texttt{MSR}, we reduce the action sets for both players by the following lemma:
\begin{lemma}
\label{lemma:EFstspace}
For nature, any action $y \in [B,+\infty)$ is dominated. For the consumer, any action $(j,x)$ is dominated, in which $x \in (B,+\infty)\cup\{+\infty\}, j \in [n]$.
\end{lemma}
Similar to \texttt{MSR}, the consumer's action set is reduced to buying time $x\in [0,B]$, and nature's action set is reduced to $\{y\in(0,B]\}$.
The strategy spaces for both the consumer and nature are identical to those in \texttt{MSR}. Similarly, we use $\mathbf{p}$ to denote a mixed strategy of the consumer and $\mathbf{p^*}$ to denote the optimal mixed strategy. If the consumer chooses mixed strategy $\mathbf{p}$ and nature chooses $y$, we define the cost function as follows:
\begin{eqnarray}
C(\mathbf{p},y) &=& \sum_{j\in[n]}\bigg(\int_0^y (a_j+r_j x + b_j) p_j(x) \ud x \nonumber\\
&& + \int_y^B (a_j+r_j y) p_j(x) \ud x\bigg)
\end{eqnarray}
We define $\mathrm{OPT}(y)$ as the offline optimal strategy when nature chooses the action $y$, i.e.
$$\mathrm{OPT}(y)=\min_j\{a_j+r_jy\},\quad y\in(0,B]$$
By \cite{decomputational}, we can compute the function $\mathrm{OPT}(y)$ in linear time. The objective of the consumer is $\min_\mathbf{p}\max_y\frac{C(\mathbf{p},y)}{\mathrm{OPT}(y)}$.
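A small Python sketch (ours) of the two quantities just introduced; the expression for $B$ uses the assumption $a_i<a_j+b_j$, under which every constraint has a positive right-hand side.
\begin{verbatim}
def msr_e_boundary_and_opt(a, r, b):
    """B = max_i (min_j(a_j + b_j) - a_i) / r_i, OPT(y) = min_j (a_j + r_j y)."""
    cheapest_buy = min(aj + bj for aj, bj in zip(a, b))
    B = max((cheapest_buy - ai) / ri for ai, ri in zip(a, r))
    opt = lambda y: min(aj + rj * y for aj, rj in zip(a, r))
    return B, opt
\end{verbatim}
For instance, $a=(80,20)$, $r=(1,2)$, $b=(110,180)$ (the parameters of the example at the end of the paper) give $\min_j(a_j+b_j)=190$ and $B=110$.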
Similar to \texttt{MSR}, we give the following lemmas:
\begin{lemma}\label{lemma:MSR-E constant}
For the optimal strategy $\mathbf{p^*}$ of the consumer, $\frac{C(\mathbf{p^*},y)}{\mathrm{OPT}(y)}$ is a constant for any $y \in (0,B]$.
\end{lemma}
\begin{lemma} \label{lemma:MSR-E finite}
$\forall x \in (0,B], p^*_j(x) < +\infty$.
\end{lemma}
The problem is formalized as follows:
\begin{align}
\textrm{minimize}& \quad\quad\quad\lambda \label{problem:MSREF1}\\
\textrm{subject to}& \quad \frac{C(\mathbf{p}, y)}{\mathrm{OPT}(y)} = \lambda,\quad\forall y \in (0,B]\tag{\ref{problem:MSREF1}a} \\
& \quad \sum_{j=1}^n \int_{0}^B p_j(x) \ud x = 1 \tag{\ref{problem:MSREF1}b}\\
& \quad p_j(x) \geq 0,\quad \forall x \in [0,B] \tag{\ref{problem:MSREF1}c}
\end{align}
Note that there may be probability mass at $x=0$. For Problem~\ref{problem:MSREF1}, we find that the optimal mixed strategy $\mathbf{p^*}$ of the consumer is also segmented.
\begin{lemma} \label{lemma:moveP MSR-E}
In the optimal mixed strategy $\mathbf{p}^*$, there exists $n+1$ breakpoints $B=d_{1}\ge d_{2}\ge\cdots\ge d_{n+1} = 0$, which partition $[0,B]$ into $n$ sub-intervals, such that $\forall j=1,\cdots,n$, $\forall x \in (d_{j+1},d_{j})$, $p_{j}^*(x) > 0$ and $p_{i}^*(x) = 0$ for any $i \neq j$.
\end{lemma}
\noindent \textbf{Remark:} Though we prove the form of the optimal solution to \texttt{MSR-E}, computing the analytical solution is very challenging. The point mass at $x=0$ makes this problem much more difficult than \texttt{MSR}, because one needs to guarantee the nonnegativity of this probability mass. Moreover, the non-differentiable points of $\mathrm{OPT}(y)$, which we call the offline breakpoints, complicate the form of the probability density function in each segment $(d_{j+1},d_j)$. In fact, we can obtain the exact analytical optimal solution when $n=2$.
\section{SKI RENTAL WITH ENTRY FEE AND SWITCHING}
Now we introduce the last extension of \texttt{MSR}, the \emph{entry fee included, switching allowed problem} (\texttt{MSR-ES}). All the settings are identical to those of \texttt{MSR-E}, except that the consumer is allowed to switch at any time. When a consumer enters or switches to a shop, she pays the entry fee of the shop. For instance, if a consumer enters shop 1 at first, then switches from shop 1 to shop 2 and returns to shop 1 at last, she pays the entry fee of shop 1 twice and the entry fee of shop 2 once.
Similar to \texttt{MSR-E}, there exist $n$ shops. The entry fee, renting price, buying price of shop $j$ are denoted by $a_j\ge 0, r_j>0, b_j>0$, respectively. Without loss of generality, we assume that
\begin{itemize}
\item $r_1\le r_2\le\cdots\le r_n$;
\item $\forall i,j,~a_i<a_j+b_j$;
\item $\forall i<j$, $a_i>a_j$ or $a_i+b_i>a_j+b_j$.
\end{itemize}
As in the \texttt{MSR-S} case, if there exists $ i,j\in [n]$ such that $b_i> a_j+b_j$, then instead of buying in shop $i$, the consumer will switch from shop $i$ to shop $j$ to buy skis \footnote{This phenomenon is called ``switching for buying''.}. This is equivalent to setting $b_i$ to be $\min_{j\neq i}\{ a_j+b_j\}$. Therefore, without loss of generality, we assume that
\begin{itemize}
\item $\forall i,j,~b_i\le a_j+b_j$.
\end{itemize}
In the remainder of this section, we first define the action set and formulate our problem. Then, we show that the strategy space can be reduced by Lemmas~\ref{lemma:EAdominate}, \ref{lemma:reduceSpace} and \ref{lemma:EAconstant}. In Lemma~\ref{lemma:EAf2p}, we show that for each switching operation, the operations of the consumer before or after the switching are not important; the only things we care about are when the switching happens and which shops the switching operation involves. Thus, we can construct a virtual shop for each switching operation, and (nearly) reduce the \texttt{MSR-ES} problem to \texttt{MSR-E}. Finally, we show that \texttt{MSR-ES} has the same nice properties as \texttt{MSR-E}, which we established in Lemmas~\ref{lemma:MSR-E finite} and \ref{lemma:moveP MSR-E}.
\subsection{Notations and Analysis of MSR-ES}
\subsubsection{Reduced Strategy Space}
The action set for nature is $\{y>0\}$, defined as before. To represent the action set formally, we first introduce the operation tuple $\sigma=(i,j,x)$ to denote the switching operation from shop $i$ to shop $j$ at time $x$. As special cases, $(0,j,0)$ denotes the entering operation in which the consumer enters shop $j$ at the very beginning, and $(i,0,x)$ denotes the buying operation in which the consumer buys at shop $i$ at time $x$. An operation tuple can also be written as $(j,x)$ for short, denoting the switching operation to shop $j$ at time $x$ if $j>0$, the entering operation if $x=0$, and the buying operation at time $x$ if $j=0$.
Then, an action $\psi$ is expressed as a (possibly infinite) sequence of operation tuples:
$$\psi=\{(j_0,x_0),(j_1,x_1),(j_2,x_2),\cdots\}$$
satisfying that
\begin{itemize}
\item $0=x_0\le x_1\le x_2\le\cdots$;
\item if there exists $x\ge 0$ such that $(0,x)\in \psi$, it is the last element in $\psi$.
\end{itemize}
or the full form with the same constraints:
$$\psi=\{(0,j_0,x_0),(j_0,j_1,x_1),(j_1,j_2,x_2),\cdots\}$$
Similar to the other extensions, we reduce the action set. In this model, the definition of $B$ is the same as that of \texttt{MSR-E}:
\begin{eqnarray*}
&\mathrm{minimize}& B\\
&\mathrm{subject~to}& \forall i,~a_i+Br_i\ge\min_j(a_j+b_j)
\end{eqnarray*}
and we give the following lemma:
\begin{lemma}\label{lemma:EAdominate}
From the perspective of nature, any strategy $y\in[B,+\infty)$ is dominated. While for the consumer, any strategy in which the buying time $x\in (B,+\infty)\cup \{+\infty\}$ is dominated.
\end{lemma}
Similar to \texttt{MSR}, we reduce the consumer's buying time to the interval $[0,B]$, and nature's action set to $\{y\in(0,B]\}$.
The following lemma shows that a consumer may switch from shop $i$ to shop $j$ for renting, only when $r_i>r_j$ and $a_i<a_j$.
\begin{lemma} \label{lemma:reduceSpace}
If a strategy of the consumer: $$\psi = \{(0,j_0,x_0),(j_0,j_1,x_1),\cdots,(j_{|\psi|-2},0,x_{|\psi|-1})\}$$ satisfies any of the following conditions, then it is dominated.
\begin{itemize}
\item $\exists 0<\tau<|\psi|-1$ such that $x_{\tau-1}=x_\tau$;
\item $\exists (i,j,x)\in \psi$ such that $r_i\le r_j$ and $(j,0,x)\notin\psi$;
\item $\exists (i,j,x)\in \psi$ such that $a_i\ge a_j$ and $(j,0,x)\notin\psi$.
\end{itemize}
\end{lemma}
Here we give some intuition. In these three cases, we can construct a new action $\psi'$ by deleting one specified operation from $\psi$, and show that $\psi$ is dominated by $\psi'$.
This lemma rules out a huge amount of dominated strategies from our action set and allows us to define the operation set:
\begin{eqnarray*}
\Sigma&\triangleq&\bigg\{\sigma=(i,j,x):i,j\in[n], \ r_i>r_j,\ a_i<a_j, \\
&&x\in(0,B]\bigg\}\bigcup\bigg\{(0,j,0):j\in[n]\bigg\}\\&&\bigcup\bigg\{(j,0,x):j\in[n],x\in[0,B]\bigg\}
\end{eqnarray*}
Thus, we only need to consider such an action set:
\begin{eqnarray*}
\Psi_c\triangleq\bigg\{\psi&=&\{(0,j_0,x_0),(j_0,j_1,x_1),\cdots,(j_{|\psi|-2},0,x_{|\psi|-1})\}\\
&:&0=x_0<x_1<\cdots<x_{|\psi|-2}\le x_{|\psi|-1}\le B,\\
&&r_{j_0}>r_{j_1}>\cdots>r_{j_{|\psi|-2}}~,\\
&&a_{j_0}<a_{j_1}<\cdots<a_{j_{|\psi|-2}}
\bigg\}
\end{eqnarray*}
Since $r_{j_0}>r_{j_1}>\cdots>r_{j_{k}}$, we get $j_0>j_1>\cdots>{j_{k}}$ and $2\leq |\psi|\le n+1$.
\subsubsection{Mathematical Expression of the Cost, the Ratio and the Optimization Problem}
For nature's action $y\in(0,B]$ and the consumer's action $\psi=\{(j_0,x_0),\cdots,(0,x_{|\psi|-1})\}\in\Psi_c$, we define the cost $c(\psi,y)$ as follows:
\begin{align*}
c(\psi,y)\triangleq
\begin{cases}
\sum_{\tau=0}^{k-1} [a_{j_\tau}+r_{j_\tau} (x_{\tau+1}-x_{\tau})]+r_{j_{k}}(y-x_k),\\
\quad\quad\quad\quad\text{ if }\exists 0<k<|\psi|, x_{k-1}\le y<x_k;\\
\sum_{\tau=0}^{|\psi|-2} (a_{j_\tau}+r_{j_\tau} (x_{\tau+1}-x_{\tau}))+b_{j_{|\psi|-2}},\\
\quad\quad\quad\quad\text{ if }y\ge x_{|\psi|-1}.
\end{cases}
\end{align*}
For any action $\psi=\{(j_0,x_0),\cdots,(0,x_{|\psi|-1})\}$, we use $\mathbf{s}(\psi)$ to denote the order of the operations:
$$\mathbf{s}(\psi)\triangleq \{(0,j_0),(j_0,j_1),\cdots,(j_{|\psi|-2},0)\}$$
or the short form:
$$\mathbf{s}(\psi)\triangleq \{j_0,j_1,\cdots,j_{|\psi|-2},0\}$$
Further, we define $\mathcal{S}$ as the collection $\mathbf{s}(\Psi_c)$ as follows:
\begin{eqnarray*}
\mathcal{S}\triangleq\{\mathbf{s}&=&\{j_0,j_1,\cdots,j_k,0\}~:~k\ge 0,\\
&&r_{j_0}>r_{j_1}>\cdots>r_{j_{k}}~,~a_{j_0}<a_{j_1}<\cdots<a_{j_{k}}
\}
\end{eqnarray*}
Note that $j_0,j_1,\cdots,j_k\in[n]$ and $\{0\}\notin \mathcal{S}$, so the number of elements in $\mathcal{S}$ is upper bounded by $|\mathcal{S}|\le 2^n-1$.
\noindent We group all the actions in $\Psi_c$ whose $\mathbf{s}(\psi)$ are identical. Thus, we partition $\Psi_c$ into $|\mathcal{S}|$ subsets.
\noindent For any action $\psi$, let $\mathbf{x}(\psi)$ denote the sequence of the operation time, defined as follows:
$$\mathbf{x}(\psi)\triangleq(x_1,\cdots,x_{|\psi|-1})$$
For each operation order $\mathbf{s}\in \mathcal{S}$, we define $\mathcal{X}_\mathbf{s}$ as the collection $\{\mathbf{x}(\psi):\mathbf{s}(\psi)=\mathbf{s}\}$, i.e.,
$$
\mathcal{X}_\mathbf{s}\triangleq\{\mathbf{x}=(x_1,x_2,\cdots,x_{|\mathbf{s}|-1}):
0<x_{1}<\cdots<x_{|\mathbf{s}|-1}\le B\}
$$
We observe that any $ \mathbf{s}\in \mathcal{S}$ and $ \mathbf{x}\in\mathcal{X}_\mathbf{s}$ can be combined to a unique action $\psi(\mathbf{s},\mathbf{x})$. Further, we can use $c_{\mathbf{s}}(\mathbf{x},y)$ and $c(\psi({\mathbf{s}},\mathbf{x}),y)$ interchangeably.
For each $\mathbf{s}\in\mathcal{S}$, we define the probability density function $f_{\mathbf{s}}:\mathcal{X}_\mathbf{s}\rightarrow[0,+\infty)\cup \{+\infty\}$
\footnote{ $f_{\mathbf{s}}(\mathbf{x})=+\infty$ represents probability mass on $\mathbf{x}$. }
, which satisfies $$\sum_{\mathbf{s}\in\mathcal{S}}\idotsint\limits_{\mathbf{x}\in\mathcal{X}_\mathbf{s}} f_{\mathbf{s}}(\mathbf{x})\ud\mathbf{x}=1$$
Let $\mathbf{f}=\{f_\mathbf{s}:\mathbf{s}\in\mathcal{S}\}$ denote a mixed strategy for the consumer.
Given a mixed strategy $\mathbf{f}$ of the consumer and nature's choice $y$, the expected competitive ratio is defined as follows:
\begin{equation}
R(\mathbf f, y)\triangleq\frac{C(\mathbf f,y)}{\mathrm{OPT}(y)}
\end{equation}
where $\mathrm{OPT}(y) = \min_{j\in[n]}\{a_j+r_j y\}$, and
\begin{equation}
\label{def:jointExpCost}
C(\mathbf{f}, y) \triangleq \sum_{\mathbf{s}\in\mathcal{S}}\idotsint\limits_{\mathbf{x}\in\mathcal{X}_\mathbf{s}}c_{\mathbf{s}}(\mathbf{x},y) f_{\mathbf{s}}(\mathbf{x})\ud\mathbf{x}
\end{equation}
\noindent The objective of the consumer is $\min_{\mathbf{f}} \max_{y} R(\mathbf{f}, y)$. The following lemma proves that $\forall y\in(0,B]$, $R(\mathbf{f^*},y)$ is a constant in which $\mathbf{f^*}$ is an optimal mixed strategy.
\begin{lemma}\label{lemma:EAconstant}
If $\mathbf{f^*}$ is an optimal solution of the problem $\arg\min_{\mathbf{f}} \max_{y} R(\mathbf{f}, y)$, then there exists a constant $\lambda$ such that $\forall y\in(0,B]$, $R(\mathbf{f^*}, y)=\lambda$.
\end{lemma}
The formalized optimization problem is as follows:
\begin{align}
\mathrm{minimize}& ~~\lambda \label{problem:EA1}\\
\mathrm{subject~ to}& ~~\frac{C(\mathbf f,y)}{\mathrm{OPT}(y)} = \lambda,\forall y \in (0,B]\tag{\ref{problem:EA1}a}\\
& ~~\sum_{\mathbf{s}\in\mathcal{S}}\idotsint\limits_{\mathbf{x}\in\mathcal{X}_\mathbf{s}} f_{\mathbf{s}}(\mathbf{x})\ud\mathbf{x}=1 \tag{\ref{problem:EA1}b}\\
& ~~f_{\mathbf s}(\mathbf x) \geq 0,\forall \mathbf{s}\in\mathcal{S}\tag{\ref{problem:EA1}c}
\end{align}
\subsection{Reduction to MSR-E}
For a mixed strategy $\mathbf{f}$, we define the probability density function of an operation $\sigma=(i,j,x)$ as follows:
\begin{equation} \label{def:pdfForEvent}
\mathbf{p}^{(\mathbf{f})}(\sigma) \triangleq \sum_{\mathbf{s}\in\mathcal{S}:(i,j)\in\mathbf{s}}~\idotsint\limits_{\mathbf{x}_{-\{x\}}:\sigma\in\psi(\mathbf{s},\mathbf{x})} f_{\mathbf{s}}(\mathbf{x})\ud(\mathbf{x}_{- \{x\}})
\end{equation}
where $\mathbf{x}_{-\{x\}}$ is the vector $\mathbf{x}$ in which the element corresponding to $x$ is eliminated. Here $p_{(i,j)}^{(\mathbf{f})}(x)$ can also be viewed as a marginal probability density function.
Also the p.d.f of an operation can be expressed in another form:
$$p_{(i,j)}^{(\mathbf{f})}(x)\triangleq \mathbf{p}^{(\mathbf{f})}((i,j,x))$$
Then we give the following lemma:
\begin{lemma}\label{lemma:EAf2p}
For any 2 mixed strategies $\mathbf{f_1},\mathbf{f_2}$ for the consumer, we have $C(\mathbf{f_1},y)=C(\mathbf{f_2},y)$ for all $y\in(0,B]$ if $\mathbf{p}^{(\mathbf{f_1})}(\sigma)=\mathbf{p}^{(\mathbf{f_2})}(\sigma)$ for all $\sigma\in\Sigma$, i.e., for a mixed strategy $\mathbf{f}$, we only care about its marginal ${\mathbf{p}}^{(\mathbf{f})}(\sigma)$.
\end{lemma}
\begin{figure*}[htb]
\subfigure{
\begin{minipage}[b]{0.32\linewidth}
\centering
\includegraphics[width=1\textwidth]{figure2.eps}
\caption{PDF of the strategies with switching actions, i.e., function $f_\mathbf{s}^*(x)$ when $\mathbf{s}=\{2,1,0\}$. $x_1$ is the switching time and $x_2$ is the buying time.}
\end{minipage}
\quad
\begin{minipage}[b]{0.32\linewidth}
\centering
\includegraphics[width=1\textwidth]{figure1.eps}
\caption{PDF of the strategies without switching actions, i.e., function $f_\mathbf{s}^*(x)$ when $|s|=2$. Blue: $f_\mathbf{s}^*(x)$ when $\mathbf{s}=\{2,0\}$; Red: $f_\mathbf{s}^*(x)$ when $\mathbf{s}=\{1,0\}$.}
\end{minipage}
\quad
\begin{minipage}[b]{0.32\linewidth}
\centering
\includegraphics[width=1\textwidth]{figure3.eps}
\caption{PDF of the virtual shops, i.e., function $p_{(i,j)}^{(\mathbf{f})}(x)$ when $(i,j)$ is a virtual shop. Green: ${p_{(2,1)}^*}(x)$; Blue: ${p_{(2,0)}^*}(x)$; Red: ${p_{(1,0)}^*}(x)$.}
\end{minipage}
}
\end{figure*}
Therefore, the objective of the problem shifts from finding the optimal $\mathbf{f}$ to finding the optimal $\mathbf{p}$. Note that for the switching operation from shop $i$ to shop $j$, we do not care which action $\psi$ it belongs to; the only thing that matters is when this switching operation happens. This is similar to the one-shop case, in which we only care about when the consumer decides to buy. Thus, we regard each switching pair $(i,j)$ as a virtual shop. Among these $O(n^2)$ virtual shops, no further switching occurs. In this way, the \texttt{MSR-ES} problem becomes almost the same as \texttt{MSR-E}.
Now we describe our settings for the virtual shops. For all $i,j\in[n]$ such that $a_i<a_j$ and $r_i>r_j$, we define the virtual shop $(i,j)$ with entry fee $a_{(i,j)}=a_i-a_j$, renting price $r_{(i,j)}=r_i-r_j$ and buying price $b_{(i,j)}=a_j$. We regard the switching time from $i$ to $j$ as the buying time in virtual shop $(i,j)$. As a special case, the prices of virtual shop $(j,0)$ are the same as those of the real shop $j$. Through this setting, it is not hard to verify that for any action $\psi$ and any $0\leq y\leq B$, the cost function $c(\psi,y)$ is exactly the sum of the costs in the corresponding virtual shops. Similar to a real shop, we define the cost for each virtual shop $(i,j)$:
\begin{eqnarray*}
C_{(i,j)}(\mathbf{p^{(f)}},y)&\triangleq&\int_{0}^{y}(a_{(i,j)}+r_{(i,j)}x+b_{(i,j)})p_{(i,j)}^{(\mathbf{f})}(x)\ud x\\&&+\int_{y}^{B}(a_{(i,j)}+r_{(i,j)}y)p_{(i,j)}^{(\mathbf{f})}(x)\ud x
\end{eqnarray*}
Now we are ready to formalize \texttt{MSR-ES} by the following theorem:
\begin{theorem}
\label{theorem:MSR-ES}
The optimization problem for the consumer can be formalized as follows:
\begin{align}
\mathrm{minimize}&~~ \lambda \label{eqn:f2q}\\
\mathrm{subject ~to}&~~ \frac{C(\mathbf{f},y)}{\mathrm{OPT}(y)}=\lambda \tag{\ref{eqn:f2q}a}\\
&~~C(\mathbf{f},y)={\sum_{(i,j)\in[n]^2:a_i<a_j,r_i>r_j}C_{(i,j)}(\mathbf{p^{(f)}},y)}\nonumber\\
&\quad \quad \quad ~~+{\sum_{j\in[n]}C_{(j,0)}(\mathbf{p^{(f)}},y)}, \quad \forall y\in [0,B] \tag{\ref{eqn:f2q}b}\\
&~~{\sum\limits_{j\in[n]}\int_{0}^{B}p_{(j,0)}^\mathbf{(f)}(x)\ud x=1} \tag{\ref{eqn:f2q}c}\\
&\sum\limits_{j\in[n]:a_j<a_i,r_j>r_i}\int_{y}^{B}p_{(j,i)}^\mathbf{(f)}(x)\ud x \le \int_{y}^{B}p_{(i,0)}^\mathbf{(f)}(x)\ud x \nonumber\\
&+\sum\limits_{j\in[n]:a_i>a_j,r_i<r_j}\int_{y}^{B}p_{(i,j)}^\mathbf{(f)}(x)\ud x,~~\forall i\in[n],y\in(0,B] \tag{\ref{eqn:f2q}d}
\end{align}
\end{theorem}
Thus, \texttt{MSR-ES} can be regarded as \texttt{MSR-E} with $O(n^2)$ shops. The difference is that the buying probabilities, summed over the virtual shops, may be larger than 1. Fortunately, the nice properties of \texttt{MSR-E} still hold for \texttt{MSR-ES}.
\begin{lemma}\label{lemma:MSR-ES seg}
Lemma~\ref{lemma:MSR-E finite} and \ref{lemma:moveP MSR-E} still hold for the virtual shops in the \texttt{MSR-ES} problem.
\end{lemma}
As in the previous extensions, the probability density function of the virtual shops is segmented and each segment is an exponential function. The consumer assigns positive buying probability to exactly one virtual shop at any time. As the buying time increases, she follows the virtual-shop order in which the ratio between buying price and renting price is increasing.
In the three figures, we give a simple example with $n=2$ in order to make Lemma~\ref{lemma:MSR-ES seg} easier to understand. We approximate the optimal strategy through the discrete model, and the figures show the p.d.f.'s under the optimal strategy; it can be seen that Lemma~\ref{lemma:MSR-ES seg} is verified. Since Lemma~\ref{lemma:MSR-ES seg} is proved formally, we do not give more complicated examples. The parameters of the two shops are: $a_1=80, r_1=1, b_1=110, a_2=20, r_2=2, b_2=180$.
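To make the virtual-shop construction concrete, the following short Python sketch (an illustration only, not the code used to produce the figures) builds the virtual shops for the two-shop example above and evaluates $\mathrm{OPT}(y)=\min_{j\in[n]}\{a_j+r_j y\}$:
\begin{verbatim}
# Illustration only: virtual shops of MSR-ES for the two-shop example above,
# and the offline optimum OPT(y) = min_j {a_j + r_j * y}.
a = {1: 80, 2: 20}      # entry fees
r = {1: 1,  2: 2}       # renting prices
b = {1: 110, 2: 180}    # buying prices

virtual = {}
for i in a:
    for j in a:
        if a[i] < a[j] and r[i] > r[j]:          # switching pair (i, j)
            virtual[(i, j)] = {"entry": a[i] - a[j],
                               "rent":  r[i] - r[j],
                               "buy":   a[j]}
for j in a:                                       # shops (j, 0) keep the real prices
    virtual[(j, 0)] = {"entry": a[j], "rent": r[j], "buy": b[j]}

def OPT(y):
    return min(a[j] + r[j] * y for j in a)

print(sorted(virtual))        # [(1, 0), (2, 0), (2, 1)]
print(OPT(50), OPT(100))      # 120 180
\end{verbatim}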
\section{Conclusions}
In this paper, we consider the multi-shop ski rental problem (\texttt{MSR}) and its extensions (\texttt{MSR-S}, \texttt{MSR-E}, and \texttt{MSR-ES}), in which there are multiple shops and the consumer wants to minimize the competitive ratio.
For each problem, we prove that in the optimal mixed strategy of the consumer, she assigns positive buying probability to \emph{exactly one} shop at any time. The shop order is strongly related to the ratio between buying price and renting price, even when an entry fee is involved. Further, for the basic problem (\texttt{MSR}), we derive a linear-time algorithm for computing the optimal strategy of the consumer. For \texttt{MSR-S}, we prove that under the optimal mixed strategy, the consumer only switches to another shop at the buying time.
In problems \texttt{MSR-E} and \texttt{MSR-ES}, we show that the optimal strategy can be computed if the breakpoints are known. Similar to the basic problem (\texttt{MSR}), we conjecture that the quasi-concave property also holds for these two variants. Further, we conjecture that there exists an iterative algorithm using a gradient descent technique, which might converge to the optimal solution.
\bibliographystyle{abbrv}
\section{Introduction}\vspace*{-.5em}
\label{sec1}
There are several situations
where the ordered random
sample,
\be \label{1.1}
\Xsoo,
\ee
corresponding to the i.i.d.\
random sample, $\Xs$, is not fully reported, because the
values of interest are the higher (or lower), up-to-the-present,
record values based on the initial sample, i.e., the partial
maxima (or minima) sequence
\be \label{1.2}
\Xspo,
\ee
where $X^*_{j:j}=\max\{ X^*_1,\ldots,X^*_j \}$.
A situation of this kind commonly appears in athletics, when only
the best performances are recorded.
Throughout this article we assume that the i.i.d.\ data arise from a
location-scale family,
\[
\{ F((\cdot-\theta_1)/\theta_2); \, \theta_1\in\R, \theta_2 >0 \},
\]
where the d.f.\ $F(\cdot)$ is free of parameters and has finite,
non-zero variance (so that $F$ is non-degenerate), and we
consider the partial maxima BLUE (best linear unbiased
estimator) for both parameters $\theta_1$ and $\theta_2$. This
consideration is along the lines of the classical Lloyd's (1952)
BLUEs, the only difference being that the linear estimators are
now based on the ``insufficient sample'' (\ref{1.2}), rather than
(\ref{1.1}), and this fact implies a substantial reduction on the
available information. Tryfos and Blackmore (1985) used
this kind of data to predict future records in athletic events,
Samaniego and Whitaker (1986, 1988) estimated the population characteristics,
while Hofmann and Nagaraja (2003) investigated the amount of
Fisher Information contained in such data; see also
Arnold, Balakrishnan \& Nagaraja (1998, Section 5.9).
A natural question concerns the consistency of the resulting
BLUEs, since too severe a loss of information would presumably result
in inconsistency (see the end of Section 6).
Thus, our main focus is on
conditions guaranteeing consistency,
and the main result
shows that consistency does hold for the scale parameter BLUE for a wide class of distributions.
Specifically, it is
shown that the variance of the BLUE
is at most of order $O(1/\log n)$, when
$F(x)$ has a log-concave
density $f(x)$
and satisfies the Von Mises-type condition (\ref{5.11}) or
(\ref{6.1}) (cf.\ Galambos (1978)) on the right end-point of its
support (Theorem \ref{theo5.2}, Corollary \ref{cor6.1}). The
result is applicable to several commonly used distributions, like
the Power distribution (Uniform), the Weibull (Exponential), the
Pareto, the Negative Exponential, the Logistic,
the Extreme Value (Gumbel)
and the Normal (see
section \ref{sec6}).
A consistency result for the partial maxima BLUE of the location parameter
would also be desirable here, but it seems that the proposed technique (based on
partial maxima spacings, section \ref{sec4}) does not suffice for
deriving it. Therefore, the consistency for the location parameter
remains an open problem in general, and it is just highlighted by
a particular application to the Uniform location-scale family
(section \ref{sec3}).
The proof of the main result
depends on the fact that, under mild conditions, the partial
maxima spacings have non-positive correlation. The class of distributions
having this
property is called NCP (negative correlation for partial maxima
spacings).
It is shown here that any log-concave distribution
with finite variance belongs to NCP
(Theorem
\ref{theo4.2}).
In particular, if a distribution function
has a density which is
either log-concave or non-increasing
then it is a member of NCP.
For ordinary spacings, similar sufficient conditions
were shown by
Sarkadi (1985) and
Bai, Sarkar \& Wang (1997) --
see also David and Nagaraja (2003, pp.\ 187--188), Burkschat (2009),
Theorem 3.5 --
and will be referred to as
``S/BSW-type conditions''.
In every experiment where the i.i.d.\ observations arise in a sequential manner,
the partial maxima data describe the
best performances in a natural way,
as the experiment goes on,
in contrast to the first $n$
record values, $R_1,R_2,\ldots,R_n$, which are obtained
from an inverse sampling scheme -- see, e.g., Berger and Gulati (2001).
Due to the very rare
appearance of records, in the latter case it is implicitly assumed
that the sample size is, roughly, $e^n$.
This has a similar effect in the partial maxima setup, since
the number of different values
is about $\log n$, for large sample size $n$.
Clearly, the total amount of information in the partial
maxima sample is the same as that given by the (few)
record values augmented by record times.
The essential difference between these models (records / partial
maxima) in statistical applications
is highlighted, e.g., in Tryfos and Blackmore (1985),
Samaniego and Whitaker (1986, 1988), Smith (1988), Berger and Gulati (2001) and
Hofmann and Nagaraja
(2003) -- see also Arnold, Balakrishnan \& Nagaraja
(1998, Chapter 5).
\section{Linear estimators based on partial maxima}\vspace*{-.5em}
\setcounter{equation}{0} \label{sec2}
Consider the random
sample $\Xs$ from $F((x-\theta_1)/\theta_2)$ and the corresponding
partial maxima sample $\Xspo$
($\theta_1\in\R$ is
the location parameter and
$\theta_2>0$ is the scale parameter; both parameters are unknown).
Let also $\X$ and $\Xpo$ be the
corresponding
samples from the completely specified d.f.\ $F(x)$,
that generates the location-scale family. Since
\[
( \Xsp )' \law ( \theta_1+\theta_2 X_{1:1}, \theta_1+\theta_2 X_{2:2},
\ldots ,\theta_1+\theta_2 X_{n:n} )',
\]
a linear estimator based on
partial maxima has the form
\[
L=\sum_{i=1}^{n} c_i X^*_{i:i} \law \theta_1 \sum_{i=1}^{n} c_i
+ \theta_2 \sum_{i=1}^{n} c_i X_{i:i},
\]
for some constants $c_i$, $i=1,2,\ldots,n$.
Let $\bbb{X}=(\Xp)'$ be the random vector of partial maxima from
the known d.f.\ $F(x)$, and use the notation
\be \label{2.1}
\bbb{\mu}=\E [\bbb{X}], \hspace*{2ex} \bbb{\Sigma}= \D[\bbb{X}]
\hspace*{1ex}\mbox{and}\hspace*{1ex} \bbb{E}=\E[ \bbb{X}
\bbb{X}'],
\ee
where $\D[\bbb{\xi}]$ denotes the dispersion matrix
of any random vector $\bbb{\xi}$. Clearly,
\[
\bbb{\Sigma}=\bbb{E}-\bbb{\mu} \bbb{\mu}',\hspace*{2ex} \bbb{\Sigma}>0,
\hspace*{2ex} \bbb{E}>0.
\]
The linear estimator $L$
is called BLUE for $\theta_k$
($k=1,2$) if it is unbiased for $\theta_k$ and its variance is
minimal, while it is called BLIE (best linear invariant estimator)
for $\theta_k$ if it is invariant for $\theta_k$ and its mean
squared error, $\MSE[L]=\E[L-\theta_k]^2$, is minimal.
Here ``invariance'' is understood in the sense of location-scale invariance as defined, e.g., in Shao (2005, p.\ xix).
Using the above notation it is easy to verify the following formulae for the BLUEs
and their variances. They are the partial maxima analogues of Lloyd's (1952) estimators
and, in the case of partial minima, have been obtained
by Tryfos and Blackmore (1985), using least squares. A proof is attached
here for easy reference.
\begin{prop}
\label{prop2.1}
The partial maxima {\rm BLUEs} for $\theta_1$ and for $\theta_2$ are,
respectively,
\be \label{2.2}
L_1=-\frac{1}{\Delta}\bbb{\mu}'\bbb{\Gamma}\bbb{X}^*\ \ \mbox{and} \ \
L_2=\frac{1}{\Delta}{\bf 1}'\bbb{\Gamma}\bbb{X}^*,
\ee
where $\bbb{X}^*=(\Xsp )'$,
$\Delta=({\bf 1}'\bbb{\Sigma}^{-1}{\bf 1}) (\bbb{\mu}'\bbb{\Sigma}^{-1}
\bbb{\mu})-({\bf 1}'\bbb{\Sigma}^{-1}\bbb{\mu})^2>0$,
${\bf 1}=(1,1,\ldots,1)'\in \R^n$ and
$\bbb{\Gamma}=\bbb{\Sigma}^{-1}({\bf 1}\bbb{\mu}' -\bbb{\mu}{\bf
1}')\bbb{\Sigma}^{-1}$. The corresponding variances are
\be
\label{2.3}
\Var [L_1] =
\frac{1}{\Delta}(\bbb{\mu}'\bbb{\Sigma}^{-1}\bbb{\mu}) \theta_2^2
\ \ \mbox{and}\ \
\Var [L_2] =
\frac{1}{\Delta}({\bf 1}'\bbb{\Sigma}^{-1}{\bf 1}) \theta_2^2.
\ee
\end{prop}
\begin{pr}{Proof}
Let $\bbb{c}=(c_1,c_2,\ldots,c_n)'\in \R^n$ and
$L=\bbb{c}'\bbb{X}^*$. Since
$\E[L]=(\bbb{c}'\bbb{1})\theta_1+(\bbb{c}'\bbb{\mu})\theta_2$, $L$
is unbiased for $\theta_1$ iff $\bbb{c}'{\bf 1}=1$ and
$\bbb{c}'\bbb{\mu}=0$, while it is unbiased for $\theta_2$ iff
$\bbb{c}'{\bf 1}=0$ and $\bbb{c}'\bbb{\mu}=1$. Since
$\Var[L]=(\bbb{c}'\bbb{\Sigma}\bbb{c})\theta_2^2$, a simple
minimization argument for $\bbb{c}'\bbb{\Sigma}\bbb{c}$ with
respect to $\bbb{c}$, using Lagrange multipliers, yields the
expressions (\ref{2.2}) and (\ref{2.3}). $\Box$
\end{pr}
\bigskip
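To make formulas (\ref{2.2}) and (\ref{2.3}) concrete, the following short Python/\texttt{numpy} sketch (an illustration only, not part of the derivation) computes the BLUE weights and the variance factors from given $\bbb{\mu}$ and $\bbb{\Sigma}$:
\begin{verbatim}
# Sketch of (2.2)-(2.3): BLUE weights and variances (up to theta_2^2),
# given the mean vector mu and dispersion matrix Sigma of X_{1:1},...,X_{n:n}.
import numpy as np

def partial_maxima_blue(mu, Sigma):
    n = len(mu)
    one = np.ones(n)
    Sinv = np.linalg.inv(Sigma)
    Delta = (one @ Sinv @ one) * (mu @ Sinv @ mu) - (one @ Sinv @ mu) ** 2
    Gamma = Sinv @ (np.outer(one, mu) - np.outer(mu, one)) @ Sinv
    w1 = -(mu @ Gamma) / Delta          # L_1 = w1 @ X^*   (location)
    w2 = (one @ Gamma) / Delta          # L_2 = w2 @ X^*   (scale)
    var1 = (mu @ Sinv @ mu) / Delta     # Var[L_1] / theta_2^2
    var2 = (one @ Sinv @ one) / Delta   # Var[L_2] / theta_2^2
    return w1, w2, var1, var2

# unbiasedness checks: w1 @ one = 1, w1 @ mu = 0, w2 @ one = 0, w2 @ mu = 1
\end{verbatim}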
Similarly, one can derive the partial maxima version of Mann's (1969)
best linear invariant estimators (BLIEs), as follows.
\begin{prop}
\label{prop2.2}
The partial maxima {\rm BLIEs} for $\theta_1$ and for $\theta_2$ are,
respectively,
\be \label{2.4}
T_1=\frac{{\bf 1}'\bbb{E}^{-1}\bbb{X}^*}{{\bf 1}'\bbb{E}^{-1}{\bf 1}}
\ \ \mbox{and} \ \
T_2=\frac{{\bf 1}'\bbb{G}\bbb{X}^*}{{\bf 1}'\bbb{E}^{-1}{\bf 1}},
\ee
where $\bbb{X}^*$ and ${\bf 1}$ are as in Proposition {\rm
\ref{prop2.1}} and
$\bbb{G}=\bbb{E}^{-1}({\bf 1}\bbb{\mu}'-\bbb{\mu}{\bf 1}')\bbb{E}^{-1}$.
The corresponding mean squared errors are
\be \label{2.5}
\MSE
[T_1] = \frac{\theta_2^2}{{\bf 1}'\bbb{E}^{-1}{\bf 1}} \ \
\mbox{and}\ \
\MSE [T_2] =
\left( 1-\frac{D}{{\bf 1}'\bbb{E}^{-1}{\bf 1}}\right) \theta_2^2,
\ee
where $D=({\bf 1}'\bbb{E}^{-1}{\bf 1}) (\bbb{\mu}'\bbb{E}^{-1}
\bbb{\mu})-({\bf 1}'\bbb{E}^{-1}\bbb{\mu})^2>0$.
\end{prop}
\begin{pr}{Proof}
Let $L=L(\bbb{X}^*)=\bbb{c}'\bbb{X}^*$ be an arbitrary linear
statistic. Since
$L(b\bbb{X}^*+a{\bf 1})=a (\bbb{c}'{\bf 1})+b
L(\bbb{X}^*)$ for arbitrary $a\in\R$ and $b>0$, it follows that $L$ is invariant for $\theta_1$
iff $\bbb{c}'{\bf 1}=1$ while it is invariant for $\theta_2$ iff $\bbb{c}'{\bf 1}=0$.
Both (\ref{2.4}) and
(\ref{2.5}) now follow by a simple minimization argument, since in
the first case we have to minimize the mean squared error
$\E[L-\theta_1]^2=(\bbb{c}'\bbb{E}\bbb{c})\theta_2^2$ under
$\bbb{c}'{\bf 1}=1$, while in the second one, we have to minimize
the mean squared error $\E[L-\theta_2]^2=
(\bbb{c}'\bbb{E}\bbb{c}-2\bbb{\mu}'\bbb{c}+1)\theta_2^2$ under
$\bbb{c}'{\bf 1}=0$.
$\Box$
\end{pr}
\bigskip
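A companion sketch (again Python, illustration only) implements formulas (\ref{2.4}) and (\ref{2.5}), with $\bbb{E}=\bbb{\Sigma}+\bbb{\mu}\bbb{\mu}'$:
\begin{verbatim}
# Sketch of (2.4)-(2.5): BLIE weights and mean squared errors (up to theta_2^2).
import numpy as np

def partial_maxima_blie(mu, Sigma):
    n = len(mu)
    one = np.ones(n)
    E = Sigma + np.outer(mu, mu)
    Einv = np.linalg.inv(E)
    G = Einv @ (np.outer(one, mu) - np.outer(mu, one)) @ Einv
    denom = one @ Einv @ one
    w1 = (Einv @ one) / denom                      # T_1 = w1 @ X^*
    w2 = (one @ G) / denom                         # T_2 = w2 @ X^*
    D = denom * (mu @ Einv @ mu) - (one @ Einv @ mu) ** 2
    return w1, w2, 1.0 / denom, 1.0 - D / denom    # MSE[T_1], MSE[T_2] / theta_2^2
\end{verbatim}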
The above formulae (\ref{2.2})-(\ref{2.5}) are well-known for
order statistics and records -- see
David (1981, Chapter 6),
Arnold, Balakrishnan \&
Nagaraja (1992, Chapter 7; 1998, Chapter 5),
David and Nagaraja (2003, Chapter 8). In the
present setup, however, the meaning of $\bbb{X}^*$, $\bbb{X}$,
$\bbb{\mu}$, $\bbb{\Sigma}$ and $\bbb{E}$ is completely different.
In the case of order statistics, for example, the vector
$\bbb{\mu}$, which is the mean vector of the order statistics
$\bbb{X}=(\Xo)'$ from the known distribution $F(x)$, depends on the
sample size $n$, in the sense that the components of the vector
$\bbb{\mu}$ completely change with $n$. In the present case of
partial maxima, the first $n$ entries of the vector $\bbb{\mu}$,
which is the mean vector of the partial maxima $\bbb{X}=(\Xp)'$ from
the known distribution $F(x)$, remain constant for all sample sizes
$n'$ greater than or equal to $n$. Similar observations apply for
the matrices $\bbb{\Sigma}$ and $\bbb{E}$. This fact seems to be
quite helpful for the construction of tables giving the means,
variances and covariances of partial maxima for samples up to a size
$n$. It should be noted, however, that even when $F(x)$ is
absolutely continuous with density $f(x)$ (as is usually the case
for location-scale families), the joint distribution of $(X_{i:i},
X_{j:j})$ has a singular part, since
$\Pr[X_{i:i}=X_{j:j}]=i/j>0$, $i<j$.
Nevertheless, there exist simple
expectation and covariance
formulae (Lemma \ref{lem2.2}).
As in the order statistics setup, the actual application of
formulae (\ref{2.2}) and (\ref{2.4}) requires closed forms for
$\bbb{\mu}$ and $\bbb{\Sigma}$, and also to invert the $n\times n$
matrix $\bbb{\Sigma}$. This can be done only for very particular
distributions (see next section, where we apply the results to the
Uniform distribution). Therefore, numerical methods should be
applied in general. This, however, has a theoretical cost: It
is not a trivial fact to verify consistency of the estimators, even in the
classical case of order statistics.
The main purpose of this article
is in verifying consistency for the partial maxima BLUEs.
Surprisingly, it seems that a solution of this problem is not
well-known, at least to our knowledge, even for the classical BLUEs
based on order statistics. However, even if the result of the
following lemma is known, its proof has an independent interest,
because it proposes alternative (to BLUEs)
$n^{-1/2}$--consistent unbiased linear estimators
and provides the intuition for the derivation of the main
result of the present article.
\begin{lem}\label{lem2.1}
The classical {\rm BLUEs} of $\theta_1$ and $\theta_2$,
based on order statistics from a location-scale family,
created by a distribution $F(x)$
with finite non-zero variance, are consistent. Moreover,
their variance is at most of order $O(1/n)$.
\end{lem}
\begin{pr}{Proof}
Let $\bbb{X}^*=(\Xso)'$ and $\bbb{X}=(\Xo)'$ be the ordered samples
from $F((x-\theta_1)/\theta_2)$ and $F(x)$, respectively, so
that $\bbb{X}^*\law \theta_1 {\bf 1}+\theta_2 \bbb{X}$.
Also write $X_1^*,X_2^*,\ldots,X_n^*$ and
$X_1,X_2,\ldots,X_n$ for the corresponding i.i.d.\ samples.
We consider the linear estimators
\[
S_1=\overline{X}^*=\frac1n\sum_{i=1}^n X_i^*\law \theta_1+
\theta_2 \overline{X}
\]
and
\[
S_2=\frac{1}{n(n-1)}\sum_{i=1}^n\sum_{j=1}^n |X^*_j-X^*_i|\law
\frac{\theta_2}{n(n-1)}\sum_{i=1}^n\sum_{j=1}^n |X_j-X_i|,
\]
i.e., $S_1$ is the sample mean and $S_2$ is a multiple of
Gini's statistic. Observe that
both $S_1$ and $S_2$ are linear estimators in order statistics.
[In particular, $S_2$ can be written as
$S_2=4(n(n-1))^{-1}\sum_{i=1}^n (i-(n+1)/2)X_{i:n}^*$.]
Clearly,
$\E(S_1)=\theta_1+\theta_2\mu_0$, $\E(S_2)=\theta_2 \tau_0$,
where $\mu_0$ is the mean, $\E(X_1)$, of the distribution $F(x)$
and $\tau_0$ is the positive finite parameter
$\E|X_1-X_2|$. Since $F$ is known, both $\mu_0\in\R$ and $\tau_0>0$
are known constants,
and we can construct the linear estimators
$U_1=S_1-(\mu_0/\tau_0)S_2$ and $U_2=S_2/\tau_0$. Obviously,
$\E(U_k)=\theta_k$, $k=1,2$, and both $U_1$, $U_2$ are linear
estimators of the form $T_n=(1/n)\sum_{i=1}^n \delta(i,n) X^*_{i:n}$,
with $|\delta(i,n)|$ uniformly bounded for all $i$ and $n$.
If $\sigma_0^2$ is the (assumed finite) variance of $F(x)$,
it follows that
\begin{eqnarray*}
\Var[T_n] & \leq & \displaystyle \frac{1}{n^2} \sum_{i=1}^n \sum_{j=1}^n
|\delta(i,n)||\delta(j,n)| \Cov(X^*_{i:n},X^*_{j:n}) \\
& \leq & \displaystyle \frac{1}{n^2}
\left(\max_{1\leq i \leq n} |\delta(i,n)|\right)^2
\Var(X^*_{1:n}+X^*_{2:n}+\cdots+X^*_{n:n}) \\
& = & \displaystyle \frac{1}{n}
\left(\max_{1\leq i \leq n} |\delta(i,n)|\right)^2
\theta_2^2
\sigma_0^2=O(n^{-1})\to 0,\ \ \ \mbox{as}\ \ n\to\infty,
\end{eqnarray*}
showing that $\Var(U_k)\to 0$, and thus $U_k$ is consistent
for $\theta_k$, $k=1,2$. Since
$L_k$ has minimum variance among all linear unbiased estimators,
it follows that $\Var(L_k)\leq \Var (U_k)\leq O(1/n)$,
and the result follows. $\Box$
\end{pr}
\bigskip
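To illustrate the estimators $U_1$ and $U_2$ used in the proof, the following Python sketch simulates from the Normal location-scale family (our choice of example), for which $\mu_0=0$ and $\tau_0=\E|X_1-X_2|=2/\sqrt{\pi}$:
\begin{verbatim}
# Sketch: the n^{-1/2}-consistent estimators U_1, U_2 from the proof of Lemma 2.1,
# for a Normal(theta_1, theta_2^2) sample; mu_0 = 0 and tau_0 = 2/sqrt(pi).
import numpy as np

rng = np.random.default_rng(0)
theta1, theta2, n = 3.0, 2.0, 5000
x = theta1 + theta2 * rng.standard_normal(n)

mu0, tau0 = 0.0, 2.0 / np.sqrt(np.pi)
S1 = x.mean()                                     # sample mean
xs = np.sort(x)
i = np.arange(1, n + 1)
S2 = 4.0 / (n * (n - 1)) * np.sum((i - (n + 1) / 2.0) * xs)   # Gini-type statistic

U1 = S1 - (mu0 / tau0) * S2      # unbiased for theta_1
U2 = S2 / tau0                   # unbiased for theta_2
print(U1, U2)                    # close to theta_1 = 3 and theta_2 = 2
\end{verbatim}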
The above lemma implies that the mean
squared error of the BLIEs, based on order statistics, is at most
of order $O(1/n)$, since they have smaller mean squared error than
the BLUEs, and thus they are also consistent. More important is
the fact that, with the technique used in Lemma \ref{lem2.1}, one
can avoid all computations involving means, variances and
covariances of order statistics, and it does not need to invert
any matrix, in order to prove consistency (and in order to obtain
$O(n^{-1})$-consistent estimators). Arguments of similar
kind will be applied in section \ref{sec5}, when the problem of
consistency for the partial maxima BLUE of $\theta_2$ will be
taken under consideration.
We now turn to the partial maxima case.
Since actual application of partial maxima BLUEs and BLIEs
requires the computation of the first two moments
of $\bbb{X}=(\Xp)'$ in terms of the
completely specified d.f.\ $F(x)$, the following formulae
are to be mentioned here (cf.\ Jones and Balakrishnan (2002)).
\begin{lem} \label{lem2.2} Let $\Xpo$ be the partial maxima sequence
based on an arbitrary d.f.\ $F(x)$. \\
{\rm (i)} For $i\leq j$, the joint d.f.\ of $(X_{i:i},X_{j:j})$ is
\be\label{2.6}
F_{X_{i:i},X_{j:j}}(x,y)= \left\{
\begin{array}{lll}
F^j(y) & \mbox { if } & x\geq y, \\
F^i(x)F^{j-i}(y) & \mbox { if } & x\leq y.
\end{array}
\right. \ee {\rm (ii)} If $F$ has finite first moment, then
\be
\label{2.7}
\mu_i=\E[X_{i:i}]=\int_0^{\infty} (1-F^i(x))\ dx -
\int_{-\infty}^0 F^i(x) dx
\ee
is finite for all $i$. \\
{\rm (iii)} If $F$ has finite second moment, then
\be\label{2.8}
\sigma_{ij}=\Cov[X_{i:i},X_{j:j}]= \int\int_{-\infty<x<y<\infty}
F^i(x) (F^{j-i}(x)+F^{j-i}(y)) (1-F^i (y)) \ dy \ dx
\ee is finite
and non-negative for all $i\leq j$. In particular,
\be\label{2.9}
\sigma_{ii}=\sigma_i^2=\Var[X_{i:i}]= 2
\int\int_{-\infty<x<y<\infty} F^i(x) (1-F^i (y)) \ dy \ dx.
\ee
\end{lem}
\begin{pr}{Proof} (i) is trivial and (ii) is well-known. (iii)
follows from Hoeffding's identity
\[
\Cov[X,Y]=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}
(F_{X,Y}(x,y)-F_X(x)F_Y(y)) \ dy \ dx
\]
(see Hoeffding (1940), Lehmann (1966), Jones and Balakrishnan
(2002), among others), applied to $(X,Y)=(X_{i:i},X_{j:j})$ with
joint d.f.\ given by (\ref{2.6}) and marginals $F^i(x)$ and
$F^j(y)$. $\Box$
\end{pr}
\bigskip
Formulae (\ref{2.7})-(\ref{2.9}) enable the computation
of means, variances and covariances of partial maxima, even in the
case where the distribution $F$ does not have a density. Tryfos and
Blackmore (1985) obtained an expression for the covariance of partial
minima involving means and covariances of order statistics from
lower sample sizes.
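When closed forms are not available, formulas (\ref{2.7})--(\ref{2.9}) can be evaluated numerically. The following Python sketch (illustration only) does this for the standard exponential d.f., for which the second integral in (\ref{2.7}) vanishes:
\begin{verbatim}
# Sketch: numerical evaluation of (2.7)-(2.9) for F(x) = 1 - exp(-x), x > 0.
import numpy as np
from scipy import integrate

F = lambda x: 1.0 - np.exp(-x)

def mu(i):
    # E[X_{i:i}]; the integral of F^i over (-inf, 0) is zero here
    return integrate.quad(lambda x: 1.0 - F(x) ** i, 0, np.inf)[0]

def sigma(i, j):
    # Cov[X_{i:i}, X_{j:j}] for i <= j, formula (2.8), integrating over 0 < x < y
    g = lambda y, x: F(x) ** i * (F(x) ** (j - i) + F(y) ** (j - i)) * (1 - F(y) ** i)
    return integrate.dblquad(g, 0, np.inf, lambda x: x, lambda x: np.inf)[0]

print(mu(1), mu(2))          # 1.0, 1.5  (harmonic numbers)
print(sigma(1, 1), sigma(1, 2))
\end{verbatim}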
\section{A tractable case: the Uniform location-scale family}\vspace*{-.5em}
\setcounter{equation}{0}\label{sec3}
Let $\Xs \sim U(\theta_1,\theta_1+\theta_2)$, so that $(\Xsp)'\law
\theta_1 {\bf 1} + \theta_2 \bbb{X}$, where $\bbb{X}=(\Xp)'$ is
the partial maxima sample from the standard Uniform distribution.
Simple calculations, using (\ref{2.7})-(\ref{2.9}), show that the
mean vector $\bbb{\mu}=(\mu_i)$ and the dispersion matrix
$\bbb{\Sigma}=(\sigma_{ij})$ of $\bbb{X}$ are given by
(see also Tryfos and Blackmore (1985), eq.\ (3.1))
\[
\mu_i=\frac{i}{i+1}\ \ \mbox{and}\ \ \sigma_{ij}=\frac{i}{(i+1)(j+1)(j+2)}
\ \ \mbox{for}\ \ 1\leq i\leq j\leq n.
\]
Therefore, $\bbb{\Sigma}$ is a patterned matrix of the
form $\sigma_{ij}=a_i b_j$
for $i\leq j$, and thus, its inverse is tridiagonal; see Graybill
(1969, Chapter 8), Arnold, Balakrishnan \& Nagaraja (1992,
Lemma 7.5.1). Specifically,
\[
\bbb{\Sigma}^{-1}=\left(
\begin{array}{cccccc}
\gamma_1 & -\delta_1 & 0 & \ldots & 0 & 0\\
-\delta_1 & \gamma_2 & -\delta_2 & \ldots & 0 & 0\\
0 & -\delta_2 & \gamma_3 & \ldots & 0 & 0\\
\vdots & \vdots& \vdots & \vdots & \vdots & \vdots \\
0 & 0 & 0 & \ldots & \gamma_{n-1} & -\delta_{n-1} \\
0 & 0 & 0 & \ldots & -\delta_{n-1} & \gamma_n
\end{array}
\right)
\]
where
\begin{eqnarray*}
\displaystyle \gamma_i=\frac{4(i+1)^3(i+2)^2}{(2i+1)(2i+3)}, \ \
\delta_i =\frac{(i+1)(i+2)^2(i+3)}{2i+3}, \ \ i=1,2,\ldots,n-1, &
\\
\mbox{ and } \ \
\displaystyle \gamma_n=\frac{(n+1)^2 (n+2)^2}{2n+1}. \hspace*{30ex} &
\end{eqnarray*}
Setting $a(n)={\bf 1}' \bbb{\Sigma}^{-1}{\bf 1}$,
$b(n)=({\bf 1}-\bbb{\mu})' \bbb{\Sigma}^{-1}({\bf 1}-\bbb{\mu})$
and $c(n)=({\bf 1}-\bbb{\mu})' \bbb{\Sigma}^{-1}{\bf 1}$, we get
\begin{eqnarray*}
a(n) & = & \frac{(n+1)^2 (n+2)^2}{2n+1}-2\sum_{i=1}^{n-1}
\frac{(i+1)(i+2)^2(3i+1)}{(2i+1)(2i+3)}=n^2+o(n^2), \\
b(n) & = & \frac{(n+2)^2}{2n+1}-2\sum_{i=1}^{n-1}
\frac{(i-1)(i+2)}{(2i+1)(2i+3)}=\frac12 \log n +o(\log n), \\
c(n) & = & \frac{(n+1) (n+2)^2}{2n+1}-\sum_{i=1}^{n-1}
\frac{(i+2)(4i^2+7i+1)}{(2i+1)(2i+3)}=n+o(n).
\end{eqnarray*}
Applying (\ref{2.3}) we obtain
\begin{eqnarray*}
\Var[L_1] & = & \frac{a(n)+b(n)-2c(n)}{a(n)b(n)-c^2(n)}\theta_2^2
=\left(\frac{2}{\log n}+o\left( \frac{1}{\log n}\right)\right)\theta_2^2,
\ \ \mbox{and}\ \ \\
\Var[L_2]& = & \frac{a(n)}{a(n)b(n)-c^2(n)}\theta_2^2
=\left(\frac{2}{\log n}+o\left( \frac{1}{\log n}\right)\right)\theta_2^2.
\end{eqnarray*}
The preceding computation shows that, for the Uniform location-scale family,
the partial maxima BLUEs are consistent for both the location and
the scale parameters, since their variance goes to zero with the
speed of $2/\log n$. This fact, as expected, contradicts the
behavior of the ordinary order statistics BLUEs, where the
speed of convergence is of order $n^{-2}$ for the variance
of both Lloyd's estimators. However, the comparison is
quite unfair here, since Lloyd's estimators are based on the
complete sufficient statistic $(X^*_{1:n},X^*_{n:n})$, and thus
the variance of order statistics BLUE is minimal among all
unbiased estimators.
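As a numerical check of the above asymptotics (an illustration only), the following Python sketch evaluates $\Var[L_1]$ and $\Var[L_2]$ (for $\theta_2=1$) directly from $\mu_i=i/(i+1)$ and $\sigma_{ij}=i/((i+1)(j+1)(j+2))$, $i\leq j$, and compares them with $2/\log n$:
\begin{verbatim}
# Sketch: Uniform case, exact Var[L_1], Var[L_2] (theta_2 = 1) versus 2/log n.
import numpy as np

def uniform_blue_variances(n):
    idx = np.arange(1, n + 1)
    mu = idx / (idx + 1.0)
    I, J = np.meshgrid(idx, idx, indexing="ij")
    lo, hi = np.minimum(I, J), np.maximum(I, J)
    Sigma = lo / ((lo + 1.0) * (hi + 1.0) * (hi + 2.0))
    one = np.ones(n)
    Sinv = np.linalg.inv(Sigma)
    Delta = (one @ Sinv @ one) * (mu @ Sinv @ mu) - (one @ Sinv @ mu) ** 2
    return (mu @ Sinv @ mu) / Delta, (one @ Sinv @ one) / Delta

for n in (10, 100, 500):
    v1, v2 = uniform_blue_variances(n)
    print(n, round(v1, 4), round(v2, 4), round(2 / np.log(n), 4))
\end{verbatim}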
On the other hand we should emphasize that, under the same model,
the BLUEs (and the BLIEs) based solely on the first $n$ upper records
are not even consistent. In fact, the variance of both BLUEs converges to
$\theta^2_2/3$, and the MSE of both BLIEs approaches $\theta_2^2/4$,
as $n\to\infty$; see Arnold, Balakrishnan \& Nagaraja (1998,
Examples 5.3.7 and 5.4.3).
\section{Scale estimation and partial maxima spacings}\vspace*{-.5em}
\setcounter{equation}{0}\label{sec4}
In the classical order statistics setup,
Balakrishnan and Papadatos (2002) observed
that the computation of BLUE (and
BLIE) of the scale parameter is simplified considerably
if one uses spacings instead of order statistics -- cf.\
Sarkadi (1985).
Their observation applies here too,
and simplifies the form of the partial maxima BLUE (and BLIE).
Specifically, define the partial maxima spacings
as $Z_i^*=X^*_{i+1:i+1}-X^*_{i:i}\geq 0$ and
$Z_i=X_{i+1:i+1}-X_{i:i}\geq 0$,
for $i=1,2,\ldots,n-1$, and let
$\bbb{Z}^*=(Z_1^*,Z_2^*,\ldots,Z_{n-1}^*)'$ and
$\bbb{Z}=(Z_1,Z_2,\ldots,Z_{n-1})'$. Clearly,
$\bbb{Z}^* \law \theta_2 \bbb{Z}$, and any unbiased (or even
invariant) linear estimator of $\theta_2$ based on
the partial maxima sample, $L=\bbb{c}'\bbb{X}^*$,
should necessarily satisfy $\sum_{i=1}^n c_i=0$
(see the proofs of Propositions \ref{prop2.1} and \ref{prop2.2}).
Therefore, $L$ can be expressed as a linear function on $Z^*_i$'s,
$L=\bbb{b}'\bbb{Z}^*$, where now
$\bbb{b}=(b_1,b_2,\ldots,b_{n-1})'\in \R^{n-1}$. Consider
the mean vector
$\bbb{m}=\E[\bbb{Z}]$, the dispersion matrix
$\bbb{S}=\D[\bbb{Z}]$, and the second moment matrix
$\bbb{D}=\E[\bbb{Z}\bbb{Z}']$ of $\bbb{Z}$. Clearly,
$\bbb{S}=\bbb{D}-\bbb{m}\bbb{m}'$, $\bbb{S}>0$, $\bbb{D}>0$, and
the vector $\bbb{m}$ and the matrices $\bbb{S}$ and $\bbb{D}$ are
of order $n-1$. Using exactly the same arguments as in
Balakrishnan and Papadatos (2002), it is easy to verify the following.
\begin{prop}\label{prop4.1}
The partial maxima {\rm BLUE} of $\theta_2$, given in Proposition
{\rm \ref{prop2.1}}, has the alternative form \vspace*{-0.6em}
\be\label{4.1}
L_2=\frac{\bbb{m}'\bbb{S}^{-1}\bbb{Z}^*}{\bbb{m}'\bbb{S}^{-1}\bbb{m}},
\ \ \mbox{ with }\ \
\Var[L_2]=\frac{\theta_2^2}{\bbb{m}'\bbb{S}^{-1}\bbb{m}}, \ee
while the corresponding {\rm BLIE}, given in Proposition {\rm
\ref{prop2.2}}, has the alternative form
\be\label{4.2}
T_2=\bbb{m}'\bbb{D}^{-1}{\bbb{Z}}^*, \ \ \mbox{
with }\ \ \MSE[T_2]=(1-\bbb{m}'\bbb{D}^{-1}\bbb{m})\theta_2^2.
\ee
\end{prop}
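A short Python sketch of (\ref{4.1}) (illustration only), which obtains $\bbb{m}$ and $\bbb{S}$ from $\bbb{\mu}$ and $\bbb{\Sigma}$ by differencing, is the following:
\begin{verbatim}
# Sketch of (4.1): scale BLUE through partial maxima spacings.
import numpy as np

def scale_blue_from_spacings(mu, Sigma):
    n = len(mu)
    D = np.eye(n, k=1)[: n - 1] - np.eye(n)[: n - 1]   # row i: e_{i+1} - e_i
    m = D @ mu                  # m_i = E[Z_i] = mu_{i+1} - mu_i
    S = D @ Sigma @ D.T         # S = covariance matrix of (Z_1, ..., Z_{n-1})
    w = np.linalg.solve(S, m)   # S^{-1} m
    var_L2 = 1.0 / (m @ w)      # Var[L_2] / theta_2^2
    weights = w / (m @ w)       # L_2 = weights @ Z^*
    return weights, var_L2
\end{verbatim}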
It should be noted that, in general, the non-negativity of the
BLUE of $\theta_2$ does not follow automatically,
even for order
statistics. In the order statistics setup, this problem
was posed by Arnold, Balakrishnan \&
Nagaraja (1992), and the best known
result, till now, is the one
given by Bai, Sarkar \& Wang (1997) and Sarkadi (1985).
Even after the slight
improvement, given by Balakrishnan and Papadatos (2002) and
by Burkschat (2009),
the general case remains unsolved. The same
question (of non-negativity of the BLUE) arises in the partial
maxima setup, and the following theorem provides a partial
positive answer.
We omit the
proof, since it again
follows by a straightforward
application of the arguments given in Balakrishnan and
Papadatos
(2002). \vspace*{-0.6em}
\begin{theo}
\label{theo4.1} {\rm (i)} There exists a constant $a=a_n(F)$,
$0<a<1$, depending only on the sample size $n$ and the d.f.\
$F(x)$ {\rm (i.e., $a$ is free of the parameters $\theta_1$ and
$\theta_2$)}, such that $ T_2=a \, L_2. $ This constant is given
by $ a=\bbb{m}'\bbb{D}^{-1}\bbb{m}=
\bbb{m}'\bbb{S}^{-1}\bbb{m}/(1+\bbb{m}'\bbb{S}^{-1}\bbb{m})$. \\
{\rm (ii)} If either $n=2$ or the {\rm (free of parameters)} d.f.\
$F(x)$ is such that
\be \label{4.3}
\Cov[Z_i,Z_j] \leq 0\ \ \mbox{
for all $i\neq j$,\ \ $i,j=1,\ldots,n-1$,}
\ee
then the partial
maxima {\rm BLUE (and BLIE)} of $\theta_2$ is non-negative.
\end{theo}
\vspace*{-0.6em}
Note that, as in order statistics,
the non-negativity of $L_2$
is equivalent to the fact
that the vector $\bbb{S}^{-1}\bbb{m}$
(or, equivalently, the vector $\bbb{D}^{-1}\bbb{m}$) has
non-negative entries; see Balakrishnan and Papadatos (2002) and Sarkadi (1985).
Since it is important to know whether
(\ref{4.3}) holds, in the sequel we shall make use of the following
definition.
\begin{DEFI}\label{def4.1}
A d.f.\ $F(x)$ with finite second moment (or the
corresponding density $f(x)$,
if exists)
\\
(i) belongs to the class NCS (negatively correlated spacings)
if its order statistics have negatively correlated
spacings for all sample sizes $n\geq 2$.
\\
(ii)
belongs
to the class NCP if it has negatively correlated partial maxima
spacings, i.e., if (\ref{4.3}) holds for all $n\geq 2$.
\end{DEFI}
An important result by Bai, Sarkar \& Wang (1997)
states that a
log-concave density $f(x)$ with finite variance belongs to NCS -- cf.\
Sarkadi (1985). We
call this sufficient condition the S/BSW-condition
(for ordinary spacings). Burkschat (2009, Theorem 3.5) established an extended
S/BSW-type condition,
under which the log-concavity of both $F$ and $1-F$ suffices
for the NCS class.
Due to the existence of simple
formulae like (\ref{4.1}) and (\ref{4.2}), the NCS and NCP classes
provide useful tools in verifying consistency for the scale
estimator, as well as non-negativity.
Our purpose is to prove an S/BSW-type
condition for partial maxima (see Theorem
\ref{theo4.2}, Corollary \ref{cor4.1}, below). To this end,
we first state
Lemma \ref{lem4.1},
that will be used in the sequel.
Throughout the rest of the present section only, we shall use
the
notation $Y_k=\max\{X_1,\ldots,X_k\}$,
for any integer $k\geq 1$.
\begin{lem}\label{lem4.1} Fix two integers $i$, $j$, with $1\leq i<j$, and suppose
that the i.i.d.\ r.v.'s $X_1,X_2,\ldots$ have a common d.f.\
$F(x)$.
Let $I(\mbox{expression})$ denote the indicator function taking
the value $1$, if the expression holds true, and $0$ otherwise.
\\ {\rm (i)} The conditional d.f.\ of $Y_{j+1}$ given
$Y_j$ is
\[
\Pr[Y_{j+1} \leq y |\ Y_j]=\left\{
\begin{array}{ccc}
0, & \mbox{if} & y<Y_j\\
F(y), & \mbox{if} & y\geq Y_j
\end{array}
\right.=F(y)I(y\geq Y_j),\ \ y\in\R.
\]
If, in addition, $i+1<j$, then the following property {\rm (which
is an immediate consequence of the Markovian character of the
extremal process)} holds:
\[
\Pr[Y_{j+1}\leq y |\ Y_{i+1}, Y_j]=\Pr[Y_{j+1}\leq y |\
Y_j],\ \ y\in\R.
\]
{\rm (ii)} The conditional d.f.\ of $Y_i$ given
$Y_{i+1}$ is
\begin{eqnarray*}
\Pr[Y_i \leq x |\ Y_{i+1}]
& = & \left\{
\begin{array}{ccc}
\displaystyle \frac{F^i(x)}{\sum_{j=0}^i F^j(Y_{i+1})F^{i-j}(Y_{i+1}-)}, & \mbox{if} & x<Y_{i+1} \\
1, & \mbox{if} & x\geq Y_{i+1}
\end{array}
\right. \\
& = & I(x\geq Y_{i+1})
+I(x<Y_{i+1})\frac{F^i(x)}{\sum_{j=0}^i F^j(Y_{i+1})F^{i-j}(Y_{i+1}-)},\ \ x\in\R.
\end{eqnarray*}
If, in addition, $i+1<j$, then the following
property {\rm (which is again an immediate consequence of the
Markovian character of the extremal process)} holds:
\[
\Pr[Y_i\leq x |\ Y_{i+1}, Y_j]=\Pr[Y_i \leq x |\
Y_{i+1}],\ \ x\in\R.
\]
{\rm (iii)} Given $(Y_{i+1}, Y_{j})$, the random variables $Y_i$
and $Y_{j+1}$ are independent.
\end{lem}
We omit the proof since the assertions are simple by-products of the
Markovian character of the process $\{Y_k,\ k\geq 1\}$, which can
be embedded in a continuous time extremal process $\{Y(t),\ t>0\}$;
see Resnick (1987, Chapter 4).
We merely note that a version of the Radon-Nikodym derivative
of $F^{i+1}$ w.r.t.\ $F$ is given by
\be
\label{(4.4)}
h_{i+1}(x)=\frac{dF^{i+1}(x)}{dF(x)}=\sum_{j=0}^{i}
F^j(x)F^{i-j}(x-), \ \ \ x\in\R,
\ee
which is equal to $(i+1)F^{i}(x)$ only if $x$ is a continuity point of
$F$. To see this, it suffices to verify the identity
\be
\label{(4.5)}
\int_{B} \ dF^{i+1}(x)=\int_{B} h_{i+1}(x)
\ dF(x) \ \
\mbox{for all Borel sets} \ \ B\subseteq \R.
\ee
Now (\ref{(4.5)}) is proved as follows:
\begin{eqnarray*}
\int_{B} \ dF^{i+1} & = & \Pr(Y_{i+1}\in B) \\
& = & \sum_{j=1}^{i+1} \Pr\left[ Y_{i+1}\in B, \ \sum_{k=1}^{i+1}
I(X_k=Y_{i+1})=j\right] \\
&=& \sum_{j=1}^{i+1} \sum_{1\leq k_1<\cdots<k_j\leq i+1}
\Pr[ X_{k_1}=\cdots=X_{k_j}\in B, \\
&& \hspace{22ex}X_s<X_{k_1} \mbox{ for } s\notin
\{k_1,\ldots,k_j\}] \\
&=& \sum_{j=1}^{i+1} {i+1 \choose j}
\Pr[ X_{1}=\cdots=X_j\in B, X_{j+1}<X_{1},\ldots,X_{i+1}<X_1] \\
&=& \sum_{j=1}^{i+1} {i+1 \choose j}
\int_B \E\left[\left(\prod_{k=2}^j I(X_k=x)\right)
\left(\prod_{k=j+1}^{i+1} I(X_k<x)\right)\right] \ dF(x) \\
&=&
\int_B \left(\sum_{j=1}^{i+1} {i+1 \choose j} (F(x)-F(x-))^{j-1} F^{i+1-j}(x-) \right)
\ dF(x) \\
& = & \int_{B} h_{i+1}(x) dF(x),
\end{eqnarray*}
where we used the identity $\sum_{j=1}^{i+1}{i+1 \choose j}(b-a)^{j-1} a^{i+1-j}=
\sum_{j=0}^{i}b^{j} a^{i-j}$, $a\leq b$.
We can now show the main result of this section, which presents an
S/BSW-type condition for the partial maxima spacings.
\begin{theo}\label{theo4.2}
Assume that the
d.f.\ $F(x)$, with
finite second moment,
is a log-concave distribution
{\rm (in the sense that $\log F(x)$ is a concave
function in $J$, where
$J=\{x\in\R: 0<F(x)<1\}$)}, and has not an atom at its right
end-point, $\omega(F)=\inf\{x\in\R :F(x)=1\}$. Then, $F(x)$ belongs to
the class {\rm NCP}, i.e., {\rm (\ref{4.3})} holds for all $n\geq 2$.
\end{theo}
\begin{pr}{Proof}
For arbitrary r.v.'s $X\geq x_0>-\infty$ and $Y\leq y_0<+\infty$, with
respective d.f.'s $F_X$, $F_Y$,
we have
\be\label{4.9}
\E[X]=x_0+\int_{x_0}^{\infty} (1-F_X(t)) \ dt \ \
\mbox{and}\ \ \ \E[Y]=y_0-\int_{-\infty}^{y_0} F_Y(t) \ dt
\ee
(cf.\ Papadatos (2001), Jones and Balakrishnan (2002)). Assume
that $i<j$. By Lemma \ref{lem4.1}(i) and (\ref{4.9}) applied to
$F_X=F_{Y_{j+1}| Y_j}$,
it follows that
\[
\E [ Y_{j+1}|\ Y_{i+1},Y_j]=\E [ Y_{j+1}|\ Y_j]
=Y_j+\int_{Y_j}^{\infty} (1-F(t)) \
dt, \ \ \ \mbox{w.p.\ 1}.
\]
Similarly, by Lemma \ref{lem4.1}(ii) and (\ref{4.9}) applied to
$F_Y=F_{Y_i| Y_{i+1}}$,
we conclude that
\[
\E [ Y_i | \ Y_{i+1},Y_j]=\E [ Y_i | \ Y_{i+1}]
=Y_{i+1}-\frac{1}{h_{i+1}(Y_{i+1})}
\int_{-\infty}^{Y_{i+1}} F^i (t) \ dt, \ \ \ \mbox{w.p.\ 1},
\]
where $h_{i+1}$ is given by (\ref{(4.4)}). Note that $F$
is continuous on $J$, since it is log-concave there, and thus,
$h_{i+1}(x)=(i+1)F^i(x)$ for $x\in J$. If $\omega(F)$ is finite,
$F(x)$ is also continuous at $x=\omega(F)$, by assumption.
On the other hand, if $\alpha(F)=\inf\{x:F(x)>0\}$ is finite, $F$ can be
discontinuous at
$x=\alpha(F)$, but in this case, $h_{i+1}(\alpha(F))= F^i(\alpha(F))>0$; see
(\ref{(4.4)}). Thus, in all cases, $h_{i+1}(Y_{i+1})>0$ w.p.\ 1.
By conditional independence of $Y_i$ and $Y_{j+1}$ (Lemma
\ref{lem4.1}(iii)), we have
\begin{eqnarray*}
\Cov(Z_i,Z_j|\ Y_{i+1},Y_j) & = &
\Cov(Y_{i+1}-Y_i,Y_{j+1}-Y_j|\ Y_{i+1},Y_j) \\
&=& -\Cov (Y_i,Y_{j+1}| Y_{i+1},Y_j)=0, \ \ \ \mbox{w.p.\ 1},
\end{eqnarray*}
so that $\E[\Cov(Z_i,Z_j|\ Y_{i+1},Y_j)]=0$, and thus,
\begin{eqnarray}
\Cov [Z_i,Z_j] & = &
\Cov [\E(Z_i|\ Y_{i+1},Y_j), \E(Z_j|\ Y_{i+1},Y_j)]
+\E [\Cov(Z_i,Z_j|\ Y_{i+1},Y_j)]
\nonumber \\
& = & \Cov [\E(Y_{i+1}-Y_i |\ Y_{i+1},Y_j), \E(Y_{j+1}-Y_j|\
Y_{i+1},Y_j)]
\nonumber \\
& = & \Cov [Y_{i+1}-\E (Y_i |\ Y_{i+1},Y_j),
\E(Y_{j+1}|\ Y_{i+1},Y_j)-Y_j]
\nonumber \\
&= & \Cov [ g(Y_{i+1}), h(Y_j) ],
\label{(4.8)}
\end{eqnarray}
where
\[
g(x)=
\left\{
\begin{array}{ll}
\displaystyle
\frac{1}{(i+1)F^{i}(x)}\int_{-\infty}^x F^i(t) \ dt, & x>\alpha(F),
\\
\vspace{-1em}
\\
0, & \mbox{otherwise,}
\end{array}\right.
\ \ \
h(x)=\int_{x}^{\infty} (1-F(t)) \ dt.
\]
Obviously, $h(x)$ is non-increasing. On the
other hand, $g(x)$ is non-decreasing in $\R$.
This can be
shown as follows. First observe that $g(\alpha(F))=0$ if
$\alpha(F)$ is finite, while $g(x)>0$ for $x>\alpha(F)$.
Next observe that $g$ is finite and continuous at $x=\omega(F)$
if $\omega(F)$ is finite, as follows from the assumed continuity of $F$ at
$x=\omega(F)$ and the fact that $F$ has finite
variance.
Finally, observe that $F^i(x)$, a product of
log-concave functions, is also log-concave in $J$. Therefore,
for arbitrary $y\in J$, the function
$d(x)=F^i(x)/\int_{-\infty}^y F^i(t) dt$, $x\in(-\infty,y)\cap
J$, is a probability density, and thus, it is a log-concave
density with support $(-\infty,y)\cap J$.
By Pr\'{e}kopa (1973) or Dasgupta and
Sarkar (1982) it follows that the corresponding distribution
function, $D(x)=\int_{-\infty}^x d(t) dt=\int_{-\infty}^x F^i(t)
dt/ \int_{-\infty}^y F^i(t) dt$, $x\in (-\infty,y)\cap J$, is a
log-concave distribution, and since $y$ is arbitrary,
$H(x)=\int_{-\infty}^x F^i(t) dt$ is a log-concave function, for
$x\in J$. Since $F$ is continuous in $J$, this is equivalent
to the fact that the function
\[
\frac{H'(x)}{H(x)}=\frac{F^i(x)}{\int_{-\infty}^x F^i(t) \ dt}, \ \
x\in J,
\]
is non-increasing, so that $g(x)=H(x)/((i+1)H'(x))$
is non-decreasing in $J$.
The desired result follows from (\ref{(4.8)}), because the r.v.'s
$Y_{i+1}$ and $Y_j$ are positively quadrant dependent (PQD --
Lehmann (1966)), since it is readily verified that $F_{Y_{i+1},Y_j}(x,y)\geq
F_{Y_{i+1}}(x)F_{Y_j}(y)$ for all $x$ and $y$ (Lemma
\ref{lem2.2}(i)). This completes the proof. $\Box$
\end{pr}
\bigskip
The restriction $F(x)\to 1$ as $x\to \omega(F)$ cannot be removed
from the theorem. Indeed, the function
\[
F(x)=\left\{
\begin{array}{ll}
0 & x\leq 0, \\
x/4, & 0\leq x<1, \\
1, & x\geq 1,
\end{array}
\right.
\]
is a log-concave distribution in $J=(\alpha(F),\omega(F))=(0,1)$,
for which $\Cov[Z_1,Z_2]=\frac{59}{184320}>0$. The function $g$,
used in the proof,
is given by
\[
g(x)=
\left\{
\begin{array}{ll}
\displaystyle
\max\left\{0,\frac{x}{(i+1)^2}\right\}, & x<1,
\\
\vspace{-1em}
\\
\displaystyle
\frac{x-1}{i+1}+\frac{1}{(i+1)^2 4^i}, & x\geq 1,
\end{array}\right.
\]
and it is not monotonic.
Since the family of densities with log-concave distributions
contains both families of log-concave and non-increasing densities
(see, e.g., Pr\'{e}kopa (1973), Dasgupta and Sarkar (1982), Sengupta and Nanda (1999),
Bagnoli and Bergstrom (2005)), the following corollary is an immediate
consequence of Theorems \ref{theo4.1} and \ref{theo4.2}.
\begin{cor}\label{cor4.1}
Assume that $F(x)$ has finite second moment.
\\ {\rm (i)} If
$F(x)$ is a log-concave
d.f.\ {\rm (in
particular, if $F(x)$ has either a log-concave or a non-increasing
(in its interval support) density $f(x)$)},
then the partial maxima {\rm BLUE} and the
partial maxima {\rm BLIE} of $\theta_2$ are non-negative.
\\ {\rm (ii)} If $F(x)$ has either a log-concave or a non-increasing
{\rm (in its interval support)}
density $f(x)$ then it belongs to the {\rm NCP} class.
\end{cor}
Sometimes it is asserted that ``the distribution of a log-convex density
is log-concave'' (see, e.g., Sengupta and Nanda (1999), Proposition 1(e)),
but this is not correct in its full generality, even if
the corresponding r.v.\ $X$ is non-negative. For example, let
$Y\sim$ Weibull with shape parameter $1/2$, and set $X\law Y|Y<1$.
Then $X$ has density
$f$ and d.f.\ $F$ given by
\[
f(x)=\frac{\exp(-\sqrt{1-x})}{2(1-e^{-1})\sqrt{1-x}}, \ \ \
F(x)=\frac{\exp(-\sqrt{1-x})-e^{-1}}{1-e^{-1}}, \ \ \ 0<x<1,
\]
and it is easily checked that $\log f$ is convex in $J=(0,1)$, while
$F$ is not log-concave in $J$. However, we point out that
if $\sup J=+\infty$ then any log-convex density, supported on $J$, has to be
non-increasing in $J$ and, therefore, its distribution is
log-concave in $J$. Examples of log-convex distributions
having a log-convex density are given by Bagnoli and Bergstrom (2005).
\section{Consistent estimation of the scale parameter}\vspace*{-.5em}
\setcounter{equation}{0}\label{sec5}
Throughout this section we assume that $F(x)$, the d.f.\ that
generates the location-scale family, is non-degenerate and has
finite second moment. The main purpose is to verify
consistency for $L_2$, applying the results of section \ref{sec4}.
To this end, we firstly state and prove a simple lemma that goes
through the lines of Lemma \ref{lem2.1}. Due to the obvious fact
that $\MSE[T_2]\leq \Var[L_2]$, all the results of the present
section apply also to the BLIE of $\theta_2$.
\begin{lem} \label{lem5.1}
If $F(x)$ belongs to the {\rm NCP} class then \\
{\rm (i)}
\be\label{5.1}
\Var [L_2]\leq \frac{\theta_2^2}{\sum_{k=1}^{n-1}
m_k^2/s_k^2},
\ee
where $m_k=\E[Z_k]$ is the $k$-th component of the vector
$\bbb{m}$ and $s_k^2=s_{kk}=\Var[Z_k]$ is the $k$-th diagonal
entry of the matrix $\bbb{S}$. \\ {\rm (ii)} The partial maxima
{\rm BLUE}, $L_2$, is consistent if the series
\be \label{5.2}
\sum_{k=1}^{\infty} \frac{m_k^2}{s_k^2}=+\infty.
\ee
\end{lem}
\begin{pr}{Proof}
Observe that part (ii) is an immediate consequence
of part (i), due to the fact that, in contrast
to the order statistics setup, $m_k$ and
$s_k^2$ do not depend on the sample size $n$.
Regarding (i), consider the linear
unbiased estimator
\[
U_2=\frac{1}{c_n}\sum_{k=1}^{n-1} \frac{m_k}{s_k^2} Z_k^*\law
\frac{\theta_2}{c_n} \sum_{k=1}^{n-1} \frac{m_k}{s_k^2} Z_k,
\]
where $c_n=\sum_{k=1}^{n-1} m_k^2/s_k^2$. Since $F(x)$ belongs to
NCP and the weights of $U_2$ are positive, it follows that the
variance of $U_2$, which is greater than or equal to the variance
of $L_2$, is bounded by the RHS of (\ref{5.1}); this completes the
proof. $\Box$
\end{pr}
\bigskip
The proof of the following theorem is now immediate.
\begin{theo}\label{theo5.1}
If $F(x)$ belongs to the {\rm NCP}
class and if there exists a finite constant $C$
and a positive integer $k_0$
such that
\be\label{5.3}
\frac{ \E[Z_k^2]}{k\E^2 [Z_k]}\leq C ,\ \mbox{ for
all } k\geq k_0,
\ee
then
\be\label{5.4}
\Var [ L_2] \leq O\left( \frac{1}{\log n} \right),
\ \mbox{ as }n\to\infty.
\ee
\end{theo}
\begin{pr}{Proof}
Since for $k\geq k_0$,
\[
\displaystyle
\frac{m_k^2}{s_k^2}=\frac{m_k^2}{\E [Z_k^2]-m_k^2}=
\frac{1}{\displaystyle
\frac{\E[Z_k^2]}{\E^2[Z_k]}-1}\geq \frac{1}{Ck-1},
\]
the result follows by (\ref{5.1}). $\Box$
\end{pr}
\bigskip
Thus, for proving consistency of order $1/\log n$ within the NCP class,
it is sufficient to verify (\ref{5.3}) and, therefore, we shall
investigate the quantities $m_k=\E[Z_k]$ and
$\E[Z_k^2]=s_k^2+m_k^2$. A simple application of Lemma
\ref{lem2.2}, observing that $m_k=\mu_{k+1}-\mu_k$ and
$s_k^2=\sigma_{k+1}^2-2\sigma_{k,k+1}+\sigma_k^2$, shows that
\begin{eqnarray}
\label{5.5}\E[Z_k] & = & \int_{-\infty}^{\infty} F^{k}(x) (1-F(x)) \ dx, \\
\label{5.6}\E^2[Z_k] & = & 2\int\int_{-\infty<x<y<\infty}
F^{k}(x) (1-F(x)) F^{k}(y) (1-F(y)) \ dy \ dx,\\
\label{5.7}\E[Z_k^2] & = & 2\int\int_{-\infty<x<y<\infty} F^k(x) (1-F(y))
\ dy \
dx.
\end{eqnarray}
Therefore, all the quantities of interest can be expressed as
integrals in terms of the (completely arbitrary) d.f.\ $F(x)$
(cf.\ Jones and Balakrishnan (2002)).
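For a concrete feeling of condition (\ref{5.3}), the following Python sketch (illustration only) evaluates (\ref{5.5}) and (\ref{5.7}) numerically for the standard exponential, where a direct calculation gives $\E[Z_k]=1/(k+1)$ and $\E[Z_k^2]=2/(k+1)$, so that the ratio in (\ref{5.3}) equals $2(k+1)/k$ and stays bounded:
\begin{verbatim}
# Sketch: E[Z_k], E[Z_k^2] from (5.5) and (5.7), and the ratio in (5.3),
# for the standard exponential F(x) = 1 - exp(-x).
import numpy as np
from scipy import integrate

F = lambda x: 1.0 - np.exp(-x)

def EZ(k):
    return integrate.quad(lambda x: F(x) ** k * (1.0 - F(x)), 0, np.inf)[0]

def EZ2(k):
    g = lambda y, x: F(x) ** k * (1.0 - F(y))
    return 2.0 * integrate.dblquad(g, 0, np.inf, lambda x: x, lambda x: np.inf)[0]

for k in (1, 5, 20, 50):
    print(k, EZ2(k) / (k * EZ(k) ** 2))   # approaches 2, so (5.3) holds
\end{verbatim}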
For the proof of the main result we finally need the following
lemma and its corollary.
\begin{lem} \label{lem5.2}{\rm (i)} For any $t>-1$,
\be\label{5.8}
\lim_{k\to\infty} k^{1+t} \int_0^1 u^k (1-u)^t \
du=\Gamma(1+t)>0.
\ee
{\rm (ii)}
For any $t$ with $0\leq t<1$ and any $a>0$, there exist positive
constants $C_1$, $C_2$, and a positive integer $k_0$ such that
\be\label{5.9}
0<C_1< k^{1+t} (\log k)^a \int_0^1 \frac{u^k (1-u)^t}{L^a(u)} \
du<C_2<\infty, \ \mbox{ for all }k\geq k_0,
\ee
where $L(u)=-\log
(1-u)$.
\end{lem}
\begin{pr}{Proof}
Part (i) follows by Stirling's formula.
For part (ii),
with the substitution $u=1-e^{-x}$, we write the integral in
(\ref{5.9}) as
\[
\frac{1}{k+1}\int_0^{\infty} (k+1)(1-e^{-x})^k e^{-x}
\frac{\exp(-tx)}{x^a} \ dx = \frac{1}{k+1}
\E \left[ \frac{\exp(-t T)}{T^a}\right],
\]
where $T$ has the same distribution as the maximum of $k+1$ i.i.d.\
standard exponential r.v.'s. It is well-known that
$\E [T]=1^{-1}+2^{-1}+\cdots+(k+1)^{-1}$. Since the second derivative
of the function $x\to e^{-tx}/x^a$ is $x^{-a-2}e^{-tx}
(a+(a+tx)^2)$, which is positive for $x>0$, this function is
convex, so by Jensen's inequality we conclude that
\begin{eqnarray*}
k^{1+t} (\log k)^a \int_0^1 \frac{u^k (1-u)^t}{(L(u))^a} \ du \geq
\frac{k}{k+1} \left(
\frac{\log k}{1+1/2+\cdots+1/(k+1)}\right)^a\hspace*{7ex}\mbox{~}
\\
\times \exp\left[-t\left(1+\frac12+\cdots+\frac{1}{k+1}-\log k \right)
\right],
\end{eqnarray*}
and the RHS remains positive as $k\to\infty$, since it converges
to $e^{-\gamma t}$, where $\gamma=.5772\ldots$ is Euler's
constant. This proves the lower bound in (\ref{5.9}). Regarding
the upper bound, observe that the function $g(u)=(1-u)^t/L^a(u)$,
$u\in(0,1)$,
has second derivative
\[
g''(u)=\frac{-(1-u)^{t-2}}{L^{a+2}(u)} [ t(1-t)
L^2(u)+a(1-2t)L(u)-a(a+1)], \ \ 0<u<1,
\]
and since $0\leq t<1$, $a>0$ and $L(u)\to +\infty$ as $u\to 1-$,
it follows that there exists a constant $b\in (0,1)$ such that
$g(u)$ is concave in $(b,1)$. Split now the integral in
(\ref{5.9}) into two parts,
\[
I_k=I_k^{(1)}(b)+I_k^{(2)}(b)=\int_0^b \frac{u^k (1-u)^t}{L^a(u)}
\ du+\int_b^1 \frac{u^k (1-u)^t}{L^a(u)} \ du,
\]
and observe that for any fixed $s>0$ and any fixed
$b\in(0,1)$,
\[
k^s I_k^{(1)}(b)\leq k^s b^{k-a} \int_0^b \frac{u^a
(1-u)^t}{L^a(u)} \ du\leq k^s b^{k-a} \int_0^1 \frac{u^a
(1-u)^t}{L^a(u)} \ du\to 0, \ \ \mbox{ as } k\to\infty,
\]
because the last integral is finite and independent of $k$.
Therefore, $k^{1+t}(\log k)^a I_k$ is bounded above if
$k^{1+t}(\log k)^a I_k^{(2)}(b)$ is bounded above for some $b<1$.
Choose $b$ close enough to $1$ so that $g(u)$ is concave in
$(b,1)$.
By Jensen's inequality and the fact that
$1-b^{k+1}<1$ we conclude that
\[
I_k^{(2)}(b)=\frac{1-b^{k+1}}{k+1}\int_b^1 f_k(u) g(u) \ du
=\frac{1-b^{k+1}}{k+1}\E[g(V)]\leq \frac{1}{k+1} g[\E(V)],
\]
where $V$ is an r.v.\ with density $f_k(u)=(k+1)u^k/(1-b^{k+1})$,
for $u\in(b,1)$. Since
$\E(V)=((k+1)/(k+2))(1-b^{k+2})/(1-b^{k+1})>(k+1)/(k+2)$, and $g$
is positive and decreasing (its first derivative is
$g'(u)=-(1-u)^{t-1}L^{-a-1}(u)(tL(u)+a)<0$, $0<u<1$), it follows
from the above inequality that
\begin{eqnarray*}
k^{1+t}(\log k)^a I_k^{(2)}(b)\leq \frac{k^{1+t}(\log k)^a} {k+1}\
g[\E(V)] \leq \frac{k^{1+t}(\log k)^a}{k+1}\
g\left(\frac{k+1}{k+2}\right)
\\
=\frac{k}{k+1} \left( \frac{\log k }{\log (k+2)}\right)^a
\left(\frac{k}{k+2}\right)^t \leq 1.
\end{eqnarray*}
This shows that $k^{1+t}(\log k)^a I_k^{(2)}(b)$ is bounded above,
and thus, $k^{1+t}(\log k)^a I_k$ is bounded above,
as it was to be shown. The proof
is complete. $\Box$
\end{pr}
\begin{cor}\label{cor5.1} {\rm (i)} Under the assumptions of Lemma
{\rm \ref{lem5.2}(i)}, for any $b\in [0,1)$ there exist positive
constants $A_1$, $A_2$ and a positive integer $k_0$ such that
\[
0<A_1< k^{1+t} \int_b^1 u^k (1-u)^t \ du<A_2<+\infty,
\ \mbox{ for all }k\geq k_0.
\]
{\rm (ii)} Under the assumptions of Lemma {\rm \ref{lem5.2}(ii)}, for
any $b\in[0,1)$
there exist positive
constants $A_3$, $A_4$ and a positive integer $k_0$ such that
\[
0<A_3< k^{1+t} (\log k)^a \int_b^1 \frac{u^k (1-u)^t}{L^a(u)} \
du<A_4<\infty, \ \mbox{ for all }k\geq k_0.
\]
\end{cor}
\begin{pr}{Proof} The proof follows from Lemma \ref{lem5.2} in
a trivial way, since the corresponding integrals over $[0,b]$ are
bounded above by a multiple of $b^{k-a}$, of the form $A b^{k-a}$,
with $A<+\infty$ being independent of $k$. $\Box$
\end{pr}
\bigskip
We can now state and prove our main result:
\begin{theo}\label{theo5.2}
Assume that $F(x)$ lies in {\rm NCP}, and let $\omega=\omega(F)$
be the upper end-point of the support of $F$, i.e., $\omega=
\inf\{x\in \R: F(x)=1\}$, where $\omega=+\infty$ if $F(x)<1$ for
all $x$. Suppose that $\lim_{x\to\omega-}F(x)=1$, and that $F(x)$
is differentiable in a left neighborhood $(M,\omega)$ of $\omega$,
with derivative $f(x)=F'(x)$ for $x\in (M,\omega)$. For
$\delta\in\R$ and $\gamma\in\R$, define the {\rm (generalized hazard
rate)} function
\be\label{5.10}
L(x)=L(x;\delta,\gamma;F)=\frac{f(x)}{(1-F(x))^{\gamma}
(-\log (1-F(x)))^\delta},\ \ \
x\in (M,\omega),
\ee
and set
\[
L_*=L_{*}(\delta,\gamma;F)=\liminf_{x\to\omega-}
L(x;\delta,\gamma,F),\ \ L^*=L^*(\delta,
\gamma;F)=\limsup_{x\to\omega-} L(x;\delta,\gamma,F).
\]
If either {\rm (i)} for some $\gamma<3/2$ and $\delta=0$,
\vspace*{.5ex} or {\rm (ii)} for some $\delta>0$ and some
$\gamma$ with $1/2<\gamma\leq 1$,
\be\label{5.11}
0<L_*(\delta,\gamma;F)\leq L^*(\delta,\gamma;F)<+\infty,
\ee
then
the partial maxima {\rm BLUE} $L_2$ {\rm (given by (\ref{2.2}) or (\ref{4.1}))}
of the scale parameter $\theta_2$ is consistent
and, moreover, $\Var [L_2]\leq O(1/\log n)$.
\end{theo}
\begin{pr}{Proof}
First observe that for large enough $x<\omega$, (\ref{5.11})
implies that
$f(x)>(L_*/2)(1-F(x))^{\gamma}(-\log(1-F(x)))^{\delta}>0$, so that
$F(x)$ is eventually strictly increasing and continuous. Moreover,
the derivative $f(x)$ is necessarily finite
since
$f(x)<2 L^* (1-F(x))^{\gamma}(-\log(1-F(x)))^{\delta}$.
The assumption $\lim_{x\to\omega-}F(x)=1$ now shows that
$F^{-1}(u)$ is uniquely defined in a left neighborhood of $1$,
that $F(F^{-1}(u))=u$ for $u$ close to $1$, and that
$\lim_{u\to 1-}F^{-1}(u)=\omega$. This,
in turn, implies that $F^{-1}(u)$ is
differentiable for $u$ close to $1$, with (finite) derivative
$(F^{-1}(u))'=1/f(F^{-1}(u))>0$. In view of Theorem \ref{theo5.1},
it suffices to verify (\ref{5.3}), and thus we seek an upper
bound on $\E[Z_k^2]$ and for a lower bound on $\E[Z_k]$. Clearly,
(\ref{5.3}) will be deduced if we shall verify that, under (i),
there exist finite constants $C_3>0$, $C_4>0$ such that
\be\label{5.12}
k^{3-2\gamma}\E[Z_k^2]\leq C_3 \ \ \ \mbox{ and } \ \ \
k^{2-\gamma}\E[Z_k]\geq C_4,
\ee
for all large enough $k$. Similarly, (\ref{5.3}) will be verified
if we show that, under (ii), there exist finite constants $C_5>0$
and $C_6>0$ such that
\be\label{5.13}
k^{3-2\gamma}(\log k)^{2\delta}\E[Z_k^2]\leq C_5 \ \ \ \mbox{ and } \ \ \
k^{2-\gamma}(\log k)^{\delta} \E[Z_k]\geq C_6,
\ee
for all large enough $k$. Since the integrands in the integral
expressions (\ref{5.5})-(\ref{5.7}) vanish if $x$ or $y$ lies
outside the set $\{ x\in \R: 0<F(x)<1\}$, we have the equivalent
expressions
\begin{eqnarray}
\label{5.14}\E[Z_k] & = & \int_{\alpha}^{\omega} F^{k}(x) (1-F(x)) \ dx, \\
\label{5.15}\E[Z_k^2] & = & 2\int\int_{\alpha<x<y<\omega} F^k(x)
(1-F(y))
\ dy \ dx,
\end{eqnarray}
where $\alpha$ (resp., $\omega$) is the lower (resp., the upper)
end-point of the support of $F$. Obviously,
for any fixed $M$
with $\alpha<M<\omega$
and any
fixed $s>0$, we have, as in the proof of Lemma \ref{lem5.2}(ii),
that
\begin{eqnarray*}
\lim_{k\to\infty} k^s \int_{\alpha}^{M} F^{k}(x) (1-F(x)) \ dx=0, \\
\lim_{k\to\infty} k^s \int_{\alpha}^M\int_x^{\omega} F^k(x) (1-F(y))
\ dy \ dx=0,
\end{eqnarray*}
because $F(M)<1$ and both integrals (\ref{5.14}), (\ref{5.15}) are
finite for $k=1$, by the assumption that the variance is finite
((\ref{5.15}) with $k=1$ just equals the variance of $F$; see
also (\ref{2.9}) with $i=1$). Therefore, in order to verify
(\ref{5.12}) and (\ref{5.13}) for large enough $k$,
it is sufficient to replace $\E[Z_k]$ and $\E[Z_k^2]$, in both
formulae (\ref{5.12}), (\ref{5.13}), by the integrals
$\int^{\omega}_{M} F^{k}(x) (1-F(x)) dx$ and
$\int^{\omega}_M\int_x^{\omega} F^k(x) (1-F(y)) dy dx $,
respectively, for an arbitrary (fixed) $M\in(\alpha,\omega)$.
Fix now $M\in(\alpha,\omega)$ so large
that $f(x)=F'(x)$ exists and it is finite and strictly positive
for all $x\in(M,\omega)$,
and make the transformation $F(x)=u$ in the first integral, and
the transformation $(F(x),F(y))=(u,v)$ in the second one. Both
transformations are now one-to-one and continuous, because both
$F$ and $F^{-1}$ are differentiable in their respective intervals
$(M,\omega)$ and $(F(M),1)$, and their derivatives are finite and
positive. Since $F^{-1}(u)\to \omega$ as $u\to 1-$, it is easily
seen that (\ref{5.12}) will be concluded if it can be shown that
for some fixed $b<1$ (which can be chosen arbitrarily close to
$1$),
\begin{eqnarray}
\label{5.16}
k^{3-2\gamma}\int^{1}_b \frac{u^k}{f(F^{-1}(u))}
\left( \int_u^{1} \frac{1-v}{f(F^{-1}(v))} \ dv \right) du & \leq
&
C_3 \ \ \ \mbox{ and } \\
\label{5.17}
k^{2-\gamma}\int^{1}_{b} \frac{u^{k}
(1-u)}{f(F^{-1}(u))} \ du & \geq & C_4,
\end{eqnarray}
holds for all large enough $k$.
Similarly,
(\ref{5.13}) will be deduced if
it is shown that
for some fixed $b<1$
(which can be chosen arbitrarily close to $1$),
\begin{eqnarray}
\label{5.18}
k^{3-2\gamma}(\log k)^{2\delta}\int^{1}_b
\frac{u^k}{f(F^{-1}(u))} \left( \int_u^{1}
\frac{1-v}{f(F^{-1}(v))} \ dv \right) du & \leq & C_5 \ \ \
\mbox{ and } \\
\label{5.19}
k^{2-\gamma}(\log k)^{\delta} \int^{1}_{b} \frac{u^{k}
(1-u)}{f(F^{-1}(u))} \ du & \geq & C_6,
\end{eqnarray}
holds for all large enough $k$. The rest of the proof is thus
concentrated on showing (\ref{5.16}) and (\ref{5.17}) (resp.,
(\ref{5.18}) and (\ref{5.19})), under the assumption (i) (resp.,
under the assumption (ii)).
Assume first that (\ref{5.11}) holds under (i).
Fix now $b<1$ so large that
\[
\frac{L_*}{2}(1-F(x))^{\gamma}<f(x) <2L^* (1-F(x))^{\gamma},\
\mbox{ for all }x\in (F^{-1}(b),\omega);
\]
equivalently,
\be\label{5.20} \frac{1}{2L^*}<
\frac{(1-u)^{\gamma}}{f(F^{-1}(u))} <\frac{2}{L_*}, \ \ \mbox{ for
all }u\in(b,1).
\ee
Due to (\ref{5.20}), the inner integral in (\ref{5.16}) is
\[
\int_u^{1}
\frac{1-v}{f(F^{-1}(v))}
\ dv =
\int_u^{1} (1-v)^{1-\gamma}
\frac{(1-v)^{\gamma}}{f(F^{-1}(v))} \ dv \leq
\frac{2(1-u)^{2-\gamma}}{(2-\gamma)L_*}.
\]
By Corollary \ref{cor5.1}(i) applied for $t=2-2\gamma>-1$, the LHS
of (\ref{5.16}) is less than or equal to
\[
\frac{2k^{3-2\gamma}}{(2-\gamma)L_*} \int^{1}_b u^k
(1-u)^{2-2\gamma}\frac{(1-u)^{\gamma}}{f(F^{-1}(u))} \ du \leq
\frac{4k^{3-2\gamma}}{(2-\gamma)L_*^2} \int^{1}_b u^k
(1-u)^{2-2\gamma} \ du \leq C_3,
\]
for all $k\geq k_0$, with $C_3=4 A_2
L_*^{-2}(2-\gamma)^{-1}<\infty$, showing (\ref{5.16}). Similarly,
using the lower bound in (\ref{5.20}), the integral in
(\ref{5.17}) is
\[
\int^{1}_{b} \frac{u^{k} (1-u)}{f(F^{-1}(u))} \ du = \int^{1}_{b}
u^k (1-u)^{1-\gamma}\frac{(1-u)^{\gamma}}{f(F^{-1}(u))}\ du \geq
\frac{1}{2L^*} \int^{1}_{b} u^k (1-u)^{1-\gamma}\ du,
\] so that, by Corollary \ref{cor5.1}(i) applied for
$t=1-\gamma>-1$, the LHS of (\ref{5.17}) is greater than or equal
to
\[
\frac{k^{2-\gamma}}{2L^*} \int_b^1 u^k (1-u)^{1-\gamma} \ du \geq
\frac{A_1}{2L^*}>0,\ \ \mbox{ for all $k\geq k_0$,}
\] showing (\ref{5.17}).
Assume now that (\ref{5.11}) is satisfied under (ii). As in part (i),
choose a large enough $b<1$ so that
\be\label{5.21}
\frac{1}{2L^*}< \frac{(1-u)^{\gamma}L^{\delta} (u)}{f(F^{-1}(u))}
<\frac{2}{L_*}, \ \ \mbox{ for all }u\in(b,1),
\ee
where $L(u)=-\log(1-u)$.
Due to (\ref{5.21}), the inner integral in
(\ref{5.18}) is
\[
\int_u^{1} \frac{(1-v)^{1-\gamma}}{L^{\delta}(v)}
\frac{(1-v)^{\gamma}L^{\delta}(v)}{f(F^{-1}(v))} \ dv \leq
\frac{2}{L_*}
\int_u^{1} \frac{(1-v)^{1-\gamma}}{L^{\delta}(v)}
\ dv
\leq \frac{2(1-u)^{2-\gamma}}{L_* L^{\delta}(u)},
\]
because $(1-u)^{1-\gamma}/L^{\delta}(u)$ is decreasing (see the
proof of Lemma \ref{lem5.2}(ii)). By Corollary \ref{cor5.1}(ii)
applied for $t=2-2\gamma\in[0,1)$ and $a=2\delta>0$, the double
integral in (\ref{5.18}) is less than or equal to
\[
\frac{2}{L_*} \int^{1}_b \frac{u^k
(1-u)^{2-2\gamma}}{L^{2\delta}(u)} \frac{(1-u)^{\gamma}L^{\delta}
(u)}{f(F^{-1}(u))} du
\leq \frac{4}{L_*^2} \int^{1}_b \frac{u^k
(1-u)^{2-2\gamma}}{L^{2\delta}(u)} du \leq
\frac{C_5}{k^{3-2\gamma}(\log k)^{2\delta}},
\]
for all $k\geq k_0$, with $C_5=4 A_4 L_*^{-2}<\infty$, showing
(\ref{5.18}). Similarly, using the lower bound in (\ref{5.21}),
the integral in (\ref{5.19}) is
\[
\int^{1}_{b} \frac{u^k (1-u)^{1-\gamma}}{L^{\delta}(u)}
\frac{(1-u)^{\gamma}L^{\delta}(u)}{f(F^{-1}(u))}\ du \geq
\frac{1}{2L^*} \int^{1}_{b} \frac{u^k
(1-u)^{1-\gamma}}{L^{\delta}(u)}\ du,
\]
and thus, by Corollary \ref{cor5.1}(ii) applied for
$t=1-\gamma\in[0,1)$ and $a=\delta>0$, the LHS of (\ref{5.19}) is
greater than or equal to
\[
\frac{k^{2-\gamma}(\log k)^{\delta}}{2L^*} \int_b^1 \frac{u^k
(1-u)^{1-\gamma}}{L^{\delta}(u)} \ du \geq \frac{A_3}{2L^*}>0,\ \
\mbox{ for all $k\geq k_0$,}
\]
showing (\ref{5.19}). This completes the proof. $\Box$
\end{pr}
\begin{REM}\label{rem5.1}
Taking $L(u)=-\log (1-u)$, the limits $L_*$ and $L^*$ in
(\ref{5.11}) can be rewritten as
\begin{eqnarray*}
L_*(\delta,\gamma;F) & = & \liminf_{u\to
1-}\frac{f(F^{-1}(u))}{(1-u)^{\gamma}L^{\delta}(u)}
=\left(\limsup_{u\to
1-}(F^{-1}(u))'(1-u)^{\gamma}L^{\delta}(u)\right)^{-1},
\\
L^*(\delta,\gamma;F)&=&\limsup_{u\to
1-}\frac{f(F^{-1}(u))}{(1-u)^{\gamma}L^{\delta}(u)}
=\left(\liminf_{u\to
1-}(F^{-1}(u))'(1-u)^{\gamma}L^{\delta}(u)\right)^{-1}.
\end{eqnarray*}
In the particular case where $F$ is absolutely continuous with
a continuous density $f$ and interval support, the function
$f(F^{-1}(u))=1/(F^{-1}(u))'$ is
known as the density-quantile function (Parzen (1979)), and
plays a fundamental role in the theory of order statistics.
Theorem \ref{theo5.2} shows, in some sense, that the behavior of
the density-quantile function at the upper end-point, $u=1$,
specifies the variance behavior of the partial
maxima BLUE for the scale parameter $\theta_2$. In fact,
(\ref{5.11}) (and (\ref{6.1}), below) is a Von Mises-type
condition (cf.\ Galambos (1978), \S\S 2.7, 2.11).
\end{REM}
\begin{REM}\label{rem5.2}
It is obvious that condition $\lim_{x\to\omega-}F(x)=1$ is necessary for
the consistency of BLUE (and BLIE). Indeed,
the event that all partial maxima are equal to $\omega(F)$ has probability
$p_0=F(\omega)-F(\omega-)$ (which is independent of $n$). Thus, a point mass at
$x=\omega(F)$ implies that for all $n$, $\Pr(L_2=0)\geq p_0>0$. This situation is trivial.
Non-trivial cases also exist, and we provide one at the end of the next
section.
\end{REM}
\section{Examples and conclusions}\vspace*{-.5em}
\setcounter{equation}{0}\label{sec6}
In most commonly used location-scale families, the following
corollary suffices for concluding consistency of the BLUE (and the
BLIE) of the scale parameter.
Its proof follows by a
straightforward combination of Corollary \ref{cor4.1}(ii) and
Theorem \ref{theo5.2}.\vspace*{-0.6em}
\begin{cor}\label{cor6.1}
Suppose that $F$ is absolutely continuous with finite variance,
and that its density $f$ is
either log-concave
or non-increasing in its interval support
$J=(\alpha(F),\omega(F))=\{ x\in\R: 0<F(x)<1\}$.
If, either for some $\gamma<3/2$ and $\delta=0$,
or for some $\delta>0$ and some $\gamma$ with
$1/2<\gamma\leq 1$,
\be\label{6.1}
\lim_{x\to\omega(F)-} \frac{f(x)}{(1-F(x))^{\gamma} (-\log
(1-F(x)))^\delta}=L\in(0,+\infty),
\ee
then the partial maxima
{\rm BLUE} of the scale parameter is consistent and, moreover, its
variance is at most of order $O(1/\log n)$.
\end{cor}
Corollary \ref{cor6.1} has immediate applications to several
location-scale families. The following are some of them, where
(\ref{6.1}) can be verified easily. In all these families
generated by the distributions mentioned below, the variance of
the partial maxima BLUE $L_2$ (see (\ref{2.3}) or (\ref{4.1})),
and the mean squared error of the partial maxima BLIE $T_2$ (see
(\ref{2.5}) or (\ref{4.2})) of the scale parameter are at most of
order $O(1/\log n)$, as the sample size
$n\to\infty$.\vspace*{-0.6em}
\bigskip
\noindent {\small\bf 1. Power distribution (Uniform).}
$F(x)=x^\lambda$, $f(x)=\lambda x^{\lambda-1}$, $0<x<1$
($\lambda>0$), and $\omega(F)=1$. The density is
non-increasing for
$\lambda\leq 1$ and log-concave for $\lambda\geq 1$. It is easily
seen that (\ref{6.1}) is satisfied for $\delta=\gamma=0$ (for
$\lambda=1$ (Uniform) see section \ref{sec3}).\vspace*{-0.6em}
\bigskip
\noindent {\small\bf 2. Logistic distribution.}
$F(x)=(1+e^{-x})^{-1}$, $f(x)=e^{-x}(1+e^{-x})^{-2}$, $x\in\R$,
and $\omega(F)=+\infty$. The density is log-concave, and it is
easily seen that (\ref{6.1}) is satisfied for $\delta=0$,
$\gamma=1$.\vspace*{-0.6em}
\bigskip
\noindent {\small\bf 3. Pareto distribution.} $F(x)=1-x^{-a}$,
$f(x)=a x^{-a-1}$, $x>1$ ($a>2$, so that the second moment is
finite), and $\omega(F)=+\infty$. The density is decreasing, and
it is easily seen that (\ref{6.1}) is satisfied for $\delta=0$,
$\gamma=1+1/a$.
The Pareto case provides an example
which lies in the NCP but not in the NCS class -- see
Bai, Sarkar \& Wang (1997).
\vspace*{-0.6em}
\bigskip
\noindent
{\small\bf 4. Negative Exponential distribution.} $F(x)=f(x)=e^x$,
$x<0$, and $\omega(F)=0$.
The density is log-concave and it is
easily seen that (\ref{6.1}) is satisfied for $\delta=\gamma=0$.
This model is particularly important, because it corresponds to
the partial minima model from the standard exponential
distribution -- see Samaniego and Whitaker (1986).
\bigskip
\noindent
{\small\bf 5. Weibull distribution (Exponential).} $F(x)=1-e^{-x^c}$,
$f(x)=cx^{c-1}\exp(-x^c)$,
$x>0$ ($c>0$), and $\omega(F)=+\infty$.
The density is non-increasing for $c\leq 1$ and log-concave for
$c\geq 1$, and it is easily seen that (\ref{6.1}) is satisfied for
$\delta=1-1/c$, $\gamma=1$. It should be noted that Theorem
\ref{theo5.2} does not apply for $c<1$, since
$\delta<0$.\vspace*{-0.6em}
\bigskip
\noindent
{\small\bf 6. Gumbel (Extreme Value) distribution.}
$F(x)=\exp(-e^{-x})=e^{x}f(x)$,
$x\in\R$, and $\omega(F)=+\infty$.
The density is log-concave and (\ref{6.1})
holds with $\gamma=1$, $\delta=0$ ($L=1$).
This model is particularly important for its applications
in forecasting records, especially in athletic events --
see Tryfos and Blackmore (1985).\vspace*{-0.6em}
\bigskip
\noindent {\small\bf 7. Normal Distribution.} $f(x)=\varphi(x)=(2\pi
e^{x^2})^{-1/2}$, $F=\Phi$,
$x\in \R$, and $\omega(F)=+\infty$.
The density is log-concave and Corollary \ref{cor6.1} applies with
$\delta=1/2$ and $\gamma=1$. Indeed,
\[
\lim_{+\infty} \frac{\varphi(x)}{(1-\Phi(x))(-\log(1-\Phi(x)))^{1/2}}=
\lim_{+\infty} \frac{\varphi(x)}{x(1-\Phi(x))}\
\frac{x}{(-\log(1-\Phi(x)))^{1/2}},
\]
and it is easily seen that
\[
\lim_{+\infty} \frac{\varphi(x)}{x(1-\Phi(x))}=1,\ \ \ \ \lim_{+\infty}
\frac{x^2}{-\log(1-\Phi(x))}=2,
\]
so that $L=\sqrt{2}$.\vspace*{-0.6em}
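This limiting behavior can also be checked numerically. The following is a minimal sketch, assuming Python with NumPy/SciPy; the survival function $1-\Phi(x)$ is evaluated via \texttt{norm.sf} to avoid cancellation for large $x$.
\begin{verbatim}
# Numerical check that the limit in (6.1) for the Normal case is L = sqrt(2).
# Minimal sketch, assuming Python with NumPy/SciPy.
import numpy as np
from scipy.stats import norm

for x in [10.0, 20.0, 30.0]:
    sf = norm.sf(x)  # 1 - Phi(x), computed without cancellation
    val = norm.pdf(x) / (sf * np.sqrt(-np.log(sf)))
    print(x, val)    # slowly approaches sqrt(2) ~ 1.4142
\end{verbatim}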
\bigskip
In many cases of interest
(especially in athletic events),
best performances
are presented as partial minima
rather than maxima;
see, e.g., Tryfos and Blackmore (1985).
Obviously, the present theory applies also to
the partial minima setup. The easiest way to convert
the present results for the partial minima case is to consider
the i.i.d.\ sample $-X_1,\ldots,-X_n$, arising from
$F_{-X}(x)=1-F_X(-x-)$, and to observe that
$ \min\{X_1,\ldots,X_i\}=-\max\{-X_1,\ldots,-X_i\}$, $i=1,\ldots,n$. Thus,
we just have to replace $F(x)$ by $1-F(-x-)$
in the corresponding formulae.
There are some related problems and questions
that, at least from our point of view, seem to be quite
interesting.
One problem is to verify consistency for the partial maxima BLUE
of the location parameter. Another problem concerns the complete
characterizations of the NCP and NCS classes (see Definition
\ref{def4.1}), since we only know S/BSW-type sufficient conditions.
A further problem is to prove or disprove the non-negativity of the partial maxima
BLUE for the scale parameter outside the NCP class (as well as
of the order statistics BLUE of the scale parameter outside the
NCS class).
Some questions concern lower variance bounds for the
partial maxima BLUEs. For example we showed in section \ref{sec3}
that the rate $O(1/\log n)$ (which, by Theorem \ref{theo5.2},
is just an upper bound for the variance of $L_2$)
is the correct order for the
variance of both estimators in the Uniform location-scale family.
Is this the usual case? If it is so, then we could properly
standardize the estimators, centering and
multiplying them by $(\log n)^{1/2}$.
This would result in limit theorems analogous
to the corresponding ones for order statistics --
e.g., Chernoff, Gastwirth \& Johns (1967), Stigler (1974) --
or analogous to the corresponding ones of Pyke (1965), (1980),
for partial maxima spacings instead of ordinary spacings.
However, note that the
Fisher-Information approach, in the particular case of the one-parameter
(scale) family generated by the standard Exponential distribution,
suggests a variance of about $3\theta_2^2/(\log n)^3$ for the
minimum variance unbiased estimator
(based on partial maxima) --
see Hofmann and Nagaraja (2003, eq.\ (15) on p.\ 186).
A final question concerns the
construction of approximate BLUEs (for both location and scale)
based on partial maxima,
analogous to Gupta's (1952) simple linear estimators based on order
statistics. Such approximations and/or limit theorems
would be especially useful for practical purposes, since the
computation of BLUE via its closed formula requires inverting
an $n\times n$ matrix. This problem has been partially
solved here: For the NCP class, the
estimator $U_2$, given in the proof of Lemma \ref{lem5.1}, is
consistent for $\theta_2$ (under the assumptions of Theorem
\ref{theo5.2}) and can be computed by a simple formula if we
merely know the means and variances of the partial maxima
spacings.
Except for the trivial case given in Remark \ref{rem5.2}, above, there exist non-trivial
examples where no consistent sequence of unbiased estimators exists for the scale parameter.
To see this, we make use of the following result.
\begin{theo}\label{theo6.1} {\rm (Hofmann and Nagaraja 2003, p.\
183)} Let $\Xs$ be an i.i.d.\ sample
from the scale family with distribution function
$F(x;\theta_2)=F(x/\theta_2)$ {\rm ($\theta_2>0$ is the scale parameter)} and density
$f(x;\theta_2)=f(x/\theta_2)/\theta_2$, where $f(x)$ is known,
it has a continuous derivative $f'$, and its support, $J(F)=\{x:f(x)>0\}$,
is one of the intervals $(-\infty,\infty)$, $(-\infty,0)$ or $(0,+\infty)$.
\\
{\rm (i)} The Fisher Information contained in the partial maxima
data $\Xspo$ is given by
\[
I^{\max}_n=\frac{1}{\theta_2^2}\sum_{k=1}^n
\int_{J(F)} f(x) F^{k-1}(x)\left(1+\frac{xf'(x)}{f(x)}+\frac{(k-1)xf(x)}{F(x)}\right)^2
dx.
\]
{\rm (ii)} The Fisher Information contained in the partial minima
data $\Xspmo$ is given by
\[
I^{\min}_n=\frac{1}{\theta_2^2}\sum_{k=1}^n
\int_{J(F)} f(x) (1-F(x))^{k-1}\left(1+\frac{xf'(x)}{f(x)}-\frac{(k-1)xf(x)}{1-F(x)}\right)^2
dx.
\]
\end{theo}
It is clear that for fixed $\theta_2>0$, $I^{\max}_n$ and $I^{\min}_n$ both increase
with the sample size $n$. In particular, if $J(F)=(0,\infty)$ then,
by Beppo-Levi's Theorem,
$I_n^{\min}$ converges (as $n\to\infty$) to
its limit
\be
\label{eq6.2}
I^{\min}=\frac{1}{\theta_2^2} \int_{0}^{\infty} \left\{\mu(x)
\left(1+\frac{xf'(x)}{f(x)}-x\mu(x)\right)^2 +x^2
\mu^2(x)\left(\lambda(x)+\mu(x)\right)\right\}
dx,
\ee
where $\lambda(x)=f(x)/(1-F(x))$ and $\mu(x)=f(x)/F(x)$ are the
failure rate and the reversed failure rate of $F$, respectively.
Obviously, if $I^{\min}<+\infty$, then the Cram\'{e}r-Rao
inequality shows that
no consistent sequence of unbiased estimators exists. This, of
course, implies that in the corresponding scale family,
any sequence of linear (in partial minima) unbiased estimators is inconsistent. The same is clearly true for the location-scale family, because any linear unbiased estimator for $\theta_2$ in the location-scale family is also a linear unbiased estimator for $\theta_2$ in the corresponding scale family.
In the following we show that there exist distributions with
finite variance such that $I^{\min}$ in (\ref{eq6.2}) is finite:
Define $s=e^{-2}$ and
\[
F(x)=\left\{
\begin{array}{cc}
0, & x\leq 0, \vspace{.2em}\\
\displaystyle
\frac{1}{1-\log(x)}, & 0<x \leq s, \\
& \vspace*{-.7em}\\
1-(ax^2+bx+c) e^{-x}, & x\geq s,
\end{array}
\right.
\]
where
\begin{eqnarray*}
a&=&\frac{1}{54} \exp(e^{-2})(18-6e^2+e^4)\simeq 0.599, \\
b&=&-\frac{2}{27} \exp(-2+e^{-2})(9-12e^2+2e^4)\simeq -0.339, \\
c&=&\frac{1}{54} \exp(-4+e^{-2})(18-42e^2+43e^4)\simeq 0.798. \\
\end{eqnarray*}
Noting that $F(s)=1/3$, $F'(s)=e^2/9$ and $F''(s)=-e^4/27$,
it can be easily verified that the corresponding density
\[
f(x)=\left\{
\begin{array}{cc}
\displaystyle
\frac{1}{x(1-\log(x))^2}, & 0<x \leq s, \\
& \vspace*{-.7em}\\
(ax^2+(b-2a)x+c-b) e^{-x}, & x\geq s,
\end{array}
\right.
\]
is strictly positive for $x\in(0,\infty)$, possesses
finite moments of any order, and has continuous derivative
\[
f'(x)=\left\{
\begin{array}{cc}
\displaystyle
\frac{1+\log(x)}{x^2(1-\log(x))^3}, & 0<x \leq s, \\
& \vspace*{-.7em}\\
-(ax^2+(b-4a)x+2a-2b+c) e^{-x}, & x\geq s.
\end{array}
\right.
\]
Now the integrand in (\ref{eq6.2}), say $S(x)$, can be written as
\[
S(x)=\left\{
\begin{array}{cc}
\displaystyle
\frac{1-2\log(x)}{x(-\log(x))(1-\log(x))^3}, & 0<x \leq s, \\
& \vspace*{-.7em}\\
A(x)+B(x), & x\geq s,
\end{array}
\right.
\]
where, as $x\to+\infty$,
$A(x)\sim A x^4 e^{-x}$ and $B(x)\sim B x^6 e^{-2x}$, with $A$, $B$
being positive constants. Using the substitution $y=-\log(x)$,
$\int_{0}^s S(x)\,dx=\int_{2}^{\infty} \frac{1+2y}{y(1+y)^3}\,dy=\log(3/2)-5/18\simeq
0.128$. Also, since $S(x)$ is continuous in $[s,+\infty)$ and
$S(x)\sim A x^4e^{-x}$ as $x\to+\infty$, it follows that $\int_{s}^{\infty}
S(x)dx<+\infty$ and $I^{\min}$ is finite.
Numerical integration shows that $\int_{s}^{\infty} S(x)dx\simeq 2.77$
and thus, $I^{\min}\simeq 2.9/\theta_2^2<3/\theta_2^2$.
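These numerical values can be reproduced directly from the definition of $S(x)$. The following is a minimal sketch, assuming Python with NumPy/SciPy and $\theta_2=1$; the integral over $(0,s]$ is evaluated through the substitution $y=-\log(x)$ used above, and the integral over $[s,+\infty)$ is evaluated from the general integrand of (\ref{eq6.2}).
\begin{verbatim}
# Minimal numerical check of the Fisher-information limit I^min (theta_2 = 1),
# assuming Python with NumPy/SciPy; the output should roughly match the values
# quoted in the text.
import numpy as np
from scipy.integrate import quad

s = np.exp(-2.0)
a = np.exp(np.exp(-2.0)) * (18 - 6 * np.e**2 + np.e**4) / 54
b = -2 * np.exp(-2 + np.exp(-2.0)) * (9 - 12 * np.e**2 + 2 * np.e**4) / 27
c = np.exp(-4 + np.exp(-2.0)) * (18 - 42 * np.e**2 + 43 * np.e**4) / 54

def F(x):       # distribution function
    return 1.0/(1.0 - np.log(x)) if x <= s else 1.0 - (a*x**2 + b*x + c)*np.exp(-x)

def Fbar(x):    # survival function 1 - F(x), written without cancellation
    return -np.log(x)/(1.0 - np.log(x)) if x <= s else (a*x**2 + b*x + c)*np.exp(-x)

def f(x):       # density
    if x <= s:
        return 1.0 / (x * (1.0 - np.log(x))**2)
    return (a*x**2 + (b - 2*a)*x + c - b) * np.exp(-x)

def fprime(x):  # derivative of the density
    if x <= s:
        return (1.0 + np.log(x)) / (x**2 * (1.0 - np.log(x))**3)
    return -(a*x**2 + (b - 4*a)*x + 2*a - 2*b + c) * np.exp(-x)

def S(x):       # integrand of (6.2): mu(1 + x f'/f - x mu)^2 + x^2 mu^2 (lambda + mu)
    mu, lam = f(x) / F(x), f(x) / Fbar(x)
    return mu * (1.0 + x*fprime(x)/f(x) - x*mu)**2 + x**2 * mu**2 * (lam + mu)

# integral over (0, s] via the substitution y = -log(x), avoiding the singularity at 0
I_left, _ = quad(lambda y: (1 + 2*y) / (y * (1 + y)**3), 2, np.inf)
# integral over [s, infinity); the integrand decays like x^4 exp(-x)
I_right, _ = quad(S, s, 100)

print(I_left)            # = log(3/2) - 5/18 ~ 0.128
print(I_right)           # ~ 2.77
print(I_left + I_right)  # I^min ~ 2.9 (for theta_2 = 1)
\end{verbatim}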
In view of the Cram\'{e}r-Rao bound
this means that, even if a huge sample of partial minima has been recorded,
it is impossible to construct an unbiased scale estimator
with variance less than $\theta_2^2/3$. Also, it should be noted that a similar example
can be constructed such that $f''(x)$ exists (and is continuous) for all $x>0$.
Of course the above example can be adapted to the partial maxima case by considering the location-scale family generated by
the distribution function
\[
F(x)=\left\{
\begin{array}{cc}
(ax^2-bx+c) e^{x}, & x\leq -s,
\vspace*{.5em}
\\
\displaystyle
\frac{-\log(-x)}{1-\log(-x)}, & -s\leq x <0,
\vspace*{.5em}
\\
1, & x\geq 0,
\end{array}
\right.
\]
with $s$, $a$, $b$ and $c$ as before.
\vspace{-0.6em}
\bigskip
\noindent {\small\bf Acknowledgements.}
Research partially
supported by the University of Athens' Research Fund under Grant
70/4/5637. Thanks are due to an anonymous referee for the very careful reading of the manuscript and also for correcting some mistakes.
I would also like to express my thanks to Professors Fred
Andrews, Mohammad Raqab, Narayanaswamy Balakrishnan, Barry Arnold
and Michael Akritas, for their helpful discussions and comments;
also to Professors Erhard Cramer and Claude Lef\`{e}vre, for
bringing to my attention many interesting references related to
log-concave densities. Special thanks are due to Professor Fredos
Papangelou for his helpful remarks that led to a more general
version of Theorem \ref{theo4.2}, and also to Dr.\ Antonis Economou,
whose question about the Normal distribution was found to be crucial
for the final form of the main result.
\vspace{-0.6em}
\small
\subsection{Data set}
Our data set is composed of search result data from the Airbnb Experiences marketplace. As we mentioned in the introduction, this marketplace allows users to search for unique activities to do while travelling. A typical experience takes 2 hours on average. Experience Hosts can offer these activities to several travellers at the same time, and they can offer the same activity multiple times per day. We sampled a set of anonymized searches which occurred between January 1 and May 31, $2019$.
We kept only searches in which the user actually purchased an experience.
For each search we collected detailed information on displayed experiences, including the ranking position for all experiences in that search.
In Figure \ref{fig:position_plot} we show the ranking position of the experience that ends up being booked by the customer. As can be observed, most of the booked experiences were ranked high on the search page. Therefore, we can conclude that the top ranking positions are where the competition for bookings is strongest.
\begin{figure}[thb]
\centering
\includegraphics[width=1.0\linewidth]{figures/DistributionPosition.png}
\caption{Booking ranking position distribution.}
\label{fig:position_plot}
\end{figure}
In addition to experience ranking information we collect a variety of other experience attributes that are used for feature engineering purpose, as described in Section \ref{sec:model}.
Table \ref{tab:statistics} reports a few more statistics of our data set.
\begin{table}[thbp]
\centering
\scalebox{1}{
\begin{tabular}{l | r}
\toprule
Number of search events & $500k$ \\
Number of distinct experiences & $54k$ \\
Number of search events - Data set 2 (training) & $87k$\\
Number of search events - Data set 2 (test) & $36k$\\
Number of search events - Data set 1 (training) & $80k$\\
Number of search events - Data set 1 (test) & $32k$\\
\bottomrule
\end{tabular}
}
\vskip 0.05in
\caption{Statistics of our data sets.
}
\label{tab:statistics}
\end{table}
A large portion of our training set is used for training the value model, which was described in Section \ref{subsection:value}. We use a small portion of the training set to train our pricing strategy which determines an optimal price for each experience. We consider two scenarios. In the first scenario we train our pricing strategy using the first $23$ days of April and we use the last week of April as our test set, referred to as Data set $1$. In the second scenario we train our pricing strategy on the first $23$ days of May and use the last week of May as a test set, referred to as Data set $2$.
\subsection{Experimental Framework}
For each day in the training set, we have a set of searches, where each search contains many experiences displayed to the user. In order to find the optimal price for each search, we need to have the value distribution of each experience that appeared in that search. We therefore train a value model (\textit {VALUE}\xspace) using all the information available one day before the user search\footnote{Note that this corresponds to the real-world scenario where the machine learning models are re-trained every day, and used for inference during the next day.}.
For each search, we assume that the value follows a normal distribution centered at our predicted score for each experience. The scores are predicted using the machine learning model that was trained using past booked prices as labels, and listing attributes as features. It should be noted that no price-related attributes were included as features for this model, and thus our predicted value is independent of the actual price of experiences.
In order to find the value distribution for each experience $i$ and date $ds$, we use the sample standard deviation of its values over the past $30$ days as an estimator of $\sigma_i$.
When training the pricing strategy, we restrict ourselves to the top-$20$ experiences that appear on the search page in each search.
For each experience shown in a particular search, we use our pricing strategy to compute a price vector. Since the predicted price is search-specific, i.e., the optimal price of an experience depends on the co-displayed experiences, we restrict our attention to the experiences that appeared frequently in the top $20$ search positions and combine the prices obtained from different searches for each experience by a simple average.
The revenue maximization (\textit {REV\_MAX}\xspace) strategy, which was described in Section \ref{sec:rev_max}, is applied to each search to determine the optimal price for each experience from the top $20$ positions. The algorithm used to carry out the optimization is L-BFGS-B \cite{liu1989limited, zhu1997algorithm}. L-BFGS-B is a limited-memory quasi-Newton method for simple box-constrained optimization problems. In the experiments, to reduce computational complexity, we may further restrict the price search space of experience $i$ to a smaller range, e.g., $[\hat{v}_i - 2 \hat{\sigma}_i, \hat{v}_i + 2 \hat{\sigma}_i]$, where $\hat{v}_i$ and $\hat{\sigma}_i$ are estimated from the output of the value model. At the end of this process, every experience has its actual price and a suggested price. In cases where we are not able to provide any suggestion, e.g., when an experience was never ranked at the top of any search, we use the predicted value from the \textit {VALUE}\xspace model as the suggested price. In the next section we describe how we use the suggested price and the actual price together with booking information to build a set of metrics for evaluation purposes.
\subsection{Metrics}
\label{sec:metrics}
In this section we introduce a set of metrics used in our offline evaluation. Most of our metrics are inspired by previous work on Airbnb dynamic pricing \cite{ye2018customized} and were adapted to our search-based methodology. The main assumption is that if an experience is booked after a particular search, then the suggested price should have been the same as or higher than the booked price, otherwise we have ``regret''; if it was not booked, it would have been better to suggest a lower price.
Let $i$ be a generic experience, and $S = \{S_j\}_j$ be a set of searches, where $S_j=\{i_{1j}, \dots, i_{N_j j}\}$ is a search containing $N_j$ experiences, $S^*_j$ denotes the experience which got booked during search $j$, $P_{ij}$ denotes the price of experience $i$ in search $S_j$, and $P_{i_{sugg}}$ denotes the suggested price for experience $i$ during the test search\footnote{Note that our suggestion for experience $i$ is fixed for the whole test set.}. We can then define our metrics in the following way (a computational sketch of these metrics is given after the list):
\begin{itemize}
\item \textit{Booking Regret} (\emph{BR}), defined as,
\begin{equation}
BR = \median_{S_j \in S} \big( \max(\frac{P_{ij}-P_{i_{sugg}}}{P_{ij}}, 0),i = S^*_j \big),
\end{equation}
where we first compute the regret of each search as the relative difference between the booked experience price and the suggested price for that experience, and then we get the median over all searches.
The intuition is that a good price suggestion method should not suggest a price that is lower than the booked price, which hurts the revenue of suppliers. Thus a lower booking regret
is an indicator of a better price suggestion strategy. On the other hand, the lower the suggested price, the higher the regret w.r.t.\ the price that the experience was booked for;
\item \textit{Weighted Booking Regret} (\emph{$BR_w$}), is defined as,
\begin{equation}
BR_w = \median_{S_j \in S} \big( \max(P_{ij}-P_{i_{sugg}}, 0),i = S^*_j \big).
\end{equation}
Since booking regret captures the revenue loss w.r.t. the booked price but not the absolute loss of the suppliers, we define $BR_w$ to measure the absolute revenue loss;
\item \textit{Price Decrease Recall} (\emph{PDR}), is defined as,
\begin{equation}
PDR = \frac{\sum_{S_j \in S} |\{i \in S_j | i \neq S^*_j \land P_{i_{sugg}} < P_{ij} \}| }{\sum_{S_j \in S} |\{i \in S_j | i \neq S^*_j \}| },
\end{equation}
where the numerator counts the experiences that were not booked and had a lower price suggested than their original price, and the denominator includes all the experiences that were not booked over all searches. The intuition here is that if the experience was not booked and the price suggestion was higher than the actual price, then we have a miss; otherwise we have a hit. A higher \textit {PDR}\xspace is a possible indication of a better price suggestion. However, \textit {PDR}\xspace has limitations in the presence of competition, e.g., a properly priced experience may still not get booked when it co-occurs with experiences that are more competitive. Another point is that not all non-booked experiences during one search have to be sold for the best outcome. To overcome these limitations and gain some insight into what each strategy is doing when it lowers a price, we further defined \textit {PDR\_HP}\xspace (high revenue potential) and \textit {PDR\_LP}\xspace (low revenue potential), where we compute \textit {PDR}\xspace for two subsets of non-booked experiences. In the first case (\textit {PDR\_HP}\xspace) we consider only experiences that have a value above the upper quartile of all experience values, and a conversion rate that is below the lower quartile, despite receiving many impressions. In the second case (\textit {PDR\_LP}\xspace) we consider experiences that have a value below the lower quartile, and a high conversion rate (above the upper quartile). A good pricing strategy should have a \textit {PDR\_HP}\xspace that is higher than its \textit {PDR\_LP}\xspace, indicating that it is targeting the high revenue potential experiences.
\item \textit{Revenue Potential} (\emph{REV\_POTENT}), is defined as,
\begin{equation}
\begin{aligned}
REV\_POTENT &=
\frac{1}{|S|} \sum_{S_j \in S} \max_{\{i \in S_j | i \neq S^*_j \land P_{i_{sugg}} < P_{ij} \}} gain_{ij} \cdot D_i , \\
& gain_{ij} = P_{ij} - P_{S^*_{j}j},
\end{aligned}
\end{equation}
where we consider all non-booked experiences for which we suggested a price that is lower than the actual price, and we want to approximate the revenue gain that would have been obtained if they had been booked after adopting the suggested price. $D_i$\footnote{We adjusted the demand by an elasticity of demand of $1.5$, that is, an increase of $1.5\%$ for a price drop of $1\%$.} indicates a demand index, which is the probability of an experience getting booked. This is described in more detail in the next section, where we learn this probability for a comparison with other strategies.
\item \textit{Recall} (\emph{RECALL}), is defined as the percentage of experiences for which the model was able to suggest a price.
\end{itemize}
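As a concrete illustration of the definitions above, the following minimal sketch computes $BR$, $BR_w$ and \textit {PDR}\xspace from a flat table of search results. It assumes Python with pandas; the column names (search\_id, price, suggested\_price, booked) are hypothetical and the data are purely illustrative.
\begin{verbatim}
# Minimal sketch of the evaluation metrics, assuming a pandas DataFrame with
# hypothetical columns: search_id, experience_id, price, suggested_price, booked.
import pandas as pd

def booking_regret(df, weighted=False):
    booked = df[df["booked"] == 1]
    gap = (booked["price"] - booked["suggested_price"]).clip(lower=0)
    if not weighted:
        gap = gap / booked["price"]          # relative regret (BR)
    return gap.median()                       # median over booked searches

def price_decrease_recall(df):
    non_booked = df[df["booked"] == 0]
    hits = (non_booked["suggested_price"] < non_booked["price"]).sum()
    return hits / len(non_booked)

# Example usage on a toy data set.
toy = pd.DataFrame({
    "search_id":       [1, 1, 1, 2, 2, 2],
    "experience_id":   [10, 11, 12, 10, 13, 14],
    "price":           [50.0, 80.0, 30.0, 50.0, 90.0, 40.0],
    "suggested_price": [55.0, 60.0, 25.0, 45.0, 85.0, 42.0],
    "booked":          [1, 0, 0, 1, 0, 0],
})
print(booking_regret(toy))                 # BR
print(booking_regret(toy, weighted=True))  # BR_w
print(price_decrease_recall(toy))          # PDR
\end{verbatim}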
\subsection{Comparisons }
In this section we describe the baselines and related work we used in our offline experiments for comparison to our proposed solution. The Customized Regression Model (\textit {CRM}\xspace ) proposed in \cite{ye2018customized} for determining an optimal price for Airbnb Home rental is the main related work in our comparisons. The \textit {CRM}\xspace method consists of two components: a booking probability model and a pricing strategy layer. The booking probability model is constructed to estimate the demand for a future night at specific prices. The authors recognize the difficulty in using the estimated demand directly in \eqref{demand_rev} to maximize the revenue, and therefore construct a second strategy layer that maps the booking probability to a price suggestion.
We implemented the \textit {CRM}\xspace booking probability model \cite{ye2018customized} using the same set of features used in our value model (Table \ref{tab:features}), plus a pivot price. In contrast to the Airbnb Homes marketplace, where only a single guest can book a given listing night, in the Airbnb Experiences marketplace multiple guests can book the same Experience on the same day. Therefore, we needed to adapt the implementation of the \textit {CRM}\xspace booking probability model to account for this difference by considering experiences which had at least a single booking as positives and ones which had zero bookings as negatives.
The second component of \textit {CRM}\xspace requires learning a demand index function $V_{\bm{\theta}}$, which takes the booking probability as input. To learn $\bm{\theta}$, the \textit {CRM}\xspace strategy model adopts a customized loss function and learns a $\bm{\theta}$ for each experience.
However, our data set exhibits less price variation, and thus it was not ideal to learn $\bm{\theta}$ at the experience level. Therefore, we aggregated the experiences at the market and category level and learned one $\bm{\theta}$ for each market and category. When \textit {CRM}\xspace is not able to suggest any price, we use the actual price as the suggested one.
\noindent\textbf{Baselines.} To better monitor the behavior of the set of metrics, we also compare with two baseline pricing strategies. The first strategy prices all products at zero (\textit {ZERO}\xspace), and the second strategy (\textit {AVG}\xspace) uses the average booked price observed in the training set as a suggested price.
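Both baselines admit a one-line implementation each; the following minimal sketch assumes Python with pandas and hypothetical column names.
\begin{verbatim}
# Minimal sketch of the ZERO and AVG baselines, assuming pandas and a training
# table of bookings with hypothetical columns: experience_id, booked_price.
import pandas as pd

def zero_baseline(experience_ids):
    # ZERO: suggest a price of zero for every experience
    return {e: 0.0 for e in experience_ids}

def avg_baseline(train_bookings: pd.DataFrame):
    # AVG: suggest the average booked price observed in the training set
    return train_bookings.groupby("experience_id")["booked_price"].mean().to_dict()
\end{verbatim}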
\subsection{Related Work and Adaptation}
The CRM model consists of two components: a booking probability model that predicts whether an available future night of a specific listing will be booked as of the day the prediction is made, plus a second strategy model which maps the booking probability to a price suggestion. Specifically, the price suggestion is centered at a pivot price, and is decreased or increased by a magnitude which is a non-linear function of the booking probability. The non-linear function contains parameters learned by minimizing a customized loss function.
\subsubsection{Booking probability Model}
In CRM, the booking probability model (BPM) predicts, for a given listing at a specific date $ds$, the cumulative probability $P_r$ that a future $ds\_night$ will be booked from $ds$ at different prices. The model uses features including lead time, supply and demand, and listing features. We replace this booking probability model with an occupancy rate model, which predicts, for each instance that starts in the next $28$ days, the probability that the occupancy rate is in a range $(a, b]$ when the price is set at some pivot price, namely,
\begin{equation}
Pr(occrate \in (a,b] \mid X_{ds}, ds_{night}, ds_{night\,hour}, P_{pivot}),
\end{equation}
where $X_{ds}$ denotes the features of an experience instance. This model predicts, for the next $28$ days, the occupancy rate of each experience instance; the probability of booking is the special case $a=0, b=\infty$.
Due to the challenges in demand estimation, instead of directly maximizing the revenue using the booking probability model, a better choice is to construct a strategy model which computes the optimal price based on some target metrics, and to use the estimated demand at the pivot price $P_{pivot}$ as input to this model. Then the price suggestion is obtained via the following non-linear transformation of $P_r$,
\begin{equation}
P_{sugg} = P\cdot V = P_{pivot} \cdot \left\{
\begin{array}{ll}
1 + \theta_1(P_r(P_{pivot})^{\phi_H^{-P_r D}}-\theta_2), & \mbox{if } D \geq 0 \\
1 + \theta_1(P_r(P_{pivot})^{\phi_L^{-(1-P_r)D}}-\theta_2), & \mbox{if } D < 0,
\end{array}
\right.
\end{equation}
where $P_{pivot}$ could be the most representative calendar price set by the host, and $P_r(P_{pivot})$ is the booking probability estimated at price $P_{pivot}$ from the first model.
$\theta_1$ and $\theta_2$ are parameters learned in the strategy model. $D$ is a normalized demand score; in our experiments, we set it to be the ratio of bookings to all available instances in the last 30 days, normalized to $[-1,1]$.
\subsubsection{Strategy Model}
The second strategy layer takes the (price, probability) pair as input, and learns the optimal price so as to minimize the loss of making `bad' price suggestions. They define the lower and upper bounds of the optimal price range as \begin{equation}
\begin{aligned}
L(P_i, y_i) &= y_i \cdot P_i + (1-y_i)\cdot c_1 P_i \\
U(P_i, y_i) & = (1-y_i) \cdot P_i + y_i \cdot c_2 P_i,
\end{aligned}
\end{equation}
where $c_1$ and $c_2$ are two hyper-parameters to tune for the best performance in terms of the defined metrics. The objective function is defined as
\begin{equation}
\mathcal{L} = \mbox{argmin}_{\theta} \sum_{i=1}^N(L(P_i, y_i) - f_{\theta}(x_i))^{+} + (f_{\theta}(x_i) - U(P_i, y_i))^{+},
\end{equation}
where $f_{\theta}$ is assumed to be of an asymmetric exponential form.
The original CRM model was trained at the listing level for homes. However, for experiences, due to the lower price variation, learning $\theta_1$ and $\theta_2$ at the instance level is not ideal. We therefore aggregate experiences at the market-category level during training.
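The following minimal sketch illustrates this market-category level fit. It assumes Python with NumPy/SciPy; the mapping \texttt{f\_theta} below is a simple hypothetical stand-in for the asymmetric exponential form of \cite{ye2018customized}, not a reproduction of it, and all numbers are illustrative.
\begin{verbatim}
# Minimal sketch of the CRM-style strategy-layer loss, assuming NumPy/SciPy.
# L(P,y) and U(P,y) are the lower/upper bounds of the acceptable price range.
import numpy as np
from scipy.optimize import minimize

c1, c2 = 0.9, 1.1   # hyper-parameters defining the acceptable price band

def bounds(P, y):
    L = y * P + (1 - y) * c1 * P     # lower bound of the optimal price range
    U = (1 - y) * P + y * c2 * P     # upper bound of the optimal price range
    return L, U

def f_theta(theta, P_pivot, prob):
    # hypothetical mapping from booking probability to a price suggestion
    t1, t2 = theta
    return P_pivot * (1.0 + t1 * (prob - t2))

def loss(theta, P_pivot, prob, y):
    L, U = bounds(P_pivot, y)
    f = f_theta(theta, P_pivot, prob)
    return np.sum(np.maximum(L - f, 0.0) + np.maximum(f - U, 0.0))

# toy data for one market-category: pivot prices, booking probabilities, labels
P_pivot = np.array([50.0, 80.0, 30.0])
prob = np.array([0.7, 0.2, 0.5])
y = np.array([1, 0, 1])
res = minimize(loss, x0=[0.1, 0.5], args=(P_pivot, prob, y), method="Nelder-Mead")
print(res.x)   # fitted (theta_1, theta_2) for this market-category
\end{verbatim}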
\subsection{Price recommendation}
After model training, we have learned $\theta_1$ and $\theta_2$ at the market-category level, and we then make price suggestions at the instance level for instances in the test set, using the predicted booking probability from our occupancy rate model. We average the price suggestions over all instances of an experience in the test set to obtain an aggregated price suggestion for that experience.
\subsection{Value models}
\label{subsection:value}
Our first problem is to find the inherent value of each item. We rely on marketplace feedback at this stage, using the purchase event as confirmation of value for the booked experience. Specifically, whether or not to book an experience at a specific price is a decision made by our customers, and thus the booking event represents the customer's validation of the price set by the seller. This is common practice for picking ground truth when modeling a marketplace, because it is a meeting point of demand and supply. We model the booked price using a variety of demand, supply and item-relevant features. More formally, we perform a regression with the following loss function:
\begin{equation}
\label{eqregression}
\sum_{j=1}^m (f_{\bm{\theta}}(\bm{x_j}) - y_j)^2 + \lambda \cdot \Vert \bm{{\theta}}\Vert^2,
\end{equation}
where $y_j$ is the booked price, $\lambda$ is the regularization factor, $m$ is the number of bookings in the training set, $\bm{\theta}$ is the parameter to learn, and $\bm{x_j}$ is a set of features which describe the booked experience as well as the overall demand and market conditions.
\begin{table}[thbp]
\centering
\scalebox{1}{
\begin{tabular}{l | l}
\toprule
Category & item category \\
Host language & \# languages spoken by the host \\
Reviews & \# of reviews \\
AVG Review & Average Review score \\
Photo Quality & The picture quality of the item \\
Conversion Rate & Conversion rate of the item in search \\
Demand Score & an index of demand for an item \\
\bottomrule
\end{tabular}
}
\vskip 0.05in
\caption{A subset of features used to learn the value model.
}
\label{tab:features}
\end{table}
Table \ref{tab:features} reports a subset of the features we considered during value learning phase.
In order to find a value distribution for each item, we assume that for each experience the value follows a normal distribution $N(\mu_i, \sigma_i^2), i = 1,\cdots, N$, where $\mu_i$ is the output of our value model and $\sigma_i^2$ is estimated from the values of experience $i$ over the past month. The optimization of the objective function outputs price vectors on the same scale as the input values, so using booked prices ensures that the price suggestions land in a reasonable range of market prices. We use XGBoost \cite{chen2016xgboost} to train the value model.
The output of the first phase is a predicted booking price $f_{\bm{\theta}}(\bm{x_j}) = v_j$ for every experience. We use this prediction as the mean of a value distribution for experiences in the second optimization stage.
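The following minimal sketch illustrates the two steps just described (a value regression plus a per-experience dispersion estimate). It assumes Python with pandas and the xgboost package; all column names are hypothetical, features are assumed to be numerically encoded, and it is meant as an illustration rather than the exact production pipeline.
\begin{verbatim}
# Minimal sketch of the value model, assuming Python with pandas and xgboost.
# Column names (features, booked_price, experience_id, ds) are hypothetical;
# ds is assumed to be a datetime column and features to be numeric.
import pandas as pd
import xgboost as xgb

FEATURES = ["host_languages", "n_reviews", "avg_review",
            "photo_quality", "conversion_rate", "demand_score"]

def fit_value_model(train: pd.DataFrame) -> xgb.XGBRegressor:
    # Regression on booked prices; no price-related features are used,
    # so the predicted value is independent of the current listed price.
    model = xgb.XGBRegressor(n_estimators=200, max_depth=6,
                             learning_rate=0.1, reg_lambda=1.0)
    model.fit(train[FEATURES], train["booked_price"])
    return model

def value_distributions(model, history: pd.DataFrame) -> pd.DataFrame:
    # sigma_i: sample std of predicted values over the past 30 days for each
    # experience; mu is taken here as the mean predicted value over the same
    # window (in a per-search setting mu would be the prediction for that search).
    history = history.copy()
    history["value"] = model.predict(history[FEATURES])
    recent = history[history["ds"] >= history["ds"].max() - pd.Timedelta(days=30)]
    stats = recent.groupby("experience_id")["value"].agg(mu="mean", sigma="std")
    return stats.reset_index()
\end{verbatim}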
\subsection{Revenue Maximization Pricing Strategy}
\label{sec:rev_max}
So far we have proposed a method that is able to learn a value distribution for each item. In the next stage of our solution we aim at determining an optimal price for each item considering their search context and other items that appear alongside them in the search results.
Our proposed pricing strategy considers that this set of items is ``competing'' for the booking, and its objective is to maximize the suppliers' revenue. We proceed by computing the optimal price for each search event individually, and then aggregating the computed prices to output a single price suggestion for each item.
More formally, for each search, we maximize the following objective function:
\begin{equation}
\label{new_objective}
\E R_{\bm{p}} = \sum_{i=1}^N p_i \cdot P_r[i=\mbox{argmax}_j\{\alpha_j v_j-p_j\} \wedge (\alpha_iv_i-p_i \geq 0)],
\end{equation}
where $\alpha_i$ is a search-specific value multiplier capturing information about the user's preference in this search. For example, it could be the ranking score from the search ranking model. In general, if an item has a higher ranking score in a search, then its value in that search should also be amplified. This winning probability takes into account the fact that, in addition to user preferences, price is an important determining factor during purchase.
To compute the optimal price for each search, we rewrite the winning probability explicitly as a function of the distribution function of values. We assume that the values for experiences in the same search are mutually independent variables drawn from $N(\mu_i, \sigma_i^2)$, as described previously.
To reduce the computation load, we constrain the search space to bounded sets:
\begin{itemize}
\item Truncate the value distributions $F_i, i=1,\cdots, N$ from $R$ to a bounded range $[v_{min}, v_{max}]$. In implementation, this means that for each value distribution, we shift all probability mass above $v_{max}$ to the point $v_{max}$ and all probability mass below $v_{min}$ to the point $v_{min}$. Choice of $v_{min}$ and $v_{max}$ will be given in Theorem \ref{thm1}.
\item Restrict the price vector to a bounded set $[\xi v_{min}, v_{max}]$, $\xi > 1$ and consequently this reduces the search space from $R^N$ to bounded rectangles $[\xi v_{min}, v_{max}]^N$. This also ensures that price output from the algorithm will fall into a reasonable region.
\end{itemize}
These constraints on input and output variables incur loss on the revenue, and we will bound this loss in section \ref{theory}. After domain truncation, we can rewrite the winning probability of $i$-th item as:
\begin{equation}
\label{prob_new}
\begin{aligned}
&Pr(i=\mbox{argmax} \{ \alpha_j v_j - p_j\} \wedge (\alpha_i v_i - p_i) \geq 0)\\
&= Pr(\cap_{j\neq i}\{\alpha_j v_j - p_j < \alpha_i v_i - p_i\} \wedge (\alpha_i v_i - p_i) \geq 0)\\
&= \int_{v_{min}}^{v_{max}}Pr(\cap_{j\neq i}\{\alpha_j v_j - p_j < \alpha_i v - p_i\} \wedge (v \geq \frac{p_i}{\alpha_i}) \vert v_i = v)f_i(v)dv \\
&= \int_{\max(\frac{p_i}{\alpha_i} , v_{min})}^{v_{max}}Pr(\cap_{j\neq i}\{\alpha_j v_j - p_j < \alpha_i v - p_i\} \vert v_i = v)f_i(v)dv \\
&= \int_{\max(\frac{p_i}{\alpha_i} , v_{min})}^{v_{max}}\prod_{j\neq i}F_j((\alpha_i v - p_i+p_j)/\alpha_j )f_i(v)dv, \\
\end{aligned}
\end{equation}
where $F_i$ and $f_i$ are the distribution function and probability density function of the $i$-th experience's value distribution, respectively. In our experiments, \eqref{prob_new} is evaluated numerically by discretizing the value support.
The objective function can be rewritten as
\begin{equation}
\E R_p = \sum_{i=1}^N p_i \int_{\max(p_i/\alpha_i , v_{min})}^{v_{max}}\prod_{j\neq i}F_j((\alpha_i v - p_i+p_j)/\alpha_j )f_i(v)dv.
\end{equation}
For the top $N$ experiences from each search event $j$, we calculate the optimal price vector $p^{(j)}_1, \cdots, p^{(j)}_N$. Since the expected revenue is a function of the winning probabilities, which depend on the co-displayed experiences, the optimal price of the same experience is a random variable depending on the underlying value distributions of all top experiences in the search. For experience $i$, we aggregate the prices obtained from all search events where it appeared as one of the top results by taking the average, i.e., $p^*_i = \frac{1}{n_i}\sum_{j: i\in S_j}p^{(j)}_i$, where $S_j$ is the set of experiences that were ranked on top for the $j$-th search event, and $n_i$ is the number of search events where experience $i$ was ranked on top.
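The following minimal sketch illustrates this per-search optimization. It assumes Python with NumPy/SciPy, evaluates the winning probabilities of (\ref{prob_new}) on a discretized value grid, and maximizes the expected revenue with L-BFGS-B over a bounded price box; all numbers are illustrative.
\begin{verbatim}
# Minimal sketch of the per-search revenue maximization, assuming NumPy/SciPy.
# Values are N(mu_i, sigma_i^2), alpha_i are search-specific multipliers.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def expected_revenue(p, mu, sigma, alpha, v_min, v_max, grid=400):
    # E[R_p] = sum_i p_i * Pr(i = argmax_j(alpha_j v_j - p_j), alpha_i v_i - p_i >= 0)
    v = np.linspace(v_min, v_max, grid)
    dv = v[1] - v[0]
    rev = 0.0
    for i in range(len(mu)):
        mask = v >= p[i] / alpha[i]              # region where alpha_i v - p_i >= 0
        if not mask.any():
            continue
        vi = v[mask]
        win = norm.pdf(vi, loc=mu[i], scale=sigma[i])
        for j in range(len(mu)):
            if j != i:                           # F_j((alpha_i v - p_i + p_j)/alpha_j)
                win = win * norm.cdf((alpha[i] * vi - p[i] + p[j]) / alpha[j],
                                     loc=mu[j], scale=sigma[j])
        rev += p[i] * np.sum(win) * dv           # Riemann approximation of the integral
    return rev

def optimal_prices(mu, sigma, alpha, xi=1.05):
    # value-support truncation and price box, as described in the text
    v_min = float(np.min(mu - 1.96 * sigma))     # ~2.5% quantile
    v_max = float(np.max(mu + 1.96 * sigma))     # ~97.5% quantile
    box = [(xi * v_min, v_max)] * len(mu)
    res = minimize(lambda p: -expected_revenue(p, mu, sigma, alpha, v_min, v_max),
                   x0=np.asarray(mu, dtype=float), bounds=box, method="L-BFGS-B")
    return res.x

# toy search with three co-displayed experiences
mu, sigma = np.array([60.0, 45.0, 80.0]), np.array([8.0, 5.0, 10.0])
alpha = np.ones(3)
print(optimal_prices(mu, sigma, alpha))
\end{verbatim}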
\subsection{Theoretical results}
\label{theory}
In this part, we study the revenue loss due to the restriction and truncation performed on the price vector and on the value distribution support, respectively. We first restate Lemmas 24 and 27 from \cite{cai2015extreme}, which present results on the restriction of the price vector.
\begin{lemma}
\label{lm1}
Suppose that the values of items are independently distributed on $[v_{min}, v_{max}]$, and for any price vector $\bm{p} = (p_1,\cdots, p_n)$, construct a new price vector $\hat{\bm{p}}$ as follows: $\hat{p}_i = v_{max}, \ \text{if } p_i > v_{max}$, $\hat{p}_i = v_{min}, \ \text{if } p_i < v_{min}$, and otherwise $\hat{p}_i = p_i$. Then the expected revenue $ \E R_{\hat{\bm{p}}}$ and $ \E R_{\bm{p}}$ from two price vectors $\hat{\bm{p}}$ and $\bm{p}$ satisfy $\E R_{\hat{\bm{p}}} \geq \E R_{\bm{p}}$.
\end{lemma}
\begin{lemma}
\label{lm2}
$\forall \delta > 0$, for any price vector $\bm{p} = (p_1,\cdots, p_n)$, define $\bm{p}'$ as follows, let $p_i' = p_i$, if $p_i \geq \delta $, and otherwise $p_i' = \delta$.
Then the expected revenues $ \E R_{\bm{p}}$ and $ \E R_{\bm{p'}}$ from these two price vectors satisfy $\E R_{\bm{p'}} \geq \E R_{\bm{p}} - \delta$.
\end{lemma}
By Lemma \ref{lm1} and Lemma \ref{lm2}, we can see that when values of items are independently distributed on $[v_{min}, v_{max}]$, then for any price vector $\bm{p} \in R^N$, if we transform it to another price vector $\bm{p}''\in [\xi v_{min}, v_{max}]^N$, $\xi > 1$, then the expected revenue $ \E R_{\bm{p}''}$ and $ \E R_{\bm{p}}$ from these two price vectors $\bm{p}''$ and $\bm{p}$ satisfy $\E R_{\bm{p}''} \geq \E R_{\bm{p}} - \xi v_{min}$.
Next we show that we can truncate the support of value distributions to bounded range without hurting much revenue.
\begin{theorem}
\label{thm1}
Given a collection of random variables $\{ v_i \}_{i=1,\cdots,N}$, where $v_i \sim N(\mu_i, \sigma_i^2)$, suppose we truncate their distributions to a bounded range $[v_{min}, v_{max}]$, where $v_{max} = \max_{i=1,\cdots, N}\{ Z_i^{\alpha}\}$ and $v_{min} = \min_{i=1,\cdots, N}\{ Z_i^{1-\alpha}\}$, with $Z_i^{\alpha}, \alpha \in (0,1)$, being the $\alpha$-quantile of the distribution of $v_i$\footnote{In our experiments, $\alpha$ is often set to $0.975$.}. Then for any price vector $\bm{p} \in [\xi v_{min}, v_{max}]^N$, $\xi > 1$, we have $\vert \E R_{\bm{p}} - \E \hat{R}_{\bm{p}} \vert \leq v_{max}\cdot (1-\alpha^N)$, where $R_{\bm{p}}$ and $\hat{R}_{\bm{p}}$ are the revenues when the consumer's values follow the original and the truncated distributions, respectively.
\end{theorem}
\begin{proof}
For a set of random variables $\{ v_i \}_{i=1,\cdots,n}$ that are distributed as $v_i \sim N(\mu_i, \sigma_i^2)$, define a new set of random variables $\{ \hat{v}_i \}_{i=1,\cdots,n}$ as
\begin{equation}
\hat{v}_i = \left \{
\begin{aligned}
&v_{max}, &\ if \ v_i > v_{max}\\
&v_{min}, &\ if \ v_i \leq v_{min}\\
&v_i, \ \ &\text{otherwise}.
\end{aligned}
\right.
\end{equation}
An important fact is that for any given price vector $\bm{p}$, $R_{\bm{p}}$ and $\hat{R}_{\bm{p}}$ can differ only when $v_i \neq \hat{v}_i$ for some $i$, which reduces to the event that $\exists i, v_i > v_{max}$. To see this, if $\forall i$, $v_{min} < v_i \leq v_{max}$, then $v_i = \hat{v}_i$, and thus $R_{\bm{p}} = \hat{R}_{\bm{p}}$. If $\exists i$ such that $v_i \leq v_{min}$, and thus $\hat{v}_i = v_{min}$, then the price of item $i$ satisfies $p_i \geq \xi v_{min} > v_{min}$ and is therefore higher than both values $v_i$ and $\hat{v}_i$, so item $i$ will not be purchased in either case. Since the maximum price is $v_{max}$, we have the following bound for $\vert \E R_{\bm{p}} - \E\hat{R}_{\bm{p}} \vert$:
\begin{equation}
\begin{aligned}
\vert \E R_{\bm{p}} - \E\hat{R}_{\bm{p}} \vert &\leq v_{max} \cdot \Pr[\exists i, v_i > v_{max}]\\
& = v_{max} \cdot \Pr[\max_i v_i > v_{max}]\\
& = v_{max} \cdot (1-\Pr[\max_i v_i \leq v_{max}])\\
& = v_{max} \cdot (1- \prod_{i=1}^{N}\Pr[ v_i \leq v_{max}])\\
& \leq v_{max} \cdot (1- \alpha^N)
\end{aligned}
\end{equation}
\end{proof}
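A small Monte Carlo sanity check of this bound, with a fixed price vector and illustrative parameters, is sketched below (assuming Python with NumPy): it simulates the buyer's choice rule, i.e., purchasing the item maximizing $v_i-p_i$ when this surplus is non-negative, with and without truncating the values, and compares the observed revenue gap to $v_{max}(1-\alpha^N)$.
\begin{verbatim}
# Monte Carlo sanity check of Theorem 1's bound, assuming NumPy; all numbers
# are illustrative.
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([60.0, 45.0, 80.0])
sigma = np.array([8.0, 5.0, 10.0])
alpha_q = 0.975
v_max = np.max(mu + 1.959964 * sigma)   # max of 97.5% quantiles
v_min = np.min(mu - 1.959964 * sigma)   # min of 2.5% quantiles
p = np.array([50.0, 40.0, 70.0])        # a fixed price vector in [xi*v_min, v_max]^N

def revenue(values, prices):
    surplus = values - prices
    i = np.argmax(surplus)
    return prices[i] if surplus[i] >= 0 else 0.0

T = 100_000
gap = 0.0
for _ in range(T):
    v = rng.normal(mu, sigma)
    v_trunc = np.clip(v, v_min, v_max)
    gap += revenue(v, p) - revenue(v_trunc, p)

emp_gap = abs(gap / T)
bound = v_max * (1 - alpha_q ** len(mu))
print(emp_gap, bound)   # the empirical gap should not exceed the bound
\end{verbatim}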
\begin{comment}
Now combining these results, we present a lower bound on the revenue loss incurred by price restriction and value support truncation.
\begin{corollary}
Given a collection of random variables $\{ v_i \}_{i=1,\cdots,N}$, where $v_i \sim N(\mu_i, \sigma_i^2)$, if we truncate their distributions to a bounded range $[v_{min}, v_{max}]$, where $v_{max} = \max_{i=1,\cdots, N}\{ Z_i^{\alpha}\}$ and $v_{min} = \min_{i=1,\cdots, N}\{ Z_i^{1-\alpha}\}$ with $Z_i^{\alpha}, \alpha \in (0,1)$ being the $\alpha$-quantile of distributions of $v_i$. For any price vector $\bm{p} \in R^N$, transform it to another price vector $\bm{p}''$ $ \in [\xi v_{min}, v_{max}]^N$, $\xi>1$, then we have $\E \hat{R}_{\bm{p}''} \geq \E R_{\bm{p}} - v_{max} \cdot (1- \alpha^N) - \xi v_{min}$.
\end{corollary}
\begin{proof}
For any price vector $\bm{p}$, from Lemma \ref{lm1} we can transform it to another price vector $\bm{p'} \in [v_{min}, v_{max}]^N$ such that $ \E R_{\bm{p}} \leq \E R_{\bm{p}'}$. Then transform $\bm{p}'$ to another price vector $\bm{p}''$ as follows: $\forall i $, if $p_i' < \xi v_{min}$, $p_i'' = \xi v_{min}$, and $p_i'' = p_i'$ otherwise. By Lemma \ref{lm2}, we have $\E R_{\bm{p}''} \geq \E R_{\bm{p'}} - \xi v_{min}$. By Theorem \ref{thm1}, we have
\begin{equation}
\begin{aligned}
\E \hat{R}_{\bm{p''}} & \geq \E R_{\bm{p}''} - v_{max} \cdot (1- \alpha^N) \\
& \geq \E R_{\bm{p}'} - v_{max} \cdot (1- \alpha^N) - \xi v_{min} \\
& \geq \E R_{\bm{p}} - v_{max} \cdot (1- \alpha^N) - \xi v_{min}
\end{aligned}
\end{equation}
\end{proof}
\end{comment}
\begin{comment}
\subsection{Experiments with polynomial-time algorithm from paper Extreme Value Theorems for Optimal Multidimensional Pricing (EVT-OMP)}
A challenge in directly maximizing revenue based on \eqref{objective} is that
The algorithmic idea in paper EVT-OMP is to shift from the space of value distributions, which is exponential in $n$ and multidimensional, to the space of all possible revenue distributions, which is still exponential in $n$, but single-dimensional. For a given price vector, the revenue is a random variable depending on the underlying value distribution. Since every price vector leads to different revenue distribution so that there is still an exponential number of possible revenue distributions. The basic idea in this paper is to construct a polynomial-size cover of the set of all possible revenue distributions under the total variation distance between distributions.
The paper adopts an alternative characterization of expected revenue using (winning-value, winning-price) distributions for $n$ items.
\begin{equation}
\label{winning}
\begin{aligned}
R_{Pr} &= \sum_{i_1 \in [\vert V \vert], i_2 \in [\vert P \vert]} p^{i_2} \cdot Pr_{i_1, i_2} \bm{I}_{v^{(i_1)} \geq
p^{(i_2)}}\\
&= \sum_{i_2 \in [\vert P \vert]} p^{i_2} \cdot \big(\sum_{i_1\in [\vert V \vert]} Pr_{i_1, i_2} \big) \bm{I}_{v^{(i_1)} \geq
p^{(i_2)}}
\end{aligned}
\end{equation}
The cover is constructed using dynamic-programming. Starting from the first item, at each iteration it keeps updating $Pr_{i_1, i_2}$ after inputting a new item's price and value distribution. By rounding probabilities to the nearest integer of $\frac{1}{m}$, it constructs a cover of all possible revenue distributions which has at most $O(m+1^{\vert P \vert \cdot \vert V \vert })$ probabilities. After computing the winning distributions for $n$ items, it outputs the price vector which corresponds to the largest expected revenue \eqref{winning}.
The dynamic programming method is only suited to the case where the $\{v_i\}$ are supported on a common discrete set $S$ and the set of possible prices is also a discrete set. Thus, for more general continuous value distributions, several reductions are required before the dynamic programming algorithm can be applied.
\begin{enumerate}
\item Reduction from MHR distributions to bounded distributions $[u_{min}, u_{max}]$, where $[u_{min}, u_{max}]$ are functions of underlying value distributions.
Restrict the prices into the same range.
\item Discretize the support of the bounded distributions.
\item Discretize the probabilities assigned by value distributions supported on a discrete set to points in their support.
\end{enumerate}
We will briefly go through each step and point out some practical issues.
\textbf{Step 1}. To truncate the value distributions, the first step is to compute a threshold $\beta$. $\beta$ is an anchoring point of the MHR distributions (Theorem 19 in the paper). Let $X_1, \cdots, X_n$ be a collection of independent random variables whose distributions are MHR. Then there exists some anchoring point $\beta$ such that $Pr[\max{X_i} \geq \frac{\beta}{2}] \geq 1-\frac{1}{\sqrt{e}}$ and $E(\max{X_i}I(\max{X_i}>2\beta\log_2{1/\epsilon})) \leq 36\beta\epsilon\log_2{1/\epsilon}$, for all $\epsilon \in (0, 1/4)$. $\beta$ is computed through Algorithm 1.
Theorem 41 says that, after this reduction, given $\epsilon \in (0, 1/4)$ we can establish a polynomial-time reduction from the problem of computing a near-optimal price vector when the buyer's value distributions are arbitrary MHR distributions to the case where the buyer's value distributions are supported on $[\frac{\epsilon}{2}\beta, 2\log_{2}(\frac{1}{\epsilon})\beta]$. An issue is that the upper bound $u_{max}$ is too large for us to make use of. For example, suppose we have 16 mutually independent variables following normal distributions with mean vector $(10,200,10,199,70,55,12,15,10,100,20,10,59,105,81,11)$ and standard deviations $(1,1,1,3,2,2,1,1,1,1,1,3,2,2,1,1)$. Then, following this algorithm, $\beta=200$, $u_{max}=2257$ and $u_{min}=4$ when $\epsilon = 0.2$. Ideally, a smaller $u_{max}$ is better for a more efficient reduction. In addition, a larger $r = \frac{u_{max}}{u_{min}}$ brings a higher cardinality when we further discretize the support of the value distributions. $\beta$ provides a lower bound on the optimal revenue: in particular, the optimal revenue is at least $(1-\frac{1}{\sqrt{e}})\frac{\beta}{2} = c_1\cdot \beta$. Does this mean that if there is a high-value item, then the optimal revenue will be bounded below by this item? Different reduction procedures are used depending on whether the approximation to the optimal revenue is intended to be additive or multiplicative; see Lemmas 30 and 31. Specifically, given a near-optimal price vector achieving a $(1-\delta)$-fraction of the optimal revenue in the case of $\{\hat{v}_i\}_{i=1}^n$, we can efficiently compute a price vector with revenue at least a $(1-\delta - \frac{2\epsilon+3c_2(\epsilon)}{c_1})$-fraction of the optimal revenue in the case of $\{{v}_i\}_{i=1}^n$. However, this bound is very loose. For example, in Figure \ref{fig:bound}, we can see that when $\epsilon$ is greater than $0.0003$, $\frac{2\epsilon+3c_2(\epsilon)}{c_1}$ is already greater than 1.
\begin{figure}
\includegraphics[width=0.8\linewidth]{figures/Rplot1.pdf}
\label{fig:bound}
\end{figure}
\textbf{Step 2}. For all $\epsilon >0$, we restrict the prices to lie in the range $[\epsilon\beta, 2\log_{2}(\frac{1}{\epsilon})\beta]$ (we only lose an $O(\epsilon \log_{2}(\frac{1}{\epsilon}))$ fraction of the optimal revenue). Lemma 40 says we can constrain the prices to $[\frac{\epsilon}{2}\beta, 2\log_{2}(\frac{1}{\epsilon})\beta]$ without hurting the revenue. After this truncation, support discretization is also needed for the prices. The resulting cardinality of the price support set is $O(\frac{\log r}{\epsilon^2})$. For moderate $\epsilon$, this value can be very large.
\textbf{Step 3}. Transform $\{v_i\}_{i=1}^n \sim F_i$ into a new collection of random variables $\{\hat{v}_i\}_{i=1}^n \sim \hat{F}_i$ that take values in $[\frac{\epsilon}{2}\beta, 2\log_{2}(\frac{1}{\epsilon})\beta]$ and satisfy the following: a near-optimal price vector for the setting where the buyer's values are distributed as $\{\hat{v}_i\}_{i=1}^n$ can be efficiently transformed into a near-optimal price vector for the original setting, i.e. where the buyer's values are distributed as $\{{v}_i\}_{i=1}^n$. The way to truncate the distributions from MHR to a bounded range is by shifting all probability mass from $(2\log_2(\frac{1}{\epsilon})\beta, +\infty)$ to the point $2\log_2(\frac{1}{\epsilon})\beta$, and all probability mass from $(-\infty, \epsilon \beta)$ to $\frac{\epsilon}{2}\beta$. Given mutually independent variables that are MHR, generate a new collection of random variables $\{\hat{v}_i\}_i$ via the following coupling: for all $i\in [n]$, set $\hat{v}_i = \frac{\epsilon}{2}\beta$ if $v_i < \epsilon \beta$, set $\hat{v}_i = 2\log_2(\frac{1}{\epsilon})\beta$ if $v_i > 2\log_2(\frac{1}{\epsilon})\beta$, and set $\hat{v}_i = v_i$ otherwise. Now that we have $\{\hat{v}_i\}$ supported on a bounded range, we construct a new collection of variables supported on a discrete set of cardinality $O(\frac{\log(r)}{\delta^2})$ or $O(\log(r)/\hat{\epsilon}^{16})$, where $\hat{\epsilon} = \min\{\epsilon, \frac{1}{(4\ceil{\log_2 r})^{1/6}} \} $. Again, the cardinality is too large for good accuracy.
\textbf{Discretization of probabilities}. In this step, the paper discretizes the probabilities assigned by value distributions supported on a discrete set to the points in their support. The original value distributions are $ \{ F_i \} $ and we compute discretized versions $\{ \hat{F_i} \}$. The common support of the $\{v_i \}_{i\in[n]}$ is $S=\{ s_1, \cdots, s_k \}$, and we need to construct another collection $\{ v_i'\}_{i\in[n]}$ whose distributions are supported on the same set $S$ but only use probabilities that are integer multiples of $1/m$. Denote $\pi_{s_j} = \mathcal{P}(v_i = s_j)$. Then round $\pi_{s_j}$ down to the nearest integer multiple of $1/m$ to get $\pi_{s_j}'= \mathcal{P}(v_i' = s_j)$. Finally, round $\pi_{s_1}$ up to get $\pi_{s_1}'$, which guarantees that $\pi'$ is still a distribution. However, a proper choice of $m$ needs to be greater than $2\cdot \vert V \vert$.
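As a small, self-contained illustration of this rounding step (a sketch under our own conventions: exact rational arithmetic, and the first support point is the one that absorbs the lost mass, mirroring the role of $\pi_{s_1}$ above; the function name is hypothetical):
\begin{verbatim}
from fractions import Fraction

def round_to_grid(probs, m):
    # round each probability down to the nearest integer multiple of 1/m,
    # then add the lost mass back to the first entry so that the result
    # is again a probability distribution made of multiples of 1/m
    rounded = [Fraction(int(Fraction(p) * m), m) for p in probs]
    rounded[0] += 1 - sum(rounded)
    return rounded

# round_to_grid([0.26, 0.33, 0.41], 10)
#   -> [Fraction(3, 10), Fraction(3, 10), Fraction(2, 5)]
\end{verbatim}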
\end{comment}
\section{Introduction}
\label{sec:introduction}
\input{introduction.tex}
\section{Related work}
\label{sec:related}
\input{related.tex}
\section{Problem Definition}
\label{sec:probdef}
\input{probdef.tex}
\section{Model}
\label{sec:model}
\input{model.tex}
\section{Experiments}
\label{sec:experiments}
\input{experiments.tex}
\section{Results}
\label{sec:results}
\input{results.tex}
\section{Conclusions and future work}
\label{sec:conclusion}
\balance
\input{conclusion.tex}
\newpage
\bibliographystyle{acm}
\section{Introduction}
Cet article fait suite à \cite{J4} où l'on détermine la partie supercuspidale de la cohomologie de de Rham pour le premier revêtement de la tour de Drinfeld. Dans les articles fondateurs \cite{lt1,lt2,dr1,dr2}, deux tours de revêtements rigides-analytiques $({\mathcal{M}}_{LT}^n)_n$ ont été construites dont la cohomologie étale $l$-adique a un lien profond avec les correspondances de Jacquet-Langlands et de Langlands locales. La première, la tour de Drinfeld, est une famille dénombrable de revêtements successifs de l'espace symétrique de Drinfeld, à savoir ${\mathbb{P}}_K^d \backslash \bigcup_{H \in {\mathcal{H}}} H$, le complémentaire, dans l'espace projectif sur $K$ une extension finie de ${\mathbb{Q}}_p$, des hyperplans $K$-rationnels. La seconde, la tour de Lubin-Tate, constitue une famille de revêtements sur la boule unité rigide $\mathring{{\mathbb{B}}}_K$. L'étude des propriétés cohomologiques de ces espaces a culminé dans les travaux \cite{dr1, cara3, harrtay, falt, fargu, dat1, dat0, mied, Scholze1}... où il a été montré que la partie supercuspidale de la cohomologie $l$-adique réalisait à la fois les correspondances de Jacquet-Langlands et de Langlands locale. Le but de ce travail ainsi que de \cite{J4} est d'obtenir des résultats similaires pour les cohomologies $p$-adiques avec l'espoir d'exhiber des versions $p$-adiques des correspondances de Langlands locale encore conjecturales. En guise de première étape à l'établissement de ce programme conséquent, nous nous concentrons uniquement dans ces travaux sur la cohomologie de de Rham où nous nous attendons à obtenir les mêmes représentations que pour la cohomologie $l$-adique où l'on a oublié l'action du groupe de Weil. En particulier, nous pouvons seulement exhiber la correspondance de Jacquet-Langlands.
Même si certaines méthodes en $l$-adique ne peuvent être adaptées à l'étude que nous voulons mener et si la cohomologie de de Rham de l'ensemble des deux tours semble inaccessible pour le moment, quelques cas particuliers ont été établis comme le cas de dimension $1$ du côté Drinfeld (quand $K={\mathbb{Q}}_p$) \cite{brasdospi}, ($K$ quelconque)\cite{GPW1}, du côté Lubin-Tate \cite{GPW1} et récemment en dimension quelconque pour le premier revêtement du côté Drinfeld \cite{J4} (voir aussi \cite{scst,iovsp,ds} pour le niveau $0$ du côté Drinfeld\footnote{En niveau $0$ du côté Lubin-Tate, le résultat est immédiat car la boule unité ouverte rigide n'a pas de cohomologie.}). Dans cet article, nous prouvons le résultat pour le premier revêtement du côté Lubin-Tate. L'espoir de prolonger les résultats du côté Drinfeld aux espaces analogues du côté Lubin-Tate est motivé par le lien profond qui existe entre la géométrie des deux tours. Ce lien s'illustre dans les travaux \cite{falt2, fafa} où il a été montré que chaque tour en niveau infini a une structure d'espace perfecto\"ide et que ces deux espaces sont isomorphes.
Avant d'énoncer le résultat principal de cet article, introduisons les représentations qui apparaissent dans l'énoncé. Soit $K$ une extension finie de ${\mathbb{Q}}_p$ d'anneau des entiers ${\mathcal{O}}_K$, d'uniformisante $\varpi$ et de corps résiduel ${\mathbb{F}}={\mathbb{F}}_q$. Notons $C=\widehat{\bar{K}}$ le complété d'une clôture alg\'ebrique de $K$ et $\breve{K}= \widehat{K^{nr}}$ le compl\'eté de l'extension maximale non ramifi\'ee dans $\bar{K}$. On note $({\mathcal{M}}_{LT}^n)_n$ les revêtements de dimension $d$ sur $\breve{K}$ de la tour de Lubin-Tate. Chacun de ces espaces rigides-analytiques admet des actions des groupes $\gln_{d+1}({\mathcal{O}}_K)=G^\circ$, $D^*$ avec $D$ l'algèbre à division sur $K$ de dimension $(d+1)^2$ et d'invariant $1/(d+1)$, $W_K$ le groupe de Weil de $K$, qui commutent entre elles. De plus, ces revêtements se décomposent sous la forme ${\mathcal{M}}_{LT}^n=\lt^n\times{\mathbb{Z}}$ et on peut aisément relier les cohomologies de ${\mathcal{M}}_{LT}^n$ et de $\lt^n$.
Étant donné un caractère primitif $\theta$ du groupe ${\mathbb{F}}_{q^{d+1}}^*$, on peut lui associer les représentations suivantes :
\begin{enumerate}
\item Sur le groupe $\gln_{d+1}({\mathbb{F}}_q)$, $\dl(\theta)$ sera la repr\'esentation cuspidale associ\'ee \`a $\theta$ via la correspondance de Deligne-Lusztig. On pourra la voir comme une repr\'esentation de $G^\circ $ par inflation.
\item $\tilde{\theta}$ sera le caract\`ere de $I_K (\varphi^{d+1})^{{\mathbb{Z}}}\subset W_K$ par le biais de $I_K \to I_K/ I_{K_N}\cong {\mathbb{F}}_{q^{d+1}}^*\xrightarrow{\theta} \widehat{\bar{K}}^*$ avec $K_N=K(\varpi^{1/N})$. On impose de plus $\tilde{\theta}(\varphi^{d+1})=(-1)^d q^{\frac{d(d+1)}{2}}$.
\end{enumerate}
Nous donnons maintenant les repr\'esentations associ\'ees sur les groupes $D^*$, $W_K$. Posons :
\begin{enumerate}
\item $\rho(\theta):= \cind_{{\mathcal{O}}_D^*\varpi^{\mathbb{Z}}}^{D^*} \theta$ o\`u $\theta$ est vue comme une ${\mathcal{O}}_D^* \varpi^{{\mathbb{Z}}}$-repr\'esentation via ${\mathcal{O}}_D^* \varpi^{{\mathbb{Z}}} \to {\mathcal{O}}_D^* \to {\mathbb{F}}_{q^{d+1}}^*$.
\item $\wwwww(\theta):= \cind_{I_K (\varphi^{d+1})^{{\mathbb{Z}}}}^{W_K} \tilde{\theta}$.
\end{enumerate}
Le théorème principal de l'article est le suivant :
\begin{theointro}
\label{theointroprinc}
Pour tout caractère primitif $\theta: {\mathbb{F}}_{q^{d+1}}^*\to C^*$, il existe des isomorphismes de $G^\circ$-représentations
\[\homm_{D^*}(\rho(\theta), \hdrc{i}(({\mathcal{M}}_{LT, C}^1/ \varpi^{{\mathbb{Z}}}))){\cong} \begin{cases} \dl(\theta)^{d+1} &\text{ si } i=d \\ 0 &\text{ sinon} \end{cases}.\]
\end{theointro}
Rappelons que les représentations $\rho(\theta)$ considérées parcourent l'ensemble des représentations irréductibles de niveau $0$ de $D^*$ de caract\`ere central trivial sur $\varpi^{{\mathbb{Z}}}$ dont l'image par la correspondance de Jacquet-Langlands est supercuspidale.
Expliquons dans les grandes lignes la stratégie de la preuve. L'un des grands avantages à travailler sur les premiers revêtements de chacune des tours est que la géométrie des deux espaces considérés reste encore accessible. Cela nous permet alors d'espérer pouvoir appliquer les arguments purement locaux des thèses de \cite{yosh} (côté Lubin-Tate) et \cite{wa} (côté Drinfeld). En niveau supérieur, la géométrie se complique grandement et nous doutons qu'une stratégie similaire puisse fonctionner. Du côté Drinfeld, cela s'illustre par l'existence d'une équation globale au premier revêtement (voir \cite{J3}). Du côté Lubin-Tate, la situation est encore meilleure car on a l'existence d'un modèle semi-stable généralisé (voir plus bas) construit par Yoshida. Cette propriété jouera un rôle clé dans la stratégie que nous avons entrepris et nous expliquons maintenant son utilité.
Soit ${\mathcal{X}}$ un schéma formel $p$-adique, il est de réduction semi-stable généralisée si, Zariski-localement sur ${\mathcal{X}}$, on peut trouver un morphisme étale vers ${\rm Spf}({\mathcal{O}}_K\langle X_1,...,X_n\rangle/
(X_1^{\alpha_1}...X_r^{\alpha_r}-\varpi))$ pour certains $r\leq n$ et $\alpha_i\geq 1$ (où $\varpi$ est une uniformisante de $K$).
\[
{\mathcal{O}}_{\breve{K}} \llbracket T_0,\ldots, T_d \rrbracket /(T_0^{e_0}\cdots T_r^{e_r}- \varpi )
\]
avec $r\leq d$ et $e_i$ premier à $p$.
Nous dirons alors qu'un schéma formel ${\mathfrak{X}}$ est ponctuellement semi-stable généralisé si, pour tout point fermé, l'anneau local complété est de cette forme. Il s'agit d'une notion un peu plus faible car on peut s'autoriser des espaces qui ne sont pas $p$-adiques\footnote{Par exemple, $\spf({\mathcal{O}}_{\breve{K}} \llbracket T_0,\ldots, T_d \rrbracket /(T_0^{e_0}\cdots T_r^{e_r}- \varpi ))$ est ponctuellement semi-stable généralisé mais pas semi-stable généralisé.}.
Le résultat suivant qui a en premier été prouvé par Grosse-Kl\"onne dans \cite[Theorem 2.4.]{GK1} pour le cas semi-stable puis dans \cite[Théorème 5.1.]{J4} pour le cas semi-stable généralisé, est le point de départ de notre argument.
\begin{propintro
Étant donné un schéma formel semi-stable généralisé ${\mathcal{X}}$ avec pour décomposition en composantes irréductibles ${\mathcal{X}}_s=\bigcup_{i\in I} Y_i$, pour toute partie finie $J\subset I$, la flèche naturelle de restriction
\[\hdr{*} (\pi^{-1}(]Y_J[_{\mathcal{X}}))\fln{\sim}{} \hdr{*} (\pi^{-1}(]Y_{J}^{lisse}[_{\mathcal{X}}))\]
est un isomorphisme pour $Y_J=\inter{Y_j}{j\in J}{}$ et $Y_J^{lisse}=Y_{J}\backslash \bigcup_{i\notin J}Y_i$.
\end{propintro}
Ce résultat nous permet de mettre en place l'heuristique suivante. Étant donné un schéma formel semi-stable généralisé, on a un recouvrement de la fibre générique par les tubes des composantes irréductibles $(]Y_i[)_{i\in I}$ de la fibre spéciale. On veut alors appliquer la suite spectrale de Cech à ce recouvrement. On se ramène ainsi à calculer les cohomologies des intersections $]Y_J[$ pour $J\subset I$ et donc de $]Y_J^{lisse}[$. La géométrie de ces derniers est plus simple : les exemples apparaissant dans cet article admettent des modèles entiers lisses, ce qui permet de calculer leurs cohomologies par théorème de comparaison avec la cohomologie rigide de leur fibre spéciale.
Pour pouvoir mettre en place cette stratégie dans ce cas, nous devons comprendre le modèle semi-stable construit par Yoshida et décrire explicitement les différentes composantes irréductibles de la fibre spéciale. En fait, le premier revêtement admet un modèle naturel $Z_0$ provenant de son interprétation modulaire. Ce dernier n'est pas encore semi-stable et nous y résolvons les singularités en éclatant successivement des fermés bien choisis. Plus précisément, on construit une famille de fermés de la fibre spéciale $(Y_M)_M$, indexée par les sous-espaces vectoriels propres $M\subsetneq {\mathbb{F}}^{d+1}$, qui vérifie $Y_M\cap Y_N=Y_{M\cap N}$. Lorsque $M$ parcourt l'ensemble des hyperplans, les fermés $Y_M$ décrivent l'ensemble des composantes irréductibles de la fibre spéciale de $Z_0$ et on peut exhiber une stratification de la fibre spéciale par les fermés
\[
Y^{[h]}:=\bigcup_{N :\dim N=h} Y_N
\]
On construit alors une suite de modèles $Z_0, \cdots, Z_d$ en éclatant successivement le long des fermés de cette stratification. De plus, pour chaque modèle $Z_h$, on peut construire de manière similaire une famille de fermés $(Y_{M,h})_M$ de $Z_h$, avec $Y_{M,0}=Y_M$, en prenant des transformées totales ou strictes suivant les cas (voir la définition \ref{defitrans} pour plus de précisions). Cette famille est essentielle pour comprendre la géométrie de la fibre spéciale et le résultat suivant illustre ce principe.
\begin{theointro
On a les points suivants :
\begin{enumerate}
\item $Z_d$ est ponctuellement semi-stable généralisé.
\item Les composantes irréductibles de la fibre spéciale de $Z_h$ sont les fermés de dimension $d$ suivants $(Y_{M,h})_{M: \dim M\in \left\llbracket 0,h-1\right\rrbracket\cup\{d\}}$.
\item Les intersections non-vides de composantes irréductibles de $Z_h$ sont en bijection avec les drapeaux $M_1\subsetneq \cdots \subsetneq M_k$ tels que $\dim M_{k-1}< h$ via l'application \[M_1\subsetneq \cdots \subsetneq M_k \mapsto \bigcap_{1\leq i \leq k} Y_{M_i,h}.\]
\item Si $Y_{M,h}$ est une composante irréductible de $Z_{h,s}$ avec $\dim M\neq d$, alors les morphismes naturels $\tilde{p}_i$ avec $i=h+1,\ldots, d$ induisent des isomorphismes $ Y_{M,d}^{lisse} \cong\cdots \cong Y_{M,h+1}^{lisse} \cong Y_{M,h}^{lisse}$.
\item Le changement de base du tube $]Y_{\{0\},d}^{lisse}[_{Z_d} \otimes \breve{K}(\varpi_N)\subset \lt^1 \otimes \breve{K}(\varpi_N)$ admet un modèle lisse dont la fibre spéciale est isomorphe à la variété de Deligne-Lusztig $\dl^d_{\bar{{\mathbb{F}}}}$ associée à $\gln_{d+1}$ et à l'élément de Coxeter $(1,\cdots, d+1)\in {\mathfrak{S}}_{d+1}$ (ici, $N=q^{d+1}-1$ et $\varpi_N$ est le choix d'une racine $N$-ième de $\varpi$).
\end{enumerate}
\end{theointro}
Grâce à ce résultat et au théorème d'excision cité précédemment, nous pouvons alors prouver :
\begin{propintro
On a un isomorphisme $\gln_{d+1}({\mathcal{O}}_K)$-équivariant :
\[\hdr{*} (\lt^1/\breve{K}_N)\cong \hrig{*} (\dl^d_{\bar{{\mathbb{F}}}}/\breve{K}_N)\] Par dualité de Poincaré, on a un isomorphisme semblable pour les cohomologies à support compact.
\end{propintro}
\begin{rem
Notons que le modèle $Z_d$ obtenu est semi-stable généralisé au sens faible et nous ne pouvons appliquer directement le théorème d'excision. Cette difficulté est surmontée par une astuce déjà présente dans les travaux de Yoshida où $Z_d$ est plongé dans un modèle entier $\hat{{\rm Sh}}$ bien choisi d'une variété de Shimura qui est cette fois-ci semi-stable généralisée au sens fort (et donc $p$-adique). On peut alors trouver un voisinage étale $U\subset \hat{{\rm Sh}}$ de $Z_d$ qui est $p$-adique et semi-stable généralisé et sur lequel on peut appliquer le théorème d'excision.
\end{rem}
L'intérêt principal du résultat précédent réside dans le fait que les variétés de Deligne-Lusztig $\dl^d_{{\mathbb{F}}}$ réalisent les correspondances de Green desquelles peuvent se déduire les correspondances de Jacquet-Langlands et de Langlands locale pour les représentations supercuspidales de niveau $0$. La dernière difficulté consiste à vérifier que les actions des groupes $G^\circ:=\gln_{d+1}({\mathcal{O}}_K)$, $D^*$ et $W_K$ sur $\lt^1$ induisent les bons automorphismes sur la variété de Deligne-Lusztig pour pouvoir appliquer cette correspondance. Plus précisément, l'espace algébrique $\dl^d_{{\mathbb{F}}}$ est un revêtement galoisien de $\Omega^d_{{\mathbb{F}}}:={\mathbb{P}}_{{\mathbb{F}}}^d \backslash \bigcup_{H\in {\mathbb{P}}^d({\mathbb{F}})} H$ tel que $\gal (\dl^d_{{\mathbb{F}}}/\Omega^d_{{\mathbb{F}}})={\mathbb{F}}_{q^{d+1}}^*$ est cyclique d'ordre premier à $p$. De plus, il admet une action de $G^\circ$ qui commute à la projection du revêtement. Il s'agit alors de prouver que les actions des groupes ${\mathcal{O}}_D^*$ et $I_K$ sur $\lt^1$ se transportent à $\dl^d_{{\mathbb{F}}}$ et induisent des isomorphismes naturels :
\[
{\mathcal{O}}_D^* /(1+\Pi_D {\mathcal{O}}_D)\cong \gal (\dl^d_{{\mathbb{F}}}/\Omega^d_{{\mathbb{F}}})\cong I_K/I_{K_N}
\]
avec $N=q^{d+1}-1$ et $K_N=K(\varpi^{1/N})$.
Pour le groupe $I_K$, cela a été déjà fait par Yoshida dans sa thèse et le cas de ${\mathcal{O}}_D^*$ est essentiel pour étudier la correspondance de Jacquet-Langlands. Expliquons comment nous parvenons à traiter ce cas. Comme les actions de $I_K$ et de ${\mathcal{O}}_D^*$ commutent entre elles, on obtient une action de ${\mathcal{O}}_D^*$ sur la base $\Omega^d_{{\mathbb{F}}}$ qui commute avec celle de $G^\circ$. Nous nous ramenons alors à prouver le résultat technique suivant :
\begin{lemintro}\label{lemintroautdl}
On a
\[ \aut_{\gln_{d+1}({\mathbb{F}})}(\Omega^d_{\bar{{\mathbb{F}}}})= \{ 1 \}. \]
\end{lemintro}
En particulier, cette annulation montre que l'on a bien une flèche naturelle ${\mathcal{O}}_D^* \to \gal (\dl^d_{{\mathbb{F}}}/\Omega^d_{{\mathbb{F}}})$ qui induit une flèche ${\mathcal{O}}_D^* /(1+\Pi_D {\mathcal{O}}_D)\to \gal (\dl^d_{{\mathbb{F}}}/\Omega^d_{{\mathbb{F}}})$ car le groupe $1+\Pi_D {\mathcal{O}}_D$ est $N$-divisible alors que $ \gal (\dl^d_{{\mathbb{F}}}/\Omega^d_{{\mathbb{F}}})$ est de $N$-torsion. Par finitude et égalité des cardinaux, il suffit de montrer que cette flèche est injective.
Mais par description explicite de la cohomologie $l$-adique de $\lt^1$ et de $\dl^d_{{\mathbb{F}}}$ en tant que $\bar{{\mathbb{Q}}}_l[{\mathbb{F}}_{q^{d+1}}^*]$-module, l'identité est le seul élément qui agit par un automorphisme de trace non-nulle. Par formule de Lefschetz, il suffit alors de prouver que les éléments réguliers elliptiques de ${\mathcal{O}}_D^*$ qui ne sont pas dans $1+\Pi_D {\mathcal{O}}_D$ n'ont aucun point fixe dans $\lt^1(C)$. Cela se prouve en étudiant l'application des périodes $\pi_{GH}: \lt^1(C)\to {\mathbb{P}}^d(C)$ de Gross-Hopkins (cf \cite{grho}). Les points fixes dans le but sont faciles à déterminer et leur fibre est explicite. On peut alors montrer qu'il n'y a aucun point fixe dans chacune de ces fibres par calcul direct.
En particulier, le raisonnement précédent permet aussi de réaliser les correspondances de Jacquet-Langlands et de Langlands locale pour la cohomologie $l$-adique donnant ainsi une version plus forte du résultat principal de la thèse de Yoshida \cite{yosh} :
\begin{theointro}
\label{theointroprincet}
Fixons un isomorphisme $C\cong \bar{{\mathbb{Q}}}_l$. Pour $l$ premier à $p$ et tout caractère primitif $\theta: {\mathbb{F}}_{q^{d+1}}^*\to C^*$, il existe des isomorphismes de $G^\circ\times W_K$-représentations
\[\homm_{D^*}(\rho(\theta), \hetc{i}(({\mathcal{M}}_{LT}^1/ \varpi^{{\mathbb{Z}}}),\bar{{\mathbb{Q}}}_l)\otimes C)\underset{G^{\circ}\times W_K}{\cong} \begin{cases} \dl(\theta) \otimes \wwwww(\theta) &\text{ si } i=d \\ 0 &\text{ sinon} \end{cases}\]
\end{theointro}
\subsection*{Remerciements}
Le présent travail a été, avec \cite{J1,J2,J3,J4}, en grande partie réalisé durant ma thèse à l'ENS de Lyon, et a pu bénéficier des nombreux conseils et de l'accompagnement constant de mes deux maîtres de thèse Vincent Pilloni et Gabriel Dospinescu. Je les en remercie très chaleureusement. Je tenais aussi à exprimer ma reconnaissance envers Juan Esteban Rodriguez Camargo pour les nombreuses discussions sur les éclatements qui ont rendu possible la rédaction de ce manuscrit.
\subsection*{Conventions\label{paragraphconv}}
Dans tout l'article, on fixe $p$ un premier. Soit $K$ une extension finie de ${\mathbb{Q}}_p$ fixée, $\mathcal{O}_K$ son anneau des entiers, $\varpi$ une uniformisante et ${\mathbb{F}}={\mathbb{F}}_q$ son corps r\'esiduel. On note $C=\hat{\bar{K}}$ une complétion d'une clôture algébrique de $K$ et $\breve{K}$ une complétion de l'extension maximale non ramifiée de $K$. Soit $L\subset C$ une extension complète de $K$ susceptible de varier, d'anneau des entiers $\mathcal{O}_L$, d'idéal maximal ${\mathfrak{m}}_L$ et de corps r\'esiduel $\kappa$. $L$ pourra être par exemple spécialisé en $K$, $\breve{K}$ ou $C$.
Soit $S$ un $L$-espace analytique, on note ${\mathbb{A}}^n_{ S}={\mathbb{A}}^n_{ L}\times S$ l'espace affine sur $S$
L'espace ${\mathbb{B}}^n_S$ sera la boule unité et les boules ouvertes seront notées $\mathring{{\mathbb{B}}}^n_S$.
Si $X$ est un affinoïde sur $L$, on notera $X^\dagger$ l'espace surconvergent associé. Dans ce cas, on dispose du complexe de de Rham $\Omega_{X/L}^\bullet$ (resp. du complexe de de Rham surconvergent $\Omega_{X^\dagger/L}^\bullet$) qui calcule la cohomologie de de Rham $\hdr{*}(X)$ (resp. la cohomologie de de Rham surconvergente $\hdr{*}(X^\dagger)$). Quand $X$ est quelconque, ces cohomologies seront formées à partir de l'hypercohomologie de ces complexes. Par \cite[Proposition 2.5]{GK1}, le théorème $B$ de Kiehl \cite[Satz 2.4.2]{kie} et la suite spectrale de Hodge-de Rham, si $X$ est Stein, ces cohomologies sont encore calculées à partir de leur complexe respectif\footnote{En cohomologie de de Rham (non surconvergente), l'hypothèse $X$ quasi-Stein suffit.}. Les deux cohomologies coïncident si $X$ est partiellement propre (par exemple Stein).
La cohomologie rigide d'un schéma algébrique $Y$ sur $\kappa$ sera notée $\hrig{*} (Y/L)$. Si $X$ est un espace rigide sur $L$ et $Y$ un schéma algébrique sur $\kappa$, $\hdrc{*} (X^\dagger)$ et $\hrigc{*} (Y/L)$ seront les cohomologies de $X^\dagger$ et de $Y$ à support compact. On rappelle la dualité de Poincaré :
\begin{theo} \label{theodualitepoinc}
\begin{enumerate}
\item \cite[proposition 4.9]{GK1} Si $X$ est un $L$-affinoïde lisse de dimension pure $d$, on a \[\hdr{i} (X^\dagger)\cong \hdrc{2d-i} (X^\dagger)^\lor \text{ et } \hdrc{i} (X^\dagger)\cong \hdr{2d-i} (X^\dagger)^\lor\]
\item \cite[proposition 4.11]{GK1} Si $X$ est un $L$-espace lisse et Stein de dimension pure $d$, on a \[\hdr{i} (X^\dagger)\cong \hdrc{2d-i} (X^\dagger)^\lor \text{ et } \hdrc{i} (X^\dagger)\cong \hdr{2d-i} (X^\dagger)^\lor\]
\item \cite[théorème 2.4]{bert1} Si $Y$ est un schéma algébrique lisse sur $\kappa$ de dimension $d$, on a pour tout $i$ \[\hrig{i} (Y/L)\cong \hrigc{2d-i} (Y/L)^\lor \text{ et } \hrigc{i} (Y/L)\cong \hrig{2d-i} (Y/L)^\lor\]
\end{enumerate}
\end{theo}
Nous donnons un théorème de comparaison :
\begin{theo}\label{theopurete}
\cite[proposition 3.6]{GK3}
Soit ${\mathcal{X}}$ un schéma affine formel sur $\spf ({\mathcal{O}}_L)$ de fibre spéciale ${\mathcal{X}}_s$ et de fibre générique ${\mathcal{X}}_\eta$. Supposons ${\mathcal{X}}$ lisse, alors on a un isomorphisme fonctoriel \[\hdr{*}({\mathcal{X}}_\eta^\dagger)\cong \hrig{*}({\mathcal{X}}_s)\]
\end{theo}
Dans ce paragraphe, $L$ est non-ramifié sur $K$. Soit ${\mathcal{X}}$ un schéma formel topologiquement de type fini sur $\spf ({\mathcal{O}}_L)$ de fibre g\'en\'erique ${\mathcal{X}}_\eta$ et de fibre sp\'eciale ${\mathcal{X}}_s$. On a une flèche de spécialisation $\spg: {\mathcal{X}}_\eta \rightarrow {\mathcal{X}}_s$ et pour tout fermé $Z\subset {\mathcal{X}}_s$, on note $]Z[_{{\mathcal{X}}}$ l'espace analytique $\spg^{-1} (Z)\subset {\mathcal{X}}_\eta$.
\begin{defi
On dit que ${\mathcal{X}}$ est faiblement de réduction semi-stable généralisée s'il admet un recouvrement ${\mathcal{X}}=\uni{U_t}{t\in T}{}$ et un jeu de morphismes finis étales \[\varphi_t : U_t\to \spf ({\mathcal{O}}_L\left\langle x_1,\cdots , x_d\right\rangle/(x_1^{\alpha_1}\cdots x_r^{\alpha_r}-\varpi )).\]
\end{defi}
Quitte à rétrécir les ouverts $U_t$ et à prendre $r$ minimal, on peut supposer que les composantes irréductibles de $\bar{U}_t$ sont les $V(\bar{x}^*_i)$ pour $i\leq r$, avec $\bar{x}^*_i =\bar{\varphi}_t^* (\bar{x}_i)$, la composante $V(\bar{x}^*_i)$ étant de multiplicité $\alpha_i$.
Observons que, pour un schéma formel ${\mathcal{X}}$ comme dans la définition précédente, on vérifie aisément que ${\mathcal{X}}$ est ponctuellement de réduction semi-stable généralisée au sens suivant.
\begin{defi
Un schéma formel est ponctuellement de réduction semi-stable généralisée si l'anneau local complété en chacun des points fermés est de la forme
\[{\mathcal{O}}_L\left\llbracket x_1,\cdots , x_d\right\rrbracket /(x_1^{\alpha_1}\cdots x_r^{\alpha_r}-\varpi ).\]
\end{defi}
Cette notion plus faible nous permet de considérer des schémas formels qui ne sont pas localement des complétions $p$-adiques de schémas algébriques de type fini sur ${\mathcal{O}}_L$. Par exemple, l'espace $\spf({\mathcal{O}}_L\left\llbracket x_1,\cdots , x_d\right\rrbracket /(x_1^{\alpha_1}\cdots x_r^{\alpha_r}-\varpi ))$ lui-même est ponctuellement semi-stable généralisé alors qu'il n'est pas semi-stable généralisé d'après la remarque précédente. Le résultat suivant suggère que, sous l'hypothèse d'être localement la complétion $p$-adique d'un schéma algébrique de type fini sur ${\mathcal{O}}_L$, les deux notions de semi-stabilité sont en quelque sorte équivalentes.
\begin{prop
Si ${\mathcal{X}}$ est un schéma formel ponctuellement semi-stable généralisé qui admet une immersion ouverte vers un schéma formel ${\mathcal{Y}}$ qui est localement la complétion $p$-adique de schémas algébriques de type fini sur ${\mathcal{O}}_L$, alors il existe un voisinage étale $U$ de ${\mathcal{X}}$ dans l'espace ambiant ${\mathcal{Y}}$ qui est semi-stable généralisé.
\end{prop}
\begin{proof}
Voir la preuve de \cite[Propositions 4.8. (i)]{yosh}
\end{proof}
L'intérêt des espaces semi-stables que nous avons introduits provient du fait que leur géométrie fait naturellement apparaître des recouvrements dont on peut espérer calculer la cohomologie des intersections grâce à \ref{theopurete} et au résultat qui va suivre. Pour pouvoir énoncer ce dernier, nous introduisons quelques notations pour un schéma formel semi-stable généralisé ${\mathcal{X}}$ (au sens le plus fort) dont la fibre spéciale admet la décomposition en composantes irréductibles ${\mathcal{X}}_s=\bigcup_{i\in I} Y_i$. Pour toute partie finie $J\subset I$ de l'ensemble des composantes irréductibles, on note $Y_J=\inter{Y_j}{j\in J}{}$ et $Y_J^{lisse}=Y_{J}\backslash \bigcup_{i\notin J}Y_i$. Le résultat est le suivant :
\begin{theo}\label{theoexcision}
Étant donné un schéma formel semi-stable généralisé ${\mathcal{X}}$ comme précédemment avec pour décomposition en composantes irréductibles ${\mathcal{X}}_s=\bigcup_{i\in I} Y_i$, pour toute partie finie $J\subset I$, la flèche naturelle de restriction
\[\hdr{*} (\pi^{-1}(]Y_J[_{\mathcal{X}}))\fln{\sim}{} \hdr{*} (\pi^{-1}(]Y_{J}^{lisse}[_{\mathcal{X}}))\]
est un isomorphisme.
\end{theo}
\begin{proof}
Il s'agit de la généralisation du résultat pour le cas semi-stable \cite[Theorem 2.4.]{GK2} au cas semi-stable généralisé réalisée dans \cite[Théorème 5.1.]{J4}.
\end{proof}
\section{La tour de revêtements}
Nous allons d\'efinir la tour de Lubin-Tate construite dans \cite[section 1 et 4]{dr1}.
Soit ${\mathcal{C}}$ la sous-cat\'egorie pleine des ${\mathcal{O}}_{\breve{K}}$-alg\`ebres locales noeth\'eriennes et compl\`etes $A$ telles que le morphisme naturel ${\mathcal{O}}_{\breve{K}}/ \varpi{\mathcal{O}}_{\breve{K}} \to A/ {\mathfrak{m}}_A$ soit un isomorphisme. On consid\`ere, pour $A$ un objet de ${\mathcal{C}}$, l'ensemble des ${\mathcal{O}}_K$-modules formels $F$ sur $A$ modulo isomorphisme. On note $X+_{F} Y\in A\left\llbracket X,Y\right\rrbracket$ la somme et $[ \lambda ]_F X\in A\left\llbracket X\right\rrbracket$ la multiplication par $\lambda$ dans $F$.
Si $A$ est de caract\'eristique $p$, on appelle hauteur le plus grand entier $n$ (possiblement infini) tel que $[ \varpi ]_F$ se factorise par ${\rm Frob}_q^n$. On a le r\'esultat classique (voir par exemple \cite[chapitre III, §2, théorème 2]{frohlforgr}) : %
\begin{prop}
Si $A= \bar{{\mathbb{F}}}$, la hauteur est un invariant total i.e. deux ${\mathcal{O}}_K$-modules formels sont isomorphes si et seulement si ils ont la m\^eme hauteur.
\end{prop}
Fixons un repr\'esentant "normal" $\Phi_d$ de hauteur $(d+1)$ tel que :
\begin{enumerate}
\item $[ \varpi ]_{\Phi_d} X\equiv X^{q^{d+1}} \pmod{X^{q^{d+1}+1}}$,
\item $X +_{\Phi_d} Y\equiv X+Y \pmod{ (X,Y)^2}$,
\item $[ \lambda ]_{\Phi_d} X\equiv \lambda X \pmod{X^2}$ pour $\lambda \in {\mathcal{O}}_K$.
\end{enumerate}
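Par exemple (à titre purement illustratif), pour $K={\mathbb{Q}}_p$ et $d=0$, on peut prendre pour $\Phi_0$ le groupe multiplicatif formel $\widehat{{\mathbb{G}}}_m\otimes\bar{{\mathbb{F}}}$ : on a $X+_{\Phi_0}Y=X+Y+XY$, $[\lambda]_{\Phi_0}(X)=(1+X)^{\lambda}-1$ pour $\lambda\in{\mathbb{Z}}_p$ et $[p]_{\Phi_0}(X)=(1+X)^{p}-1=X^{p}$ en caractéristique $p$, de sorte que les trois conditions ci-dessus sont bien satisfaites.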
\begin{defi}
On appelle $\widehat{{\mathcal{M}}}^0_{ LT}$ le foncteur qui \`a un objet $A$ dans ${\mathcal{C}}$ associe les doublets $(F, \rho)$ \`a isomorphisme pr\`es o\`u :
\begin{enumerate}
\item $F$ est un ${\mathcal{O}}_K$-module formel sur $A$ de hauteur $d+1$,
\item $\rho : F \otimes A/ {\mathfrak{m}}_A \to \Phi_d$ est une quasi-isog\'enie.
\end{enumerate}
On d\'efinit ${\mathcal{F}}^{0,(h)}_{ LT}$ le sous-foncteur de $\widehat{{\mathcal{M}}}^0_{ LT}$ des doublets $(F, \rho)$ o\`u $\rho$ est une quasi-isog\'enie de hauteur $h$.
\end{defi}
\begin{theo}[Drinfeld \cite{dr1} proposition 4.2]
Le foncteur ${\mathcal{F}}^{0,(0)}_{ LT}$ est repr\'esentable par $\widehat{\lt}^0=\spf (A_0)$ o\`u $A_0$ est isomorphe \`a ${\mathcal{O}}_{\breve{K}} \llbracket T_1, \dots, T_d \rrbracket$.
Le foncteur $\widehat{{\mathcal{M}}}^0_{LT}$ se d\'ecompose en l'union disjointe $\coprod_h {\mathcal{F}}^{0,(h)}_{LT}$, chaque $ {\mathcal{F}}^{0,(h)}_{LT}$ étant isomorphe non-canoniquement à $ {\mathcal{F}}^{0,(0)}_{LT}$.
\end{theo}
D\'efinissons maintenant les structures de niveau.
\begin{defi}
Soit $n$ un entier sup\'erieur ou \'egal \`a 1. Soit $F$ un ${\mathcal{O}}_K$-module formel sur $A$ de hauteur $d+1$, une $(\varpi)^n$-structure de niveau est un morphisme de ${\mathcal{O}}_K$-modules formels $\alpha : ((\varpi)^{-n}/ {\mathcal{O}}_K)^{d+1} \to F \otimes {\mathfrak{m}}_A$ qui v\'erifie la condition :
\[ \prod_{x \in ((\varpi)^{-n}/ {\mathcal{O}}_K)^{d+1}} (X - \alpha(x)) \; | \; [ \varpi^n ](X) \]
dans $A \llbracket X \rrbracket$.
Si $(e_i)_{ 0 \le i \le d}$ est une base de $((\varpi)^{-n}/ {\mathcal{O}}_K)^{d+1}$, le $(d+1)$-uplet $(\alpha(e_i))_i$ est appel\'e syst\`eme de param\`etres formels.
On note $\widehat{{\mathcal{M}}}^n_{LT}$ le foncteur classifiant, pour tout objet $A$ de ${\mathcal{C}}$, les triplets $(F, \rho, \alpha)$ o\`u $(F, \rho) \in \widehat{{\mathcal{M}}}^0_{LT}(A)$ et $\alpha$ est une $(\varpi)^n$-structure de niveau. On d\'efinit de m\^eme par restriction, ${\mathcal{F}}^{n,(h)}_{LT}$.
\end{defi}
\begin{theo}[Drinfeld \cite{dr1} proposition 4.3]
\label{theoreplt}
\begin{enumerate}
\item Le foncteur ${\mathcal{F}}^{n,(0)}_{LT}$ est repr\'esentable par $\widehat{\lt}^n=\spf (A_n)$ o\`u $A_n$ est local de dimension $d+1$, r\'egulier sur $A_0$. Le morphisme $A_0 \to A_n$ est fini et plat de degr\'e $\card(\gln_{d+1}({\mathcal{O}}_K/ \varpi^n {\mathcal{O}}_K))= q^{(d+1)^2(n-1)} \prod_{i=0}^d (q^{d+1}-q^i)$.
\item Si $(F^{univ}, \rho^{univ}, \alpha^{univ})$ est le groupe formel universel muni de la structure de niveau universelle, tout syst\`eme de param\`etres formels $(x_i)_i$ engendre topologiquement $A_n$ i.e. on a une surjection :
\begin{align*}
{\mathcal{O}}_{\breve{K}} \llbracket X_0, \dots, X_d \rrbracket & \to A_n \\
X_i & \mapsto x_i
\end{align*}
\item L'analytification $\lt^n=\widehat{\lt}^{n,rig}$ est lisse sur $\breve{K}$ et le morphisme $\lt^n\to \lt^0$ est un revêtement étale de groupe de Galois $\gln_{d+1} ({\mathcal{O}}_K/\varpi^n {\mathcal{O}}_K)=\gln_{d+1} ({\mathcal{O}}_K)/(1+\varpi^n {\rm M}_{d+1}({\mathcal{O}}_K))$.
\end{enumerate}
\end{theo}
\section{\'Equation des rev\^etements et géométrie de la fibre spéciale de $\widehat{\lt}^1=Z_0$ \label{sssectionltneq}}
D'apr\`es le th\'eor\`eme \ref{theoreplt}, on a une suite exacte :
\[ 0 \to I_n \to {\mathcal{O}}_{\breve{K}} \llbracket X_0, \dots, X_d \rrbracket \to A_n \to 0 \]
associ\'ee \`a un syst\`eme de param\`etres formels $x=(x_0, \dots, x_d)$.
Nous allons tenter de d\'ecrire explicitement l'id\'eal $I_n$. D'après la suite exacte précédente, on a une immersion fermée $\widehat{\lt}^n=\spf (A_n)\rightarrow \spf {\mathcal{O}}_{\breve{K}} \llbracket X_0, \dots, X_d \rrbracket $ de codimension $1$ entre deux schémas réguliers. L'idéal $I_n$ est donc principal. Pour obtenir un générateur, il suffit d'exhiber un élément de $I_n\backslash {\mathfrak{m}}^2$ avec ${\mathfrak{m}}=(\varpi, X_0,\cdots, X_d)$ l'id\'eal maximal de ${\mathcal{O}}_{\breve{K}} \llbracket X_0 , \dots, X_d \rrbracket$. Nous écrivons aussi $\bar{{\mathfrak{m}}}=(X_0,\cdots, X_d)$ l'idéal maximal de $\bar{{\mathbb{F}}} \llbracket X_0 , \dots, X_d \rrbracket$ et ${\mathfrak{m}}_{A_n}$ l'id\'eal maximal de $A_n$.
Inspirons-nous de \cite[3.1]{yosh}. Si $z=(z_0, \dots , z_m)$ est un $(m+1)$-uplet de points de $\varpi^n$-torsion pour $F^{univ}$ et $a \in ({\mathcal{O}}_K / \varpi^n {\mathcal{O}}_K)^{m+1}$ (par exemple $m=d$), on note \[l_{a, F}(z)=[\tilde{a}_0]_{F^{univ}} (z_0)+_{F^{univ}}\cdots +_{F^{univ}} [\tilde{a}_m]_{F^{univ}} (z_m)\] où $\tilde{a}_i$ est un relevé de $a_i$ dans ${\mathcal{O}}_K$ pour tout $i$. Par d\'efinition de la structure de niveau, on a la relation
\[ [ \varpi^k](T)= U_k(T) \prod_{a \in ({\mathcal{O}}_K/ \varpi^k {\mathcal{O}}_K)^{d+1}} (T- l_{a, F}(x)) \]
pour $k\leq n$, o\`u $U_k$ est une unit\'e telle que $U_k(0) \in 1+ {\mathfrak{m}}_{A_n}$. En comparant les termes constants de $[\varpi^n](T)/ [\varpi^{n-1}](T)$, on obtient
\[ \varpi = (-1)^{q^{n-1}(q-1)}U_n(0)/U_{n-1}(0) \prod_{a \in ({\mathcal{O}}_K/ \varpi^n {\mathcal{O}}_K)^{d+1} \backslash (\varpi {\mathcal{O}}_K/ \varpi^n {\mathcal{O}}_K)^{d+1} } l_{a, F}(x)=:P. \]
Comme la fl\`eche ${\mathfrak{m}} \to {\mathfrak{m}}_{A_n}$ est surjective, on peut relever $U_n(0)/U_{n-1}(0)$ en un \'el\'ement $\tilde{U}(X_0, \dots, X_d)$ de $1+{\mathfrak{m}}$ et $l_{a,F} (x)$ en une série $l_{a,F}(X_0,\ldots, X_d)$ dans ${\mathcal{O}}_{\breve{K}} \llbracket X_0 , \ldots, X_d \rrbracket$. Par construction, $\varpi-P$ est un élément de $I_n$. Pour prouver qu'il n'est pas dans ${\mathfrak{m}}^2$, on vérifie les congruences suivantes
\begin{prop}[\cite{yosh} Proposition 3.4]
\label{PropConglaF}
\begin{enumerate}
\item Pour tout $a\in({\mathcal{O}}_K/\varpi^n {\mathcal{O}}_K)^{d+1}$, on a \[l_{a,F} (X)\equiv l_{a}(X) \pmod{(\varpi^n,X_0,\cdots,X_d)^2}\] où $l_{a}(X)={a}_0 X_0+\cdots + {a}_d X_d$.
\item Soit $a=ca'\in ({\mathcal{O}}_K/\varpi^n {\mathcal{O}}_K)^{d+1}$ avec $c$ une unité de ${\mathcal{O}}_K/\varpi^n {\mathcal{O}}_K$, alors il existe une unité $u_c$ de ${\mathcal{O}}_{\breve{K}} \llbracket X_0 , \dots, X_d \rrbracket$ telle que $l_{a,F}(X)=u_c(X) l_{a',F}(X)$.
\item S'il existe $j\leq d+1$ tel que $a_{j+1}=\cdots=a_d=0$ (la condition devient vide si $j=d+1$) alors $l_{a,F}(X)\in {\mathcal{O}}_{\breve{K}}\llbracket X_0,\ldots, X_j\rrbracket$.
\end{enumerate}
\end{prop}
Le théorème suivant s'en déduit
\begin{theo}
\label{theolteq}
On a $I_n=(\varpi-P(X))$. Dit autrement, \[A_n={\mathcal{O}}_{\breve{K}} \llbracket X_0 , \dots, X_d \rrbracket/(\varpi-P(X)).\] Ainsi, $\lt^n$ s'identifie à l'hypersurface de la boule unité rigide ouverte de dimension $d+1$ d'équation $\varpi=P(X)$.
\end{theo}
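\begin{rem}
À titre d'illustration, examinons le cas $d=0$ et $n=1$. Chaque $l_{a,F}(X_0)$, pour $a\in ({\mathcal{O}}_K/\varpi {\mathcal{O}}_K)\backslash\{0\}$, est congru à $a X_0$ modulo $(\varpi, X_0)^2$ d'après la proposition \ref{PropConglaF}, de sorte que $P(X_0)$ coïncide avec $X_0^{q-1}$ à une unité près et modulo des termes divisibles par $\varpi$. Un calcul immédiat donne alors $A_1\cong {\mathcal{O}}_{\breve{K}}\llbracket X_0\rrbracket/(\varpi-u X_0^{q-1})$ pour une certaine unité $u$, et $\lt^1$ s'identifie au spectre d'une extension totalement ramifiée de degré $q-1$ de $\breve{K}$ : on retrouve la description classique des points de $\varpi$-torsion en théorie de Lubin-Tate.
\end{rem}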
\section{Généralités sur les éclatements}
Nous commençons par donner quelques faits généraux sur les éclatements. Nous pourrons ainsi construire des modèles convenables de $\lt^1$.
\begin{defi}
Soit $X$ un schéma algébrique, ${\mathscr{I}} \subset {\mathscr{O}}_X$ un faisceau d'idéaux et $Y\to X$ l'immersion fermée associée. L'éclatement $\bl_Y (X)$ (ou $\tilde{X}$) de $X$ le long de $Y$ est l'espace propre sur $X$
\[
p: \underline{\proj}_X (\oplus_n {\mathscr{I}}^n) \rightarrow X.
\]
Le fermé $E:=p^{-1}(Y)=V({\mathscr{I}} {\mathscr{O}}_{\tilde{X}})= V^+(\oplus_{n} {\mathscr{I}}^{n+1})$ est appelé le diviseur exceptionnel.
\end{defi}
\begin{defi}\label{defitrans}
Reprenons les notations précédentes, si $Z\to X$ est une immersion fermée, on note
\[
\tilde{Z}=\begin{cases}
p^{-1}(Z) & \text{Si } Z\subset Y, \\
\overline{p^{-1}(Z\backslash Y)} & \text{Sinon}
\end{cases}
\]
Dans le deuxième cas, $\tilde{Z}$ est appelé la transformée stricte de $Z$.
\end{defi}
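Pour fixer les idées, donnons l'exemple classique suivant : prenons $X={\mathbb{A}}^2_k=\spec (k[x,y])$ sur un corps $k$ et $Y=V(x,y)$ l'origine. L'algèbre de Rees $\oplus_n {\mathscr{I}}^n$ s'identifie à $k[x,y][u,v]/(xv-yu)$ (avec $u,v$ en degré $1$), de sorte que $\tilde{X}=\bl_Y(X)$ est le sous-schéma fermé de ${\mathbb{A}}^2_k\times {\mathbb{P}}^1_k$ d'équation $xv=yu$ et que le diviseur exceptionnel $E=p^{-1}(Y)$ est isomorphe à ${\mathbb{P}}^1_k$. Pour la droite $Z=V(y)$, l'image réciproque $p^{-1}(Z)$ est la réunion de $E$ et de la transformée stricte $\tilde{Z}$, laquelle est isomorphe à $Z$ (l'origine y est un diviseur de Cartier) et rencontre $E$ au seul point $[u:v]=[1:0]$ ; cet exemple illustre la différence entre $p^{-1}(Z)$ et $\tilde{Z}$ dans la définition précédente.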
Nous énonçons les résultats principaux sur les éclatements que nous utiliserons:
\begin{theo}
\label{theoBlowUp}
Soit $X$ un schéma algébrique, $Y\to X$ et $Z\to X$ deux immersions fermées, $E\subset \bl_Y(X)=\tilde{X}$ le diviseur exceptionnel. Les points suivants sont vérifiés:
\begin{enumerate}
\item (Propriété universelle) $E\rightarrow \tilde{X}$ est un diviseur de Cartier (i.e. ${\mathscr{I}} {\mathscr{O}}_{\tilde{X}}$ est localement libre de rang $1$). Si $ f:T\to X$ est un morphisme tel que $f^{-1}(Y)$ est un diviseur de Cartier, il existe un unique morphisme sur $X$
\[
g: T\rightarrow \tilde{X}
\]
tel que $g^{-1}(E)=f^{-1}(Y)$.
\item (Compatibilité avec les immersions fermées) Supposons que $Z=V({\mathscr{J}}) \nsubseteq Y$. On a un isomorphisme canonique $\bl_{Z\cap Y}(Z)\cong \tilde{Z}\subset \tilde{X}$. En particulier, $\tilde{Z}=V^+(\bigoplus_n( {\mathscr{J}}\cap {\mathscr{I}}^n))$
\item (Compatibilité avec les morphismes plats) Si $f:T\rightarrow X$ est un morphisme plat, alors on a un isomorphisme canonique $\bl_{f^{-1}(Y)} (T) \cong \bl_{Y}(X)\times_{X,f} T$. En particulier, si $T=X\backslash Y$ la projection $p$ induit un isomorphisme $p^{-1}(X\backslash Y)\rightarrow X\backslash Y$.
\item (Éclatement le long d'une immersion régulière) Si $Y\rightarrow X$ est une immersion régulière alors
$p|_E: E\rightarrow Y$ est une fibration localement triviale en espace projectif de dimension $d-1$ où $d=\dim X -\dim Y$. De plus, si $Z$ est irréductible alors $\tilde{Z}$ l'est aussi.
\end{enumerate}
\end{theo}
\begin{proof}
Le point 1. est prouvé dans \cite[\href{https://stacks.math.columbia.edu/tag/0806}{Tag 0806}]{stp} (voir \href{https://stacks.math.columbia.edu/tag/0805}{Tag 0805} pour le point 3. et \href{https://stacks.math.columbia.edu/tag/080D}{Tag 080D}, \href{https://stacks.math.columbia.edu/tag/0806}{Tag 080E} pour le point 2.). Supposons maintenant que $Y\rightarrow X$ est une immersion régulière. On a
\[
E= \underline{\proj}_X({\mathscr{O}}_{\tilde{X}}/ {\mathscr{I}} {\mathscr{O}}_{\tilde{X}})= \underline{\proj}_X(\bigoplus_{n} {\mathscr{I}}^n/ {\mathscr{I}}^{n+1})=\underline{\proj}_{X}(\underline{\sym}_{X} {\mathscr{N}}_{Y/X}^\vee )
\]
avec ${\mathscr{N}}_{Y/X}^\vee$ le faisceau conormal de $Y$ sur $X$. L'hypothèse de régularité entraîne le caractère localement libre de ${\mathscr{N}}_{Y/X}^\vee$ ce qui montre que $p|_E:E\to Y$ est une fibration localement triviale en espace projectif de dimension $d-1$.
Prenons $Z\subset X$ un fermé irréductible. Si $Z\subset Y$ alors $p^{-1}(Z)\rightarrow Z$ est une fibration localement triviale dont la base et la fibre sont irréductibles. Ainsi $\tilde{Z}=p^{-1}(Z)$ est irréductible. Si $Z\nsubseteq Y$ alors $\tilde{Z}$ est la clôture de $p^{-1}(Z\backslash Y)\cong Z\backslash Y$ (d'après le point 2.). Mais $Z\backslash Y$ est irréductible en tant qu'ouvert de $Z$, d'où l'irréductibilité de $\tilde{Z}$.
\end{proof}
\begin{rem}\label{rembladm}
Si $X$ est un schéma formel localement noethérien muni de la topologie ${\mathscr{I}}$-adique, et $Z\subset X$ est un fermé de la fibre spéciale, on peut définir l'éclatement admissible :
\[
\bl_Z(X)= \varinjlim_k \bl_{Z\times_X V({\mathscr{I}}^k)}(V({\mathscr{I}}^k))
\]
où $V({\mathscr{I}}^k)$ est le schéma fermé défini par ${\mathscr{I}}^k$.
\end{rem}
\section{Transformée stricte et régularité}
Le but de cette section est de comprendre comment se comportent les immersions fermées régulières $Z_1\to Z_2$ lorsque l'on prend les transformées strictes de deux fermés $Z_1$, $Z_2$ d'un espace $X$ que l'on éclate. Le théorème suivant donne des critères précis pour assurer que l'immersion fermée obtenue entre les transformées $\tilde{Z}_1\to \tilde{Z}_2$ reste encore régulière. Il y est donné aussi le calcul de la transformée d'une intersection de fermés dans des cas particuliers.
\begin{theo}\label{theoblreg}
Donnons-nous un schéma algébrique $X$ et des fermés $Y,Z_1,Z_2\subset X$ et écrivons $p:\tilde{X}=\bl_Y (X)\to X$ (en particulier, $\tilde{Y}$ est le diviseur exceptionnel). On a les points suivants :
\begin{enumerate}
\item Si on a $Y\subset Z_1$, $Z_1\subset Z_2$, $Z_2\subset X$ et si ces inclusions sont des immersions régulières, alors les immersions $\tilde{Y}\cap \tilde{Z}_1\subset \tilde{Z}_1$, $\tilde{Z}_1\subset \tilde{Z}_2$ sont régulières de codimension $1$ pour la première et $\codim_{Z_2}Z_1 $ pour la seconde. En particulier, la flèche $\tilde{Z}_1\subset \tilde{X}$ est régulière en considérant le cas $\tilde{Z}_2=\tilde{X}$.
\item Si on a les immersions régulières suivantes $Y\subset Z_1\cap Z_2$, $Z_1\cap Z_2\subset Z_i$, $Z_i\subset X$ alors \[\tilde{Z}_1\cap \tilde{Z}_2=\begin{cases}
\emptyset &\text{Si } Y= Z_1\cap Z_2,\\
\widetilde{Z_1\cap Z_2} &\text{Sinon}
\end{cases}\]
\item Si $Z_1$ et $Y$ sont transverses et si les inclusions $Z_1\cap Y\subset Y$, $Y\subset X$ sont des immersions régulières, alors $\tilde{Z}_1=p^{-1}(Z_1)$ et $\tilde{Z}_1\cap \tilde{Y}\subset \tilde{Y}$ est régulière de codimension $\codim_Y (Z_1\cap Y)$.
\item Si $Z_1$ et $Y$ sont transverses et $Y\subset Z_2$, $Z_2\subset X$ sont des immersions régulières, alors $\tilde{Z}_1\cap \tilde{Y}$ et $\tilde{Z}_2$ sont transverses.
\end{enumerate}
\end{theo}
\begin{rem}\label{remsuitreg}
Les hypothèses des points précédents peuvent se réécrire en termes de suites régulières. Par exemple, les deux premiers points supposent localement l'existence d'une suite régulière $(x_1,\cdots,x_s)$ et de deux sous-ensembles $S_1, S_2 \subset \left\llbracket 1,s\right\rrbracket$ (avec $S_2\subset S_1$ pour le premier point) tel que $Y=V(x_1,\cdots,x_s)$, $Z_i=V((x_j)_{j\in S_i})$ pour $i=1,2$. Pour le troisième point, on demande localement en plus de la suite régulière $(x_1,\cdots,x_s)$ l'existence d'une partition $\left\llbracket 1,s\right\rrbracket=T\amalg S$ tel que $Y=V((x_j)_{j\in T})$ et $Z_1=V((x_j)_{j\in S})$ et le dernier point est une synthèse des cas précédents. Nous laissons au lecteur le soin de trouver des interprétations similaires aux conclusions de l'énoncé en termes de suites régulières locales.
\end{rem}
\begin{proof}
On remarque que les conclusions du théorème peuvent se vérifier localement. De plus, on a pour tout ouvert $U=\spec (A)\subset X$ affine d'après \ref{theoBlowUp} 2., 3.:
\[
p^{-1}(U)=\bl_{U\cap Y} U \text{ et } \tilde{Z}_i\cap p^{-1}(U) =\bl_{Z_i\cap U\cap Y} Z_i\cap U =\widetilde{Z_i\cap U},
\]
on peut se ramener à étudier les objets qui vont suivre quand $X=U=\spec (A)$ et $Y\subset U$. Donnons-nous une suite régulière $(x_1,\cdots, x_s)$ dans $A$, et introduisons pour tout sous-ensemble $S\subset \left\llbracket 1,s\right\rrbracket$, des idéaux $I_S =\sum_{j\in S} x_j A$ et des fermés $Z_S=V(I_S)$. On fixe $S_0$ et on pose $Y=Z_{S_0}$, $p:\tilde{X}=\bl_{Y}(X)\to X$. On construit comme dans \ref{defitrans} les transformées strictes $\tilde{Z}_S$ de $Z_S$ (pour $S_0\nsubseteq S$), que nous noterons aussi $\tilde{Y}_S$ dans la suite de la preuve, et on note $\tilde{{\mathscr{I}}}_S\subset {\mathscr{O}}_{\tilde{X}}$ les idéaux associés. D'après la Remarque \ref{remsuitreg}, il suffit de prouver le résultat local suivant :
\begin{lem}
\label{lemReg2}
En reprenant les notations précédentes $X=\spec A$, $Y$, $Z_S $, $p:\tilde{X}\to X$, $\tilde{Y}$, $\tilde{Z}_S$, on a les points suivantes
\begin{enumerate}
\item Si $S_1$ est une partie de $S_0$ et $n \geq 1$, on a $I_{S_1}\cap I_{S_0}^n= I_{S_1} I_{S_0}^{n-1}$. En particulier, si $S_1$, $S_2$ sont deux parties de $S_0$, on a \[\tilde{Z}_{S_1}\cap \tilde{Z}_{S_2}=\begin{cases}
\emptyset &\text{Si } S_0= S_1\cup S_2,\\
\tilde{Z}_{S_1\cup S_2} &\text{Sinon}
\end{cases}\]
\item Si $S_1$ est une partie de $\left\llbracket 1,s\right\rrbracket$ disjointe de $S_0$ et $n \geq 1$, on a $I_{S_1}\cap I_{S_0}^n=I_{S_1} I_{S_0}^n $. Dans ce cas, on a $ \tilde{Z}_{S_1}= p^{-1}(Z_{S_1})$.
\item Il existe un recouvrement affine de $\tilde{X}= \bigcup_{i\in S_0} U_i$ tel que pour tout $i$, il existe une suite régulière $(\tilde{x}_1^{(i)}, \ldots, \tilde{x}_s^{(i)})\in {\mathscr{O}}(U_i)$ vérifiant
\begin{itemize}
\item[•] $\widetilde{Z_{S_1}}\cap U_i =\emptyset$ si $i\in S_1\subsetneq S_0$.
\item[•] $ V((\tilde{x}_j)_{j\in S_1})=\tilde{Z}_{S_1}\cap U_i$ si $S_1\subset S_0\backslash\{i\}$ ou si $S_1\cap S_0 =\emptyset$.
\item[•] $\tilde{Y}\cap U_i= V(\tilde{x}_i)$.
\end{itemize}
\item Si $S_1$ est une partie stricte de $S_0$ et $S_2$ est disjointe de $S_0$, alors $\tilde{Z}_{S_1}\cap \tilde{Z}_{S_2}$ (resp. $\tilde{Z}_{S_0}\cap \tilde{Z}_{S_1}\cap \tilde{Z}_{S_2}$) est de codimension $|S_1|+|S_2|$ (resp. $|S_1|+|S_2|+1$).
\end{enumerate}
\end{lem}
\end{proof}
\begin{proof}[Preuve du lemme \ref{lemReg2}]
Commençons par prouver, pour tout $n\geq 1$, les égalités entre idéaux suivantes pour $S_1\subset\left\llbracket 1,s\right\rrbracket$ :
\[I_{S_1}\cap I_{S_0}^n=\begin{cases} I_{S_1} I_{S_0}^{n-1} &\text{Si } S_1\subset S_0\\
I_{S_1} I_{S_0}^n & \text{Si } S_1\cap S_0 =\emptyset
\end{cases}\]
Nous commençons par cette observation utile qui permettra de réduire la preuve du résultat au cas $n=1$.
\begin{claim}\label{claimreg}
Reprenons la suite régulière $(x_1,\cdots,x_s)$ dans $A$, pour $T_1\subset T_2\subset\left\llbracket 1,s\right\rrbracket$ et n'importe quel idéal $J$, on a l'identité suivante :
\[I_{T_1}J \cap I_{T_2}^n=I_{T_1} (J\cap I_{T_2}^{n-1})\]
\end{claim}
\begin{proof}
Prenons un élément $y\in I_{T_2}$ et écrivons-le sous la forme :
\[
y=\sum_{t\in T_2} x_t a_t.
\]
L'hypothèse de régularité de la suite $(x_1,\cdots,x_s)$ entraîne que l'élément $y$ est dans $I^n_{T_2} $ si et seulement si chaque terme $a_t$ est dans $ I^{n-1}_{T_2}$ (dit autrement $I^{}_{T_2}/I^{n}_{T_2}\cong \bigoplus_{t\in T_2}x_{t} A/I^{n-1}_{T_2}$). En particulier, si chaque $a_t$ est dans $J$ (c'est-à-dire $y\in I_{T_2}J$) et $y\in I_{T_2}^n$ alors nous avons $a_t\in J\cap I_{T_2}^{n-1}$ ce qui montre l'inclusion $(I_{T_2}^{n-1} J )\cap I^n_{T_2} \subset I_{T_2}(J\cap I_{T_2}^{n-1})$ quand $T_1=T_2$. Celle dans l'autre sens étant triviale, on en déduit le résultat dans ce cas.
Si, de plus, $a_t=0$ quand $t\notin T_1$ (i.e. $y\in (I_{T_1}J)\cap I_{T_2}^n$) alors $y\in I_{T_1}(J\cap I_{T_2}^{n-1})$ grâce au raisonnement précédent. Le cas plus général s'en déduit.
\end{proof}
Utilisons cette observation pour prouver les égalités précédentes et supposons qu'elles sont vraies au rang $n$ pour $n\ge 1$ (ainsi qu'au rang $1$). On a alors pour $S_1\subset \left\llbracket 1,s\right\rrbracket$
\[I_{S_1}\cap I_{S_0}^{n+1}=I_{S_1}\cap I_{S_0}^{n}\cap I_{S_0}^{n+1}= (I_{S_1} I_{S_0}^{n-1})\cap I_{S_0}^{n+1}=I_{S_1} (I_{S_0}^{n-1}\cap I_{S_0}^{n})=I_{S_1} I_{S_0}^{n}\]
si $S_1\subset S_0$ et
\[I_{S_2}\cap I_{S_0}^{n+1}=I_{S_2}\cap I_{S_0}\cap I_{S_0}^{n+1}=(I_{S_0} I_{S_2} )\cap I_{S_0}^{n+1}=I_{S_0} (I_{S_2}\cap I_{S_0}^{n})=I_{S_2} I_{S_0}^{n+1}\] si $S_1\cap S_0 =\emptyset$. On en déduit le résultat au rang $n+1$.
Nous nous sommes ramenés au cas $n=1$ où on a clairement $I_{S_1}\cap I_{S_0}=I_{S_1}$ si $S_1\subset S_0$ ce qui prouve la première équation. Quand $S_1\cap S_0 =\emptyset$, on raisonne par récurrence sur $|S_1|$. Si $|S_1|=0$, on pose $I_{S_1}=0$ et le résultat est trivial. Supposons le résultat vrai pour les parties strictes de $S_1\neq\emptyset$ et fixons $j_0\in S_1$. Par hypothèse de récurrence, on a $I_{S_1\backslash\{j_0\}}\cap I_{S_0}=I_{S_1\backslash\{j_0\}} I_{S_0}$. Il suffit de montrer que la flèche naturelle $ I_{S_1}I_{S_0}/I_{S_1\backslash\{j_0\}}I_{S_0} \to I_{S_1}\cap I_{S_0}/I_{S_1\backslash\{j_0\}}\cap I_{S_0}$ est un isomorphisme par théorème de Noether. Observons le diagramme de $A$-modules commutatif suivant
\[
\begin{tikzcd}
I_{S_1}I_{S_0} \ar[r] & I_{S_1}\cap I_{S_0} \\ I_{S_0} \ar[r] \ar[u, "\times x_{j_0}"]& I_{S_0} \ar[u, "\times x_{j_0}"']
\end{tikzcd}
\]
et montrons qu'il induit un diagramme commutatif (nous allons justifier que les flèches verticales sont bien définies)
\[
\begin{tikzcd}
I_{S_1}I_{S_0}/ I_{S_1\backslash\{j_0\}}I_{S_0} \ar[r] & (I_{S_1}\cap I_{S_0})/( I_{S_1\backslash\{j_0\}}\cap I_{S_0} )\\
I_{S_0}/( I_{S_0}\cap I_{S_1\backslash\{j_0\}}) \ar[r] \ar[u, "\times x_{j_0}"]& (I_{S_0}+I_{S_1\backslash\{j_0\}})/ I_{S_1\backslash\{j_0\}} \ar[u, "\times x_{j_0}"']
\end{tikzcd}
\]
La flèche horizontale inférieure est clairement bijective, et nous allons utiliser la régularité de $x_{j_0}$ pour montrer que les flèches verticales sont bien définies et sont des isomorphismes.
Par définition de $I_{S_1\backslash\{j_0\}}$, la flèche $b\in I_{S_0} \mapsto x_{j_0} b\in I_{S_1}I_{S_0}/I_{S_1\backslash\{j_0\}}I_{S_0}$ est surjective. Si $b$ est dans le noyau, $x_{j_0} b\in I_{S_1\backslash\{j_0\}}$ d'où $b\in I_{S_1\backslash\{j_0\}}\cap I_{S_0}=I_{S_1\backslash\{j_0\}}I_{S_0}$ par régularité et hypothèse de récurrence. Ainsi la multiplication par $x_{j_0}$ induit un isomorphisme $I_{S_0}/I_{S_1\backslash\{j_0\}} I_{S_0} \stackrel{\sim}{\rightarrow} I_{S_1}I_{S_0} /I_{S_1\backslash\{j_0\}}I_{S_0}$.
Étudions $M:=(I_{S_1}\cap I_{S_0})/( I_{S_1\backslash\{j_0\}}\cap I_{S_0} )$. On observe le diagramme commutatif dont les deux lignes horizontales sont exactes
\[
\begin{tikzcd}
0 \ar[r] & I_{S_1}\cap I_{S_0} \ar[r] & I_{S_1} \oplus I_{S_0} \ar[r] & I_{S_1}+I_{S_0} \ar[r] & 0 \\
0 \ar[r] & I_{S_1\backslash\{j_0\}}\cap I_{S_0} \ar[r] \ar[u, hook] & I_{S_1\backslash\{j_0\}} \oplus I_{S_0} \ar[r] \ar[u, "\iota_1", hook] & I_{S_1\backslash\{j_0\}}+I_{S_0} \ar[r] \ar[u, "\iota_2", hook]& 0
\end{tikzcd}
\]
Par régularité, on a $\coker \iota_1 = I_{S_1}/I_{S_1\backslash\{j_0\}}=x_{j_0} (A /I_{S_1\backslash\{j_0\}}) $ et $\coker \iota_2\cong x_{j_0} (A/(I_{S_1\backslash\{j_0\}}+I_{S_0}))$. Comme les flèches verticales sont injectives, \[M\stackrel{\sim}{\rightarrow}\ker( A/I_{S_1\backslash\{j_0\}}\fln{}{} A/(I_{S_1\backslash\{j_0\}}+I_{S_0}))= (I_{S_1\backslash\{j_0\}}+I_{S_0})/I_{S_1\backslash\{j_0\}}\] ce qui termine l'argument par récurrence.
Maintenant, terminons la preuve de 1. et 2. grâce aux égalités précédentes. Le résultat pour 2. est clair, passons à 1. Pour cela, raisonnons dans un cadre un peu plus général. Pour $J_1$, $J_2$, $J_3$ des idéaux d'un anneau $B$, on a toujours $(J_1+J_2)J_3=J_1J_3+J_2J_3$ mais rarement $(J_1+J_2)\cap J_3=J_1\cap J_3+J_2\cap J_3$, sauf si on a par exemple un idéal ${J'_3}$ tel que $(J_1+J_2)\cap J_3=(J_1+J_2) J'_3$ et $J_i\cap J_3=J_i J'_3$ pour $i=1,2$. Un raisonnement similaire au cas général précédent permet d'établir \[\bigoplus_n I_{S_1\cup S_2}\cap I_{S_0}^n=\bigoplus_n (I_{S_1}+I_{S_2}) I_{S_0}^{n-1}=\bigoplus_n I_{S_1} I_{S_0}^{n-1}+\bigoplus_n I_{S_2} I_{S_0}^{n-1}=\bigoplus_n I_{S_1}\cap I_{S_0}^{n}+\bigoplus_n I_{S_2}\cap I_{S_0}^{n}\] quand $S_1\cup S_2\subset S_0$. En termes de fermés de $\tilde{X}$, cela se traduit par $ \tilde{Y}_{S_1\cup S_2}=\tilde{Y}_{S_1}\cap \tilde{Y}_{S_2}$ si $S_0\neq S_1\cup S_2$. Le cas $S_0= S_1\cup S_2$ sera montré plus tard (pas de risque d'argument circulaire).
Passons au point 3. Notons $I=I_{S_0}$, $\tilde{A}=A\oplus I \oplus I^2\oplus\cdots$ et, si $x\in I^n$, notons $x^{[n]}\in \tilde{A}$ l'élément $x$ vu comme un élément homogène de degré $n$. Considérons le recouvrement $\tilde{X}=\bigcup_{i\in S_0} U_i$ avec $U_i=D^+(x_i^{[1]})$. On a \[\widetilde{V(x_j)}=V^+(\bigoplus_n x_jI^{n-1})= V^+(x_j^{[0]}\tilde{A}+ x_j^{[1]} \tilde{A} )\] si $j\in S_0$ d'après le point $1$ et \[\widetilde{V(x_j)}=V^+(\bigoplus_n x_jI^{n})= V^+(x_j^{[0]}\tilde{A})\] si $j\notin S_0$ d'après le point $2$. Ainsi, d'après l'égalité $x_j^{[0]}x_i^{[1]}=x_j^{[1]}x_i^{[0]}$ dans $\tilde{A}$, on a :
\begin{itemize}
\item[•]$\widetilde{V(x_i)}\cap U_i=\emptyset$,
\item[•]$\tilde{Y}_{S_0} =V(x_i^{[0]})$
\item[•]$ \widetilde{V(x_j)}\cap U_i= V\left(\frac{x_j^{[1]}}{x_i^{[1]}}\right) $ pour $j\in S_0\backslash\{i\}$,
\item[•]$ \widetilde{V(x_j)}\cap U_i= V\left({x_j^{[0]}}\right)\cap U_i$ si $j\notin S_0$.
\end{itemize}
Posons alors $\tilde{x}_j^{(i)}=\frac{x_j^{[1]}}{x_i^{[1]}}$ si $j\in S_0\backslash\{i\}$, $\tilde{x}_j^{(i)}=x_j^{[0]}$ si $j\notin S_0$ et $\tilde{x}_i^{(i)}=x_i^{[0]}$. On a par construction \[\tilde{Y}_{\{j\}}\cap U_i= V\left(\tilde{x}_j^{(i)}\right)\] pour $1\le j\le s$, $j\neq i$. De plus, on a grâce aux points 1. et 2. pour $S_1\subsetneq S_0\backslash\{i\}$ ou $S_1\cap S_0=\emptyset$ \[\tilde{Y}_{S_1}=\bigcap_{j\in S_1} \tilde{Y}_{\{j\}}=V\left(\tilde{x}_j^{(i)}\right)_{j\in S_1}\] et \[\tilde{Y}_{S_0}\cap \tilde{Y}_{S_1}=V\left(\tilde{x}_j^{(i)}\right)_{j\in S_1\cup \{i\}}.\]
Il reste à prouver que $(\tilde{x}^{(i)}_j)_{j\in \left\llbracket 1,s\right\rrbracket}$ est régulière pour montrer qu'il s'agit bien de la suite recherchée. Commençons par montrer que la sous-suite $(\tilde{x}^{(i)}_j)_{j\in S_0}$ est régulière par récurrence sur $|S_0|$. Quand $S_0=\{s_0\}$, on a $X=\tilde{X}$ et $\tilde{x}^{(s_0)}_{s_0}=x_{s_0}$ qui est régulier par hypothèse. Supposons le résultat vrai pour $|S_0|-1\geq 1$. On se place en $U_i\subset \tilde{X}$.
Comme $x_{j_0}$ ($j_0\neq i$) n'est pas un diviseur de $0$ dans $A$, $x_{j_0}^{[1]}$ ne l'est pas non plus dans $\tilde{A}$ et $\tilde{x}_{j_0}^{(i)}=\frac{x_{j_0}^{[1]}}{x_i^{[1]}}$ est un élément régulier dans ${\mathscr{O}}(U_i)$. On veut montrer que $(\tilde{x}_{j}^{(i)})_{j\in S_0\backslash\{j_0\}}$ est régulier dans ${\mathscr{O}}(U_i)/(\tilde{x}_{j_0}^{(i)})={\mathscr{O}}(U_i\cap \widetilde{V(x_{j_0})})$. Mais
\[
\widetilde{V(x_{j_0})}=\bl_{V((\tilde{x}_{j}^{(i)})_{j\in S_0\backslash\{j_0\}})}(V(x_{j_0}))=:\proj(\widetilde{A/ x_{j_0}})
\]
et $U_i\cap \widetilde{V(x_{j_0})} =D^+(x_i^{[1]}\widetilde{A/ x_{j_0}}) $ est un ouvert standard. On conclut alors par hypothèse de récurrence sur $\widetilde{V(x_{j_0})}$.
Maintenant, nous devons montrer que la suite $(\tilde{x}_j^{(i)})_{j\notin S_0}$ est régulière dans ${\mathscr{O}}(U_i)/(\tilde{x}_j^{(i)})_{j\in S_0}=A/I_{S_0}$ et cela découle de la régularité de $(x_j)_{j\in \left\llbracket 1,s\right\rrbracket}$ dans $A$.
Pour 4., d'après le point précédent, si $S_1$ est une partie stricte de $S_0$ et si $S_2$ est disjointe de $S_0$, on a \[\tilde{Y}_{S_1}\cap \tilde{Y}_{S_2}=V\left({\tilde{x}_j^{(i)}}\right)_{j\in S_1\cup S_2}\]
et
\[\tilde{Y}_{S_0}\cap\tilde{Y}_{S_1}\cap \tilde{Y}_{S_2}=V\left({\tilde{x}_j^{(i)}}\right)_{j\in S_1\cup S_2\cup\{i\}}. \]
Par régularité de $(\tilde{x}^{(i)}_j)_{j\in \left\llbracket 1,s\right\rrbracket}$, on voit que la codimension de $\tilde{Y}_{S_1}\cap \tilde{Y}_{S_2}$ (resp. $\tilde{Y}_{S_0}\cap\tilde{Y}_{S_1}\cap \tilde{Y}_{S_2}$) est $|S_1\cup S_2|=|S_1|+| S_2|$ (resp. $|S_1\cup S_2\cup\{i\}|=|S_1|+| S_2|+1$).
\end{proof}
\section{Construction de modèles de $\lt^1$}
Rappelons que nous avons construit une tour de revêtements $(\lt^n)_n$ de la boule unité rigide. Dans toute la suite, on prend $n=1$ et nous construirons une suite de modèles $Z_0=\widehat{\lt}^1,Z_1,\cdots,Z_d$ de $\lt^1$. Nous avons vu que le modèle $Z_0$ était $\spf (A_1)$ avec $A_1$ local et régulier sur ${\mathcal{O}}_{\breve{K}}$. Il est important de préciser que nous munissons $A_1$ de la topologie $p$-adique et non de la topologie engendrée par l'idéal maximal. En particulier, la fibre spéciale de $Z_0$ est de la forme
\[
Z_{0,s}= \spec \big( \bar{{\mathbb{F}}} \llbracket X_0,\ldots, X_d \rrbracket / (\prod_{a\in {\mathbb{F}}^{d+1}\backslash\{0\}} l_{a,F}(X))\big).
\]
où on note encore $l_{a,F}$ la réduction modulo $\varpi$. On a alors une décomposition $Z_{0,s}=\bigcup_a Y_a$ où $Y_a=V(l_{a,F})$ \cite[Définition 3.7 + sous-section 3.2., page 11]{yosh}. On a le résultat suivant :
\begin{prop}[\cite{yosh} Proposition 3.9]\label{propltfibrspe}
\begin{enumerate}
\item Si $S\subset {\mathbb{F}}_q^{d+1}\backslash\{0\}$ et $M= \langle S \rangle^{\perp}$, le fermé $\bigcap_{a\in S} Y_a$ ne dépend que de $M$ et non de $S$, et nous le noterons $Y_M$.
\item Si $S$ est minimal (i.e. est une famille libre), alors la suite $(l_{a,F})_{a\in S}$ est régulière.
\item Les composantes irréductibles de $Z_{0,s}$ sont les fermés $Y_M$ où $M=a^\perp$ est un hyperplan. En particulier, chaque composante irréductible a multiplicité $q-1$, qui est le cardinal d'une ${\mathbb{F}}$-droite de ${\mathbb{F}}^{d+1}$ à laquelle on a retiré l'élément nul.
\item L'application $M\mapsto Y_M$ est une bijection croissante $G^{\circ}$-équivariante entre l'ensemble des sous-espaces de ${\mathbb{F}}^{d+1}$ et l'ensemble des intersections finies de composantes irréductibles de $Z_{0,s}$. De plus, chaque $Y_M$ est irréductible.
\end{enumerate}
\end{prop}
\begin{proof}
Il s'agit d'une application de \ref{PropConglaF} qui a été faite dans \cite[Proposition 3.9.,Lemma 3.11.]{yosh}.
\end{proof}
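À titre de vérification numérique du point 3. (simple comptage, sans incidence sur la suite) : les facteurs $l_{a,F}$ sont au nombre de $q^{d+1}-1$ et chaque hyperplan $M=a^{\perp}$ provient des $q-1$ éléments non nuls de la droite ${\mathbb{F}}a$ ; on retrouve bien
\[
q^{d+1}-1=(q-1)\cdot \frac{q^{d+1}-1}{q-1},
\]
soit $\frac{q^{d+1}-1}{q-1}$ composantes irréductibles, chacune comptée avec multiplicité $q-1$.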
On définit
\[
Y^{[h]}=\uni{Y_N}{N\subset {\mathbb{F}}^{d+1}\\ {\rm dim}(N)=h}{}.
\]
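Notons, à titre d'exemple, les deux cas extrêmes de cette stratification : $Y^{[0]}=Y_{\{0\}}$ est l'unique point fermé de $Z_{0,s}$ (intersection de tous les $Y_a$), tandis que $Y^{[d]}$ est la réunion des composantes irréductibles, c'est-à-dire la fibre spéciale réduite tout entière.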
Nous allons définir une suite d'espaces $Z_0,\ldots, Z_d$ sur ${\mathcal{O}}_{\breve{K}}$, et des fermés $Y^{[0]}_i\subset \cdots \subset Y_i^{[d]}\subset Z_{i,s}$ tels que $Y_0^{[h]}=Y^{[h]}$ s'inscrivant dans le diagramme commutatif
\[\xymatrix{
Z_i\ar[r]^{\tilde{p}_i} \ar@/_0.75cm/[rrrr]_{p_i} & Z_{i-1}\ar[r]^{\tilde{p}_{i-1}} & \cdots \ar[r]^{\tilde{p}_2} & Z_1 \ar[r]^{\tilde{p}_1} & Z_0}\]
avec $\tilde{p}_i$ un éclatement. Supposons $Z_0, \ldots, Z_{i}$ et $Y_i^{[0]}, \ldots, Y_i^{[d]}$ préalablement construits, on définit $Z_{i+1}$ et $Y_{i+1}^{[h]}$ via la relation de récurrence
\[
Z_{i+1}=\bl_{Y^{[i]}_i}(Z_{i}) \text{ et } Y_{i+1}^{[h]}= \widetilde{Y^{[h]}_i}.
\]
De même, pour tout fermé $Y\subset Z_{0,s}$, on définit pour tout $i$ un fermé $Y_i\subset Z_{i,s}$ via $ Y_0=Y $ et $ Y_{i+1}= \widetilde{Y_i}$. En particulier, nous pourrons nous intéresser à la famille des fermés $Y_{M,i},Y_{a,i}\subset Z_{i,s}$ pour $M\subsetneq {\mathbb{F}}^{d+1}$ et $a\in {\mathbb{F}}^{d+1}\backslash\{0\}$.
\begin{rem}
Notons que les éclatements considérés ici sont bien admissibles au sens de \ref{rembladm} car les fermés que nous avons considérés sont contenus dans la fibre spéciale (ce qui explique le choix de la topologie dans la définition de $Z_0$). Ce fait justifie aussi que chacun des espaces $Z_i$ obtenus sont encore des modèles de $\lt^1$.
Nous pouvons aussi donner la construction de ces modèles dans un cadre plus "algébrique". En effet, on peut construire dans $\spec (A_1)$ un analogue de la stratification $(Y^{[h]})_h$ et considérer les éclatements successifs suivant les fermés de cette stratification. On obtient alors une chaîne d'éclatement qui fournit encore, lorsqu'on complète $p$-adiquement, les modèles $Z_i$ décrits auparavant.
\end{rem}
L'interprétation modulaire de $Z_0$ fournit une action naturelle des trois groupes ${\mathcal{O}}_D^*$ (qui s'identifie aux isogénies de $\Phi^d$), $G^{\circ}$ (en permutant les structures de niveau) et de $W_K$ (sur $Z_0\otimes {\mathcal{O}}_C$ pour ce dernier). Comme les morphismes de vari\'et\'es envoient les irr\'eductibles sur les irr\'eductibles, chacun de ces groupes agit en permutant les $Y_a$ et donc les $Y_M$ de même dimension. Ainsi, $Y^{[h]}_0$ est stable sous les actions de $G^\circ$, ${\mathcal{O}}_D^*$ et $W_K$ et $Z_1$ hérite d'une action de ces groupes qui laisse stable $Y^{[h]}_1$ par propriété universelle de l'éclatement. Par récurrence immédiate, on a encore une action de $G^\circ$, ${\mathcal{O}}_D^*$ et $W_K$ sur $Z_i$ qui laisse stable $Y^{[h]}_i$ pour tout $i$.
Dans le cas particulier où $i=1$, $Y^{[0]}_0= Y_{\{0\}}$ est l'unique point fermé de $Z_0$. Le diviseur exceptionnel $Y_{\{0\},1}$ s'identifie à ${\mathbb{P}}^d_{\overline{{\mathbb{F}}}}$ et hérite des actions de $G^\circ, {\mathcal{O}}_D^*$ et $W_K$.
\section{Les composantes irréductibles de la fibre spéciale des modèles $Z_i$}
Nous souhaitons décrire les composantes irréductibles de la fibre spéciale de chacun des modèles intermédiaires $Z_i$. Si $Y$ est une composante irréductible de $Z_{i,s}$, on écrit $Y^{lisse}=Y\backslash \bigcup Y'$ où $Y'$ parcourt l'ensemble des composantes irréductibles de $Z_{i,s}$ différentes de $Y$. Le but de cette section est de démontrer le théorème suivant :
\begin{theo}
\label{TheoZH}
Soit $0\leq i \leq d$ un entier. On a
\begin{enumerate}
\item Les composantes irréductibles de la fibre spéciale de $Z_i$ sont les fermés de dimension $d$ suivants $(Y_{M,i})_{M: \dim M\in \left\llbracket 0,i-1\right\rrbracket\cup\{d\}}$.
\item Les intersections non-vides de composantes irréductibles de $Z_i$ sont en bijection avec les drapeaux $M_1\subset \cdots \subset M_k$ tels que $\dim M_{k-1}< i$ via l'application \[M_1\subset \cdots \subset M_k \mapsto \bigcap_{1\leq j \leq k} Y_{M_j,i}.\]
\item Si $Y_{M,i}$ est une composante irréductible de $Z_{i,s}$ avec $\dim M\neq d$, alors les morphismes naturels $\tilde{p}_j$ avec $j=i+1,\ldots, d$ induisent des isomorphismes $ Y_{M,d}^{lisse} \cong\cdots \cong Y_{M,i+1}^{lisse} \cong Y_{M,i}^{lisse}$.
\item Le changement de base du tube $]Y_{\{0\},d}^{lisse}[_{Z_d} \otimes \breve{K}(\varpi_N)\subset {\rm LT}_1 \otimes \breve{K}(\varpi_N)$ admet un modèle lisse dont la fibre spéciale est isomorphe à la variété de Deligne-Lusztig $\dl_{\bar{{\mathbb{F}}}}$ (ici, $N=q^{d+1}-1$ et $\varpi_N$ est le choix d'une racine $N$-ième de $\varpi$).
\end{enumerate}
\end{theo}
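Avant de passer à la preuve, illustrons l'énoncé dans le cas $d=1$ (illustration non utilisée dans la suite). Les composantes irréductibles de $Z_{0,s}$ sont les $q+1$ fermés $Y_{a^{\perp}}$, qui passent tous par l'unique point fermé $Y_{\{0\}}$. Après l'éclatement $Z_1\to Z_0$ de ce point, les composantes de $Z_{1,s}$ sont le diviseur exceptionnel $Y_{\{0\},1}\cong{\mathbb{P}}^1_{\bar{{\mathbb{F}}}}$ et les $q+1$ transformés stricts $Y_{a^{\perp},1}$ ; d'après le point 2., deux transformés stricts distincts ne se rencontrent plus et chacun coupe le diviseur exceptionnel (ce qui correspond au drapeau $\{0\}\subset a^{\perp}$).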
En fait, nous allons prouver le résultat suivant, plus technique et plus précis.
\begin{lem}\label{lemirrzh}
Soit $0\leq i \leq d$ un entier fixé, la propriété ${\mathscr{P}}_i$ suivante est vérifiée :
\begin{enumerate}
\item Pour tout sous-espace vectoriel $M\subsetneq {\mathbb{F}}^{d+1}$, $Y_{M,i}$ est irréductible et $Y_{M,i}\subset Z_i$ est une immersion régulière de codimension $1$ si $\dim M<i$ et $\codim M$ sinon.
\item Si $M=\langle S \rangle^{\perp}$ avec $ S=\{a_1,\ldots, a_s\}\subset {\mathbb{F}}^{d+1}$, on a $Y_{a_1,i}\cap \cdots \cap Y_{a_s,i}= \begin{cases} Y_{M,i}, & \text{Si } \dim M\geq i \\
\emptyset & \text{Sinon} \end{cases} $
\item Si $M\subsetneq N\subset {\mathbb{F}}^{d+1}$ avec $\dim M \geq i$, alors $Y_{M,i}\to Y_{N,i}$ (bien définie d'après le point précédent) est une immersion régulière de codimension $\dim N-\dim M$.
\item Si $M_1\subset \cdots \subset M_k$ avec $\dim M_{k-1}< i$, $\bigcap_{1\leq j \leq k} Y_{M_j,i}\subset Y_{M_k,i}$ est une immersion régulière de codimension $k-1$. En particulier, $Y_{M_k,i}$ et $\bigcap_{1\leq j \leq k-1} Y_{M_j,i}$ sont transverses.
\item Si $i> i'=\dim M$, on a un isomorphisme $Y_{M,i}= \tilde{p}_{i,i'}^{-1}(Y_{M,i'})$ (où $\tilde{p}_{i,i'}:Z_i\to Z_{i'}$ désigne la composée des éclatements intermédiaires).
\end{enumerate}
\end{lem}
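Illustrons le point 2. dans le cas $d=2$ (exemple indicatif) : si $a_1,a_2\in {\mathbb{F}}^{3}\backslash\{0\}$ sont libres et $M=\langle a_1,a_2\rangle^{\perp}$ est de dimension $1$, alors
\[
Y_{a_1,i}\cap Y_{a_2,i}=Y_{M,i} \text{ pour } i\le 1 \qquad\text{et}\qquad Y_{a_1,i}\cap Y_{a_2,i}=\emptyset \text{ pour } i\ge 2 ;
\]
autrement dit, l'éclatement $Z_2\to Z_1$ le long de $Y^{[1]}_{1}$ sépare les deux composantes le long du transformé strict de $Y_M$.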
\begin{proof}
Raisonnons par récurrence sur $i$.
Quand $i=0$, ${\mathscr{P}}_0$ correspond à la Proposition \ref{propltfibrspe} (4. et 5. sont vides dans ce cas). Supposons ${\mathscr{P}}_i$ vrai et montrons ${\mathscr{P}}_{i+1}$. On sait (${\mathscr{P}}_i$ 1. et 2.) que $Y_{N,i}\to Z_i$ est régulière pour tout $N$ et $Y_{N_1,i}\cap Y_{N_2,i}=\emptyset$ si $\dim N_1=\dim N_2=i$ et $N_1\neq N_2$. Donc $Y^{[i]}_i= \bigsqcup_N Y_{N,i}\rightarrow Z_i$ est régulière. Considérons le recouvrement $Z_i =\bigcup_{N:\dim N=i} U_N$ avec $U_N:=Z_i\backslash (\bigcup_{N'} Y_{N',i})$ où $N'$ parcourt les espaces vectoriels de dimension $i$ différents de $N$. On a par construction $U_N\cap Y^{[i]}_i=Y_{N,i}$ et les résultats \ref{theoBlowUp} 2. et 3. entraînent
\begin{align*}
\tilde{p}_{i+1}^{-1}(U_N)&=\bl_{Y_{N,i}} (U_N)\\
Y_{M,i+1} \cap \tilde{p}_{i+1}^{-1}(U_N)&=\bl_{Y_{M,i}\cap Y_{N,i}} (Y_{M,i}\cap U_N)=\widetilde{Y_{M,i}\cap U_N}
\end{align*}
et on peut appliquer \ref{theoBlowUp} 4., \ref{theoblreg} pour montrer que $Y_{M,i+1} \cap \tilde{p}_{i+1}^{-1}(U_N)$ vérifie les propriétés voulues. Mais comme ces dernières sont des égalités entre des intersections et des propriétés de régularité qui sont locales, on en déduit la propriété ${\mathscr{P}}_{i+1}$ ce qui termine la preuve.
\end{proof}
\begin{proof}[Démonstration du Théorème \ref{TheoZH}]
Pour le premier point, tous les fermés de la forme $Y_{M,i}$ avec $\dim M\in \left\llbracket 0,i-1\right\rrbracket\cup\{d\}$ sont irréductibles d'après le résultat précédent. On a aussi montré qu'ils étaient tous de dimension $d$ et que les intersections de deux éléments étaient ou vide ou de dimension $d-1$. On en déduit qu'il n'existe aucune relation d'inclusion entre deux de ces fermés. Il suffit donc de prouver que les fermés étudiés recouvrent bien la fibre spéciale de $Z_i$. Pour cela, raisonnons par récurrence sur $i$. Au rang $0$, cela provient de \ref{propltfibrspe}. Supposons le résultat vrai au rang $i$. Dans ce cas, l'union $\bigcup_{\dim M\in \left\llbracket 0,i-1\right\rrbracket\cup\{d\}} Y_{M,i}$ contient $Z_i\backslash Y^{[i]}_i$ et on en déduit d'après \ref{theoBlowUp} 2.
\[
Z_{i+1}\backslash Y^{[i]}_{i+1}\subset \bigcup_{\dim M\in \left\llbracket 0,i-1\right\rrbracket\cup\{d\}} Y_{M,i+1}.
\]
De plus, le diviseur exceptionnel $Y^{[i]}_{i+1}$ est l'union des fermés de la forme $Y_{M,i+1}$ avec $\dim M =i$, ce qui prouve que la famille exhibée correspond bien à la décomposition en composantes irréductibles de la fibre spéciale de $Z_{i+1}$.
Pour le deuxième point, les intersections de composantes irréductibles sont de la forme $\bigcap_{1\leq j \leq k-1} Y_{M_j,i}\cap \bigcap_{a\in S}Y_{a,i}$ avec $S\subset {\mathbb{F}}^{d+1}\backslash\{0\}$ et $\dim M_j<i$. Posons $M_k=\langle S \rangle^{\perp}$, l'intersection précédente est vide si $\dim M_k <i$. Supposons que ce ne soit pas le cas, il s'agit de voir que $\bigcap_{1\leq j \leq k} Y_{M_j,i}$ est non vide si et seulement si $(M_j)_j$ est un drapeau. Si c'en est un, d'après \ref{theoblreg}, l'intersection est de codimension
\[
\begin{cases}
k &\text{Si } \dim M_{k}< i\\
k-1 + \codim M_k &\text{Sinon}
\end{cases}
\]
qui est inférieure ou égale à $d$ ce qui montre que ces intersections sont non-vides. Pour l'autre sens, supposons $M_{j_1}\nsubseteq M_{j_2}$ et $M_{j_2}\nsubseteq M_{j_1}$ pour $j_1\neq j_2$, nous devons montrer $Y_{M_{j_1},i}\cap Y_{M_{j_2},i}=\emptyset$. On peut trouver sous ces hypothèses $i'$ tel que \[\dim M_{j_1}\cap M_{j_2}< i' \le \min (\dim M_{j_1},\dim M_{j_2}) <i.\] On a alors $Y_{M_{j_1},i'}\cap Y_{M_{j_2},i'}=\emptyset$ d'après ${\mathscr{P}}_{i'}$ 2. De plus, si $i_2>\dim M_{j_1}\cap M_{j_2}$ et $Y_{M_{j_1},i_2}\cap Y_{M_{j_2},i_2}=\emptyset$, alors $Y_{M_{j_1},i_2+1}\subset\tilde{p}^{-1}_{i_2+1}(Y_{M_{j_1},i_2})$ (idem pour $Y_{M_{j_2},i_2}$) d'où \[Y_{M_{j_1},i_2+1}\cap Y_{M_{j_2},i_2+1}\subset\tilde{p}^{-1}_{i_2+1}(Y_{M_{j_1},i_2}\cap Y_{M_{j_2},i_2})=\emptyset\] ce qui montre par récurrence $Y_{M_{j_1},i}\cap Y_{M_{j_2},i}=\emptyset$.
Passons à 3. Si $Y$ est une composante irréductible de $Z_{i,s}$ qui n'est pas de la forme $Y_{a,i}$ pour $a\in {\mathbb{P}}^{d}({\mathbb{F}})$, alors $Y^{lisse}$ ne rencontre aucun $Y_{a,i}$ et donc aucun $Y_{M,i}$ avec $\dim M=i$, car ce sont des intersections de fermés de la forme $Y_{a,i}$ d'après 1. En particulier, $Y^{lisse}\cap Y^{[i]}_{i}=\emptyset$. De même, $(\tilde{Y})^{lisse}$ ne rencontre pas $ Y^{[i]}_{i+1}$ qui est une union de composantes irréductibles, toujours d'après 1. Il reste à prouver que $(\tilde{Y})^{lisse}=\tilde{p}_{i+1}^{-1}(Y^{lisse})$. Cela découle de l'isomorphisme (cf \ref{theoBlowUp} 3.) pour $Y'$ une composante irréductible différente de $Y$ :
\[
\tilde{p}_{i+1}: \widetilde{Y'}\backslash Y^{[i]}_{i+1} \stackrel{\sim}{\rightarrow} Y'\backslash Y^{[i]}_{i}.
\]
Pour le dernier point, cela a été montré dans \cite[Proposition 5.2]{yosh} quand $i=0$. Pour les autres valeurs de $i$, il suffit d'appliquer le point 3.
\end{proof}
\section{Semi-stabilité du modèle $Z_d$}
Notre but dans cette section est de rappeler le théorème principal de \cite[Théorème 4.2]{yosh}. Pour le confort du lecteur, nous donnerons les grandes lignes de la preuve. L'énoncé est le suivant
\begin{theo}[Yoshida]
Pour tout point fermé $x\in Z_d$, l'anneau local complété au point $x$ est isomorphe à
\[
{\mathcal{O}}_{\breve{K}} \llbracket T_0,\ldots, T_d \rrbracket /(T_0^{e_0}\cdots T_r^{e_r}- \varpi )
\]
avec $r\leq d$ et les $e_i$ premiers à $p$.
\end{theo}
\begin{rem}
Expliquons succinctement la stratégie de la preuve. L'idée est de calculer par récurrence l'anneau local complété en les points fermés des différents éclatements $Z_{i}$. Plus précisément, on montre une décomposition pour cet anneau en un point $z$ sous la forme ${\mathcal{O}}_{\breve{K}}\llbracket X_0,\ldots, X_d \rrbracket/(\prod_j f_j^{m_j}-\varpi)$ où les fermés $(V(f_j))_j$ décrivent l'ensemble des composantes irréductibles rencontrant le point $z$ considéré et l'entier $m_j$ est la multiplicité de la composante associée. D'après le théorème \ref{TheoZH} 2., ces composantes sont de la forme $Y_{M,i}$ avec $\dim {M}< i$ ou $Y_{a,i}$ avec $a\in {\mathbb{F}}^{d+1}\backslash\{0\}$. \'Etant donnée une famille de composantes $(Y_{M_j,i})_{j} \cup (Y_{a_j,i})_j$, les fonctions associées $(f_{M_j})_j\cup (f_{a_j})_{j}$ forment une suite régulière si et seulement si les $(a_j)_j$ sont libres (et donc une famille $\bar{{\mathbb{F}}}$-libre dans l'espace cotangent). On peut alors estimer le nombre des composantes de la forme $Y_{a,i}$ et montrer qu'il y en a au plus une quand $i=d$. Dans ce cas, les $(f_j)$ forment une suite régulière vérifiant la propriété de liberté précédente et la semi-stabilité en découle.
Pour pouvoir décrire les fonctions $f_j$ et les $m_j$, on raisonne par récurrence sur $i$. Quand $i=0$, il s'agit du théorème \ref{theolteq}. Si $z$ est un point fermé de $Z_{i+1}$, son image par l'éclatement $\tilde{z}\in Z_{i}$ est un point fermé et $\widehat{{\mathscr{O}}}_z$ se voit comme l'anneau local complété en un point du diviseur exceptionnel d'un éclatement de $\widehat{{\mathscr{O}}}_{\tilde{z}}$. Grâce au lemme \ref{lemReg2}, on peut décrire explicitement le lien entre les composantes irréductibles de $\widehat{{\mathscr{O}}}_{\tilde{z}}$ et celles de $\widehat{{\mathscr{O}}}_{z}$.
\end{rem}
\begin{proof}
Fixons $z\in Z_i$ pour $i\leq d$. Calculons l'anneau local complété en $z$. Comme $p_i: Z_{i}\rightarrow Z_0$ est propre, $p_i(z)$ est un point fermé et est donc $Y_{\{0\},0}$ par localité de $A_1={\mathscr{O}}(Z_0)$. Ainsi, on a $z\in p_i^{-1}(Y_{\{0\},0})=Y_{\{0\},i}$ d'après le lemme \ref{lemirrzh} 5. D'après le théorème \ref{TheoZH} 2., il existe un drapeau $M_0 \subsetneq M_1\subsetneq \cdots \subsetneq M_{j_0} \subsetneq M_{j_0+1} $ avec $\dim M_{j_0} <i $ tel que les composantes irréductibles rencontrant $z$ sont les $Y_{M_0,i}, \cdots,Y_{M_{j_0},i}$ ainsi que les fermés $Y_{a,i}$ avec $a^\perp\supset M_{j_0+1} $. De plus, $M_0=\{0\}$ d'après la discussion précédente.
Quitte à translater par un élément de $\gln_{d+1}({\mathbb{F}})$, on peut supposer que $M_j=\langle e_0,\ldots, e_{d_{j}} \rangle$ avec $(e_i)_i$ la base canonique de ${\mathbb{F}}^{d+1}$ (en particulier, $d_i=\dim M_i-1$). Nous voulons prouver par récurrence sur $i$ que l'anneau local complété $\widehat{{\mathscr{O}}}_z$ au point $z$ est isomorphe à
\[
{\mathcal{O}}_{\breve{K}} \llbracket X_0,\ldots, X_d \rrbracket /(uX_{d_0}^{m_0} \cdots X_{d_{j_0}}^{m_{j_0}}(\prod_{a\in M_{j_0+1}^{\perp}\backslash\{0 \} } f_a) - \varpi)
\]
avec
\begin{enumerate}
\item $u$ est une unité et $m_j=|M_j^{\perp} |-1=q^{d-d_j}-1$.
\item $V(f_a)= \spec \widehat{{\mathscr{O}}}_z \times_{Z_i} Y_{a,i}$ et $V(X_{d_j})= \spec \widehat{{\mathscr{O}}}_z \times_{Z_i} Y_{M_j,i}$.
\item Si $a=\sum_{j\geq k } a_j e_j$ avec $k> d_{j_0+1}$, on peut trouver des relevés $\tilde{a}_j\in {\mathcal{O}}_{\breve{K}}$ tels que
\[
f_a= \sum_{j\geq k} \tilde{a}_j X_j \mod (X_{k}, \ldots , X_d)^2.\]
\end{enumerate}
Cette description de l'anneau local complété quand $i=d$ établit le théorème. En effet, on a $M_{j_0+1}^{\perp}=0$ et le produit $(\prod_{a\in M_{j_0+1}^{\perp}\backslash\{0 \} } f_a)$ est vide. De plus, d'après le lemme de Hensel, $\widehat{{\mathscr{O}}}_{z}^*$ est $n$-divisible pour $n$ premier à $p$. En posant $\tilde{X}_{0}= u^{1/m_0} X_{0}$ et $\tilde{X}_j=X_j$ pour $j\neq 0$, on voit que $\widehat{{\mathscr{O}}}_z$ est de la forme voulue.
Quand $i=0$, voyons que la description précédente de l'anneau local complété découle de \ref{propltfibrspe}. Le fermé $Y_{\{0\}}$ est l'unique point fermé de $Z_{0}$ et l'anneau local complété est
\[
{\mathscr{O}}(Z_0)= {\mathcal{O}}_{\breve{K}}\llbracket X_0,\ldots, X_d \rrbracket/(u\prod_{a\in {\mathbb{F}}^{d+1}\backslash \{0\}} l_{a,F} -\varpi)
\]
avec $u$ une unité. Décrivons les quantités $j_0, (M_j)_{0\leq j\leq j_0+1} $ et $(f_a)_{a\in M_{j_0+1}^{\perp}\backslash \{0\}}$ pour ${\mathscr{O}}(Z_0)$. On a $j_0={-1}$, $M_{j_0+1}=M_{0}=\{0\}$, en particulier $M_0^\perp= {\mathbb{F}}^{d+1}$ . Posons $f_{a}=l_{a,F}$ pour $a\in {\mathbb{F}}^{d+1}\backslash \{0\}$ et on obtient la formule voulue grâce à \ref{PropConglaF} et l'action transitive de $\gln_{d+1}({\mathbb{F}})$ sur la famille $(l_{a,F})$.
Supposons le résultat pour $i\geq 0$, montrons-le pour $i+1$ et prenons $z\in Z_{i+1}$ un point fermé. Par propreté des éclatements, $\tilde{z}=\tilde{p}_{i+1}(z)$ est un point fermé de $Z_{i}$. Si $\dim M_{j_0} < i$, alors $\tilde{z}$ n'est pas dans le centre de l'éclatement et $\tilde{p}_{i+1}$ induit un isomorphisme entre $\hat{{\mathscr{O}}}_{\tilde{z}}$ et $\hat{{\mathscr{O}}}_z$. Le résultat dans ce cas s'en déduit par hypothèse de récurrence.
Sinon, toujours par hypothèse de récurrence, on écrit l'anneau local complété sous la forme
\[
\widehat{{\mathscr{O}}}_{\tilde{z}}\cong {\mathcal{O}}_{\breve{K}} \llbracket \tilde{X}_0,\ldots, \tilde{X}_d \rrbracket /(u\tilde{X}_{d_0}^{m_{0}} \cdots \tilde{X}_{d_{j_0-1}}^{m_{j_0-1}}(\prod_{a\in M_{j_0}^{\perp}\backslash\{0 \} } f_a) - \varpi)
\]
où $j_0-1$, $(M_j)_{j\le j_0}$ et $(f_a)_a$ vérifient les propriétés escomptées au rang $i$. Pour relier l'anneau $\hat{{\mathscr{O}}}_{\tilde{z}}$ à l'anneau $\hat{{\mathscr{O}}}_z$, on observe (cf. théorème \ref{theoBlowUp} 3.) l'identité
\[
S: = \bl_{Y_{M_{j_0+1},i}\times_{{Z_i}} {\widehat{Z}_{i, \tilde{z}}} }(\widehat{Z}_{i, \tilde{z}} )= Z_{i+1}\times_{Z_{i}}\widehat{Z}_{i, \tilde{z}}.
\]
avec $\widehat{Z}_{i, \tilde{z}}=\spec \widehat{{\mathscr{O}}}_{\tilde{z}}$ car $\widehat{Z}_{i,\tilde{z}}\rightarrow Z_{i}$ est plat.
D'après 2. et 3. de l'hypothèse de récurrence et \ref{TheoZH} 3., $(\tilde{X}_j)_{j=i,\dots, d}$ est une suite régulière et
\[
S= \bl_{V((f_{e_j})_j)}(\widehat{Z}_{i,\tilde{z}})= \proj( \widehat{{\mathscr{O}}}_{\tilde{z}}[T_i, \ldots, T_{d}]/(T_{j_1} \tilde{X}_{j_2}- T_{j_2} \tilde{X}_{j_1})).
\]
L'anneau $\hat{{\mathscr{O}}}_z$ correspond au complété de $S$ en un point fermé du diviseur exceptionnel $\proj( \overline{{\mathbb{F}}} [T_i,\ldots, T_d])$, et ces derniers sont en bijection avec ${\mathbb{P}}^{d-i}(\overline{{\mathbb{F}}})$. Mais par hypothèse, $z\notin Y_{e_{i},i+1}$ et le point fermé en question est dans $D^+(T_i)= \spec B$.
Écrivons-le sous la forme \[z=[1,z_{i+1}, \cdots, z_d]=[1,z_{i+1}, \cdots, z_{d_{j_0+1}-1}, 0, \cdots, 0]\in {\mathbb{P}}^{d-i}(\bar{{\mathbb{F}}}).\] On a alors
\[
B:= {\mathcal{O}}_{\breve{K}}\llbracket \tilde{X}_0,\ldots, \tilde{X}_d \rrbracket [\tilde{T}_{i+1}, \ldots, \tilde{T}_{d}]/(\tilde{T}_{j}\tilde{X}_i-\tilde{X}_j, P-\varpi)
\]
avec $\tilde{T}_j=\frac{T_j}{T_i}$ et l'idéal associé à $z$ est \[{\mathfrak{m}}_{z}={\mathfrak{m}}_{\tilde{z}}B+(\tilde{T}_j-\tilde{z}_j)_j,\] où $\tilde{z}_j\in {\mathcal{O}}_{\breve{K}}$ est un relèvement de $z_j\in \overline{{\mathbb{F}}}$, et $P=u\tilde{X}_{d_0}^{m_0} \cdots \tilde{X}_{d_{j_0-1}}^{m_{j_0-1}}(\prod_{a\in M_{j_0}^{\perp}\backslash\{0 \} } f_a) $.
Par noethérianité, on obtient d'après \cite[Prop 10.13]{atimcdo}
\[
\widehat{{\mathscr{O}}}_{z}= {\mathcal{O}}_{\breve{K}}\llbracket X_0, \cdots, X_d \rrbracket/(P(X_0, \cdots,X_i, (X_{i+1}+\tilde{z}_{i+1}) X_i,\cdots, (X_d+\tilde{z}_d) X_i)-\varpi)
\] en posant\footnote{Notons que l'on a pris $\tilde{z}_j=0$ si $j \geq d_{j_0+1}$} \[
X_j= \begin{cases}
\tilde{X}_j & \text{Si } j\leq i \\
\tilde{T}_j-\tilde{z}_j & \text{Sinon}.
\end{cases}
\]
Comme $f_{a}(\tilde{X}_{0},\dots, \tilde{X}_d)\in (\tilde{X}_{i},\dots, \tilde{X}_d)$ d'après 3., alors
\[
f_a( X_0, \cdots,X_i, (X_{i+1}+\tilde{z}_{i+1}) X_i,\cdots, (X_d+\tilde{z}_d) X_i)=: X_i g_a({X}_{0},\dots, {X}_d).
\]
Ainsi, réécrivons $P$ sous la forme
\begin{eqnarray*}
P(X_0, \cdots,X_i, (X_{i+1}+\tilde{z}_{i+1}) X_i,\cdots, (X_d+\tilde{z}_d) X_i) & = & u X_{d_0}^{m_{0}} \cdots X_{d_{j_0-1}}^{m_{j_0-1}} \prod_{a \in M_{j_0}^{\perp}\backslash\{ 0 \} } X_i g_a \\
& = & v X^{m_0}_{d_0}\cdots X^{m_{j_0-1}}_{d_{j_0-1}} X_{d_{j_0}}^{m_{j_0}} \prod_{a\in M_{j_0+1}^{\perp}\backslash \{0\} } g_a \\
\end{eqnarray*}
avec $v=u \prod_{a\in M_{j_0}^{\perp}\backslash M_{j_0+1}^{\perp}} g_{a}$. D'après \ref{lemReg2} 3., chacun des $g_a$ dans le produit définissant $v$ est inversible car $z\notin Y_{a,i}$. On obtient
\[
\widehat{{\mathscr{O}}}_{z}={\mathcal{O}}_{\breve{K}}\llbracket X_0, \ldots, X_d \rrbracket/((v X_{d_0}^{m_0} \cdots X_{d_{j_0}}^{m_{j_0}} (\prod_{a\in M_{j_0+1}^{\perp}\backslash\{0 \} } g_a) - \varpi ).
\]
Montrons que cette description vérifie les hypothèses demandées. Pour le point 1., c'est clair par construction. Le point 3. montre la régularité de $\tilde{X}_{d_{0}}, \ldots, \tilde{X}_{d_{j_0}}, \tilde{X}_{d_{j_0+1}+1}, \tilde{X}_{d_{j_0+1}+2},\ldots, \tilde{X}_d$. \footnote{on peut aussi raisonner par récurrence sur $i$ grâce à \ref{lemReg2}.} Ainsi, on peut appliquer les arguments de la preuve de \ref{lemReg2} pour obtenir
\[
V(X_{d_j}) = Y_{M_j,i+1}\times_{Z_{i+1}} \widehat{Z}_{i+1,z} \text{ et } V(g_{a}) = Y_{a,i+1}\times_{Z_{i+1}} \widehat{Z}_{i+1,z}
\]
pour $j\leq j_{0}+1$ et $a\in M_{j_0+1}^{\perp} \backslash \{0\}$. Le point 2. est alors vérifié. Pour le point 3. en $z$, cela découle de l'hypothèse analogue en $\tilde{z}$ et de la relation \[g_a({X}_{0},\dots, {X}_d)=f_a( X_0, \cdots,X_i, (X_{i+1}+\tilde{z}_{i+1}) X_i,\cdots, (X_d+\tilde{z}_d) X_i)/X_i\] pour $a\in M_{j_0+1}^{\perp}\backslash \{0\}$.
\end{proof}
\section{Relation avec un modèle semi-stable d'une variété de Shimura\label{sssectionlt1sh}}
Nous rappelons la construction de modèles entiers pour certaines variétés de Shimura développée dans (\cite[chapitre III.3]{yosh}) et (\cite[chapitre 2.4, chapitre 4]{harrtay}) et nous décrivons le lien entre ces modèles et ceux des sections précédentes.
Soit $F=EF^+$ un corps CM avec $F^+$ totalement réel de degré\footnote{noté $d$ dans \cite{harrtay}} $k$ et $E$ quadratique imaginaire où $p$ est décomposé. Fixons $r$ un entier positif et donnons-nous $w_1(=w),\cdots,w_r$ des places de $F$ au-dessus de $p$ telles que \footnote{Pour montrer l'existence d'une telle extension $F$, on se ramène au cas où $K/{\mathbb{Q}}_p$ est galoisienne quitte à prendre une clôture galoisienne et à passer aux invariants sous Galois. Par résolubilité de l'extension, on peut raisonner sur des extensions abéliennes cycliques par dévissage. Trouver $F$ revient alors à montrer l'existence de certains caractères par théorie du corps de classes ce qui est réalisé dans \cite[appendice A.2]{blggt} par exemple.} $F_w=K$. Soit $B/F$ une algèbre à division de dimension\footnote{noté $n^2$ dans \cite{harrtay}} $(d+1)^2$ déployée en la place $w$ (voir p.51 de \cite{harrtay} pour les hypothèses supplémentaires imposées sur $B$). Nous appelons $G$ le groupe réductif défini dans \cite{harrtay} p.52-54. Soit $U^p$ un sous-groupe ouvert compact assez petit de $G({\mathbb{A}}^{\infty,p})$ et $m=(1,m_2,\cdots, m_r)\in {\mathbb{N}}^r$. Nous nous intéresserons au problème modulaire considéré dans \cite[p. 108-109]{harrtay} qui est représentable par ${X}_{U^p,m}$, un schéma sur ${\mathcal{O}}_K$ propre, plat, de dimension $d+1$.
Nous n'allons pas décrire en détail le problème modulaire représenté par ${X}_{U^p,m}$ mais nous rappelons seulement que le foncteur associé classifie les quintuplets $(A,\lambda, \iota, \eta^p , (\alpha_i)_i)$ à isomorphisme près où $A$ est un schéma abélien de dimension $k(d+1)^2$ muni d'une polarisation $\lambda$ première à $p$, d'une ${\mathcal{O}}_B$-action et d'une structure de niveau $\alpha_1 :\varpi^{-1}\varepsilon{\mathcal{O}}_{B_w}/\varepsilon{\mathcal{O}}_{B_w}\to \varepsilon A[\varpi]$ où $\varepsilon$ est un idempotent de $\mat_{d+1}({\mathcal{O}}_K)$ (avec quelques compatibilités entre ces données). On rappelle que l'on a une identification $\varepsilon{\mathcal{O}}_{B_w}\cong {\mathcal{O}}_K^{d+1}$ par équivalence de Morita.
Nous écrirons $(A,\lambda,\iota,\eta^p, (\alpha_i)_i)$ le quintuplet universel du problème modulaire $X_{U^p,m}$.
La fibre spéciale $\bar{X}_{U^p,m}={X}_{U^p,m}\otimes {\mathbb{F}}$ admet une stratification par des sous-schémas fermés réduits $\bar{X}_{U^p,m}=\bigcup_{0\le h\le d} \bar{X}_{U^p,m}^{[h]}$ de dimension pure $h\in\llbracket 0,d\rrbracket$. L'espace $\bar{X}_{U^p,m}^{[h]}$ est la clôture de l'ensemble des points fermés $s$ où la hauteur étale de ${\mathcal{G}}_{A,s}$ est inférieure ou égale à $h$ (cf p. 111 dans \cite[Corollary III.4.4]{harrtay}). Chacun de ces espaces admet un recouvrement $\bar{X}_{U^p,m}^{[h]}=\bigcup_M \bar{X}_{U^p,m,M}$ où $M$ parcourt les sous-espaces ${\mathbb{F}}$-rationnels de ${\mathbb{P}}_{{\mathbb{F}}}^d$ de dimension $h$ \cite[3.2 (see Remark 10(2))]{mant}. Nous allons maintenant étendre les scalaires à $\spec({\mathcal{O}}_{\breve{K}})$ et noter ${{\rm Sh}}_{}:={X}_{U^p,m}\otimes {\mathcal{O}}_{\breve{K}}$, $\bar{{\rm Sh}}_{}:=\bar{X}_{U^p,m}\otimes \bar{{\mathbb{F}}}$, $\bar{{\rm Sh}}_{M}:=\bar{X}_{U^p,m,M}\otimes \bar{{\mathbb{F}}}$ et $\bar{{\rm Sh}}_{}^{[h]}:=\bar{X}_{U^p,m}^{[h]}\otimes \bar{{\mathbb{F}}}$ ainsi que $\hat{{\rm Sh}}$ la complétion de ${\rm Sh}$ le long de la fibre spéciale $\bar{{\rm Sh}}$.
Comme dans la section précédente, on construit une suite de modèles entiers $\hat{\rm Sh}_0=\hat{\rm Sh}, \hat{\rm Sh}_1,\cdots,\hat{\rm Sh}_d$ s'inscrivant dans un diagramme \[\xymatrix{
\hat{\rm Sh}_i \ar[r]^{\tilde{p}_i} \ar@/_0.75cm/[rrrr]_{p_i} & \hat{\rm Sh}_{i-1} \ar[r]^{\tilde{p}_{i-1}} & \cdots \ar[r]^{\tilde{p}_2} & \hat{\rm Sh}_1 \ar[r]^{\tilde{p}_1} & \hat{\rm Sh}_0}\] en réalisant successivement les éclatements admissibles le long des transformés stricts de $\bar{{\rm Sh}}^{[i-1]}$ (cela revient aussi à éclater ${\rm Sh}$ puis à compléter $p$-adiquement chaque modèle intermédiaire). On notera aussi $\bar{{\rm Sh}}_{M,i}$ le transformé strict de $\bar{{\rm Sh}}_{M}$. De même si $\bar{s}$ est un point géométrique fermé centré en $s\in \bar{X}_{U^p,m}^{[0]}$ que l'on voit par changement de base comme un point fermé de $\bar{{\rm Sh}}_{}^{[0]}$, on note $\bar{{\rm Sh}}_{\bar{s},i}:=p_i^{-1}(\bar{s})$. En particulier, $\bar{{\rm Sh}}_{\bar{s},1}$ s'identifie à un espace projectif ${\mathbb{P}}^d_{\bar{{\mathbb{F}}}}$. Le théorème suivant relie les constructions relatives au premier revêtement de la tour de Lubin-Tate à celles de la variété de Shimura.
\begin{theo}[Harris-Taylor,Yoshida]\label{theomodeleshltalg}
Soit $\bar{s}$ un point fermé géométrique centré en $s\in\bar{X}^{[0]}_{U^p,m}$ vu comme un point de ${\rm Sh}$.
\begin{enumerate}
\item (\cite[Lemma III.4.1]{harrtay}) On a un isomorphisme \[Z_0\cong \spec(\hat{{\mathscr{O}}}_{{\rm Sh},\bar{s}})\cong \spec(\hat{{\mathscr{O}}}_{\hat{\rm Sh},\bar{s}})\] Il en résulte un morphisme $Z_0\to \hat{\rm Sh}$.
\item (\cite[Lemma 4.4]{yosh}) Via cette application $Z_0\to \hat{\rm Sh}$, on a des isomorphismes \[Y_M\cong {\bar{\rm Sh}}_M \times_{\hat{\rm Sh}}Z_0\text{ et } Y^{[h]}\cong \bar{{\rm Sh}}^{[h]}\times_{\hat{\rm Sh}} Z_0\]
\item (\cite[Lemma 4.6]{yosh}) Pour tout $i\le d$, \[Z_i\cong {\hat{\rm Sh}}_i\times_{\hat{\rm Sh}} Z_0 \text{ et } Y_{\{0\},i}\cong {\bar{\rm Sh}}_{\bar{s},i}\times_{\bar{\rm Sh}_i} Z_i\] De plus, il existe un voisinage étale de $Z_d$ vu comme un fermé de ${\rm Sh}_d$ de réduction semi-stable.
\item $\bar{\rm Sh}_{\bar{s},d}$ est une composante irréductible de $\bar{{\rm Sh}}_d$ et les autres composantes rencontrent $\bar{\rm Sh}_{\bar{s},d}$ exactement en les espaces $\bar{\rm Sh}_{\bar{s},d}\cap \bar{\rm Sh}_{M,d}$ où $M$ est un hyperplan ${\mathbb{F}}$-rationnel de ${\mathbb{P}}^d_{zar,\bar{{\mathbb{F}}}}$.
\item Le tube $]\bar{{\rm Sh}}_{\bar{s},d}^{lisse}[_{\hat{{\rm Sh}}_d}\otimes \breve{K}(\varpi_N)\subset \lt^1 \otimes \breve{K}(\varpi_N)$ au dessus du lieu lisse $\bar{\rm Sh}_{\bar{s},d}^{lisse}:=\bar{\rm Sh}_{\bar{s},d}\backslash \bigcup_Y Y$ (où $Y$ parcourt les composantes irréductibles de $\bar{\rm Sh}_d$ différentes de $\bar{{\rm Sh}}_{\bar{s},d}$) admet un modèle lisse isomorphe à la variété de Deligne-Lusztig $\dl^d_{\bar{{\mathbb{F}}}}$.
\end{enumerate}
\end{theo}
\begin{proof}
Le premier point a été prouvé dans \cite[Lemma III.4.1]{harrtay}. Le premier isomorphisme du point 2. s'obtient en explicitant l'isomorphisme du point 1. et en comparant la définition de ${\rm Sh}_M$ \cite[Lemma 4.4]{yosh}. L'autre isomorphisme du point 2. et ceux du point 3. s'en déduisent par combinatoire et compatibilité du procédé d'éclatement \cite[Lemma 4.6]{yosh} et de complétion $p$-adique. La dernière assertion du troisième point découle de l'argument technique de la preuve de \cite[Proposition 4.8 (i)]{yosh}. Le point 4. résulte de la description de la fibre spéciale de $Z_d$ réalisée dans \ref{TheoZH}. Le dernier point découle de 4. et de \ref{TheoZH} 3. et 4.
\end{proof}
\section{Cohomologie des vari\'et\'es de Deligne-Lusztig\label{ssectiondl}}
Considérons la variété
$$\Omega^d_{{\mathbb{F}}}:={\mathbb{P}}_{{\mathbb{F}}}^d \backslash \bigcup_H H,$$ o\`u $H$ parcourt l'ensemble des hyperplans ${\mathbb{F}}_q$-rationnels. Elle admet une action naturelle de $\gln_{d+1}({\mathbb{F}})$ et un revêtement fini étale $\gln_{d+1}({\mathbb{F}})$-équivariant \[\dl_{{\mathbb{F}}}^d := \{x\in {\mathbb{A}}^{d+1}_{{\mathbb{F}}_q}\backslash \{0\} \mid \prod_{a\in {\mathbb{F}}_q^{d+1}\backslash \{0\} } (a_0x_0+\cdots+a_dx_d)=(-1)^d \}\] de groupe de Galois ${\mathbb{F}}_{q^{d+1}}^*$ via $\zeta\cdot (x_0,\cdots,x_d)=(\zeta x_0,\cdots, \zeta x_d)$.
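Pour le confort du lecteur, explicitons la structure de revêtement par un petit calcul (qui n'ajoute rien de nouveau) : en posant $N=q^{d+1}-1$ et $u(x)=\prod_{a\in {\mathbb{F}}_q^{d+1}\backslash\{0\}}(a_0x_0+\cdots+a_dx_d)$, on a
\[
u(tx)=t^{N}\,u(x)
\]
pour tout scalaire $t$, puisque $u$ est un produit de $N$ formes linéaires. Les fibres de la projection $\dl^d_{{\mathbb{F}}}\to \Omega^d_{{\mathbb{F}}}$, $x\mapsto[x_0:\cdots:x_d]$, sont donc des torseurs sous $\mu_N(\bar{{\mathbb{F}}})={\mathbb{F}}_{q^{d+1}}^*$, ce qui redonne le groupe de Galois annoncé.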
On note $\dl^d_{\bar{{\mathbb{F}}}}$ l'extension des scalaires de $\dl_{{\mathbb{F}}}^d$ \`a $\bar{{\mathbb{F}}}$. Soit $l\ne p$ un nombre premier, l'intérêt principal de cette construction est l'étude de la partie cuspidale
de la cohomologie $l$-adique à support compact de $\dl^d_{\bar{{\mathbb{F}}}_q}$ dont nous allons rappeler la description.
Soit $\theta: {\mathbb{F}}_{q^{d+1}}^*\to \overline{\mathbb{Q}}_l^*$ un caractère. Si $M$ est un $ \overline{\mathbb{Q}}_l[{\mathbb{F}}_{q^{d+1}}^*]$-module on note
$$M[\theta]={\rm Hom}_{{\mathbb{F}}_{q^{d+1}}^*}(\theta, M).$$
On dit que le caractère $\theta$ est
\emph{primitif}
s'il ne se factorise pas par la norme ${\mathbb{F}}_{q^{d+1}}^*\to {\mathbb{F}}_{q^e}^*$ pour tout diviseur propre $e$ de $d+1$.
Si $\pi$ est une représentation de $\gln_{d+1}( {\mathbb{F}}_q)$, on dit que $\pi$ est \emph{cuspidale} si
$\pi^{N({\mathbb{F}}_q)}=0$ pour tout radical unipotent $N$ d'un parabolique propre de $\gln_{d+1}$.
La théorie de Deligne-Lusztig (ou celle de Green dans notre cas particulier) fournit:
\begin{theo}\label{DLet}
Soit $\theta: {\mathbb{F}}_{q^{d+1}}^*\to \overline{\mathbb{Q}}_l^*$ un caractère.
a) Si $\theta$ est primitif, alors
$\hetc{i}(\dl_{\bar{{\mathbb{F}}}_q}^d, \bar{{\mathbb{Q}}}_l)$ est nul pour $i\ne d$ et
$$\bar{\pi}_{\theta,l}:=\hetc{d}(\dl_{\bar{{\mathbb{F}}}_q}^d, \bar{{\mathbb{Q}}}_l)[\theta]$$
est une $\gln_{d+1}( {\mathbb{F}}_q)$-représentation irréductible, cuspidale, de dimension $(q-1)(q^2-1) \dots (q^d-1)$. Toutes les repr\'esentations cuspidales sont ainsi obtenues.
b) Si $\theta$ n'est pas primitif, aucune repr\'esentation cuspidale n'intervient dans $\oplus_{i}\hetc{i}(\dl_{\bar{{\mathbb{F}}}_q}^d, \bar{{\mathbb{Q}}}_l)[\theta]$.
\end{theo}
\begin{proof}
Voir \cite[cor. 6.3]{DL}, \cite[th. 7.3]{DL}, \cite[prop. 7.4]{DL}, \cite[prop. 8.3]{DL}, \cite[cor. 9.9]{DL}, \cite[Proposition 6.8.(ii) et remarques]{yosh} pour ces résultats classiques.
\end{proof}
Ainsi, la partie cuspidale $\hetc{*}(\dl_{\bar{{\mathbb{F}}}_q}^d, \bar{{\mathbb{Q}}}_l)_{\rm cusp}$ de $\oplus_{i} \hetc{i}(\dl_{\bar{{\mathbb{F}}}_q}^d, \bar{{\mathbb{Q}}}_l)$ est concentrée en degré $d$, où elle est donnée par $\oplus_{\theta} \bar{\pi}_{\theta,l}\otimes\theta$, la somme directe portant sur tous les caractères primitifs.
\begin{rem} (voir \cite[6.3]{DL}) Soit $N=q^{d+1}-1$ et fixons des isomorphismes ${\mathbb{F}}_{q^{d+1}}^*\simeq {\mathbb{Z}}/N{\mathbb{Z}}$ et
$({\mathbb{Z}}/N{\mathbb{Z}})^{\vee}\simeq {\mathbb{Z}}/N{\mathbb{Z}}$.
Soient $\theta_{j_1}$ et $\theta_{j_2}$ deux caract\`eres primitifs vus comme des \'el\'ements $j_1$, $j_2$ de ${\mathbb{Z}}/N{\mathbb{Z}}$ ; les repr\'esentations $\bar{\pi}_{\theta_{j_1}}$ et $\bar{\pi}_{\theta_{j_2}}$ sont isomorphes si et seulement s'il existe un entier $n$ tel que $j_1= q^n j_2$ dans ${\mathbb{Z}}/N{\mathbb{Z}}$.
\end{rem}
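Donnons un exemple de ce comptage dans le cas $d=1$ (vérification classique, non utilisée dans la suite) : les caractères non primitifs de ${\mathbb{F}}_{q^{2}}^*$ sont ceux qui se factorisent par la norme, soit $q-1$ caractères ; il reste donc $q^{2}-q$ caractères primitifs, répartis en orbites de cardinal $2$ sous $\theta\mapsto\theta^{q}$. On retrouve ainsi
\[
\frac{q(q-1)}{2}
\]
représentations cuspidales de $\gln_2({\mathbb{F}}_q)$, chacune de dimension $q-1$, en accord avec le théorème \ref{DLet} et la remarque précédente.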
Nous aurons besoin d'un analogue des résultats précédents pour
la cohomologie rigide. Cela a été fait par Grosse-Klönne dans \cite{GK5}.
Si $\theta: {\mathbb{F}}^*_{q^{d+1}}\to \bar{K}^*$ est un caractère, posons
$$\bar{\pi}_{\theta}=\hrigc{*}(\dl_{{\mathbb{F}}_q}^d/ \bar{K})[ \theta]:=\bigoplus_{i}\hrigc{i}(\dl_{{\mathbb{F}}_q}^d/ \bar{K})[ \theta],$$
où
$$\hrigc{i}(\dl_{{\mathbb{F}}_q}^d/ \bar{K}):=\hrigc{i}(\dl_{{\mathbb{F}}_q}^d)\otimes_{W({\mathbb{F}}_q)[1/p]} \bar{K}$$
et où $M[\theta]$ désigne comme avant la composante $\theta$-isotypique de $M$.
\begin{theo}\label{theodlpith}
Fixons un premier $l\ne p$ et un isomorphisme $\bar{K} \cong \bar{{\mathbb{Q}}}_l$. Si $\theta$ est un caract\`ere primitif, alors $$\bar{\pi}_{\theta}:=\hrigc{d}(\dl_{{\mathbb{F}}_q}^d/ \bar{K})[ \theta]$$ est isomorphe en tant que
$\gln_{d+1}({\mathbb{F}}_q)$-module à $\bar{\pi}_{\theta,l}$, en particulier c'est une représentation irréductible cuspidale.
\end{theo}
\begin{proof} Cela se fait en trois étapes, cf. \cite[4.5]{GK5}. Dans un premier temps, on montre \cite[3.1]{GK5} que
les $\bar{K}[\gln_{d+1}({\mathbb{F}}_q) \times {\mathbb{F}}^*_{q^{d+1}}]$-modules virtuels
\[ \sum_i (-1)^i \hetc{i}(\dl_{\bar{{\mathbb{F}}}_q}^d, \bar{{\mathbb{Q}}}_l) \text{ et } \sum_i (-1)^i \hrigc{i}(\dl_{{\mathbb{F}}_q}^d/ \bar{K}) \]
co\"incident.
Il s'agit d'une comparaison standard des formules des traces de Lefschetz en
cohomologies étale $l$-adique et rigide. Dans un deuxième temps (et c'est bien la partie délicate du résultat)
on montre que $\bigoplus_{i}\hrigc{i}(\dl_{{\mathbb{F}}_q}^d/ \bar{K})[ \theta]$ est bien concentré en degré $d$, cf. \cite[th. 2.3]{GK5}. On peut alors conclure en utilisant le théorème \ref{DLet}.
\end{proof}
\section{Automorphismes équivariants des variétés de Deligne-Lusztig}
\begin{lem}\label{lemautdl}
On a
$\aut_{\gln_{d+1}({\mathbb{F}})}(\Omega^d_{\bar{{\mathbb{F}}}})= \{ 1 \}$.
\end{lem}
\begin{proof}
On a ${\mathscr{O}}(\Omega^d_{\bar{{\mathbb{F}}}})= \bar{{\mathbb{F}}} [X_1, \dots, X_d, \frac{1}{\prod_{a \in {\mathbb{P}}^d({\mathbb{F}})} l_a(1,X)}]$. Soit $\psi$ un automorphisme $\gln_{d+1}({\mathbb{F}})$-\'equivariant, $\psi$ est d\'etermin\'e par l'image de $X_i$ pour tout $i$ qui sont des \'el\'ements inversibles et qui s'\'ecrivent en fractions irr\'eductibles de la forme
\[ \psi(X_i)=\frac{P_i(X)}{Q_i(X)} = \lambda_i\prod_{a \in {\mathbb{P}}^d({\mathbb{F}})} l_a(1,X)^{\beta_{a,i}}, \ \ \ \beta_{a,i}\in {\mathbb{Z}}.\]
On veut montrer que $\psi(X_i)=X_i$ pour tout $i$.
\begin{itemize}
\item Prenons $\sigma : \left\llbracket 1,d \right\rrbracket\to \left\llbracket 1,d \right\rrbracket$ une permutation, on trouve $g_{\sigma} \in \gln_{d+1}({\mathbb{F}})$ tel que $g_{\sigma}.X_i=X_{\sigma(i)}$. La relation $g_{\sigma}.\psi(X_i)=\psi(g_{\sigma}.X_i)$ impose l'\'egalit\'e
\[ \frac{P_{\sigma(i)}(X)}{Q_{\sigma(i)}(X)}= \frac{P_i(\sigma(X))}{Q_i(\sigma(X))}. \]
Nous nous int\'eressons uniquement \`a $\frac{P_1(X)}{Q_1(X)}$. Nous supposons $d \ge 2$ et nous voulons nous ramener au cas $d=1$.
\item On consid\`ere $\stab(X_1) \subset \gln_{d+1}({\mathbb{F}})$. Par \'equivariance de $\psi$, on voit que $\frac{P_1}{Q_1}$ est fix\'e par $\stab(X_1)$. Introduisons $U_1= \bigcup_{i \neq 0,1} D^+(z_i^*) \subset {\mathbb{P}}^d({\mathbb{F}})$ avec $(z_i^*)_{0\le i\le d}$ la base duale de ${\mathbb{F}}^{d+1}$ (on a identifi\'e ${\mathbb{P}}^d({\mathbb{F}})$ avec ${\mathbb{P}}(({\mathbb{F}}^{d+1})^*)$). Sous ces choix, on a $X_i=\frac{z_i^*}{z_0^*}$ pour $1\le i\le d$. On v\'erifie alors que $l_a(1,X)$ est un polynôme en $X_1$ si et seulement si $a \notin U_1$. On a alors une \'ecriture unique en fractions irr\'eductibles :
\[ \frac{P_1(X)}{Q_1(X)}= \frac{\widetilde{P}_1(X_1)}{\widetilde{Q}_1(X_1)} \prod_{a \in U_1}l_a(1,X)^{\beta_{a,1}}.\]
Comme $\stab(X_1)$ agit transitivement \footnote{Soit $a$, $b$ dans $U_1$. Les familles $\{ (1,0, \dots,0), (0,1,0, \dots, 0), a \}$ et $\{ (1,0, \dots,0), (0,1,0, \dots, 0), b \}$ sont libres et on peut trouver un endomorphisme qui fixe $(1,0, \dots, 0)$, $(0,1,0, \dots, 0)$ et qui envoie $a$ sur $b$.} sur $U_1$ et laisse stable $\frac{\widetilde{P}_1(X_1)}{\widetilde{Q}_1(X_1)}$, on obtient l'\'ecriture
\[ \frac{P_1(X)}{Q_1(X)}= \frac{\widetilde{P}_1(X_1)}{\widetilde{Q}_1(X_1)} (\prod_{a \in U_1} l_a(1,X))^n \] ie. $\beta_{a,1}$ ne dépend pas de $a$ lorsque $a\in U_1$.
\item Soit $c$ dans ${\mathbb{F}}^*$, il existe $g_c \in \gln_{d+1}({\mathbb{F}})$ tel que $g_c(X_1)=X_1+c$ et $g_c(X_i)=X_i$ pour $i \neq 1$. $g_c$ laisse stable $U_1$ et on a ainsi, $g_c.(\prod_{a \in U_1} l_a(1,X))^n=(\prod_{a \in U_1} l_a(1,X))^n$. La relation $\psi(g_c.X_1)= g_c.\psi(X_1)$ impose l'\'egalit\'e
\[ c+ \frac{\widetilde{P}_1(X_1)}{\widetilde{Q}_1(X_1)} (\prod_{a \in U_1} l_a(1,X))^n = \frac{\widetilde{P}_1(X_1+c)}{\widetilde{Q}_1(X_1+c)} (\prod_{a \in U_1} l_a(1,X))^n. \]
D'o\`u
\[c.( \prod_{a \in U_1} l_a(1,X))^{-n} =\frac{\widetilde{P}_1(X_1+c)}{\widetilde{Q}_1(X_1+c)}-\frac{\widetilde{P}_1(X_1)}{\widetilde{Q}_1(X_1)} \in \bar{{\mathbb{F}}}(X_1). \]
Ainsi, $n=0$ car $\prod_{a \in U_1} l_a(1,X) \notin \bar{{\mathbb{F}}}(X_1)$ et $c\neq 0$. D'où,
\[ \psi(X_1) \in \bar{{\mathbb{F}}}[X_1, \frac{1}{\prod_{a \in {\mathbb{F}}} (X_1-a)}]^*=\bar{{\mathbb{F}}}^* \times \prod_{a \in {\mathbb{F}}} (X_1-a)^{{\mathbb{Z}}}. \]
\item On s'est ramen\'e \`a $d=1$ et on \'ecrit $X=X_1$. On a l'\'ecriture en fractions rationnelles irr\'eductibles $\psi(X)= \frac{\widetilde{P}_1(X)}{\widetilde{Q}_1(X)}$ dont tous les p\^oles et z\'eros sont dans ${\mathbb{F}}$. De plus, pour tout $c$, d'apr\`es le point pr\'ec\'edent, on a
\[ c+\frac{\widetilde{P}_1(X)}{\widetilde{Q}_1(X)}= \frac{\widetilde{P}_1(X+c)}{\widetilde{Q}_1(X+c)}. \]
Comme $c\widetilde{Q}_1+\widetilde{P}_1$ est encore premier à $\widetilde{Q}_1$, on obtient $\widetilde{Q}_1(X+c) = \widetilde{Q}_1(X)$ pour tout $c$ par unicité de l'\'ecriture en fractions rationnelles irr\'eductibles. Ainsi, $\widetilde{Q}_1(X)= (\prod_{a \in {\mathbb{F}}}(X-a))^k$ car les racines de $\widetilde{Q}_1(X)$ vues dans $\bar{{\mathbb{F}}}$ sont contenues dans ${\mathbb{F}}$ et ${\mathbb{F}}$ agit transitivement dessus par translation. Supposons $k \neq 0$, il existe $c$ tel que $c \widetilde{Q}_1(X) + \widetilde{P}_1(X)$ est non-constant\footnote{Si ce n'est pas le cas, $\widetilde{Q}_1$ et $\widetilde{P}_1$ sont tous deux constants ce qui contredit la bijectivité de $\psi$.} et admet donc une racine dans $\bar{{\mathbb{F}}}$ et donc dans ${\mathbb{F}}$. Cette dernière ne peut être une racine de $\widetilde{Q}_1$ par primalité, ce qui impose $k=0$ et $\widetilde{Q}_1$ est constante
\item On a montr\'e que $\psi(X)=P(X) \in {\mathbb{F}}[X]$. Soit une matrice $g$ telle que $g.X= \frac{1}{X}$, comme $\psi(g.X)=g. \psi(X)$, on a\footnote{Notons que $P(\frac{1}{X})$ signifie que l'on a évalué $P$ en $\frac{1}{X}$, i.e. $P(\frac{1}{X})=\sum_i a_i\frac{1}{X^i}$ si $P(X)=\sum_i a_i X^i$.}
\[\frac{1}{P(X)}=P(\frac{1}{X}).\]
Les z\'eros de $P(X)$ sont les p\^oles de $P(\frac{1}{X})$ qui sont réduits au singleton $\{ 0 \}$ d'o\`u $P(X)=\lambda X^n$ pour un certain $n$. Comme $\psi$ est une bijection, $n=1$ et $P=\lambda X$.
\item D'après ce qui précède, $P(X+c)=P(X)+c$ pour tout $c\in {\mathbb{F}}$ d'où $\lambda c=c$ et $\lambda=1$. Ainsi, $\psi=\id$!
\end{itemize}
\end{proof}
\section{Cohomologie de De Rham du premier rev\^etement\label{sssectionlt1dr}}
Nous sommes maintenant en mesure d'énoncer et de prouver le résultat technique principal de cet article.
\begin{theo}\label{theolt1dr}
On a un isomorphisme $\gln_{d+1}({\mathcal{O}}_K)$-équivariant :
\[\hdr{*} (\lt^1/\breve{K}_N)\cong \hrig{*} (\dl^d_{\bar{{\mathbb{F}}}}/\breve{K}_N)\] Par dualité de Poincaré, on a un isomorphisme semblable pour les cohomologies à support compact.
\end{theo}
\begin{proof}
Comme dans l'énoncé du théorème \ref{theomodeleshltalg}, on fixe un point géométrique $\bar{s}$ de $\bar{\rm Sh}^{[0]}$ ainsi qu'une identification $Z_0\cong \spf (\hat{{\mathscr{O}}}_{{\rm Sh},\bar{s}})$. On se donne de plus un schéma formel $p$-adique $V$ de réduction semi-stable, voisinage de $Z_d$ dans $\hat{\rm Sh}_d$ (cf. \ref{theomodeleshltalg} 3.), et on note $U^{lisse}$ le modèle lisse de $]\bar{\rm Sh}_{d,\bar{s}}^{lisse}[_V\otimes \breve{K}_N$ construit dans \ref{theomodeleshltalg} 4. et $U^{lisse}_s$ sa fibre spéciale. D'après \ref{theomodeleshltalg} 1., 2., on a une suite d'isomorphismes \[\lt^1\cong]\bar{s}[_{\hat{\rm Sh}}\cong ]p_d^{-1}(\bar{s})[_{\rm \hat{Sh}_d} \cong ]\bar{\rm Sh}_{d,\bar{s}}[_{\hat{\rm Sh}_d}\cong ]\bar{\rm Sh}_{d,\bar{s}}[_V\] avec $p_d$ la flèche ${\rm \hat{Sh}_d} \rightarrow \rm{ \hat{Sh} }_0$ obtenue par éclatement. L'identité $]p_d^{-1}(\bar{s})[_{\hat{\rm Sh}_d}= ]\bar{\rm Sh}_{d,\bar{s}}[_{\hat{\rm Sh}_d}$ découle de \ref{lemirrzh} 5. On peut alors appliquer le théorème \ref{theoexcision} dans $V$, qui est $p$-adique de réduction semi-stable, et obtenir la suite d'isomorphismes \[\hdr{*} (\lt^1\otimes \breve{K}_N)\cong \hdr{*} (]\bar{\rm Sh}_{d,\bar{s}}[_V\otimes \breve{K}_N)\cong\hdr{*} (]\bar{\rm Sh}_{d,\bar{s}}^{lisse}[_V \otimes \breve{K}_N)\] Ces morphismes naturels se déduisent d'applications de restriction qui sont $\gln_{d+1}({\mathcal{O}}_K)$-équivariantes (l'espace $]\bar{\rm Sh}_{d,\bar{s}}^{lisse}[_V$ est stable pour cette action).
D'après \ref{theomodeleshltalg} 4. et \ref{theopurete}, on a \[\hdr{*}(]\bar{\rm Sh}_{d,\bar{s}}^{lisse}[_V\otimes \breve{K}_N)\cong\hrig{*}(U_s^{lisse}/\breve{K}_N)\cong \hrig{*}(\dl^d_{\bar{{\mathbb{F}}}}/\breve{K}_N).\] Comme l'identification entre les fibres spéciales est $\gln_{d+1}({\mathcal{O}}_K)$-équivariante, le morphisme au niveau des cohomologies l'est aussi.
L'isomorphisme de l'énoncé s'obtient en composant chacune de ces bijections équivariantes intermédiaires.
\end{proof}
\section{Actions de groupes sur la partie lisse\label{sssectionlt1dlacgr}}
Nous notons $N= q^{d+1}-1$ et $K_N=K(\varpi_N)$ o\`u $\varpi_N$ est une racine $N$-i\`eme de $\varpi$. Nous posons également $\Omega^d_{\bar{{\mathbb{F}}}}={\mathbb{P}}^d \setminus \bigcup_{H \in {\mathcal{H}}_1} H=\spec(A)$ et ${\rm DL}^d_{\bar{{\mathbb{F}}}}=\spec(B)$.
L'interprétation modulaire de $Z_0\otimes {\mathcal{O}}_C$ fournit une action naturelle des trois groupes ${\mathcal{O}}_D^*$, $G^{\circ}$ et de $I_K$ sur cet espace. Par naturalité du procédé d'éclatement, ces actions se prolongent à chaque modèle $Z_i$ et les fl\`eches $p_i : Z_i \to Z_0$ sont \'equivariantes. Ces actions ont pour effet de permuter les composantes irréductibles de la fibre spéciale et leurs intersections, à savoir les fermés de la forme $Y_{M,i}$, et ces transformations respectent la dimension des espaces $M$. Ainsi, les trois groupes $G^{\circ}$, ${\mathcal{O}}_D^*$ et $I_K$ laissent stable la composante $Y_{\{0\} ,i}$ et permutent les autres ; l'action de ces groupes se transporte à la composante ouverte (\ref{TheoZH} 3.) $Y_{\{0\} ,d}^{lisse}=\cdots=Y_{\{0\} ,0}^{lisse}$ et donc à la variété de Deligne-Lusztig ${\rm DL}^d_{\bar{{\mathbb{F}}}}$ d'après le théorème \ref{TheoZH} 4. Mais, si l'on fixe des identifications $I_K/ I_{K_N} \cong{\mathbb{F}}_{q^{d+1}}^* \cong {\mathcal{O}}_D^*/(1+ \Pi_D {\mathcal{O}}_D)$ (on rappelle que ${\mathbb{F}}_{q^{d+1}}^* $ est isomorphe à $\gal(\dl_{\bar{{\mathbb{F}}}}^d/ \Omega_{\bar{{\mathbb{F}}}}^d)$), on obtient alors une autre action des trois groupes sur ${\rm DL}^d_{\bar{{\mathbb{F}}}}$.
\begin{theo}\label{theodlactgdw}
Sous le choix d'identification convenable $I_K/ I_{K_N} \cong{\mathbb{F}}_{q^{d+1}}^* \cong {\mathcal{O}}_D^*/1+ \Pi_D {\mathcal{O}}_D$, les différentes actions décrites plus haut de $G^{\circ}$, ${\mathcal{O}}_D^*$ et $I_K$ sur ${\rm DL}^d_{\bar{{\mathbb{F}}}}$ co\"incident.
\end{theo}
\begin{proof}
Pour l'action de $G^{\circ}$, cela d\'ecoule clairement de l'action de $\gln_{d+1}({\mathbb{F}})$ sur les composantes irr\'eductibles $Y_a$ (\ref{TheoZH} 4.). Le mod\`ele lisse dont la fibre sp\'eciale est isomorphe \`a ${\rm DL}^d_{\bar{{\mathbb{F}}}}$ est obtenu en \'etendant les scalaires \`a ${\mathcal{O}}_{\breve{K}_N}$ puis en normalisant dans $\lt^1 \otimes \breve{K}_N$. Cette op\'eration a pour effet de changer les variables $X_0, \dots, X_d$ d\'efinies dans la partie \ref{sssectionltneq} en les variables $\frac{X_0}{\varpi_N}, \dots, \frac{X_d}{\varpi_N}$. Ceci explique le fait que l'action de $I_K$ est triviale sur $I_{K_N}$ et que cette action identifie $I_K/ I_{K_N} \cong \gal({\rm DL}^d_{\bar{{\mathbb{F}}}}/ \Omega^d_{\bar{{\mathbb{F}}}}) \cong {\mathbb{F}}_{q^{d+1}}^*$ (cf \cite[Proposition 5.5. (ii)]{yosh}).
Le point le plus d\'elicat est la description de l'action de ${\mathcal{O}}_D^*$.
Celle-ci commute \`a l'action de $I_K$ qui agit par automorphismes du groupe de Galois $\gal(\dl_{\bar{{\mathbb{F}}}}^d/ \Omega_{\bar{{\mathbb{F}}}}^d)$. Comme $A= {\mathscr{O}}(\Omega^d_{\bar{{\mathbb{F}}}})$ est pr\'ecis\'ement l'ensemble des fonctions invariantes sous ce groupe, ${\mathcal{O}}_D^*$ pr\'eserve $A$. La restriction de cette action \`a $A$ d\'efinit des automorphismes $\gln_{d+1}({\mathbb{F}})$-\'equivariants ; ils sont triviaux sur $A$ par \ref{lemautdl}. Ainsi, l'action étudiée d\'efinit un morphisme ${\mathcal{O}}_D^* \to \gal(\dl_{\bar{{\mathbb{F}}}}^d/ \Omega_{\bar{{\mathbb{F}}}}^d)$ qui est trivial sur $1+ \Pi_D {\mathcal{O}}_D$ car c'est un pro-$p$-groupe
qui s'envoie sur un groupe cyclique d'ordre premier \`a $p$, d'où une flèche $ {\mathcal{O}}_D^*/(1+ \Pi_D {\mathcal{O}}_D) \to \gal(\dl_{\bar{{\mathbb{F}}}}^d/ \Omega_{\bar{{\mathbb{F}}}}^d)$. Il s'agit de voir que c'est un isomorphisme ; par \'egalit\'e des cardinaux, il suffit de montrer que c'est une injection.
D\'eployons $D^*$ dans $\gln_{d+1}(K_{(d+1)})$ avec $K_{(d+1)}$ l'extension non-ramifiée de degré $d+1$. Prenons $b \in {\mathcal{O}}_D^*/(1+ \Pi_D {\mathcal{O}}_D)$ et relevons-le en $\tilde{b} \in {\mathcal{O}}_D^*$ r\'egulier elliptique. En effet, les \'el\'ements r\'eguliers elliptiques forment un ouvert Zariski de ${\mathcal{O}}_D^*$ et sont donc denses pour la topologie $p$-adique. Appelons $\iota(b)$ l'image de $b$ dans $\gal(\dl_{\bar{{\mathbb{F}}}}^d/ \Omega_{\bar{{\mathbb{F}}}}^d)$. La description explicite de la cohomologie des vari\'et\'es de Deligne-Lusztig nous donne\footnote{On a $\tr(\iota(b) | \hetc{\heartsuit}(\dl_{\bar{{\mathbb{F}}}}^d,\bar{{\mathbb{Q}}}_l)[\theta])=\theta(\iota(b))\dim\hetc{\heartsuit}(\dl_{\bar{{\mathbb{F}}}}^d,\bar{{\mathbb{Q}}}_l)[\theta]=\theta(\iota(b))\dim\hetc{\heartsuit}(\Omega_{\bar{{\mathbb{F}}}}^d,\bar{{\mathbb{Q}}}_l)$ pour tout caractère $\theta$. Ainsi, $\tr(\iota(b) | \hetc{\heartsuit}(\dl_{\bar{{\mathbb{F}}}}^d,\bar{{\mathbb{Q}}}_l))=\dim\hetc{\heartsuit}(\Omega_{\bar{{\mathbb{F}}}}^d,\bar{{\mathbb{Q}}}_l)\sum_{\theta}\theta(\iota(b))$ et la dernière somme est nulle ssi $\iota(b)\neq 1$}
\[ \iota(b)=1 \Leftrightarrow \tr(\iota(b) | \hetc{\heartsuit}(\dl_{\bar{{\mathbb{F}}}}^d,\bar{{\mathbb{Q}}}_l)) \neq 0\]
Mais on a la formule de trace \cite[Th\'eor\`eme (3.3.1)]{stra}
\[ \tr(\iota(b) | \hetc{\heartsuit}(\dl_{\bar{{\mathbb{F}}}}^d, \bar{{\mathbb{Q}}}_l)) = \tr(\tilde{b} | \hetc{\heartsuit}(\lt^{1}, \bar{{\mathbb{Q}}}_l))= | \fix(\tilde{b}, \lt^1(C)) | \]
La premi\`ere \'egalit\'e d\'ecoule de l'isomorphisme entre les cohomologies de \cite{yosh}.
Nous expliquons comment calculer le nombre de points fixes sur $\lt^1(C)$ d'un élément $\tilde{b}$ r\'egulier elliptique. Nous avons un morphisme des p\'eriodes analytique ${\mathcal{O}}_D^*$-\'equivariant $\pi_{GH}: \lt^1\to \lt^0 \to {\mathbb{P}}(W)$ \cite{grho} o\`u $W$ s'identifie \`a $C^{d+1}$ et l'action sur $W$ se d\'eduit du choix d'un isomorphisme $D \otimes C \cong \mat_{d+1}(C)$. Ainsi $\tilde{b}$ admet $d+1$ points fixes dans ${\mathbb{P}}(W)$ qui sont les droites propres. Fixons $x \in {\mathbb{P}}(W)$ une de ces droites propres. D'apr\`es \cite[Proposition (2.6.7)(ii)-(iii)]{stra}, il existe alors $g_{\tilde{b}} \in G=\gln_{d+1}(K)$ ayant m\^eme polyn\^ome caract\'eristique que $\tilde{b}$ tel que
\[ \pi_{GH}^{-1}(x) \cong G/G_1\varpi^{{\mathbb{Z}}} \text{ et } \tilde{b}.(hG_1\varpi^{{\mathbb{Z}}}) = (g_{\tilde{b}}h)G_1\varpi^{{\mathbb{Z}}} \]
avec $G_1=1+ \varpi \mat_{d+1}({\mathcal{O}}_K)$. Ainsi $hG_1\varpi^{{\mathbb{Z}}}$ est un point fixe si et seulement si $h^{-1}g_{\tilde{b}}h \in G_1\varpi^{{\mathbb{Z}}}$.
Supposons maintenant $b\in {\mathcal{O}}_D^*/(1+ \Pi_D {\mathcal{O}}_D)$ non-trivial et montrons $| \fix(\tilde{b}, \lt^1(C)) |=0$ pour $\tilde{b}$ un relèvement r\'egulier elliptique.
Soit $x \in {\mathbb{P}}(W)$ une droite propre pour $\tilde{b}$ et soit $g_{\tilde{b}}\in G$ la matrice décrivant l'action de $\tilde{b}$ sur $\pi_{GH}^{-1}(x)$ ; supposons l'existence d'un point fixe $hG_1\varpi^{{\mathbb{Z}}}\in\pi_{GH}^{-1}(x)$ pour $\tilde{b}$. Dans ce cas, $h^{-1}g_{\tilde{b}}h\in G_1\varpi^{{\mathbb{Z}}}$ est diagonalisable. Comme $\tilde{b} \equiv \zeta \pmod{\Pi_D}$, o\`u $\zeta$ est une racine de l'unit\'e diff\'erente de $1$ dans $K_{(d+1)}$, au moins une valeur propre est dans $\zeta+ {\mathfrak{m}}_{C}$\footnote{On voit que le produit de toutes les valeurs propres de $\tilde{b}-\zeta\in \Pi_D{\mathcal{O}}_D$ est $\nr(\tilde{b}-\zeta)\in {\mathfrak{m}}_C$. Ainsi, une de ces valeurs propres doit être dans ${\mathfrak{m}}_C$. }. De m\^eme, une matrice dans $G_1\varpi^{{\mathbb{Z}}}$ diagonalisable a ses valeurs propres dans $\varpi^{{\mathbb{Z}}}(1+ {\mathfrak{m}}_{C})$\footnote{Il s'agit de voir que les matrices diagonalisables $M$ dans $\mat_{d+1}({\mathcal{O}}_K)$ ont des valeurs propres dans ${\mathcal{O}}_C$. Raisonnons sur une extension finie $L$ dans laquelle on peut diagonaliser $M$. Prenons un vecteur propre $v$ de $M$ que l'on suppose unimodulaire quitte à le normaliser, i.e. $v$ a au moins une coordonnée dans ${\mathcal{O}}_L^*$. Par Nakayama topologique, on peut trouver une ${\mathcal{O}}_L$-base du réseau standard ${\mathcal{O}}_L^{d+1}\subset L^{d+1}$ contenant $v$. Comme la matrice $M$ préserve ce réseau, on a $Mv=\lambda v\in{\mathcal{O}}_L^{d+1}$ d'où $\lambda\in{\mathcal{O}}_L$. }. L'\'el\'ement $h^{-1}g_{\tilde{b}}h$ ne peut v\'erifier ces deux conditions en m\^eme temps, ce qui montre qu'il ne peut y avoir de point fixe dans $\pi_{GH}^{-1}(x)$ ni m\^eme dans $\lt^1(C)$. Ainsi, $\tr(\iota(b) | \hetc{\heartsuit}(\dl_{\bar{{\mathbb{F}}}}^d, \bar{{\mathbb{Q}}}_l))=0$ et $\iota$ est injective donc bijective.
\end{proof}
\begin{rem}
On a aussi une donnée de descente sur $\widehat{\lt}^1$ à la Weil (construite dans \cite[(3.48)]{rapzin}) qui induit sur $\dl_{\bar{{\mathbb{F}}}}^d$ la donnée de descente provenant de la forme ${\mathbb{F}}$-rationnelle $\dl_{{\mathbb{F}}}^d$ en suivant l'argument \cite[Lemme (3.1.11)]{wa}.
\end{rem}
D'après la remarque précédente, la structure ${\mathbb{F}}_q$-rationnelle $\dl_{{\mathbb{F}}_q}^d$ de $\dl_{\bar{{\mathbb{F}}}_q}^d$ induit une action du Frobenius $\varphi$ sur $\hetc{*}(\dl_{\bar{{\mathbb{F}}}_q}^d, \bar{{\mathbb{Q}}}_l)$ et $\hrigc{*}(\dl_{{\mathbb{F}}_q}^d/K_0) \otimes_{K_0} \bar{K}$ (cf \cite[2.1 Proposition]{EtSt}
pour ce dernier). De plus, $\varphi^{d+1}$ commute aux actions de $\gln_{d+1}({\mathbb{F}}_q) \times {\mathbb{F}}^*_{q^{d+1}}$ et les $\gln_{d+1}({\mathbb{F}}_q)$-repr\'esentations $\hetc{d}(\dl_{{\mathbb{F}}_q}^d, \bar{{\mathbb{Q}}}_l)[\theta]$ et $\hrigc{*}(\dl_{{\mathbb{F}}_q}^d)[ \theta]$ sont irr\'eductibles pour $\theta$ primitif. Cet op\'erateur agit alors par un scalaire $\lambda_{\theta}$. On a le r\'esultat suivant :
\begin{prop}\label{propdlwk}
$\lambda_{\theta}= (-1)^d q^{\frac{d(d+1)}{2}}$ pour la cohomologie \'etale $l$-adique et pour la cohomologie rigide.
\end{prop}
\begin{proof}
L'argument provient de \cite[section V, 3.14]{DM} \`a quelques twists pr\`es. Nous avons pr\'ef\'er\'e r\'ep\'eter la preuve pour \'eviter toute confusion. Pour simplifier, dans toute la d\'emonstration, $\hhh$ d\'esignera la cohomologie consid\'er\'ee ($l$-adique ou rigide), $\hhh^{\heartsuit}$ la caract\'eristique d'Euler vue comme une repr\'esentation virtuelle, $Y=\dl_{\bar{{\mathbb{F}}}}^d$ et $X=\Omega_{\bar{{\mathbb{F}}}}^d $. Par d\'efinition, on a \[ \lambda_{\theta}= \frac{{\rm Tr}(\varphi^{d+1} | \hhh^{d}(Y)[\theta])}{\dim \hhh^d(Y)[\theta]}= (-1)^d \frac{{\rm Tr}(\varphi^{d+1} | \hhh^{\heartsuit}(Y)[\theta])}{\dim \hhh^\heartsuit(Y)[\theta]}. \]
On dispose du projecteur $p_{\theta}= \frac{1}{N} \sum_{a \in \mu_N} \theta(a^{-1})a$ sur la partie $\theta$-isotypique d'o\`u la suite d'\'egalit\'es
\begin{equation}
\label{eqtrace}
{\rm Tr}(\varphi^{d+1} | \hhh^{\heartsuit}(Y)[\theta])={\rm Tr}(\varphi^{d+1} p_{\theta} | \hhh^{\heartsuit}(Y))= \frac{1}{N} \sum_{a \in \mu_N} \theta(a^{-1}) {\rm Tr}(a\varphi^{d+1} | \hhh^{\heartsuit}(Y)).
\end{equation}
Mais d'apr\`es le th\'eor\`eme des points fixes de Lefschetz (cf \cite[6.2 Théorème]{EtSt} pour la cohomologie rigide), ${\rm Tr}(a\varphi^{d+1} | \hhh^{\heartsuit}(Y))= | Y^{a \varphi^{d+1}} |$. Nous chercherons \`a d\'eterminer ces diff\'erents cardinaux. Remarquons l'\'egalit\'e ensembliste suivante due \`a la $\varphi$-\'equivariance de la projection $\pi : Y \to X$ :
\[ \pi^{-1}(X^{\varphi^{d+1}})= \pi^{-1}(X({\mathbb{F}}_{q^{d+1}}))=\bigcup_{a} Y^{a \varphi^{d+1}} \] où la dernière union est disjointe.
Pour le sens indirect, il suffit d'observer que pour un point ferm\'e $y \in Y^{a \varphi^{d+1}}$, $\pi(y)=\pi(a \varphi^{d+1}(y))=\varphi^{d+1}(\pi(y))$. Pour le sens direct, on raisonne de la m\^eme mani\`ere en se donnant un point ferm\'e $y$ tel que $\pi(y)=\varphi^{d+1}(\pi(y))$. Comme le groupe de Galois du rev\^etement $\pi$ agit librement et transitivement sur les fibres, il existe un unique $a$ dans $\mu_N$ tel que $a \varphi^{d+1}(y)=y$.
Nous allons prouver que $\pi^{-1}(X({\mathbb{F}}_{q^{d+1}}))= Y({\mathbb{F}}_{q^{d+1}})(= |Y^{\varphi^{d+1}}|)$, ce qui montrera l'annulation des autres ensembles de points fixes. Nous venons de montrer l'inclusion indirecte et \'etablissons l'autre inclusion. Prenons $(x_1, x_2, \dots, x_d, t)$ un point de $ \pi^{-1}(X({\mathbb{F}}_{q^{d+1}}))$. Ainsi $(x_1, \dots, x_d)$ appartient \`a ${\mathbb{F}}_{q^{d+1}}^d$ et $t$ v\'erifie l'\'equation\footnote{On voit $Y$ comme le rev\^etement de type Kummer associ\'e \`a $u$.} $t^N-u(x_1, \dots, x_d)=0$ dans une extension finie de ${\mathbb{F}}_{q^{d+1}}$. Mais le polyn\^ome $T^N-u(x_1, \dots, x_d) \in {\mathbb{F}}_{q^{d+1}}[T]$ est scind\'e \`a racines simples dans ${\mathbb{F}}_{q^{d+1}}$ et $(x_1, \dots, x_d,t)$ est ${\mathbb{F}}_{q^{d+1}}$-rationnel. On a montr\'e l'\'egalit\'e voulue et on obtient de plus
\[ | \pi^{-1}(X({\mathbb{F}}_{q^{d+1}})|=|Y({\mathbb{F}}_{q^{d+1}})|= N | X({\mathbb{F}}_{q^{d+1}}) |. \]
It remains to compute this last quantity. Fixing an ${\mathbb{F}}_q$-basis of ${\mathbb{F}}_{q^{d+1}}$, we obtain an isomorphism of ${\mathbb{F}}_q$-vector spaces ${\mathbb{F}}_{q^{d+1}}^{d+1} \xrightarrow{\sim} M_{d+1}({\mathbb{F}}_q)$ (each element of ${\mathbb{F}}_{q^{d+1}}$ being viewed as a column vector). It is easy to see that a vector $x \in {\mathbb{F}}_{q^{d+1}}^{d+1} \backslash \{ 0 \}$ generates a line of $X({\mathbb{F}}_{q^{d+1}})$ if and only if the matrix associated with it by the preceding isomorphism is invertible, whence $|X({\mathbb{F}}_{q^{d+1}})|= \frac{| \gln_{d+1}({\mathbb{F}}_q)|}{N}$.
Substituting into \eqref{eqtrace}, we obtain:
\[ {\rm Tr}(\varphi^{d+1} | \hhh^{\heartsuit}(Y)[\theta]) =\frac{ | \gln_{d+1}({\mathbb{F}}_q) |}{N} = (q^{d+1}-q^d) \dots (q^{d+1}-q) \]
whence $\lambda_{\theta}=(-1)^d q^{\frac{d(d+1)}{2}}$.
\end{proof}
\section{Realization of the local Langlands correspondence}
In this part, we describe the cohomology of the spaces ${\mathcal{M}}_{LT}^1$ and show that it realizes the Jacquet-Langlands correspondence. We extend scalars to $C$ for all the spaces considered on the generic fiber.
We will abbreviate the product $G^{\circ} \times D^* $ (resp. $G^{\circ} \times D^* \times W_K$) as $GD$ (resp. $GDW$). We have
a ``valuation'' $v_{GDW}$ on $GDW$:
\[ v_{GDW} : (g,b,w) \in GDW \mapsto v_K( \nr(b) \art^{-1}(w)) \in {\mathbb{Z}}. \]
For $i=0$ or $i=d+1$, we then introduce $[GDW]_{i}=v_{GDW}^{-1}(i{\mathbb{Z}})$ as well as $[D]_{i}=D^*\cap[GDW]_{i}={\mathcal{O}}_D^*\varpi^{\mathbb{Z}}$ and $[W]_{i}=W_K\cap[GDW]_{i}=I_K(\varphi^{d+1})^{\mathbb{Z}}$.
The space ${\mathcal{M}}_{LT}^1$ (resp. ${\mathcal{M}}_{LT}^1/ \varpi^{{\mathbb{Z}}}$) can be identified, non-canonically, with $\lt^1 \times {\mathbb{Z}}$ (resp. $\lt^1 \times {\mathbb{Z}}/(d+1){\mathbb{Z}}$), and we will then identify $\lt^1$ with $\lt^1 \times \{0 \}$ (where $0$ is viewed in ${\mathbb{Z}}$ or in ${\mathbb{Z}}/(d+1){\mathbb{Z}}$ depending on whether $\lt^1$ is seen as a subspace of ${\mathcal{M}}_{LT}^1$ or of ${\mathcal{M}}_{LT}^1/ \varpi^{{\mathbb{Z}}}$). Each element $(g,b,w)\in GDW$ (resp. $GD$) sends $\lt^1 \times \{i \}$ to $\lt^1 \times \{i+ v_{GDW} (g,b,w)\}$.
We thus obtain an action of $[GDW]_{d+1}$ on $\lt^1$. We then have the relations:
\[ \hetc{i}( {\mathcal{M}}_{LT}^1/ \varpi^{{\mathbb{Z}}},\bar{{\mathbb{Q}}}_l) \underset{GDW}{\cong} \cind^{GDW}_{[GDW]_{d+1}} \hetc{i}( \lt^1,\bar{{\mathbb{Q}}}_l)\cong \hdrc{i}( \lt^1,\bar{{\mathbb{Q}}}_l)^{d+1}. \]
\[ \hdrc{i}( {\mathcal{M}}_{LT}^1/ \varpi^{{\mathbb{Z}}}) \cong \hdrc{i}( \lt^1)^{d+1}. \]
Let us now turn to the representations that will interest us. We first define $C$-representations of $[GDW]_{d+1}$, which we will then extend to $GDW$ by induction. Fix a primitive character $\theta$ of ${\mathbb{F}}_{q^{d+1}}^*$ and isomorphisms ${\mathcal{O}}_D^*/1+ \Pi_D {\mathcal{O}}_D \cong {\mathbb{F}}_{q^{d+1}}^* \cong I_K/I_{K_N}$, where $K_N= K(\varpi^{\frac{1}{N}})$ with $N=q^{d+1}-1$. We set:
\begin{itemize}
\item $\theta$ will be viewed as a $[D]_{d+1}$-representation via $[D]_{d+1}={\mathcal{O}}_D^* \varpi^{{\mathbb{Z}}} \to {\mathcal{O}}_D^* \to {\mathbb{F}}_{q^{d+1}}^*$,
\item $\bar{\pi}_{\theta}$ will be the representation of $\bar{G}=\gln_{d+1}({\mathbb{F}})$ associated with $\theta$ via the Green correspondence. We view it as a $G^{\circ} \varpi^{{\mathbb{Z}}}$-representation via $G^{\circ} \varpi^{{\mathbb{Z}}} \to G^{\circ} \to \bar{G}$,
\item $\tilde{\theta}$ will be the representation of $[W]_{d+1}$ such that $\tilde{\theta}|_{I_K}=\theta$ via $I_K \to I_K/I_{K_N} \xrightarrow{\sim} {\mathbb{F}}_{q^{d+1}}^*$ and $\tilde{\theta}(\varphi^{d+1})=(-1)^d q^{\frac{d(d+1)}{2}}$.
\end{itemize}
By induction, we obtain:
\begin{itemize}
\item a $D^*$-representation $\rho(\theta):= \cind_{[D]_{d+1}}^{D^*} \theta$,
\item a $W_K$-representation $\sigma^{\sharp}(\theta):= \cind_{[W]_{d+1}}^{W_K} \tilde{\theta}$.
\end{itemize}
We wish to prove:
\begin{theo}
Fix an isomorphism $C\cong \bar{{\mathbb{Q}}}_l$. If $\theta$ is a primitive character, then:
\[ \homm_{D^*}(\rho(\theta),\hdrc{i}( ({\mathcal{M}}_{LT}^1/ \varpi^{{\mathbb{Z}}})/C))\underset{G^{\circ}}{\cong} \begin{cases} \bar{\pi}_{\theta}^{d+1} & \text{ if } i=d \\ 0 & \text{ otherwise.} \end{cases} \]
\[ \homm_{D^*}(\rho(\theta),\hetc{i}( ({\mathcal{M}}_{LT}^1/ \varpi^{{\mathbb{Z}}}),\bar{{\mathbb{Q}}}_l)\otimes C)\underset{G^{\circ} \times W_K}{\cong} \begin{cases} \bar{\pi}_{\theta} \otimes \sigma^{\sharp}(\theta) & \text{ if } i=d \\ 0 & \text{ otherwise.} \end{cases} \]
\end{theo}
\begin{rem}
For the second part, the weaker result \[\homm_{I_K}(\tilde{\theta},\hetc{i}( ({\mathcal{M}}_{LT}^1/ \varpi^{{\mathbb{Z}}}),\bar{{\mathbb{Q}}}_l)\otimes C)\underset{G^{\circ} }{\cong} \begin{cases} \bar{\pi}_{\theta}^{d+1} & \text{ if } i=d \\ 0 & \text{ otherwise.} \end{cases}\] was proved in \cite[Proposition 6.14.]{yosh}. The statement we obtain is a direct consequence of this equality together with \ref{theolt1dr} and \ref{theodlactgdw}.
\end{rem}
\begin{proof}
We have already proved the vanishing of the cohomology for $i \neq d$. Let us now consider the case $i=d$. By the preceding discussion, we have $\hdrc{i}( {\mathcal{M}}_{LT}^1/ \varpi^{{\mathbb{Z}}}) \cong \hdrc{i}( \lt^1)^{d+1}$, and the action of $[D]_{d+1}\times G^{\circ}$ respects this product decomposition. We then have $G^{\circ}$-equivariant isomorphisms:
\begin{align*}
\homm_{D^*}(\rho(\theta), \hdrc{d}( ({\mathcal{M}}_{LT}^1/ \varpi^{{\mathbb{Z}}})/C)) & = \homm_{D^*}(\cind_{[D]_{d+1}}^{D^*} \theta, \hdrc{d}( ({\mathcal{M}}_{LT}^1/ \varpi^{{\mathbb{Z}}})/C)) \\
& = \homm_{[D]_{d+1}}(\theta, \hdrc{d}( ({\mathcal{M}}_{LT}^1/ \varpi^{{\mathbb{Z}}})/C)) \\
& = \homm_{{\mathbb{F}}_{q^{d+1}}^*}(\theta, \hrigc{d}( \dl_{\bar{{\mathbb{F}}}}/C))^{d+1} &\text{by \ref{theolt1dr} and \ref{theodlactgdw}} \\
& = \bar{\pi}_{\theta}^{d+1}.
\end{align*}
The same reasoning in \'etale cohomology yields, by \cite[Proposition 6.16.]{yosh} and \ref{theodlactgdw}, \[\homm_{D^*}(\rho(\theta), \hetc{d}( ({\mathcal{M}}_{LT}^1/ \varpi^{{\mathbb{Z}}}),\bar{{\mathbb{Q}}}_l)\otimes C) = \bar{\pi}_{\theta}^{d+1}\] as $G^{\circ}$-representations. More precisely, we have
\begin{align*}
\homm_{D^*}(\rho(\theta), \hetc{d}( ({\mathcal{M}}_{LT}^1/ \varpi^{{\mathbb{Z}}}),\bar{{\mathbb{Q}}}_l)\otimes C) & = \homm_{D^*}(\cind_{[D]_{d+1}}^{D^*} \theta, \cind^{GDW}_{[GDW]_{d+1}} \hetc{d}( \lt^1,\bar{{\mathbb{Q}}}_l)\otimes C) \\
& = \homm_{[D]_{d+1}}(\theta, \cind^{GW}_{[GW]_{d+1}} \hetc{d}( \lt^1,\bar{{\mathbb{Q}}}_l)\otimes C) \\
& = \cind^{GW}_{[GW]_{d+1}} \homm_{{\mathbb{F}}_{q^{d+1}}^*}(\theta, \hetc{d}(\dl_{\bar{{\mathbb{F}}}},\bar{{\mathbb{Q}}}_l)\otimes C ) &\\
& = \cind^{GW}_{[GW]_{d+1}} \hetc{d}(\dl_{\bar{{\mathbb{F}}}},\bar{{\mathbb{Q}}}_l)[\theta]\otimes C=:\tau_{GW}.
\end{align*}
as $GW$-representations.
Moreover, we have
\begin{align*}
\homm_{G^{\circ}}(\bar{\pi}_{\theta}, \tau_{GW}) & = \homm_{G^{\circ}}(\bar{\pi}_{\theta},\cind^{W_K}_{[W]_{d+1}} \hetc{d}(\dl_{\bar{{\mathbb{F}}}},\bar{{\mathbb{Q}}}_l)[\theta]\otimes C) \\
&= \cind^{W_K}_{[W]_{d+1}} \homm_{G^{\circ}}(\bar{\pi}_{\theta}, \hetc{d}(\dl_{\bar{{\mathbb{F}}}},\bar{{\mathbb{Q}}}_l)[\theta]\otimes C) \\
&= \cind^{W_K}_{[W]_{d+1}} \tilde{\theta} = \sigma^{\sharp}(\theta) &\text{ by \ref{theodlactgdw},}
\end{align*}
as $W_K$-representations, which concludes the proof.
\end{proof}
\nocite{J1}\nocite{J2}
\bibliographystyle{alpha}
\section{Sports competitions}
\subsection{Formula One instances}
\label{Attributes of the formula one instances}
\begin{scriptsize}
\begin{center}
\textbf{Properties of the Formula One instances}\\
\vspace*{1.2ex}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c}
season & \#votes & \#candi- & \#dirty & \%dirty & \#maj. & \%maj. & min & max & max & average & red. \\
& & dates & pairs & pairs & pairs & pairs & score & score & range & kt-dist & cands. \\
\hline
1961 & 8 & 9 & 36 & 100.00 & 13 & 36.11 & 96 & 192 & 8 & 15.96 & 0 \\
1962 & 9 & 8 & 28 & 100.00 & 8 & 28.57 & 79 & 173 & 7 & 11.81 & 0 \\
1963 & 10 & 9 & 34 & 94.44 & 12 & 33.33 & 122 & 238 & 8 & 15.20 & 1 \\
1964 & 10 & 9 & 36 & 100.00 & 10 & 27.78 & 143 & 217 & 8 & 17.44 & 0 \\
1965 & 10 & 7 & 19 & 90.48 & 13 & 61.90 & 56 & 154 & 6 & 7.16 & 0 \\
1966 & 9 & 8 & 28 & 100.00 & 7 & 25.00 & 88 & 164 & 7 & 12.58 & 0 \\
1967 & 11 & 8 & 28 & 100.00 & 13 & 46.43 & 97 & 211 & 7 & 11.53 & 0 \\
1968 & 12 & 8 & 28 & 100.00 & 10 & 35.71 & 111 & 225 & 7 & 12.08 & 0 \\
1969 & 11 & 6 & 15 & 100.00 & 4 & 26.67 & 61 & 104 & 5 & 6.47 & 0 \\
1970 & 13 & 10 & 45 & 100.00 & 15 & 33.33 & 216 & 369 & 9 & 20.85 & 0 \\
1971 & 11 & 11 & 55 & 100.00 & 21 & 38.18 & 209 & 396 & 10 & 25.05 & 0 \\
1972 & 12 & 11 & 55 & 100.00 & 23 & 41.82 & 204 & 456 & 10 & 23.08 & 0 \\
1973 & 15 & 12 & 66 & 100.00 & 30 & 45.45 & 306 & 684 & 11 & 27.59 & 0 \\
1974 & 15 & 14 & 91 & 100.00 & 35 & 38.46 & 457 & 908 & 13 & 40.35 & 0 \\
1975 & 14 & 13 & 78 & 100.00 & 31 & 39.74 & 371 & 721 & 12 & 34.82 & 0 \\
1976 & 16 & 13 & 78 & 100.00 & 39 & 50.00 & 410 & 838 & 12 & 33.62 & 0 \\
1977 & 17 & 13 & 78 & 100.00 & 24 & 30.77 & 475 & 851 & 12 & 35.74 & 0 \\
1978 & 16 & 16 & 117 & 97.50 & 69 & 57.50 & 597 & 1323 & 15 & 49.77 & 0 \\
1979 & 15 & 19 & 168 & 98.25 & 61 & 35.67 & 823 & 1742 & 18 & 73.13 & 0 \\
1980 & 14 & 19 & 164 & 95.91 & 91 & 53.22 & 712 & 1682 & 17 & 69.09 & 1 \\
1981 & 15 & 19 & 167 & 97.66 & 80 & 46.78 & 767 & 1798 & 18 & 69.44 & 1 \\
1982 & 16 & 9 & 35 & 97.22 & 21 & 58.33 & 178 & 398 & 8 & 14.33 & 0 \\
1983 & 15 & 24 & 273 & 98.91 & 117 & 42.39 & 1282 & 2858 & 23 & 116.88 & 0 \\
1984 & 16 & 19 & 170 & 99.42 & 89 & 52.05 & 886 & 1850 & 18 & 74.28 & 0 \\
1985 & 16 & 14 & 91 & 100.00 & 53 & 58.24 & 458 & 998 & 13 & 38.45 & 0 \\
1986 & 16 & 21 & 207 & 98.57 & 136 & 64.76 & 965 & 2395 & 20 & 83.89 & 0 \\
1987 & 16 & 21 & 209 & 99.52 & 121 & 57.62 & 1026 & 2334 & 20 & 87.08 & 0 \\
1988 & 16 & 28 & 357 & 94.44 & 252 & 66.67 & 1568 & 4480 & 24 & 137.85 & 0 \\
1989 & 16 & 26 & 289 & 88.92 & 223 & 68.62 & 1285 & 3915 & 24 & 111.34 & 0 \\
1990 & 16 & 24 & 249 & 90.22 & 194 & 70.29 & 1090 & 3326 & 22 & 96.28 & 0 \\
1991 & 16 & 24 & 262 & 94.93 & 173 & 62.68 & 1178 & 3238 & 21 & 100.60 & 0 \\
1992 & 16 & 22 & 229 & 99.13 & 130 & 56.28 & 1141 & 2555 & 21 & 97.60 & 1 \\
1993 & 16 & 18 & 151 & 98.69 & 79 & 51.63 & 775 & 1673 & 17 & 64.68 & 0 \\
1994 & 16 & 16 & 113 & 94.17 & 62 & 51.67 & 558 & 1362 & 15 & 45.33 & 0 \\
1995 & 17 & 16 & 120 & 100.00 & 72 & 60.00 & 611 & 1429 & 15 & 49.40 & 0 \\
1996 & 16 & 19 & 171 & 100.00 & 106 & 61.99 & 834 & 1902 & 18 & 71.78 & 0 \\
1997 & 17 & 18 & 153 & 100.00 & 80 & 52.29 & 849 & 1752 & 17 & 67.49 & 0 \\
1998 & 16 & 21 & 206 & 98.10 & 146 & 69.52 & 889 & 2471 & 20 & 78.53 & 0 \\
1999 & 16 & 19 & 167 & 97.66 & 93 & 54.39 & 847 & 1889 & 18 & 70.19 & 0 \\
2000 & 17 & 22 & 230 & 99.57 & 124 & 53.68 & 1170 & 2757 & 21 & 94.29 & 0 \\
2001 & 17 & 18 & 152 & 99.35 & 67 & 43.79 & 819 & 1782 & 17 & 64.22 & 1 \\
2002 & 17 & 18 & 140 & 91.50 & 88 & 57.52 & 751 & 1850 & 17 & 59.68 & 1 \\
2003 & 16 & 16 & 118 & 98.33 & 79 & 65.83 & 583 & 1337 & 15 & 49.47 & 0 \\
2004 & 18 & 15 & 101 & 96.19 & 73 & 69.52 & 425 & 1465 & 14 & 33.36 & 2 \\
2005 & 19 & 13 & 78 & 100.00 & 58 & 74.36 & 394 & 1088 & 12 & 29.05 & 0 \\
2006 & 18 & 18 & 152 & 99.35 & 100 & 65.36 & 682 & 2072 & 17 & 54.41 & 0 \\
2007 & 17 & 18 & 149 & 97.39 & 108 & 70.59 & 602 & 1999 & 17 & 49.91 & 0 \\
2008 & 18 & 20 & 182 & 95.79 & 111 & 58.42 & 923 & 2497 & 19 & 71.47 & 0 \\
\end{tabular}
\end{center}
\end{scriptsize}
\newpage
\begin{scriptsize}
\begin{center}
\textbf{Running times (in seconds) for the Formula One instances}\\
\vspace*{1.2ex}
\begin{tabular}{c|c|c|c|c|c|c}
season & \texttt{kconsens\_cands} & \texttt{kconsens\_pairs} & \texttt{kconsens\_triples} & \texttt{$4$-kconsens} & \texttt{$5$-kconsens} & \texttt{$6$-kconsens} \\
\hline
1961 & 0.051 & n/a & 108.461 & 32.031 & 59.121 & 147.581 \\
1962 & 0.021 & n/a & 7.371 & 2.221 & 4.281 & 11.811 \\
1963 & 0.041 & n/a & 68.701 & 25.711 & 48.771 & 111.531 \\
1964 & 0.051 & n/a & n/a & 68.721 & 147.401 & n/a \\
1965 & 0.001 & 0.651 & 0.021 & 0.001 & 0.011 & 0.021 \\
1966 & 0.021 & n/a & 13.611 & 3.071 & 6.151 & 18.761 \\
1967 & 0.021 & n/a & 4.201 & 0.981 & 2.231 & 5.851 \\
1968 & 0.021 & n/a & 1.431 & 0.001 & 0.011 & 0.061 \\
1969 & 0.001 & 0.241 & 0.111 & 0.051 & 0.071 & 0.091 \\
1970 & 0.111 & n/a & n/a & 705.771 & n/a & n/a \\
1971 & 0.281 & n/a & n/a & 5831.82 & n/a & n/a \\
1972 & 0.281 & n/a & n/a & 2042 & n/a & n/a \\
1973 & 0.861 & n/a & n/a & n/a & n/a & n/a \\
1974 & 11.351 & n/a & n/a & n/a & n/a & n/a \\
1975 & 2.931 & n/a & n/a & n/a & n/a & n/a \\
1976 & 2.921 & n/a & n/a & n/a & n/a & n/a \\
1977 & 2.931 & n/a & n/a & n/a & n/a & n/a \\
1978 & 111.961 & n/a & n/a & n/a & n/a & 0.241 \\
1979 & 8426.59 & n/a & n/a & n/a & n/a & n/a \\
1980 & 1927.58 & n/a & n/a & n/a & n/a & n/a \\
1981 & 1992.62 & n/a & n/a & n/a & n/a & n/a \\
1982 & 0.051 & n/a & 4.811 & 0.001 & 0.631 & 0.771 \\
1983 & a few hours & n/a & n/a & n/a & n/a & n/a \\
1984 & 8310.57 & n/a & n/a & n/a & n/a & n/a \\
1985 & 10.571 & n/a & n/a & n/a & n/a & n/a \\
1986 & a few hours & n/a & n/a & n/a & n/a & n/a \\
1987 & a few hours & n/a & n/a & n/a & n/a & n/a \\
1988 & a few days & n/a & n/a & n/a & n/a & n/a \\
1989 & a few days & n/a & n/a & n/a & n/a & n/a \\
1990 & a few days & n/a & n/a & n/a & n/a & n/a \\
1991 & a few days & n/a & n/a & n/a & n/a & n/a \\
1992 & a few hours & n/a & n/a & n/a & n/a & n/a \\
1993 & 1962.81 & n/a & n/a & n/a & n/a & n/a \\
1994 & 469.821 & n/a & n/a & n/a & n/a & n/a \\
1995 & 101.911 & n/a & n/a & n/a & n/a & n/a \\
1996 & 8544.33 & n/a & n/a & n/a & n/a & n/a \\
1997 & 1966.39 & n/a & n/a & n/a & n/a & n/a \\
1998 & a few hours & n/a & n/a & n/a & n/a & n/a \\
1999 & 8279.19 & n/a & n/a & n/a & n/a & n/a \\
2000 & a few hours & n/a & n/a & n/a & n/a & n/a \\
2001 & 494.791 & n/a & n/a & n/a & n/a & n/a \\
2002 & 484.911 & n/a & n/a & n/a & n/a & n/a \\
2003 & 114.471 & n/a & n/a & n/a & n/a & 0.201 \\
2004 & 49.161 & n/a & n/a & n/a & n/a & 0.191 \\
2005 & 2.821 & n/a & n/a & n/a & n/a & n/a \\
2006 & 1929.25 & n/a & n/a & n/a & n/a & n/a \\
2007 & 2015.77 & n/a & n/a & n/a & n/a & n/a \\
2008 & a few hours & n/a & n/a & n/a & n/a & n/a \\
\end{tabular}
\\Time values are not available for the search tree algorithms if test runs took more than three hours.
\end{center}
\end{scriptsize}
\newpage
\subsection{Winter sports instances}
\label{Attributes of the winter sports instances}
\begin{tiny}
\begin{center}
\textbf{Properties of the winter sports instances}\\
\vspace*{1.2ex}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c}
competition & \#votes & \#candi- & \#dirty & \%dirty & \#maj. & \%maj. & min & max & max & average & red. \\
& & dates & pairs & pairs & pairs & pairs & score & score & range & kt-dist & cands. \\
\hline
biathlon team men 08/09 & 6 & 15 & 67 & 63.81 & 65 & 61.90 & 124 & 506 & 14 & 30.60 & 0 \\
cross skiing 15km men 08/09 & 4 & 10 & 24 & 53.33 & 37 & 82.22 & 32 & 148 & 8 & 12.33 & 0 \\
cross skiing seasons 06-09 & 4 & 23 & 192 & 75.89 & 190 & 75.10 & 255 & 757 & 19 & 107.17 & 0 \\
\end{tabular}
\end{center}
\end{tiny}
~\\
\begin{tiny}
\begin{center}
\textbf{Running times (in seconds) for the winter sports instances}\\
\vspace*{1.2ex}
\begin{tabular}{c|c|c|c|c|c|c}
competition & \texttt{kconsens\_cands} & \texttt{kconsens\_pairs} & \texttt{kconsens\_triples} & \texttt{$4$-kconsens} & \texttt{$5$-kconsens} & \texttt{$6$-kconsens} \\
\hline
biathlon team men 08/09 & 58.23 & n/a & n/a & 16.47 & 4904 & n/a \\
cross skiing 15km men 08/09 & 0.081 & 30.311 & 0.721 & 0.001 & 0.011 & 0.021 \\
cross skiing seasons 06-09 & a few hours & n/a & n/a & n/a & n/a & n/a \\
\end{tabular}
\\Time values are not available for the search tree algorithms if test runs took more than two hours.
\end{center}
\end{tiny}
\newpage
\section{Randomly generated instances}
\subsection{Parameter values and running times for randomly generated instances}
\label{Parameter values and results for ramdomized instances}
\paragraph{Test series 1}~\vspace{0.15cm}~\\
We generated 10 instances with 200 votes for each parameter set.
The running-times are average running-times.
\begin{tiny}
\begin{center}
\textbf{Running times and properties for test series 1}\\
\vspace*{1.2ex}
\begin{tabular}{r|l|l|l|l|l|l|l|l|l|l|l|l|l|l}
$m=w=d$ & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 \\
\hline
\texttt{kconsens\_cands} & 0.001 & 0.002 & 0.006 & 0.015 & 0.025 & 0.058 & 0.117 & 0.302 & 0.891 & 2.89 & 11.1 & 48.8 & 113 & 488 \\
\texttt{kconsens\_pairs} & 0.001 & 0.013 & 0.469 & 43.21 & - & - & - & - & - & - & - & - & - & - \\
\texttt{kconsens\_triples} & 0.003 & 0.017 & 0.0346 & 3.822 & 132.98 & - & - & - & - & - & - & - & - & - \\
\texttt{4-kconsens} & 0.002 & 0.010 & 0.131 & 1.81 & 37.73 & 546.38 & - & - & - & - & - & - & - & - \\
\# dirty pairs & 6 & 10 & 15 & 21 & 28 & 36 & 45 & 55 & 66 & 78 & 91 & 105 & 120 & 126 \\
\end{tabular}
\end{center}
\end{tiny}
\paragraph{Test series 2}~\vspace{0.15cm}~\\
We generated instances with 14 candidates.
The running times are average running-times.
\begin{scriptsize}
\begin{center}
\textbf{Running times (in seconds) and properties for test series 2}\\
\vspace*{1.2ex}
\begin{tabular}{l|l|l|l|l|l|l|l}
$n$ & $p$ & $d$ & $w$ & \texttt{kconsens\_cands} & \texttt{kconsens\_pairs} & \texttt{kconsens\_triples} & \texttt{4-kconsens} \\
\hline
5 & 5 & 2 & 4 & 17.071 & 0.001 & 0.001 & 0.001 \\
10 & 11 & 2 & 7 & 17.211 & 0.041 & 0.021 & 0.011 \\
16 & 13 & 2 & 11 & 17.501 & 0.171 & 0.031 & 0.011 \\
10 & 26 & 4 & 7 & 17.611 & n/a & 0.061 & 0.021 \\
16 & 29 & 4 & 11 & 17.201 & n/a & 0.981 & 0.021 \\
22 & 34 & 4 & 15 & 17.221 & n/a & 1.601 & 0.021 \\
10 & 43 & 6 & 7 & 17.071 & n/a & 6.311 & 0.031 \\
22 & 45 & 6 & 15 & 13.271 & n/a & 8.761 & 0.031 \\
10 & 52 & 8 & 7 & 17.561 & n/a & n/a & 0.041 \\
10 & 53 & 10 & 7 & 17.621 & n/a & n/a & 0.061 \\
16 & 58 & 8 & 11 & 17.561 & n/a & n/a & 0.061 \\
22 & 59 & 8 & 15 & 17.161 & n/a & n/a & n/a \\
10 & 65 & 12 & 7 & 17.181 & n/a & n/a & n/a \\
22 & 69 & 10 & 15 & 13.271 & n/a & n/a & n/a \\
10 & 74 & 14 & 7 & 17.521 & n/a & n/a & n/a \\
16 & 75 & 12 & 11 & 17.081 & n/a & n/a & n/a \\
16 & 80 & 14 & 11 & 13.351 & n/a & n/a & n/a \\
22 & 86 & 12 & 15 & 13.281 & n/a & n/a & n/a \\
\end{tabular}
\\Time values are not available for the search tree algorithms if test runs took more than 30 minutes.
\end{center}
\end{scriptsize}
\newpage
\section{Search engines}
\subsection{Attributes of the websearch instances}
\label{Attributes of the websearch instances}
Websearches with the search engines google, yahoo, ask, and msn with 300 search results per engine.
\begin{scriptsize}
\begin{center}
\textbf{Properties of the websearch instances}\\
\vspace*{1.2ex}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c}
search term & \#candi- & \#dirty & \%dirty & \#maj. & \%maj. & min & max & max & average & red. \\
& dates & pairs & pairs & pairs & pairs & score & score & range & kt-dist & cands. \\
\hline
affirmative+action & 22 & 101 & 43.72 & 203 & 87.88 & 129 & 795 & 16 & 54.33 & 6 \\
alcoholism & 14 & 33 & 36.26 & 76 & 83.52 & 48 & 316 & 8 & 18.00 & 0 \\
amusement+parks & 11 & 31 & 56.36 & 44 & 80.00 & 42 & 178 & 9 & 16.83 & 1 \\
architecture & 17 & 53 & 38.97 & 121 & 88.97 & 68 & 476 & 11 & 27.50 & 0 \\
bicycling & 20 & 85 & 44.74 & 162 & 85.26 & 113 & 647 & 15 & 45.83 & 4 \\
blues & 20 & 77 & 40.53 & 158 & 83.16 & 109 & 651 & 12 & 43.00 & 3 \\
cheese & 18 & 74 & 48.37 & 131 & 85.62 & 96 & 516 & 13 & 39.83 & 3 \\
citrus+groves & 11 & 39 & 70.91 & 46 & 83.64 & 48 & 172 & 9 & 20.17 & 1 \\
classical+guitar & 16 & 62 & 51.67 & 105 & 87.50 & 77 & 403 & 12 & 32.83 & 1 \\
computer+vision & 18 & 115 & 75.16 & 116 & 75.82 & 152 & 460 & 15 & 62.83 & 1 \\
cruises & 18 & 112 & 73.20 & 113 & 73.86 & 152 & 460 & 14 & 62.17 & 1 \\
Death+Valley & 17 & 63 & 46.32 & 120 & 88.24 & 79 & 465 & 13 & 33.17 & 1 \\
field+hockey & 18 & 63 & 41.18 & 130 & 84.97 & 86 & 526 & 11 & 34.50 & 0 \\
gardening & 17 & 49 & 36.03 & 123 & 90.44 & 62 & 482 & 10 & 26.17 & 2 \\
graphic+design & 11 & 21 & 38.18 & 49 & 89.09 & 27 & 193 & 6 & 10.67 & 0 \\
Gulf+war & 18 & 62 & 40.52 & 129 & 84.31 & 86 & 526 & 13 & 34.17 & 2 \\
HIV & 15 & 44 & 41.90 & 93 & 88.57 & 56 & 364 & 11 & 23.00 & 3 \\
java & 17 & 65 & 47.79 & 119 & 87.50 & 82 & 462 & 14 & 34.50 & 4 \\
Lipari & 6 & 4 & 26.67 & 15 & 100.00 & 4 & 56 & 3 & 1.50 & 6 \\
lyme+disease & 20 & 81 & 42.63 & 172 & 90.53 & 99 & 661 & 13 & 42.00 & 0 \\
mutual+funds & 13 & 30 & 38.46 & 69 & 88.46 & 39 & 273 & 7 & 15.50 & 1 \\
National+parks & 13 & 32 & 41.03 & 73 & 93.59 & 37 & 275 & 10 & 15.67 & 6 \\
parallel+architecture & 6 & 8 & 53.33 & 12 & 80.00 & 11 & 49 & 4 & 3.67 & 2 \\
Penelope+Fitzgerald & 14 & 65 & 71.43 & 65 & 71.43 & 91 & 273 & 12 & 36.33 & 1 \\
recycling+cans & 3 & 1 & 33.33 & 3 & 100.00 & 1 & 11 & 2 & 0.50 & 3 \\
rock+climbing & 15 & 84 & 80.00 & 76 & 72.38 & 113 & 307 & 14 & 46.00 & 1 \\
San+Francisco & 6 & 9 & 53.33 & 11 & 73.33 & 12 & 48 & 4 & 4.17 & 1 \\
Shakespeare & 26 & 175 & 53.85 & 265 & 81.54 & 235 & 1065 & 21 & 96.17 & 2 \\
stamp+collection & 9 & 19 & 52.78 & 31 & 86.11 & 24 & 120 & 8 & 9.17 & 3 \\
sushi & 14 & 40 & 43.96 & 77 & 84.62 & 54 & 310 & 10 & 21.83 & 2 \\
table+tennis & 13 & 38 & 48.72 & 68 & 87.18 & 48 & 264 & 9 & 19.83 & 2 \\
telecommuting & 12 & 28 & 42.42 & 53 & 80.30 & 41 & 223 & 7 & 15.17 & 2 \\
Thailand+tourism & 16 & 57 & 47.50 & 101 & 84.17 & 76 & 404 & 11 & 30.67 & 1 \\
vintage+cars & 4 & 2 & 33.33 & 5 & 83.33 & 3 & 21 & 2 & 0.50 & 2 \\
volcano & 20 & 98 & 51.58 & 159 & 83.68 & 129 & 631 & 17 & 53.17 & 0 \\
zen+budism & 11 & 22 & 40.00 & 49 & 89.09 & 28 & 192 & 7 & 11.17 & 1 \\
Zener & 14 & 54 & 59.34 & 74 & 81.32 & 71 & 293 & 13 & 29.00 & 1 \\
\end{tabular}
\end{center}
\end{scriptsize}
\newpage
\begin{scriptsize}
\begin{center}
\textbf{Running times (in seconds) for the websearch instances}\\
\vspace*{1.2ex}
\begin{tabular}{c|c|c|c|c|c|c}
search term & \begin{tiny}\texttt{kconsens\_cands}\end{tiny} & \begin{tiny}\texttt{kconsens\_pairs}\end{tiny} & \begin{tiny}\texttt{kconsens\_triples}\end{tiny} & \begin{tiny}\texttt{$4$-kconsens}\end{tiny} & \begin{tiny}\texttt{$5$-kconsens}\end{tiny} & \begin{tiny}\texttt{$6$-kconsens}\end{tiny} \\
\hline
affirmative+action & 114.931 & n/a & n/a & n/a & 0.051 & 0.211 \\
alcoholism & 10.871 & n/a & 4.161 & 0.011 & 0.021 & 0.101 \\
amusement+parks & 0.081 & 34.501 & 0.001 & 0.011 & 0.001 & 0.041 \\
architecture & 475.921 & n/a & n/a & 0.021 & 21.871 & 0.141 \\
bicycling & 109.101 & n/a & n/a & n/a & 0.101 & n/a \\
blues & 486.071 & n/a & n/a & n/a & 2.941 & n/a \\
cheese & 49.891 & n/a & n/a & 1724.79 & 0.041 & 0.131 \\
citrus+groves & 0.081 & n/a & 6.591 & 0.001 & 0.011 & 0.061 \\
classical+guitar & 50.741 & n/a & n/a & 1087.12 & 0.031 & 0.281 \\
computer+vision & 486.201 & n/a & n/a & n/a & n/a & n/a \\
cruises & 475.591 & n/a & n/a & n/a & n/a & n/a \\
Death+Valley & 109.531 & n/a & n/a & 16.901 & 0.041 & 0.151 \\
field+hockey & 2060.72 & n/a & n/a & n/a & n/a & n/a \\
gardening & 53.381 & n/a & n/a & 0.021 & 0.031 & 0.101 \\
graphic+design & 0.261 & 15.611 & 0.081 & 0.001 & 0.011 & 0.031 \\
Gulf+war & 117.931 & n/a & n/a & 126.071 & 0.041 & 3.021 \\
HIV & 0.811 & n/a & 7.131 & 0.001 & 0.011 & 0.061 \\
java & 2.821 & n/a & n/a & 73.441 & 0.021 & 0.361 \\
Lipari & 0.001 & 0.001 & 0.001 & n/a & n/a & n/a \\
lyme+disease & a few hours & n/a & n/a & n/a & n/a & 0.221 \\
mutual+funds & 0.791 & n/a & 0.851 & 0.001 & 0.011 & 0.041 \\
National+parks & 0.001 & 0.041 & 0.011 & 0.001 & 0.001 & 0.011 \\
parallel+architecture & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 \\
Penelope+Fitzgerald & 2.841 & n/a & n/a & 0.031 & 136.241 & n/a \\
recycling+cans & 0.001 & 0.001 & 0.001 & n/a & n/a & n/a \\
rock+climbing & 11.831 & n/a & n/a & n/a & n/a & n/a \\
San+Francisco & 0.001 & 0.001 & 0.011 & 0.001 & 0.011 & 0.011 \\
Shakespeare & a few hours & n/a & n/a & n/a & n/a & n/a \\
stamp+collection & 0.001 & 0.051 & 0.011 & 0.001 & 0.001 & 0.011 \\
sushi & 0.801 & n/a & 32.461 & 0.201 & 0.221 & 0.061 \\
table+tennis & 0.261 & n/a & 0.301 & 0.001 & 0.021 & 0.061 \\
telecommuting & 0.091 & 0.261 & 0.021 & 0.001 & 0.011 & 0.041 \\
Thailand+tourism & 51.261 & n/a & n/a & 4407.3 & 4109.55 & n/a \\
vintage+cars & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 \\
volcano & a few hours & n/a & n/a & n/a & n/a & n/a \\
zen+budism & 0.081 & 35.571 & 0.291 & 0.001 & 0.001 & 0.041 \\
Zener & 2.791 & n/a & n/a & 0.011 & 0.041 & 0.101 \\
\end{tabular}
\\Time values are not available for the search tree algorithms if test runs took more than three hours.
\end{center}
\end{scriptsize}
\newpage
Websearches with the search terms: ``hotel dublin'' ``hotels dublin'' ``rooms dublin'' ``bed dublin'' with 300 search results per engine.
Result URLs are reduced to their domains in order to obtain larger instances.
\begin{scriptsize}
\begin{center}
\textbf{Properties}\\
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c}
search- & \#candi- & \#dirty & \%dirty & \#maj. & \%maj. & min & max & max & average & red. \\
engine & dates & pairs & pairs & pairs & pairs & score & score & range & kt-dist & cands. \\
\hline
ask & 4 & 4 & 66.67 & 5 & 83.33 & 5 & 19 & 3 & 2.17 & 2 \\
google & 13 & 60 & 76.92 & 53 & 67.95 & 85 & 227 & 11 & 33.00 & 0 \\
msnlive & 13 & 14 & 17.95 & 73 & 93.59 & 19 & 293 & 6 & 7.17 & 6 \\
yahoo & 12 & 49 & 74.24 & 50 & 75.76 & 65 & 199 & 11 & 26.50 & 2 \\
\end{tabular}
\end{center}
\end{scriptsize}
\begin{scriptsize}
\begin{center}
\textbf{Running times (in seconds)}\\
\begin{tabular}{c|c|c|c|c|c|c}
search engine & \texttt{kconsens\_cands} & \texttt{kconsens\_pairs} & \texttt{kconsens\_triples} & \texttt{$4$-kconsens} & \texttt{$5$-kconsens} & \texttt{$6$-kconsens} \\
\hline
ask & 0.001 & n/a & n/a & 0.001 & 0.001 & 0.001 \\
google & 2.431 & n/a & n/a & 0.051 & 0.061 & 0.101 \\
msnlive & 0.001 & n/a & n/a & 0.001 & 0.001 & 0.021 \\
yahoo & 0.081 & n/a & n/a & 8.121 & 5.321 & 0.121 \\
\end{tabular}
\\Time values are not available for the search tree algorithms if test runs took more than 30 minutes.
\end{center}
\end{scriptsize}
Websearches with the search term list: Paris, London, Washington, Madrid, Berlin, Ottawa, Wien, Canberra, Peking, Prag, Moskau.
We generated one vote for each of the search engines: google, yahoo, ask, and msn.
The candidates are ranked according to the corresponding number of result pages.
\begin{scriptsize}
\begin{center}
\textbf{Properties}\\
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c}
\#votes & \#candi- & \#dirty & \%dirty & \#maj. & \%maj. & min & max & max & average & red. \\
& dates & pairs & pairs & pairs & pairs & score & score & range & kt-dist & cands. \\
\hline
4 & 11 & 19 & 34.55 & 46 & 83.64 & 28 & 192 & 6 & 10.33 & 0\\
\end{tabular}
\end{center}
\end{scriptsize}
\begin{scriptsize}
\begin{center}
\textbf{Running times (in seconds)}\\
\begin{tabular}{c|c|c|c|c|c}
\texttt{kconsens\_cands} & \texttt{kconsens\_pairs} & \texttt{kconsens\_triples} & \texttt{$4$-kconsens} & \texttt{$5$-kconsens} & \texttt{$6$-kconsens} \\
\hline
0.231 & 3.501 & 0.001 & 0.001 & 0.001 & 0.0041 \\
\end{tabular}
\end{center}
\end{scriptsize}
\section{Implementation}
\label{Implementation}
\index{Implementation}
In this chapter, we want to show the effectiveness of the algorithms in practice.
Therefore, a high-performance implementation is necessary.
We decided to use C++ as programming language.
First of all, it is very popular and many programmers are able to read C++.
Furthermore, there are many high-performance libraries available for complex and mathematical computation.
In our implementation we use several libraries of the popular ``boost'' library package~\cite{boost}.
Our project has 14 classes and 3601 lines of code (without comments) overall.
Apart from an intelligent memory management and a high-performance data structure for the subsets, the implementation of \texttt{kconsens\_cands} follows its description in a straightforward manner.
The implementations of \texttt{kconsens\_pairs} and \texttt{kconsens\_triples} are close to their original description in ~\cite{BFGNR08a} and Section~\ref{Improved search tree with respect to the parameter ``Kemeny score''}.
For \texttt{$s$-kconsens} we have the pseudo-code in Section~\ref{Pseudo code}, which describes the algorithmic details.
There is some additional information about the implementation of \texttt{$s$-kconsens} in the next paragraph.
\paragraph{Details of the implementation of \texttt{$s$-kconsens}}
Here, we describe some details of the implementation of \texttt{$s$-kconsens}.
Some considerations turned out to be helpful for improving the running time in practice, although their implementation only changes the theoretical worst-case running time by a polynomial factor relative to the search tree size.
We again briefly discuss the running time.\\
Unfortunately, the recursive description in the pseudo code is somewhat ill-suited for high performance (in C++).
Thus, we transform the recursive part of the algorithm into an iterative form.
The results of frequently issued queries like ``subscores of a permutation'' or ``is this pair dirty'' are precomputed and stored in a hash map or in arrays with an intelligent index system such that requests take constant time.
The stack of the recursive calls is simulated by an array of data structures.
For faster access and memory savings all candidates (names) will be mapped to integers in a preprocessing step.
After the computation of the Kemeny score and consensus list, the original candidate names will be restored.
Besides precomputing the dirty sets, the implementation of \texttt{$s$-kconsens} also precomputes the subscore of each permutation of each dirty set.
There are $s!$ permutations for at most $m \cdot (m-1) / 2$ dirty sets.
Thus, this is done in $O(s! \cdot (m \cdot (m-1) / 2) )$.
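To make this precomputation concrete, here is a minimal C++ sketch for a single dirty set. It assumes that a table \texttt{disagree[c][d]}, holding the number of votes that rank $d$ before $c$, has already been filled; the names \texttt{DirtySet} and \texttt{precomputeSubscores} are chosen for illustration and are not taken from the actual implementation.
\begin{verbatim}
// Hedged sketch: subscore table of all s! orderings of one dirty set.
// disagree[c][d] is assumed to be the number of votes ranking d before c.
#include <algorithm>
#include <map>
#include <vector>

using DirtySet = std::vector<int>;                       // s candidate ids
using SubscoreTable = std::map<std::vector<int>, int>;   // ordering -> subscore

SubscoreTable precomputeSubscores(const DirtySet& set,
                                  const std::vector<std::vector<int>>& disagree) {
    SubscoreTable table;
    std::vector<int> perm = set;
    std::sort(perm.begin(), perm.end());
    do {
        int subscore = 0;
        // every ordered pair (perm[i] before perm[j]) costs the number of
        // votes that rank the two candidates the other way round
        for (std::size_t i = 0; i < perm.size(); ++i)
            for (std::size_t j = i + 1; j < perm.size(); ++j)
                subscore += disagree[perm[i]][perm[j]];
        table[perm] = subscore;
    } while (std::next_permutation(perm.begin(), perm.end()));
    return table;
}
\end{verbatim}
The subscore of an ordering is thus the pairwise disagreement that fixing the dirty set in this order causes over all votes.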
As discussed in Observations~\ref{obs_subscorebreak}~and~\ref{obs_consistentbreak}, we have two criteria to discard a branching.
Therefore, $L$ was implemented as a data structure that manages two hash sets for each candidate $x$: one for the candidates that are preferred to $x$ and one for the candidates to which $x$ is preferred.
Fixing the relative order of the candidates of one candidate pair with $L$.memorize() now takes $O(m)$ instead of constant time.
(There are $4 m$ hash sets and each hash set has to be updated at most once.)
In return, \texttt{$s$-kconsens} checks consistency in constant time at each search tree node.
This improved the performance in practice.
Since it has precomputed the subscores of the permutations, the algorithm sorts them by subscore and tests the permutations with small subscores first.
Thus, if we discard a branching due to Observation~\ref{obs_subscorebreak}, then we can also skip the permutations with a greater subscore.
Another point is that we call \texttt{$s$-kconsens} with $k$ being the minimal sum of pairwise subscores as a lower bound.
If it returns `no' we increase $k$ by one and call \texttt{$s$-kconsens} again.
At this point we can guarantee that we do not need to use Lemma~\ref{lemma_majbreak} to discard the branching:
If $n_o$ of the fixed pairs are not ordered according to their $2/3$-majority, then there is no consensus of smaller score with fewer than $n_o$ pairs ordered against their $2/3$-majority.
We already know that there is no solution of smaller score, because we tested its existence in the previous call of \texttt{$s$-kconsens}, and we started with a lower bound for $k$ at which it is only possible that all pairs are ordered according to their $2/3$-majority.
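The following hedged C++ sketch shows this outer loop; the decision procedure is passed in as a callback, since the real \texttt{$s$-kconsens} routine is far more involved, and the function names are ours.
\begin{verbatim}
// Hedged sketch of the outer loop: raise the budget k from the lower bound
// until the decision procedure (a stand-in for s-kconsens) answers 'yes'.
#include <functional>
#include <vector>

// 'decide' is assumed to return true (and to fill 'consensus')
// iff a consensus of score at most k exists.
int kemenyScoreByIncreasingBudget(
        int lowerBound,   // sum over all pairs of min(disagree[a][b], disagree[b][a])
        const std::function<bool(int, std::vector<int>&)>& decide,
        std::vector<int>& consensus) {
    for (int k = lowerBound; ; ++k)
        if (decide(k, consensus))
            return k;     // smallest budget for which the answer is 'yes'
}
\end{verbatim}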
\section{Data and experiments}
\label{Experiments}
\index{Experiments}
We use several different sources to get test instances for the algorithms.
The first type are randomly generated instances, which are very useful to produce performance diagrams that show the dependency of the running time on miscellaneous attributes.
The second type are results of sports competitions as also discussed in the introduction.
Besides Formula One, we also used several cross-skiing and biathlon competitions.
In this context, apart from the running time and several attributes of the instances also the comparison of the consensus list with the results of the original point scoring system may be interesting.
Last, but not least, we consider one of the most famous applications of modern rank aggregation, that is, meta search engines.
The results of different search requests form the votes of our rank aggregation problem.
We will generate several instances, analyse their properties and test the performance of our algorithms.
\subsection{Randomly generated instances}
\label{Randomized generated instances}
\index{Randomized generated instances}
Generating random data for testing algorithms is very popular and dangerous at the same time.
The significance of the tests depends on the probability space, the parameter values and the way we are using the random data.
There are known cases where algorithms are provably very efficient on randomly generated instances, but do not perform well on general instances.
An example is the \textsc{Hamilton cycle} problem, which is $\complNP$-hard in general, but easy on a special class of random graphs.
It is described in \cite[Section 5.6.2]{MU05}.
Nevertheless, we need several series of parameter values.
The data generation works as follows: We start with generating one reference vote.
Then we use this reference vote to generate all other votes by swapping some candidate pairs (see the sketch after the following parameter list).
To this end, we define some parameters:
\begin{enumerate}
\item the number of candidates~($m$)
\item the number of votes~($n$)
\item the expected number of swaps per vote~($w$)
\item the maximum distance of the swap candidates with respect to the reference vote~($d$)
\end{enumerate}
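The following C++ sketch illustrates one possible reading of this generation procedure. The description above does not fix the concrete distributions, so the Poisson-distributed number of swaps and the uniformly chosen swap positions below are our own assumptions, not the exact generator used for the tables in the appendix.
\begin{verbatim}
// Hedged sketch of the random instance generator with parameters m, n, w, d.
#include <algorithm>
#include <numeric>
#include <random>
#include <vector>

std::vector<std::vector<int>> generateElection(int m, int n, int w, int d,
                                               unsigned seed = 42) {
    std::mt19937 rng(seed);
    std::vector<int> reference(m);
    std::iota(reference.begin(), reference.end(), 0); // reference vote 0<1<...<m-1

    std::vector<std::vector<int>> votes;
    std::poisson_distribution<int> numSwaps(w);       // expected w swaps per vote
    std::uniform_int_distribution<int> pos(0, m - 1);
    std::uniform_int_distribution<int> dist(1, d);    // swap distance at most d

    for (int v = 0; v < n; ++v) {
        std::vector<int> vote = reference;
        int swaps = numSwaps(rng);
        for (int s = 0; s < swaps; ++s) {
            int i = pos(rng);
            int j = std::min(m - 1, i + dist(rng));   // partner within distance d
            std::swap(vote[i], vote[j]);
        }
        votes.push_back(vote);
    }
    return votes;
}
\end{verbatim}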
Note that Conitzer~et~al.~\cite{C06} also used some random data to test their algorithms.
They generated a total order representing a consensus ordering.
Then, they generated the voting preferences, where each one of the voters agrees with the consensus ordering regarding the ranking of every candidate pair with some consensus probability.
Using the same probability for each candidate pair to be dirty, like they did, seems to generate ``isolated'' dirty pairs unusually often.
This would be an unrealistic advantage for our search tree algorithms.
Hence, we used our own way of generating the data as described above.
\paragraph{Properties}
For the first test series we generated instances with a growing number of candidates.
Since we want to investigate the relation between the number of candidates and the running time, we fixed the rate of dirty candidate pairs, so that approximately half of all candidate pairs are dirty.
For the second test series we generated instances with a constant number of candidates and a growing number of dirty pairs.
This was done by varying the number of votes and the values of the 3rd and the 4th parameter in the generating process.
The parameter values used to create the test instances can be found in the appendix in Section~\ref{Parameter values and results for ramdomized instances}.
The choice of 14 candidates in the second test series limits the overall running time of the test series and still provides a sufficiently large range of possible values for the number of dirty pairs.
Other values lead to similar results.
\paragraph{Results}
\begin{figure}
\begin{center}
\textbf{Test series 1}\\
\begin{tabular}{c}
\includegraphics[scale=1.25]{img/rand_candidates-runningtime.pdf}\\
\end{tabular}
\end{center}
\caption{Randomly generated data: Running time against the number of candidates. For each number of candidates ten instances were generated and tested. In all instances about 50\% of the candidate pairs are dirty. We computed the average values to get more significant results. A single test run was canceled if it took more than one hour. The test series for each algorithm was canceled if the total running time for the instances with the same number of candidates was greater than two hours.}
\label{Randomized generated Data: Running-time against the number of candidates 1}
\end{figure}
\begin{figure}
\begin{center}
\textbf{Test series 2}\\
\begin{tabular}{c}
\includegraphics[scale=1.25]{img/rand_dirtypairs-runningtime.pdf}\\
\end{tabular}
\end{center}
\caption{Randomly generated data: Running time against number of dirty pairs. Here a test run was canceled if it took more than 30 minutes. All instances have 14 candidates. The number of votes and swaps as well as the swap range can be found in the Appendix.}
\label{Randomized generated Data: Running-time against number of dirty pairs 1}
\end{figure}
For both series the algorithms \texttt{kconsens\_cands}, \texttt{kconsens\_pairs},\linebreak \texttt{kconsens\_triples}, and \texttt{$4$-kconsens} were tested.
One can expect for the first test series that \texttt{kconsens\_cands} will be more efficient than the search tree algorithms:
It has a running time of $O(2^m \cdot m^2 \cdot n)$ while the best search tree algorithm takes $O(1.53^k + m^2 \cdot n)$ time.
A lower bound for $k$ is the number of dirty pairs.
Unfortunately, the number of candidate pairs is proportional to $m^2$.
We can see the results from test series one in Figure~\ref{Randomized generated Data: Running-time against the number of candidates 1}.
They meet these expectations:
\texttt{kconsens\_cands} is the most efficient.
The algorithm \texttt{kconsens\_pairs} is the least efficient while an improvement of the running time when branching over dirty 4-sets instead of dirty triples is noticeable.
The results for the second test series are illustrated in Figure~\ref{Randomized generated Data: Running-time against number of dirty pairs 1}.
Here, we can see that the search tree algorithms are significantly more efficient for these instances.
Also, using larger dirty sets for branching improved the running time considerably.
In summary the tests show that both parameterizations are practicable for specific types of instances.
While one should use \texttt{kconsens\_cands} for all instances with only a few (up to thirty) candidates, the search tree algorithms seem to be very efficient for instances with a low number of dirty pairs.
In the following, we will see that this also applies to real world aggregation data.
\subsection{Sports competitions}
\label{Sports competition}
\index{Sports competition}
\subsubsection{Formula One}
As discussed in the introduction, sports competitions naturally provide ranking data.
One famous sports is motor sports, especially Formula One.
We generated ranking data from the Formula One seasons of the years 1961 till 2008, with one candidate for each driver and one vote for each race.
The preference lists comply with the order of crossing the finish line.
All drivers who fail are ordered behind the others (and their order complies with the elimination order).
For the sake of simplicity, the algorithms were designed to deal only with complete preference lists without ties.
Therefore, we removed the drivers who did not attend all races.
In most of the seasons only about two or three candidates were removed.
\paragraph{Properties}
We analysed the properties, discussed in the introduction of this chapter, for the Formula One instances.
Unfortunately, analysis of the instances showed that 90-100\% of the candidate pairs are dirty.
Moreover, the maximum range of the candidates is circa 95\% of the number of candidates, which is the maximum possible value.
This seems to be hard for the algorithms \texttt{kconsens\_pairs}, \texttt{kconsens\_triples}, and \texttt{$s$-kconsens} as we will see in the results.
The data reduction rule ``Condorcet winner/loser'' could remove one candidate from the instances created from the Formula One seasons 1963, 1980-81, 1992, and 2001-02, and two candidates from the instance created from the season 2004.
A complete table of properties can be found in the Appendix~\ref{Attributes of the formula one instances}.
\paragraph{Results}
We tested the algorithms \texttt{kconsens\_cands}, \texttt{kconsens\_pairs},\linebreak \texttt{kconsens\_triples}, and \texttt{$s$-kconsens} for $s \in \{4,\ldots,6\}$ for the generated elections.
We are not able to compute the Kemeny Score with the search tree algorithms for many instances in less than three hours.
However, at least with \texttt{kconsens\_cands} we are able to compute the optimal consensus list for almost all Formula One seasons in a few hours.
So, the FIA could use the Kemeny voting system in prospective seasons.
\begin{table}
\begin{center}
\begin{tabular}{r l | l}
Points & FIA ranking & Kemeny consensus \\
\hline
134 & Fernando Alonso & Fernando Alonso \\
121 & Michael Schumacher & Michael Schumacher \\
80 & Felipe Massa & Felipe Massa \\
\hline
72 & \colorbox{yellow}{Giancarlo Fisichella} & \colorbox{green}{Kimi R\"aikkonen} \\
65 & \colorbox{green}{Kimi R\"aikkonen} & \colorbox{yellow}{Giancarlo Fisichella} \\
\hline
56 & Jenson Button & Jenson Button \\
30 & Rubens Barrichello & Rubens Barrichello \\
23 & Nick Heidfeld & Nick Heidfeld \\
\hline
20 & \colorbox{yellow}{Ralf Schumacher} & \colorbox{green}{Jarno Trulli} \\
15 & \colorbox{green}{Jarno Trulli} & \colorbox{green}{David Coulthard} \\
14 & \colorbox{green}{David Coulthard} & \colorbox{yellow}{Ralf Schumacher} \\
\hline
7 & \colorbox{yellow}{Mark Webber} & \colorbox{green}{Vitantonio Liuzzi} \\
4 & \colorbox{yellow}{Nico Rosberg} & \colorbox{green}{Scott Speed} \\
1 & \colorbox{green}{Vitantonio Liuzzi} & \colorbox{yellow}{Mark Webber} \\
0 & \colorbox{green}{Scott Speed} & \colorbox{yellow}{Nico Rosberg} \\
\hline
0 & Christijan Albers & Christijan Albers \\
0 & Tiago Monteiro & Tiago Monteiro \\
0 & Takuma Sato & Takuma Sato
\end{tabular}
\end{center}
\caption{The official ranking of the Formula One season 2006 and the Kemeny consensus.}
\label{tab_f1res_2006}
\end{table}
We compare the Kemeny consensus of the election, created from the result of a season, with the preference list, computed by the point scoring system of the FIA.
For instance, we compare all drivers who attended all races in the season of 2006 in Table~\ref{tab_f1res_2006}.
As we can see here (and also if we compare the preference lists for other seasons), the preference lists are similar, especially for the drivers that get points in most of the races.
Although the world champion would not change in most of the seasons, the Kemeny consensus ranks some of the drivers differently in each season.
From a mathematical point of view, the Kemeny consensus is more balanced in the sense that it weights each pairwise comparison equally.
On the other hand, it is of course the decision of the FIA to weight the winner of a race disproportionately high.
All results for the instances generated from the Formula One seasons can be found in Section~\ref{Attributes of the formula one instances} in the appendix.
\subsubsection{Winter sports}
The properties of the Formula One instances are not very fortunate for the running times of our search tree algorithms.
Especially the high rate of dirty pairs seems to be hard as we have already seen for the randomized data.
Now, we want to investigate whether this holds for other sports competitions.
Therefore, we created three further instances based on different winter sports competitions.
One is generated by using the cross-skiing (15 km men) competitions of the season 2008/2009.
For another instance we use biathlon team results of the season 2008/2009.
And for the third we use the overall results of the seasons 2006-2009 in cross-skiing championship and rank the best 75 sportsmen of each season.
\paragraph{Properties}
We got instances ranging from 10 to 23 candidates.
In contrast to the Formula One elections, all three instances have lower rates of dirty pairs (about 50-75\%) and higher rates of majority-non-dirty pairs (about 60-80\%).
The data reduction rule ``Condorcet winner/loser'' could not remove any candidates.
\paragraph{Results}
We computed the Kemeny score efficiently with \texttt{kconsens\_cands},\linebreak \texttt{kconsens\_triples}, or \texttt{4-kconsens}.
All instances could be solved in at most a few hours.
So, we found instances that are generated from sports competitions where the search tree algorithms are efficient.
Together with the results from the Formula One instances, one can summarize that which algorithm performs best seems to depend on the concrete sports competition.
More detailed information can be found in the Appendix.
\subsection{Search engines}
\label{Search engines}
\index{Search engines}
Generating ranking data based on web search results can be realized with different methods.
The first method is very intuitive:
We define a search term and query several search engines.
In our case we use the popular search engines google, yahoo, ask, and bing (formerly known as msn live search).
Each search result provides a preference list of web-links.
It is reasonable to remove web-links that appear in only one single search result.
It is realistic to assume that such web-links are not of particular interest.
The significance and size of the generated election highly depend on the search term.
In some cases promising candidates will be removed because different search engines return URLs that vary slightly but point to the same website.
Therefore several filters are helpful to produce more interesting elections.
To demonstrate this, we generated another set of instances, where every url is reduced to its domain.
The generation method can be easily extended if we customize the search parameters of the engines.
One successful example was to request one search term in different languages.
Here it is also possible (and sometimes necessary) to translate the search term.
We use the same search terms as used in \cite{SvZ09}.
The second method is based on the semantic similarity of different search terms.
We define a list of search terms and query the same search engine.
Each search result provides a preference list of web-links again.
Here, it is very important that the search terms are really semantically similar.
Otherwise, we have to request too many results for each search term to find congruent urls.
The last method of generating ranking data from search results breaks with the idea of meta search engines.
We define a list of search terms, each corresponds to one candidate.
We generate the preference lists by querying a specific search engine, one for each vote, and sorting the candidates according to their numbers of search results.
Thus, ties in the preference lists are possible, but improbable.
An example instance is a list of some metropolises.
\paragraph{Properties}
Now, we consider the properties of the generated data.
Although we use three different methods for the generation of the instances, the properties and results are similar for all three types.
In most cases about 50\% or less of the candidate pairs are dirty.
Moreover, when analysing the $2/3$-majorities, even more than $75$\% of the candidate pairs are non-dirty.
We have also checked whether we could apply the reduction rule ``Condorcet winner/loser'' to the instances.
In ca. 50\% of the cases we could delete between one and six candidates.
In two cases all candidates could be removed:
Searching for ``Lipari'' and ``recycling cans'' generated polynomial-time solvable special cases as discussed in Section~\ref{Improvements}.
The maximum range was about 50-80\% of the maximal possible value in most instances.
We can also see all results detailed in the appendix~(Section~\ref{Attributes of the websearch instances}).
\paragraph{Results}
We are able to compute the optimal consensus of most instances with up to $30$ candidates efficiently.
For some instances we get remarkable results:
Searching for ``citrus groves'' produced an election with eleven candidates.
The algorithm \texttt{kconsens\_cands} was able to compute the Kemeny score in 0.281 seconds and \texttt{4-kconsens} in 0.21 seconds.
In contrast, \texttt{kconsens\_cands} was substantially more efficient for all randomly generated instances with eleven candidates.
This observation is even clearer for the search term ``cheese'', an election with $18$ candidates, with a running time of more than 200 seconds for \texttt{kconsens\_cands}, 52.11 seconds for \texttt{kconsens\_triples}, and only 0.291 seconds for \texttt{4-kconsens}.
We can see more such instances (like ``bicycling'') in the results table (Section~\ref{Attributes of the websearch instances}) in the Appendix.
Another interesting observation is that the size of the dirty-sets we use in the search tree influences the running time very much in some instances:
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[scale=1.25]{img/rand_sset-runningtime.pdf}\\
\end{tabular}
\end{center}
\caption{Web search data: Running-time against the size of the dirty set.}
\label{Web search data: Running-time against the size of the dirty set}
\end{figure}
Searching for ``classical guitar'' produced an election, where we compute the Kemeny score with \texttt{kconsens\_cands} in 112 seconds, with \texttt{4-kconsens} in 105 seconds and with \texttt{5-kconsens} in only 0.041 seconds.
In contrast, the running time increases slightly if we use \texttt{$t$-kconsens} with $t>5$.
See Figure~\ref{Web search data: Running-time against the size of the dirty set}.
We solve \textsc{Kemeny Score} with \texttt{$t$-kconsens} for $t \in \{4,\dots,10\}$ on the elections that are produced by the web searches for ``classical guitar'' and ``java''.
\section{Kemeny's voting scheme}
\label{Kemeny's voting scheme}
\index{Kemeny's voting scheme}
There are many situations, where one has to get an ordered list of \textit{candidates} by aggregating inconsistent information.
For example, in plurality voting systems each voter determines which candidate is the best.
He\footnote{For the sake of simplicity we use male sex for all candidates. This also applies to drivers, politicians, and so on.} cannot affect the order of the remaining candidates among each other.
Our aim is to get an order of the candidates that best reflects the opinion of the voters.
The disadvantage is that the information (which is provided or used) of each vote is incomplete with respect to the solution.
Of course, there are also some advantages:
Sometimes it might be easier for a voter to determine his vote because he only has to know who is the best for him.
There are efficient ways to compute the resulting preference list.
To analyse attributes of different (and more complex) voting systems, we introduce a formal view of a voting system.
The input is an \textit{election} $(V,C)$ consisting of a set $V=\left\lbrace v_1,v_2,...,v_n \right\rbrace $ of votes over a set $C$ of $m$ candidates.
One vote is a \textit{preference list} of the candidates, that is, each vote puts the candidates in an ordered list according to preference.
The solution is a single preference list, whose computation depends on the respective voting system.
Although we can use this formalism already for plurality voting systems, there are many situations with more intricate voting scenarios.
For example, different sports competitions lead to a voting scenario, where we have preference lists anyway.
For instance, the results of each race in one Formula One season form inconsistent information about the skills of the drivers.
At the end of each season we do not only want to see a world champion, but also a complete preference list of the drivers referring to their skills.
The FIA\footnote{F\'ed\'eration Internationale de l'Automobile} has used several point-scoring systems~\cite{wiki:f1a} to determine the overall preference list.
None of these systems took the whole race results into account.
As a consequence, the overall result might not fairly reflect the driver's skills.
\begin{example}
\label{ex_f1}
In a fictive season there are the two drivers Adrian and Bob and 14 other drivers.
We use the point-scoring system applied from the year 2003 until the present (2009):
\begin{center}
\begin{tabular}{lllll}
1st place & 10 points & ~ & 5th place & 4 points \\
2nd place & 8 points & ~ & 6th place & 3 points \\
3rd place & 6 points & ~ & 7th place & 2 points \\
4th place & 5 points & ~ & 8th place & 1 point \\
\end{tabular}
\end{center}
At the end of the season, the drivers are ranked according to their point sums.
Adrian is the last driver who passes the finish line in each race.
In one race eight drivers (inclusive Bob) fail so that Adrian gets one point.
In all other races, Bob is getting the 9th place and no points.
Finally the point-scoring system ranks Adrian better than Bob while it is obvious that Bob was ``more successful'' in that season.
\end{example}
Although this example is overstated, it illustrates the problem of using a voting scenario that only uses a (small) subset of the pairwise relations between the candidates.
Thus, it is desirable to use a voting system that reflects the whole race results.
In this case, ``reflecting the whole input information'' means that each position in the preference list of a vote may affect the solution list.
(It is obvious, that the plurality voting system does not reflect the whole input information.)
Borda is a well-known example among point-scoring systems.
The \textit{Borda} (or Borda count) voting system determines the winner of an election by giving each candidate a certain number of points corresponding to the position in which he is ranked by each voter.
As result, we get a preference list, where all candidate are ranked according to their points sums.
Furthermore, we take another important attribute of voting systems into account.
Informally, the Condorcet winner\footnote{A Condorcet winner will not always exist in a given set of votes, which is known as Condorcet's voting paradox.} is the candidate who would win a two-candidate election against each of the other candidates~(Definition~\ref{Condorcet criterion}).
Unfortunately, there is no guarantee that the Borda winner is also the Condorcet winner~\cite{Kla05}.
\begin{definition}
\label{Condorcet criterion}
The \textbf{Condorcet winner} of an election is the candidate who, when compared with each other candidate, is preferred to every other candidate in more than half of the votes.
A voting system satisfies the \textbf{Condorcet criterion} if it chooses the Condorcet winner when one exists.
\end{definition}
\begin{example}
\label{Condorcet criterion example}
Consider the election $(V,C)$ with $V=\{v_1,\dots,v_5\}$ and $C=\{a,b,c\}$.
Each voter assigns three points to the most preferred candidate, two points to the second most preferred candidate, and one point to the least preferred candidate.
We have the following votes:\\
\begin{center}
\begin{tabular}{ r l }
$v_1:$ & $a>b>c$\\
$v_2:$ & $a>b>c$\\
$v_3:$ & $a>b>c$\\
$v_4:$ & $b>c>a$\\
$v_5:$ & $b>c>a$\\
\end{tabular}\\
\end{center}
In other words, $a$ gets 11 points, $b$ gets 12 points, and $c$ gets 7 points.
The Borda winner is $b$, although the Condorcet winner $a$ is preferred to every other candidate in three of five votes.
\end{example}
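The following short Python sketch is our own illustration of this example (the vote encoding as lists of candidates and the function names are ours and do not stem from the cited literature); it recomputes the Borda scores and checks for a Condorcet winner.
\begin{verbatim}
# Sketch: Borda scores and the Condorcet winner for the votes of the example.
votes = [list("abc"), list("abc"), list("abc"), list("bca"), list("bca")]
candidates = sorted(set(votes[0]))

def borda_scores(votes):
    m = len(votes[0])
    scores = {c: 0 for c in votes[0]}
    for vote in votes:
        for position, candidate in enumerate(vote):
            # m points for 1st place, m-1 for 2nd place, ..., 1 point for last
            scores[candidate] += m - position
    return scores

def condorcet_winner(votes, candidates):
    n = len(votes)
    for c in candidates:
        # c must beat every other candidate in more than half of the votes
        if all(sum(v.index(c) < v.index(d) for v in votes) > n / 2
               for d in candidates if d != c):
            return c
    return None   # no Condorcet winner exists (Condorcet's voting paradox)

print(borda_scores(votes))                   # {'a': 11, 'b': 12, 'c': 7}
print(condorcet_winner(votes, candidates))   # a
\end{verbatim}
The output confirms the discussion above: the Borda winner is $b$, whereas the Condorcet winner is $a$.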
One famous voting system that satisfies the Condorcet criterion is \textit{Kemeny's voting scheme}.
It goes back to Kemeny~\cite{Kem59} and was specified by Levenglick~\cite{Lev75} in 1975.
The result of this voting scheme is the so-called \textit{Kemeny consensus}.
It is a preference list $l$ that is ``closest'' to the input preference lists of the votes.
In this case ``closest'' is formally defined as the minimum sum of \textit{Kendall-Tau distance}s between $l$ and each vote $v_i$.
The Kendall-Tau distance between the votes $v$ and $w$ is defined as
\begin{equation}
\ktdist(v,w)=\sum_{\{c,d\}\subseteq C} d_{v,w}(c,d)
\end{equation}
where the sum is taken over all unordered pairs $\{c,d\}$ of candidates, and $d_{v,w}(c,d)$ is set to $0$ if $v$ and $w$ rank $c$ and $d$ in the same relative order, and otherwise it is set to~1.
Using a divide-and-conquer algorithm, one can compute the Kendall-Tau distance in $O(m \cdot \log m)$~\cite{KT06}.
We define the \textit{score} of a preference list $l$ in an election $(V,C)$ as $\sum_{v \in V} \ktdist(l,v)$.
That is, the Kemeny consensus (or Kemeny ranking) of $(V,C)$ is a preference list with minimum score, called the \textit{Kemeny score} of $(V,C)$.
Clearly, there can be more than one optimal preference list.
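To make these definitions concrete, the following Python sketch (our own; not one of the algorithms analysed in this work) computes the Kendall-Tau distance by a naive $O(m^2)$ comparison of all candidate pairs and determines a Kemeny consensus by brute force over all $m!$ preference lists, which is of course only feasible for very small candidate sets.
\begin{verbatim}
# Sketch: naive O(m^2) Kendall-Tau distance and a brute-force Kemeny
# consensus obtained by enumerating all m! preference lists.
from itertools import combinations, permutations

def kt_distance(v, w):
    pos_v = {c: i for i, c in enumerate(v)}
    pos_w = {c: i for i, c in enumerate(w)}
    # count the unordered pairs that v and w rank in a different relative order
    return sum((pos_v[c] < pos_v[d]) != (pos_w[c] < pos_w[d])
               for c, d in combinations(v, 2))

def score(l, votes):
    return sum(kt_distance(l, v) for v in votes)

def kemeny_consensus(votes):
    best = min(permutations(votes[0]), key=lambda l: score(list(l), votes))
    return list(best), score(list(best), votes)

votes = [list("abc")] * 3 + [list("bca")] * 2
print(kemeny_consensus(votes))   # (['a', 'b', 'c'], 4)
\end{verbatim}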
Altogether, we arrive at the decision problem behind the computation of the Kemeny consensus:
\begin{verse}
\textsc{Kemeny Score}\\
\textit{Input:} An election $(V,C)$ and a positive integer $k$.\\
\textit{Output:} Is the Kemeny score of $(V,C)$ at most $k$?
\end{verse}
All algorithms in this work not only solve \textsc{Kemeny Score} itself, but also compute the optimal score and a corresponding consensus list for the given election.\\
While using sports competition results to define input preference lists is easy, it seems more difficult to use Kemeny's voting scheme for voting systems with many candidates.
The voters may not be able or not willing to provide a complete preference list for all candidates.
An example is a human resources department in which four persons have to determine a ranking of a hundred applicants.
Here, the goal might be to select the top five applicants, with each person providing a ranking of all applicants.
Of course, there are also special situations with only a few candidates where the voters provide complete preference lists.
In the case of local elections in German politics, there are usually only five to ten candidates.
(Nevertheless, a majority voting system is used for these candidates.)
However, in politics voting systems that use preference lists as input are very rare at present.
They are for example used to elect members of the Australian House of Representatives, the President of Ireland, the national parliament of Papua New Guinea, and the Fijian House of Representatives~\cite{wiki:iro}.
There are many other scenarios where it is easy to extract a set of preference lists from the input information.
For example, Kemeny's voting scheme is used in genetic analysis~\cite{JSA08}, meta search engines~\cite{DKNS01a}, database applications~\cite{FKSMV04}, or fighting spam~\cite{LZ05,CDN05}.
Therefore, solving \textsc{Kemeny Score} efficiently is important.
In the following paragraph we will summarize the state of the art regarding the classical computational complexity of \textsc{Kemeny Score}.
\paragraph{Complexity}
Bartholdi et al.~\cite{BTT89} showed that \textsc{Kemeny Score} is NP-complete. Since \textsc{Kemeny Score} has practical relevance, polynomial-time algorithms are highly desirable.
So there are several studies for approximation algorithms with polynomial running time.
A deterministic approximation algorithm with factor~$8/5$ was shown by van Zuylen et al.~\cite{vZW07}.
With a randomized algorithm it is possible to improve the factor to~$11/7$~\cite{ACN05}.
Recent studies~\cite{KS07} showed that there is also a polynomial-time approximation scheme (PTAS) for \textsc{Kemeny Score}, but the corresponding running time is not practical.
In several applications exact solutions are indispensable.
Hence, a parameterized complexity analysis might be a way out.
That is why we concentrate on methods of parameterized algorithms in the following.
The next paragraph contains a survey of our results.
\paragraph{Survey}
In this work we will analyse and develop algorithms that solve \textsc{Kemeny Score} efficiently when the parameter $k$, the Kemeny score, is small.
More precisely, we will provide an algorithm that decides \textsc{Kemeny Score} in $O(1.5079^k + m^2 \cdot n)$ time.
This is an improvement from $O(1.53^k + m^2 \cdot n)$ in previous work~\cite{BFGNR08b}.
We will discuss some tricks and heuristics to improve the running time in practice and develop a polynomial-time data reduction rule.
Together with an implementation of another algorithm in~\cite{BFGNR08b} solving \textsc{Kemeny Score} efficiently when the parameter ``number of candidates'' is small, we will get a framework to compute optimal solutions for real-world instances with up to 30 candidates.
(The number of votes does not affect the running time noticeably.)
We will show that we can use Kemeny rankings to evaluate sports competitions and to create meta search engines without counting on heuristics and approximative solutions.
Tests on real-world data will show that the data reduction rule seems to be useful.
\section{Known results}
\label{Known results}
\index{Known results}
We already know that \textsc{Kemeny Score} is $\complNP$-hard.
At present, this means that computing an optimal Kemeny consensus takes exponential time in the worst case.
In several applications, exact solutions are indispensable.
Hence, a parameterized complexity analysis might be a way out.
Here one typically faces an exponential running time component depending only on a certain parameter, cf.\ Section~\ref{Fixed-parameter tractability}.
An important parameter for many problems is the size of the solution.
In case of \textsc{Kemeny Score} this is the ``score of the consensus''.
Betzler et al.~\cite{BFGNR08a} showed that \textsc{Kemeny Score} can be solved in $O(1.53^k+m^2 \cdot n)$ time with $k$ being the score of the consensus.
They also showed that one can solve \textsc{Kemeny Score} in $O((3d+1)! \cdot d \log d \cdot m n)$ time with $d$ being the maximum Kendall-Tau distance between two input votes,
in $O(2^m \cdot m^2 \cdot n)$ time and in $O((3r+1)! \cdot r \log r \cdot m n)$ time with $r$ being the maximum range of candidate positions. Another interesting parameter for parameterized
computational complexity analysis is of course ``number of votes'', but Dwork et al.~\cite{DKNS01a,DKNS01b} showed that the NP-completeness holds even if the number of votes is only four.
Hence, there is no hope for fixed-parameter tractability with respect to this parameter.
In recent studies, Betzler et al.~\cite{BFGNR08b} showed that \textsc{Kemeny Score} can be solved in $O(n^2 \cdot m \log m +16^{d_A} \cdot (16 {d_A}^2 \cdot m + 4d_A \cdot m^2 \log m \cdot n))$ time with
$d_A=\lceil d_a \rceil$ and $d_a$ being the average Kendall-Tau distance.
Furthermore, this is clearly an improved algorithm for the parameterization by the maximum Kendall-Tau distance.
Because the maximum range of candidate positions is at most $2 \cdot d_a$~\cite{BFGNR08b}, we also have an improved algorithm for the parameterization by the maximum range of candidate positions.
In the next subsection, we will examine more closely the parameterized algorithms for the parameter ``score of the consensus''~\cite{BFGNR08a}.
Later on, we will improve the results and describe an algorithm that solves \textsc{Kemeny Score} in $O(1.5079^k+m^2 \cdot n)$ time.
Independently from this work, an even more improved algorithm was developed in~\cite{S09}.
They use the minimum-weight feedback arc set problem to provide a quite similar branching strategy and to obtain an upper bound of $O(1.403^k)$ on the search tree size.
\subsection{Known results for the parameter Kemeny score}
\label{Known results for parameterization by score}
A trivial search tree for \textsc{Kemeny Score} can be obtained by branching on the dirty pairs.
More precisely, we can branch into the two possible relative orders of a dirty pair at each search tree node.
The parameter will be decreased at least by one in both cases.
Actually, it will be decreased by more than one in many cases.
Thus, we get a search tree of size $O(2^k)$.
Since we want to compute the consensus list we also want to know the relative order of the non-dirty pairs.
Fortunately, the relative order of all non-dirty candidates and all non-dirty pairs is already fixed:
\begin{lemma}\label{nondirty-lemma}
Let $(V,C)$ be an election and let $a$ and $b$ be two candidates in $C$. If $a > b$ in all votes, then every Kemeny consensus has $a > b$.
\end{lemma}
The correctness of Lemma~\ref{nondirty-lemma} follows from the Extended Condorcet criterion~\cite{T98}.
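As a small illustration, the following Python sketch (our own) determines the dirty pairs of an election, treating a pair as dirty if not all votes agree on its relative order; by Lemma~\ref{nondirty-lemma}, the relative order of all remaining pairs can be fixed right away.
\begin{verbatim}
# Sketch: enumerate the dirty pairs of an election, i.e. the unordered
# pairs of candidates on whose relative order not all votes agree.
from itertools import combinations

def dirty_pairs(votes):
    dirty = []
    for c, d in combinations(votes[0], 2):
        orders = {v.index(c) < v.index(d) for v in votes}
        if len(orders) == 2:      # both relative orders occur -> dirty pair
            dirty.append((c, d))
    return dirty

votes = [list("abcd"), list("abdc"), list("bacd")]
print(dirty_pairs(votes))   # [('a', 'b'), ('c', 'd')]
\end{verbatim}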
The fact of the following lemma is well-known.
For the sake of completeness we provide a proof.
\begin{lemma}\label{two-votes-trivial-lemma}
\textsc{Kemeny Score} is solvable in polynomial time for instances with at most two votes.
\end{lemma}
\begin{proof}
For instances with one vote: Take the vote as consensus; its score is zero.
For instances with two votes: Take one of the votes as consensus.
The score will be $s_v:=\sum_{\{a,b\} \subseteq C} d_{v_1,v_2}(a,b)$.
For each preference list $p$ with $v_1 \neq p \neq v_2$ the score will be at least $s_v$, because for each pair $\{a,b\}$ it holds that $\sum_{v \in \{v_1,v_2\}} d_{p,v}(a,b) \ge d_{v_1,v_2}(a,b)$.
This can be proved by contradiction: Assume that $\sum_{v \in \{v_1,v_2\}} d_{p,v}(a,b) < d_{v_1,v_2}(a,b)$. In this case $d_{v_1,v_2}(a,b)$ has to be $1$.
We show that $\sum_{v \in \{v_1,v_2\}} d_{p,v}(a,b)$ cannot be $0$.
Since $v_1$ and $v_2$ rank $a$ and $b$ in a different order, $p$ and $v_1$ cannot rank $a$ and $b$ in the same order if $p$ and $v_2$ rank $a$ and $b$ in the same order.
\end{proof}
It follows from Lemma~\ref{two-votes-trivial-lemma} that we are only interested in instances with at least three votes.
Let $\{a,b\}$ denote a dirty pair.
The hardest case for the analysis of the branching number occurs if $a>b$ holds in only one vote.
Then there are at least two votes with $b>a$.
This helps to obtain a better estimate of the branching vector in the search tree:
looking at the search tree again, we see that the parameter decreases by at least $2$ in at least one of the two branching cases.
Thus, it is easy to verify that the search tree size of this trivial algorithm is $O(1.618034^k)$.
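To illustrate this trivial branching strategy, the following Python sketch (our own simplification; it checks consistency only at the leaves instead of pruning inconsistent branchings early, and it implements none of the refinements discussed below) decides whether an election admits a consensus list of score at most $k$.
\begin{verbatim}
# Sketch of the trivial search tree: branch on the two relative orders of
# each dirty pair and decrease the budget by the number of disagreeing votes.
from itertools import combinations

def disagreements(votes, c, d):
    # number of votes that rank d before c, i.e. disagree with fixing c > d
    return sum(v.index(d) < v.index(c) for v in votes)

def kemeny_score_at_most(votes, k):
    candidates = list(votes[0])
    pairs = list(combinations(candidates, 2))
    dirty = [(c, d) for c, d in pairs
             if 0 < disagreements(votes, c, d) < len(votes)]
    fixed = [(c, d) for c, d in pairs if disagreements(votes, c, d) == 0]
    fixed += [(d, c) for c, d in pairs if disagreements(votes, d, c) == 0]

    def acyclic(arcs):
        # an acyclic tournament is a total order, so all out-degrees differ
        outdeg = {c: 0 for c in candidates}
        for c, d in arcs:
            outdeg[c] += 1
        return len(set(outdeg.values())) == len(candidates)

    def branch(i, arcs, budget):
        if budget < 0:
            return False
        if i == len(dirty):
            return acyclic(arcs)
        c, d = dirty[i]
        return (branch(i + 1, arcs + [(c, d)],
                       budget - disagreements(votes, c, d)) or
                branch(i + 1, arcs + [(d, c)],
                       budget - disagreements(votes, d, c)))

    return branch(0, fixed, k)

votes = [list("abc")] * 3 + [list("bca")] * 2
print(kemeny_score_at_most(votes, 4), kemeny_score_at_most(votes, 3))  # True False
\end{verbatim}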
Betzler et al.~\cite{BFGNR08a} showed that there is an improved search tree by branching on \textit{dirty triples}, that is a set of three candidates, such that at least two pairs
of them are dirty pairs. The size of the resulting search tree is $O(1.53^k)$.
Intuitively, there is hope that branching on a ``dirty set with more than three candidates'' will decrease the size of the search tree further. This is what we examine next.
\subsection{A closer look on the search trees}
\label{Optimization of the search tree}
Now, we closely examine the search tree algorithms that decide \textsc{Kemeny Score} as described in~\cite{BFGNR08a}.
In both search trees, one computes a consensus list by fixing the relative order of the candidates of each dirty pair in one search tree node.
In the \textit{trivial search tree} we fix the relative order of the candidates of one dirty pair per search tree node.
In the \textit{triple search tree} we fix the relative orders of the candidates of all dirty pairs that are involved in one dirty triple per search tree node.
At the node where we fix the order and at all child nodes of this node, we denote these dirty pairs as \textit{non-ambiguous}.
Intuitively, a pair is called \textit{ambiguous} if the relative order of its candidates has not been fixed yet.
At every search tree leaf, all pairs are non-ambiguous so that the relative order of the candidates of each dirty pair is fixed.
That is, the consensus list is uniquely determined if the fixed orders are consistent.
At this point, we can make some observations:
\begin{observation}
\label{obs_subscorebreak}
At each search tree node, the parameter decreases according to the subscore of the set of fixed pairs.
\end{observation}
We can compute the Kemeny score by summing up the subscores of the sets of fixed pairs.
Clearly, each dirty pair will be fixed only once.
Thus, Observation~\ref{obs_subscorebreak} is correct.
\begin{observation}
\label{obs_consistentbreak}
One has the termination condition:
If the set of non-ambiguous pairs is inconsistent, then discard the branching.
\end{observation}
Let $U$ denote the set of non-ambiguous pairs in a search tree node $u$.
Then, $U$ is clearly a subset of the set of non-ambiguous pairs in each subtree node of $u$.
Clearly, a superset of an inconsistent set is inconsistent, too.
Thus, Observation~\ref{obs_consistentbreak} is correct.
\begin{figure}
\begin{center}
\begin{scriptsize}
We consider the search tree of the $O(2^k)$ algorithm.
The following trees are only small sections of a complete search-tree for an election with at least five candidates $\{a,b,c,x,y\}$,
where at least $\{a,b\}$, $\{b,c\}$, $\{a,c\}$ and $\{x,y\}$ are dirty pairs.\\
\childsidesep{0.3em}
\childattachsep{0.25in}
\synttree[{original tree - showing how relations get fixed} [\color{blue}\boldmath$a>b$ [\color{black}$x>y$ [\color{blue}\boldmath$b>c$[\color{blue}\boldmath$a>c$\color{black}][\color{blue}\boldmath$c>a$\color{black}]]
[\color{blue}\boldmath$c>b$[\color{blue}\boldmath$a>c$\color{black}][\color{blue}\boldmath$c>a$\color{black}]]]
[\color{black}$y>x$ [\color{blue}\boldmath$b>c$[\color{blue}\boldmath$a>c$\color{black}][\color{blue}\boldmath$c>a$\color{black}]]
[\color{blue}\boldmath$c>b$[\color{blue}\boldmath$a>c$\color{black}][\color{blue}\boldmath$c>a$\color{black}]]]]
[\color{blue}\boldmath$b>a$ [\color{black}$x>y$ [\color{blue}\boldmath$b>c$[\color{blue}\boldmath$a>c$\color{black}][\color{blue}\boldmath$c>a$\color{black}]]
[\color{blue}\boldmath$c>b$[\color{blue}\boldmath$a>c$\color{black}][\color{blue}\boldmath$c>a$\color{black}]]]
[\color{black}$y>x$ [\color{blue}\boldmath$b>c$[\color{blue}\boldmath$a>c$\color{black}][\color{blue}\boldmath$c>a$\color{black}]]
[\color{blue}\boldmath$c>b$[\color{blue}\boldmath$a>c$\color{black}][\color{blue}\boldmath$c>a$\color{black}]]]]
]\\
\ \\
We will change the fixing order.
Remark: The leaves of the new tree contain the same combination (of fixed relative orders) as the leaves of the original tree.
Thus, a changed fixing order does not affect the correctness of the algorithm (completeness of the search-tree).
\childsidesep{0.3em}
\childattachsep{0.25in}
\synttree[\color{black}{sorted pair sequence} [\color{blue}\boldmath$a>b$ [\color{blue}\boldmath$b>c$ [\color{blue}\boldmath$a>c$[\color{black}$x>y$][\color{black}$y>x$]]
[\color{blue}\boldmath$c>a$[\color{black}$x>y$][\color{black}$y>x$]]]
[\color{blue}\boldmath$c>b$ [\color{blue}\boldmath$a>c$[\color{black}$x>y$][\color{black}$y>x$]]
[\color{blue}\boldmath$c>a$[\color{black}$x>y$][\color{black}$y>x$]]]]
[\color{blue}\boldmath$b>a$ [\color{blue}\boldmath$b>c$ [\color{blue}\boldmath$a>c$[\color{black}$x>y$][\color{black}$y>x$]]
[\color{blue}\boldmath$c>a$[\color{black}$x>y$][\color{black}$y>x$]]]
[\color{blue}\boldmath$c>b$ [\color{blue}\boldmath$a>c$[\color{black}$x>y$][\color{black}$y>x$]]
[\color{blue}\boldmath$c>a$[\color{black}$x>y$][\color{black}$y>x$]]]]
]\\
\ \\
Instead of the marked subtrees, where we fix the relative orders of the pairs of $\{a,b,c\}$ successively, we create a new vertex, where we fix them at the same time.
Some combinations of fixed relative orders are inconsistent (like $a>b$, $b>c$ and $c>a$).
We can use this to provide only six new vertices in place of eight induced trees.
\childsidesep{0.5em}
\childattachsep{0.35in}
\synttree[{replaced node} [\color{blue}\boldmath$a>b>c$[\color{black}$x>y$][\color{black}$y>x$]]
[\color{blue}\boldmath$a>c>b$[\color{black}$x>y$][\color{black}$y>x$]]
[\color{blue}\boldmath$b>a>c$[\color{black}$x>y$][\color{black}$y>x$]]
[\color{blue}\boldmath$c>a>b$[\color{black}$x>y$][\color{black}$y>x$]]
[\color{blue}\boldmath$b>c>a$[\color{black}$x>y$][\color{black}$y>x$]]
[\color{blue}\boldmath$c>b>a$[\color{black}$x>y$][\color{black}$y>x$]]
]\\
\end{scriptsize}
\end{center}
\caption{Improving the search tree}
\label{Improving the search tree}
\end{figure}
The improvement in the triple search tree uses the following observation:
The sequence in which the dirty pairs are processed does not affect the correctness of the search tree.
In the trivial search tree we process the dirty pairs in arbitrary sequence.
For the triple search tree, we can assume that we process all dirty pairs involved in the same dirty triple successively.
We replace all search tree nodes that handle dirty pairs of the same dirty triple with one new node, where we branch on all possible relative orders of the candidates of the dirty triple (see Figure~\ref{Improving the search tree}~``Improving the search tree'').
This leads to a decreased branching number.
Our aim is to generalize this idea to ``dirty sets of arbitrary size'' and to obtain a more refined search tree algorithm.
Observations~\ref{obs_subscorebreak} and \ref{obs_consistentbreak} are valid for every branching strategy that fixes the relative order of the candidates of each dirty pair.
\section{Refinement of the search tree}
\label{Improved search tree with respect to the parameter ``Kemeny score''}
\index{Improved search tree with respect to the parameter ``Kemeny score''}
Now we want to design the more refined search tree.
So, we need a concept of a structure of arbitrary size that extends the known terms ``dirty pair'' and ``dirty triple''.
\subsection{Extending the concept of dirtiness}
\label{Extending the concept of dirtiness}
We start with defining ``dirtiness'' for a set of candidates of arbitrary size.
\begin{definition}
\label{dirty n-set}
Let $(V,C)$ be an election with $n$ votes and $m$ candidates.
For a subset $D \subseteq C$ the \textbf{dirty-graph} of $D$ is an undirected graph with $|D|$ vertices, one for each candidate from $D$, such that there is an edge between two vertices if the corresponding candidates form a dirty pair.
The subset $D$ is \textbf{dirty} if the dirty-graph of $D$ is connected.
We say that $D$ is a \textbf{dirty $j$-set} if $|D| = j$ and $D$ is dirty.
\end{definition}
Definition~\ref{dirty n-set} generalizes the concept of \textit{dirty pairs} in Definition~\ref{dirty pair} (which is a dirty 2-set)
and \textit{dirty triples} in Section~\ref{Known results for parameterization by score} (which is a dirty 3-set).
We will generalize the improvement of the search tree algorithm (Figure~\ref{Improving the search tree}) by branching on dirty $j$-sets with $j>3$ instead of dirty triples.
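The following Python sketch (our own) builds the dirty graph of an election and returns its connected components; each component is a maximal dirty set, and dirty $s$-sets can then be obtained as size-$s$ subsets whose dirty graph is connected.
\begin{verbatim}
# Sketch: build the dirty graph and return its connected components,
# i.e. the maximal dirty sets of the election.
from itertools import combinations

def dirty_graph(votes):
    candidates = list(votes[0])
    adj = {c: set() for c in candidates}
    for c, d in combinations(candidates, 2):
        orders = {v.index(c) < v.index(d) for v in votes}
        if len(orders) == 2:          # dirty pair -> edge in the dirty graph
            adj[c].add(d)
            adj[d].add(c)
    return adj

def maximal_dirty_sets(votes):
    adj = dirty_graph(votes)
    seen, components = set(), []
    for start in adj:
        if start in seen or not adj[start]:
            continue                  # skip candidates without dirty pairs
        stack, comp = [start], set()
        while stack:                  # depth-first search
            c = stack.pop()
            if c in comp:
                continue
            comp.add(c)
            stack.extend(adj[c] - comp)
        seen |= comp
        components.append(comp)
    return components

votes = [list("abcd"), list("bacd"), list("abdc")]
print(maximal_dirty_sets(votes))   # e.g. [{'a', 'b'}, {'c', 'd'}]
\end{verbatim}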
\section{The new search tree algorithm}
\label{Algorithm description}
In this section, we will describe the algorithm called \texttt{$s$-kconsens}.
\begin{verse}
\texttt{$s$-kconsens}\\
Program parameter: Maximal size of the analysed dirty sets $s$\\
Input: An election $(V,C)$ and a positive integer $k$\\
Output: A consensus list with a Kemeny score of at most $k$ or `no'
\end{verse}
Basically, \texttt{$s$-kconsens} works as follows.
In a prebranching step it computes the set of all dirty pairs and the corresponding dirty $s$-sets.
Then it branches according to the possible permutations of the candidates in the dirty $s$-sets.
We only branch into cases that are consistent in the sense of Observation~\ref{obs_consistentbreak} and decrease the parameter according to Observation~\ref{obs_subscorebreak}.
This part of the algorithm is called branching step.
If all dirty $s$-sets are handled, we fix the order of the candidates in the remaining dirty $t$-sets with $t<s$.
As we will show in Section~\ref{correctness} we can use an order that minimizes the corresponding subscore.
In doing so, it only takes into account permutations with consistent relation sets, as discussed in Observation~\ref{obs_consistentbreak}.
When all relative orders are fixed, we can compute the final consensus list in polynomial time. This part of the algorithm is called postbranching step.
The following subsections are organized as follows.
First, we give a more detailed description including high-level information about data structures.
Second, we show the correctness and analyse the running time of \texttt{$s$-kconsens}.
The theoretical analysis of the running time is restricted to the case $s=4$.
\subsection{Pseudo-code}
\label{Pseudo code}
Now, we will describe some details.
The algorithm \texttt{$s$-kconsens} uses an important object~$L$ that stores fixed relative orders of candidates as a set of ordered pairs.
We denote this set as $L_O$.
That is, $L_O := \{ (x,y) \mid L$ stored $x>y \}$.
In each storage call, $L$ determines a set of ordered pairs according to the newly fixed relative orders of candidates.
That is, $L$ computes the relation set $L_N$ of a given permutation and adds it to $L_O$.
Analogously to Section~\ref{Optimization of the search tree}, we will denote a pair of candidates $\{a,b\}$ as \textit{ambiguous} if $L$ does not store the relative order of $a$ and $b$.
Otherwise we call it \textit{non-ambiguous}.
In a later section, we will discuss the implementation of $L$.
It provides the following concrete functions:
\begin{description}
\item[$\bf L$.memorize($l$)] The argument $l$ is a preference list (permutation) of candidates in $C$.
It stores the relative orders of the candidates in $l$ (namely the set $L_N$).
That is, $L_O \leftarrow L_O \cup L_N$.
It returns `yes' if $L_N$ and $L_O$ agree, otherwise it returns `no'.
In addition, if there is any ambiguous pair and only one order of the candidates of this pair agrees with $L_O$ it stores this relative order, too.
For reference, we call this step \textit{ambiguous-check}.
\item[$\bf L$.ambiguous()] This function returns the set of ambiguous pairs.
\item[$\bf L$.getList()] This function returns `no' if there are ambiguous pairs.
Otherwise it returns a preference list $p$ such that $L_O$ agrees with $p$.
\item[$\bf L$.score()] This function returns the score implied by the stored relative orders, that is:
\begin{equation}
\sum_{v \in V}\sum_{\{c,d\}\subseteq C} d_v(c,d)
\end{equation}
where $d_v(c,d)$ is set to $1$ if $v$ ranks $c$ and $d$ in a different order than the one stored in $L$, and otherwise it is set to $0$.
In other words $L$.score() computes $subscore(\bar{D})$ with $\bar{D}$ being the set of non-ambiguous pairs.
Clearly, if there are no ambiguous pairs it returns the score of the uniquely determined consensus list.
\end{description}
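For illustration, the following Python class is a simplified rendition of $L$ (our own sketch; it is not the implementation discussed later in this work). It stores the fixed relative orders as a set of ordered pairs and keeps this set transitively closed, which loosely corresponds to the ambiguous-check described above.
\begin{verbatim}
# A simplified rendition of the data structure L.
from itertools import combinations

class RelationStore:
    def __init__(self, candidates, votes):
        self.candidates = list(candidates)
        self.votes = votes
        self.fixed = set()            # ordered pairs (x, y) meaning x > y

    def memorize(self, preference):   # preference: list of candidates, best first
        new = {(preference[i], preference[j])
               for i in range(len(preference))
               for j in range(i + 1, len(preference))}
        self.fixed |= new
        self._transitive_closure()
        # consistent iff no pair is stored in both orders
        return all((y, x) not in self.fixed for (x, y) in self.fixed)

    def _transitive_closure(self):
        changed = True
        while changed:
            changed = False
            for (x, y) in list(self.fixed):
                for (y2, z) in list(self.fixed):
                    if y == y2 and x != z and (x, z) not in self.fixed:
                        self.fixed.add((x, z))
                        changed = True

    def ambiguous(self):
        return [(c, d) for c, d in combinations(self.candidates, 2)
                if (c, d) not in self.fixed and (d, c) not in self.fixed]

    def score(self):
        # subscore of the non-ambiguous pairs (meaningful only if consistent)
        return sum(v.index(y) < v.index(x)
                   for (x, y) in self.fixed for v in self.votes)

    def get_list(self):
        if self.ambiguous():
            return None
        # a candidate that beats more others is placed further to the left
        return sorted(self.candidates,
                      key=lambda c: sum((c, d) in self.fixed
                                        for d in self.candidates),
                      reverse=True)

votes = [list("abc")] * 3 + [list("bca")] * 2
L = RelationStore("abc", votes)
print(L.memorize(list("ab")), L.memorize(list("bc")))   # True True
print(L.ambiguous(), L.score(), L.get_list())           # [] 4 ['a', 'b', 'c']
\end{verbatim}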
\begin{figure}
\begin{algorithmic}[1]
\Procedure{$s$-kconsens}{}
\State create new and empty $L$
\For{each unordered pair $\{a,b\}$}
\If{all votes in $V$ rank $a>b$}
\State $L$.memorize($a>b$)
\EndIf
\EndFor
\State return $s$-kconsens\_rek($L$)
\EndProcedure
\end{algorithmic}
\caption{
In the initialization (prebranching step) we store the relative orders of the candidates of all non-dirty pairs.
So $L$.ambiguous will return the set of dirty pairs after initialization.
This initialization is correct due to Lemma~\ref{nondirty-lemma}.
}
\label{init_kconsens_4set}
\end{figure}
\begin{figure}
\begin{algorithmic}[1]
\Function{$s$-kconsens\_rek}{$L$}
\If{$L$.score() $>$ $k$}
\State return `no'
\EndIf
\State $D \gets L$.ambiguous()
\If{$D$ contains a dirty $s$-set $D_s$}
\For{each permutation $l$ of candidates in $D_s$}
\State $L_N \gets L$
\If{$L_N$.memorize($l$) $=$ `yes'}
\State result $\gets$ $s$-kconsens\_rek($L_N$)
\If{result $\neq$ `no'}
\State return result
\EndIf
\EndIf
\EndFor
\State return `no'
\Else
\For{$t = s-1$ downto $2$}
\For{each dirty $t$-set $D_t$}
\State best\_perm($L$,$D_t$)
\EndFor
\EndFor
\EndIf
\If{$L$.score() $>$ $k$}
\State return `no'
\Else
\State return $L$.getList()
\EndIf
\EndFunction
\end{algorithmic}
\caption{
In the recursion part, we fix the relative order of the candidates by storing them in $L$.
There are two cases:
\textit{Case 1 branching step} (lines 6-16): There is a dirty $s$-set.
We try to store the relation set of each permutation separately.
If it was possible to store the relative order of the candidates of the permutation, we call the function recursively.
Otherwise (the recursive call returns `no'), we try another permutation.
If no recursive call returns `yes' we will return `no'.
\textit{Case 2 postbranching step} (lines 17-24): There is no dirty $s$-set.
We fix the relative orders of the candidates of each dirty $(s-1)$-set.
Thereafter, we fix the relative orders of the candidates of each dirty $(s-2)$-set, and so on.
Finally we can return the consensus list if the score is not greater than $k$, else we return `no'.
}
\label{kconsens_4set}
\end{figure}
\begin{figure}
\begin{algorithmic}[1]
\Function{perm}{$D_t$,$i$}
\State Return the $i$'th permutation of the candidates in $D_t$.
\EndFunction
\end{algorithmic}
\begin{algorithmic}[1]
\Function{best\_perm}{$L,D_t$}
\State scoreB $\gets$ $\infty$
\For{$i$ = 1 to $t!$}
\State $L_i$ $\gets$ $L$
\If{$L_i$.memorize(perm($D_t$,$i$)) = `yes'}
\If{$L_i$.score() $<$ scoreB}
\State $L_B$ $\gets$ $L_i$
\State scoreB $\gets$ $L_i$.score()
\EndIf
\EndIf
\EndFor
\State $L$ $\gets$ $L_B$
\EndFunction
\end{algorithmic}
\caption{
The support function \texttt{best\_perm} stores the relation set of the permutation of $D_t$ with the best subscore for the input election,
but of course it only accounts for sets that agree with $L_O$.
}
\label{kconsens_4set_support}
\end{figure}
The pseudo code of the algorithm is subdivided into three parts.
It consists of an initialization part (Figure~\ref{init_kconsens_4set}), a recursive part for the search tree (Figure~\ref{kconsens_4set}), and some supporting functions (Figure~\ref{kconsens_4set_support}).
Now, we are able to analyse the algorithm.
\subsection{Analysis of the search tree algorithm}
\label{Analysis of the algorithm}
\subsubsection{Correctness}
\label{correctness}
We already argued in Section~\ref{Optimization of the search tree} that branching according to the permutations of the candidates in all dirty sets solves \textsc{Kemeny Score}.
In the new algorithm, we only branch into the permutations of the candidates in all dirty $s$-sets, and compute the relative orders of the
candidates in the dirty $t$-sets for $t<s$ in the search tree leaves.
We have to show that it is correct to compute the best order of candidates in each dirty $t$-set without branching, that is:
\begin{lemma}
\label{lemma_fix_t-1sets}
The postbranching step of \texttt{$s$-kconsens} works correctly.
\end{lemma}
\begin{proof}
In the postbranching step \texttt{$s$-kconsens} handles all dirty $t$-sets with $t<s$ independently, that is, it chooses the permutation with the local minimum score.
We will show that for two maximal dirty $t$-sets $D_1$ and $D_2$ and for every $d_1 \in D_1$ with $d_1 \notin D_2$ and every $d_2 \in D_2$ with $d_2 \notin D_1$, the relative order of $d_1$ and $d_2$ is already fixed.
Assume that the relative order of $d_1$ and $d_2$ is not fixed.
Thus, $D_1 \cup \{d_2\}$ and $D_2 \cup \{d_1\}$ are dirty.
This conflicts with the maximality of $D_1$ and $D_2$.
\end{proof}
In the following, we analyse the running time of \texttt{$s$-kconsens}.
To this end, we first derive an upper bound for the search tree size.
Then we will analyse the running time in the search tree nodes.
\subsubsection{Search tree size.}
As mentioned before, we analyse the search tree size for $s=4$.
We can get an upper bound for the search tree size by analysing the branching number (see Section~\ref{Fixed-parameter tractability}).
As we already know from Section~\ref{Known results for parameterization by score}, the parameter decreases depending on the order of the candidates involved in dirty pairs at each search tree node.
So, to get the branching vector for the search tree of \texttt{$s$-kconsens}, we do a case distinction on the number of dirty pairs in the dirty 4-set $D_4=\{a,b,c,d\}$.
Since $D_4$ is dirty, its dirty graph is connected.
Each dirty pair corresponds to an edge in a connected graph with four vertices.
Thus, the minimal number of dirty pairs is three and the maximal number is six.
In each case we take a look at one search tree node. The algorithm branches according to the dirty 4-set. Depending on the number of involved dirty pairs, there is a fixed number
of branching cases (possible permutations for the candidates of $D_4$). We need to analyse how much the parameter decreases in each branching case.
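The parameter decrease in the individual branching cases can also be checked experimentally. The following Python sketch (our own; the input election is an arbitrary toy example) enumerates, for a given dirty set, all permutations of its candidates that do not contradict a unanimous pair and reports how much the parameter decreases in each case; sorting these values gives the branching vector of the corresponding search tree node.
\begin{verbatim}
# Sketch: compute the branching vector of a search tree node that branches
# on all consistent permutations of a given dirty set.
from itertools import combinations, permutations

def branching_vector(votes, dirty_set):
    n = len(votes)
    vector = []
    for perm in permutations(dirty_set):
        pos = {c: i for i, c in enumerate(perm)}
        decrease, consistent = 0, True
        for c, d in combinations(dirty_set, 2):
            disagree = sum((v.index(c) < v.index(d)) != (pos[c] < pos[d])
                           for v in votes)
            if disagree == n:         # permutation flips a unanimous pair
                consistent = False
                break
            decrease += disagree
        if consistent:
            vector.append(decrease)
    return sorted(vector)

# Toy election whose dirty 4-set {a, b, c, d} contains three dirty pairs.
votes = [list("abcd"), list("acbd"), list("badc")]
print(branching_vector(votes, "abcd"))   # [3, 4, 4, 4, 5]
\end{verbatim}
For this toy input, the three dirty pairs $\{a,b\}$, $\{b,c\}$, and $\{c,d\}$ yield exactly the worst-case branching vector $(3,4,4,4,5)$ that is analysed in the case of three dirty pairs below.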
As result of Lemma~\ref{two-votes-trivial-lemma} in Section~\ref{Known results for parameterization by score}, we have at least three votes.
For each dirty pair, there are two cases to fix the relative order of its candidates.
In the first case, the parameter will be decreased by only one.
We will say, that the pair is \textit{ordered badly} because this is the worst case for the analysis.
In the second case, the parameter will be decreased by at least two.
We will say, that the pair is \textit{ordered well}.
We start with discussing the cases that are easier to handle.
Note that in some cases it would be relatively easy to find a better upper bound.
We omit this, since improving a case only pays off if its bound is worse than the worst-case bound given in Lemma~\ref{lemma_3dirty}.
We will get a branching number of $1.50782$ in that case.
\begin{lemma}
\label{lemma_5dirty}
If we have five dirty pairs in $D_4$, then the branching number is at most $1.48056$.
\end{lemma}
\begin{proof}
If we have five dirty pairs, one pair must have the same relative order in all votes.
So, we have twelve possible permutations for the candidates of $D_4$, because in half of the $24$ permutations the candidates of this pair appear in the opposite relative order.
In the worst case all dirty pairs are ordered badly.
In this case, the parameter is decreased by $ (5\cdot1) = 5$.
Choosing one out of five pairs we have five cases, where one dirty pair is ordered well and four pairs are ordered badly.
The parameter is decreased by $(1\cdot2) + (4\cdot1) = 6$ in these cases.
For all other cases the parameter is decreased by at least $(2\cdot2) + (3\cdot1) = 7$, because we have at least two well ordered pairs and at most three badly ordered pairs.
Thus, we have the branching vector $(5, 6, 6, 6, 6, 6, 7, 7, 7, 7 , 7 , 7)$.
The corresponding branching number is $1.48056$.
\end{proof}
To prove the next lemma, we introduce a new type of auxiliary graph.
\begin{definition}
Let $(V,C)$ be an election.
For a subset $D \subseteq C$ the \textbf{relation graph} of $D$ is a directed graph with $|D|$ vertices, one for each candidate from $D$, such that there is an arc from vertex $x$ to vertex $y$ if the candidate corresponding to $x$ is preferred to the candidate corresponding to $y$ in each vote.
\end{definition}
\begin{observation}
\label{P3observation}
The relation graph of $D$ is acyclic and contains no induced $P_3$.
\end{observation}
Since the first part of Observation~\ref{P3observation} is trivial, we will only prove the second part:
An induced $P_3=(\{x,y,z\},\{(x,y),(y,z)\})$ in a relation graph would imply that all votes rank $x > y$ and $y > z$ in their preference lists.
Thus, we have $x > z$ in all votes and hence also the arc $(x,z)$ in the relation graph.
This contradicts the assumption that the $P_3$ is induced (and thus does not contain the arc $(x,z)$).
\begin{lemma}
\label{lemma_6dirty}
If we have six dirty pairs in $D_4$, then the branching number is $1.502792$.
\end{lemma}
\begin{proof}
If all pairs of $D_4$ are dirty, we have to take into account all $4! = 24$ permutations of the candidates.
Now, we will analyse how much the parameter will decrease depending on the numbers of well ordered and badly ordered pairs.
\begin{description}
\item[Case 1:] Every branching possibility contains at least one well ordered pair.\\
Choosing one out of six pairs there are at most six cases with only one well ordered pair and five pairs are ordered badly.
The parameter is decreased by $(1\cdot2) + (5\cdot1) = 7$ in these cases.
Choosing two out of six pairs, we have 15~other cases with two well ordered and four badly ordered pairs.
The parameter will decrease by $(2\cdot2) + (4\cdot1) = 8$.
In the remaining three cases, the parameter is decreased by at least $(3\cdot2) + (3\cdot1) = 9$.
This causes a branching vector of $(7, 7, 7, 7, 7, 7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 9, 9, 9)$.
So, the branching number is $1.502792$.
\item[Case 2:] There is a branching possibility that contains only badly ordered pairs.\\
Trivially, choosing one out of six pairs there are at most six cases with only one well ordered pair.
We will now show that there are at most four such cases.
We get all possible cases by assuming that all pairs are ordered badly and flipping the order of one single pair.
We will show, that this is only possible for four of the six pairs:
We already know that ordering all pairs badly will cause no cycle in the relations graphs of the subsets of $D_4$: $\{a, b, c\}$, $\{a, d, c\}$ and $\{b, c, d\}$.
\\
\textit{Claim:} Flipping the order of (at least) two of the six pairs causes a cycle in the relation graph of $D_4$.
\\
\textit{Proof:} For each of the three sets above, there is exactly one pair whose flip causes a cycle in the corresponding relation
graph\footnote{Assume that $x>y>z$. This implies $x>y$, $y>z$, and $x>z$.
We can flip $y>z$ to $z>y$ so that $x>z>y$ and we can flip $x>y$ to $y>x$ so that $y>x>z$.
Flipping only $x>z$ to $z>x$ would mean a cycle in the relations graph of $\{x,y,z\}$: $z \rightarrow x \rightarrow y \rightarrow z$.
This contradicts Observation~\ref{P3observation}.}.
We denote this pair as \textit{cycle pair}.
\begin{description}
\item[Case 2.a] For $\{a, b, c\}$ and $\{a, d, c\}$ the cycle pair is the same.
Thus, it must hold $s_1 =\{a, c\}$.
Then, for $\{b, c, d\}$ we have another pair $s_2 \in \{\{b, c\}, \{c, d\}, \{b,d\}\}$ with $s_2 \neq s_1$.
\item[Case 2.b] For $\{a, b, c\}$ and $\{a, d, c\}$ the cycle pair is not the same. Then, we already have two different pairs.
\end{description}
Thus, flipping the order of a single pair yields a consistent relation set for at most four of the six pairs.
Hence, we actually have at most four cases where only one pair is ordered well.
Analogously to Case 1, we have another 15 cases with two well ordered pairs.
Thus, we have the branching vector $(6, 7, 7, 7, 7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 9, 9, 9, 9)$.
The corresponding branching number is $1.502177$.
\end{description}
\end{proof}
\begin{lemma}
\label{lemma_4dirty}
If we have four dirty pairs in $D_4$, then the branching number is $1.496327$.
\end{lemma}
\begin{figure}
\begin{center}
\begin{tabular}{ r r r }
1. $a > b > c > d$ & 9. $c > a > d > b$ & 17. $b > c > d > a$ \\
2. $a > b > d > c$ & 10. $d > a > c > b$ & 18. $b > d > c > a$ \\
3. $a > c > b > d$ & 11. $c > d > a > b$ & 19. $c > b > a > d$ \\
4. $a > d > b > c$ & 12. $d > c > a > b$ & 20. $d > b > a > c$ \\
5. $a > c > d > b$ & 13. $b > a > c > d$ & 21. $c > b > d > a$ \\
6. $a > d > c > b$ & 14. $b > a > d > c$ & 22. $d > b > c > a$ \\
7. $c > a > b > d$ & 15. $b > c > a > d$ & 23. $c > d > b > a$ \\
8. $d > a > b > c$ & 16. $b > d > a > c$ & 24. $d > c > b > a$
\end{tabular}
\end{center}
\caption{Table of permutations}
\label{Table of permutations}
\end{figure}
\begin{proof}
Two pairs must have the same relative order in all votes, because only four of the six pairs are dirty.
Now we will look at the relation graphs of $D_4$~(Figure~\ref{Relation graphs of $D_4$ if two pairs have fixed order.}).
According to Observation~\ref{P3observation}, there is no induced $P_3$.
Thus, there are up to isomorphism only 3 possible relation graphs of $D_4$.
Either the pairs are independent (see $G_1$) or they share one candidate (see $G_2$ and $G_3$).
\begin{figure}
\begin{center}
\begin{tabular}{l|l|l}
{}$G_1$ & $G_2$ & $G_3$\\
\includegraphics[scale=1.0]{img/fourgra01.pdf} & \includegraphics[scale=1.0]{img/fourgra02.pdf} & \includegraphics[scale=1.0]{img/fourgra03.pdf}\\
\end{tabular}
\end{center}
\caption{Relation graphs of $D_4$ if two pairs have fixed order.}
\label{Relation graphs of $D_4$ if two pairs have fixed order.}
\end{figure}
Now we will analyse each possible relation graph of $D_4$.
\begin{description}
\item[$G_1$:] The relative orders $a > d$ and $b > c$ are fixed.\\
For this case the permutations $3,5,\dots,12,16,\dots,24$ (see Figure~\ref{Table of permutations}) are not possible.
Only six permutations are left over.
A simple calculation analogous to Lemma~\ref{lemma_5dirty} gives the branching vector $(4, 5, 5, 5, 5, 6)$.
Thus, the branching number is $1.437259$.
\item[$G_2$:] The relative orders $a > d$ and $a > c$ are fixed.\\
For this case the permutations $7,\dots,12,15,\dots,24$ (see Figure~\ref{Table of permutations}) are not possible.
\item[$G_3$:] The relative orders $d > a$ and $c > a$ are fixed.\\
For this case the permutations $1,\dots,10,13,\dots,16,19,20$ (see Figure~\ref{Table of permutations}) are not possible.
\end{description}
In both graphs $G_2$ and $G_3$ only eight permutations are left over.
Analogously to $G_1$, we get the branching vector $(4, 5, 5, 5, 5, 6, 6, 6)$.
The branching number is $1.496327$.
\end{proof}
For the proof of the next lemma, we introduce another type of auxiliary graph.
\begin{definition}
For a subset $D \subseteq C$ the \textbf{election multigraph} of $D$ is a directed multigraph with $|D|$ vertices, one vertex for each candidate from $D$, such that for each vote there is an arc from vertex $x$ to vertex $y$ if the candidate corresponding to $x$ is preferred to the candidate corresponding to $y$.
\end{definition}
\begin{lemma}
\label{lemma_3dirty}
If there are three dirty pairs in $D_4$, then the branching number is $1.50782$.
\end{lemma}
\begin{proof}
In this case, three pairs are fixed and three pairs are dirty.
So, up to isomorphism, there is only one dirty graph of $D_4$ (see Figure~\ref{Dirty graph of $D_4$ and relation graph of $D_4$ if three pairs have fixed order}),
since the graph has to be connected.
Further, all three non-dirty pairs have the same order in all votes and induced $P_3$s are not allowed due to Observation~\ref{P3observation}.
More precisely, if $a$ is preferred to $c$ then $a$ must be preferred to $d$, because $\{d,c\}$ is dirty.
Furthermore, $b$ must be preferred to $d$, because $\{b,a\}$ is dirty.
Assuming $c>a$ leads to an isomorphic graph.
Thus, up to isomorphism, there is only one relation graph of $D_4$ (see Figure~\ref{Dirty graph of $D_4$ and relation graph of $D_4$ if three pairs have fixed order}).
\begin{figure}
\begin{center}
\begin{tabular}{l|l}
Dirty graph of $D_4$ & Relation graph of $D_4$\\
\includegraphics[scale=1.0]{img/threedirty.pdf} & \includegraphics[scale=1.0]{img/threerel.pdf}\\
\end{tabular}
\end{center}
\caption{The dirty graph of $D_4$ and the relation graph of $D_4$ for three pairs with fixed order.}
\label{Dirty graph of $D_4$ and relation graph of $D_4$ if three pairs have fixed order}
\end{figure}
According to this relation graph of $D_4$, all votes rank $a > c$, $a > d$ and $b > d$.
If we have these relations fixed, only the following five permutations are left over.\\
\begin{center}
$P_1 : a > b > c > d$\\
$P_2 : a > b > d > c$\\
$P_3 : a > c > b > d$\\
$P_4 : b > a > c > d$\\
$P_5 : b > a > d > c$\\
\end{center}
Now we want to analyse the branching vector.
To this end, we show that the branching vector is at least as good as $(3, 4, 4, 4, 5)$ for all inputs.
We will again use the fact that there are at least three votes.
\begin{figure}
\begin{tabular}{lll}
{}$G_1$ (possible) & $G_2$ (possible) & $G_3$ (possible)\\
\includegraphics[scale=1.0]{img/relvotgra01.pdf} & \includegraphics[scale=1.0]{img/relvotgra02.pdf} & \includegraphics[scale=1.0]{img/relvotgra03.pdf}\\
{}$G_4$ (impossible) & $G_5$ (possible) & $G_6$ (possible)\\
\includegraphics[scale=1.0]{img/relvotgra04.pdf} & \includegraphics[scale=1.0]{img/relvotgra05.pdf} & \includegraphics[scale=1.0]{img/relvotgra06.pdf}\\
{}$G_7$ (impossible) & $G_8$ (impossible) &\\
\includegraphics[scale=1.0]{img/relvotgra07.pdf} & \includegraphics[scale=1.0]{img/relvotgra08.pdf} &\\
\end{tabular}
\caption{The simplified election multigraph of $D_4$ for three pairs with fixed order.
Since we use the fact that we have at least three votes, we will draw a thin arrow from $d_i$ to $d_j$ if there is
only one vote in $V$ with $d_i > d_j$. A fat arrow from $d_i$ to $d_j$ denotes that there are at least two votes with $d_i > d_j$
and grey arrows denote that $d_i > d_j$ in all votes.
To see which graph is possible, take a look at Figure~\ref{Votes multi-graphs of $D_4$, if three pairs have fixed order}.}
\label{Votes graphs of $D_4$, if three pairs have fixed order}
\end{figure}
\begin{figure}
\begin{tabular}{ccc}
{}$G_1$ (possible) & $G_2$ (possible) & $G_3$ (possible)\\
\includegraphics[scale=1.0]{img/votesgra01.pdf} & \includegraphics[scale=1.0]{img/votesgra02.pdf} & \includegraphics[scale=1.0]{img/votesgra03.pdf} \\
\color{blue}\boldmath\begin{footnotesize}$a > c > b > d$\end{footnotesize}\unboldmath & \color{blue}\boldmath\begin{footnotesize}$a > c > b > d$\end{footnotesize}\unboldmath & \color{blue}\boldmath\begin{footnotesize}$a > c > b > d$\end{footnotesize}\unboldmath\color{black}\\
\color{orange}$a > b > c > d$\color{black} & \color{orange}$a > b > d > c$\color{black} & \color{orange}$a > c > b > d$\color{black}\\
\color{black}$b > a > d > c$\color{black} & \color{black}$b > a > d > c$\color{black} & $b > a > d > c$\color{black}\\
\ & \ &\\
{}$G_4$ (impossible) & $G_5$ (possible) & $G_6$ (possible)\\
\includegraphics[scale=1.0]{img/votesgra04.pdf} & \includegraphics[scale=1.0]{img/votesgra05.pdf} & \includegraphics[scale=1.0]{img/votesgra06.pdf}\\
\ & \color{blue}\boldmath\begin{footnotesize}$b > a > c > d$\end{footnotesize}\unboldmath & \color{blue}\boldmath\begin{footnotesize}$b > a > d > c$\end{footnotesize}\unboldmath\color{black}\\
\ & \color{orange}$b > a > d > c$\color{black} & \color{orange}$b > a > d > c$\color{black}\\
\ & \color{black}$a > c > b > d$\color{black} & \color{black}$a > c > b > d$\color{black}\\
\ & \ &\\
{}$G_7$ (impossible) & $G_8$ (impossible) &\\
\includegraphics[scale=1.0]{img/votesgra07.pdf} & \includegraphics[scale=1.0]{img/votesgra08.pdf} &\\
\ & \ &\\
\end{tabular}
\caption{The election multigraphs of $D_4$ for three pairs with fixed order.
For each possible election multigraph it must be possible to assign one arc between each vertex pair to each vote.
In $G_4$, $G_7$, and $G_8$ it is not possible to assign the arcs to the votes without assigning a cycle.}
\label{Votes multi-graphs of $D_4$, if three pairs have fixed order}
\end{figure}
We do a case distinction on the election multigraphs of $D_4$ to get the worst case branching number.
There are five possible election multigraphs of $D_4$.
To see this, take a look at Figure~\ref{Votes graphs of $D_4$, if three pairs have fixed order}.
For each permutation and each pair, we can simply count how many votes rank the pair in a different relative order.
The following table shows how much the parameter decreases for each permutation ($P_1$-$P_5$) for each election multigraph of $D_4$:\\
\\
{\hspace*{\fill}}$\begin{array}{l|lllll}
& P_1 & P_2 & P_3 & P_4 & P_5\\
\hline
G_1 & 3 & 4 & 4 & 4 & 5\\
G_2 & 4 & 3 & 4 & 5 & 4\\
G_3 & 4 & 5 & 3 & 5 & 6\\
G_5 & 4 & 5 & 5 & 3 & 4\\
G_6 & 5 & 4 & 6 & 4 & 3
\end{array}${\hspace*{\fill}}\\
\\
The worst case branching vector is $(3, 4, 4, 4, 5)$. Thus, the branching number is at most $1.50782$.
\end{proof}
\begin{lemma}
\label{lemma_treesize}
The search tree size is $O(1.50782^k)$ with $k$ being the Kemeny score.
\end{lemma}
\begin{proof}
Due to the Lemmas~\ref{lemma_5dirty}-\ref{lemma_3dirty}
the worst case yields the branching number $1.50782$. We get a search tree of size $O(1.50782^k)$.
\end{proof}
This is an improvement from $O(1.53^k)$ to $O(1.5079^k)$ for the worst-case search tree size when branching on dirty $4$-sets.
There is hope that branching on dirty $s$-sets with $s>4$ will improve the worst-case running time further.
We will test the algorithm in practice in the next chapter.
Notably, the implementation is much easier than its theoretical analysis.
As we discuss next, the polynomial factor of the running time incurs no big overhead for arbitrary $s$ (the maximum size of the analysed dirty sets).
\subsubsection{Running time}
At this point, we will analyse the running times of the prebranching, branching (polynomial part) and postbranching steps of the algorithm to get the overall running time.
In the prebranching step, the algorithm enumerates the dirty pairs and precomputes the subscore of each pair.
There are $m \cdot (m-1)$ ordered pairs and $n$ votes.
Thus, this is done in $O(m^2 \cdot n)$ time.
To improve the running time of the branching-step, \texttt{$s$-kconsens} precomputes the set of dirty sets in this step.
It finds the dirty sets by iterating over all dirty pairs and builds up the sets step by step.
Because it only has to mark each (unordered) dirty pair once, this takes $O(m^2)$ time.
In the branching-step we have to analyse the polynomial running time in the search tree nodes.
At each search tree node, \texttt{$s$-kconsens} fixes the relative orders of at most $s \cdot (s-1) /2$ dirty pairs for one permutation.
This is done in constant time for one pair.
Thus, for fixed $s$, the running time at each search tree node is constant.
In the postbranching step, \texttt{$s$-kconsens} fixes the relative orders of the dirty pairs involved in dirty $t$-sets for $t<s$.
To this end, it checks at most $O(m^2)$ possible permutations, treating each dirty set in isolation.
Building up a consensus list from the fixed relations is also done in $O(m \cdot (m-1))$ time.
Summarizing, this leads together with Lemma~\ref{lemma_treesize} to the following theorem:
\begin{theorem}
The algorithm \texttt{$s$-kconsens} solves \textsc{Kemeny Score} in $O(1.5079^k + m^2 \cdot n)$ time.
\end{theorem}
\section{Data reduction}
\label{Improvements}
In this section, we want to analyse some additional improvements.
To this end, we will examine data reduction rules, another discard criterion for the search tree, and a polynomial-time solvable special case.
In very recent studies, Betzler et al.~\cite{BGKN09} used another characterization of dirtiness.
We will use it and refer to this concept as \textit{majority-dirtiness}.
Let $(V,C)$ be an election as discussed before.
An unordered pair of candidates $\{a,b\} \subseteq C$ with neither $a>b$ nor $a<b$ in more than $2/3$ of the votes is called \textit{majority-dirty pair} and $a$ and $b$ are called \textit{majority-dirty} candidates.
All other pairs of candidates are called \textit{majority-non-dirty pairs} and candidates that are not involved in any majority-dirty pair are called \textit{majority-non-dirty candidates}.
Let $D_M$ denote the set of majority-dirty candidates and $n_M$ denote the number of majority-dirty pairs in $(V,C)$.
For two candidates $a,b$, we write $a>_{2/3}b$ if $a>b$ in more than $2/3$ of the votes.
Further, we say that $a$ and $b$ are \textit{ordered according to the $2/3$ majority} in a preference list $l$, if $a>_{2/3}b$ and $a>b$ in $l$.
Betzler et al.~\cite{BGKN09} showed the following theorem:
\begin{theorem}
\label{theorem_nomajdirty}
\cite{BGKN09}~\textsc{Kemeny Score} without majority-dirty pairs is solvable in polynomial time.
\end{theorem}
If an instance has no majority-dirty pair, then all candidate pairs in every Kemeny consensus are ordered according to $2/3$-majority.
We can easily find the corresponding consensus list in polynomial time.
This means, we have another polynomial-time solvable special case.
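The following Python sketch (our own) computes the majority-dirty pairs of an election according to the $2/3$-majority characterization; if the returned list is empty, the instance falls into the polynomial-time solvable special case of Theorem~\ref{theorem_nomajdirty}.
\begin{verbatim}
# Sketch: determine the majority-dirty pairs, i.e. the pairs {a, b} for
# which neither a > b nor b > a holds in more than 2/3 of the votes.
from itertools import combinations
from fractions import Fraction

def majority_dirty_pairs(votes):
    n = len(votes)
    result = []
    for a, b in combinations(votes[0], 2):
        a_over_b = sum(v.index(a) < v.index(b) for v in votes)
        if max(a_over_b, n - a_over_b) <= Fraction(2, 3) * n:
            result.append((a, b))
    return result

votes = [list("abc")] * 3 + [list("bca")] * 2 + [list("cab")] * 2
print(majority_dirty_pairs(votes))   # [('a', 'c')]
\end{verbatim}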
We can identify this case in $O(m^2 \cdot n)$ time.
To this end, we check for each candidate pair whether the relative order of its candidates is the same in more than $2/3$ of the votes.
It would be interesting to use this result in the search tree, too.
A promising possibility would be that a search tree algorithm fixes the relative order of the candidates of each majority-dirty pair.
All majority-non-dirty pairs would then be ordered according to the $2/3$-majority.
Unfortunately, it is not guaranteed that there is any preference list that agrees with the resulting set of fixed ordered pairs.
However, in the following lemma, we note another interesting fact, leading to an idea for a data reduction rule.
To this end, we need the term \textit{distance} between two majority-non-dirty candidates.
For a majority-non-dirty pair $\{c,c'\}$ we define $\dist(c,c'):=|\{b \in C:b$ is majority-non-dirty and $c>_{2/3}b>_{2/3}c'\}|$ if $c>_{2/3}c'$ and $\dist(c,c'):=|\{b \in C:b$ is majority-non-dirty and $c'>_{2/3}b>_{2/3}c\}|$ if $c'>_{2/3}c$.
\begin{lemma}
\label{lemma_majnondirtycand}
\cite{BGKN09}~Let $(V,C)$ be an election and $D_M$ its set of majority-dirty candidates.
If for a majority-non-dirty candidate $c$ it holds that $\dist(c,c_d)>2n_M$ for all $c_d \in D_M$, then in every Kemeny consensus $c$ is ordered according to the $2/3$-majority with respect to all candidates from $C$.
\end{lemma}
However, note that the argumentation of Lemma~\ref{lemma_majnondirtycand} cannot obviously be carried over to the case that the order of some candidate pairs is already fixed (see constrained ranking~\cite{vZW07,vZHJW07}).
Thus, it is not possible to apply the corresponding data reduction rule at the search tree nodes.
This can also be seen by the following example.
\begin{figure}
\begin{center}
\begin{tabular}{ll}
$v_1$: & $y>a>b>c>d>x$ \\
$v_2$: & $y>a>b>c>d>x$ \\
$v_3$: & $c>d>x>y>a>b$ \\
$v_4$: & $a>d>x>y>b>c$ \\
$v_5$: & $a>b>x>y>c>d$ \\
$v_6$: & $b>c>x>y>a>d$ \\
$v_7$: & $a>b>c>d>x>y$
\end{tabular}
\end{center}
\caption{Votes of the election in Example~\ref{ex_flip23}.}
\label{fig_flip23inst}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{c c}
Majority relations & Kemeny consensus \\
\includegraphics[scale=1.0]{img/flip23majority.pdf} & \includegraphics[scale=1.0]{img/flipped.pdf}
\end{tabular}
\end{center}
\caption{We have a directed graph with $7$ vertices, one for each candidate from Example~\ref{ex_flip23}, such that there is an arc from vertex $x$ to vertex $y$ if the candidate corresponding to $x$ is preferred to the candidate corresponding to $y$ in $2/3$ of the votes.}
\label{fig_flip23gra}
\end{figure}
\begin{example}
\label{ex_flip23}
We have an election $(V,C)$ with $V=\{v_1,v_2,\dots,v_7\}$ and $C=\{a,b,c,d,x,y\}$.
The votes are given in Figure~\ref{fig_flip23inst}.
The candidates $a,b,c,d$ and $y$ are majority-dirty.
The majority-non-dirty pairs are illustrated in Figure~\ref{fig_flip23gra}.
We see that only the candidate $x$ is majority-non-dirty.
Due to transitivity, there is only one preference list that agrees with the order of the majority-non-dirty pairs, namely $a>b>c>d>x>y$.
This preference list has a score of $34$.
In contrast, the Kemeny score of that election is $33$, for example with $y>a>b>c>d>x$.
This means that the majority-non-dirty candidate $x$ is not ordered according to the $2/3$-majority with respect to all candidates from $C$ (only with respect to the candidates $a,b,c$, and $d$).
\end{example}
Example~\ref{ex_flip23} shows that there are instances where each optimal consensus has at least one majority-non-dirty pair that is not ordered according to its $2/3$-majority.
This example also shows that the precondition in Lemma~\ref{lemma_majnondirtycand} cannot be dropped.
However, there is another approach that we can use to improve our search tree algorithm:
\begin{lemma}
\label{lemma_majbreak}
\cite{BGKN09}~For an election containing $n_M$ majority-dirty pairs, in every optimal Kemeny consensus at most $n_M$ majority-non-dirty pairs are not ordered according to their $2/3$-majorities.
\end{lemma}
We can use this as another criterion to discard some possibilities in the branching early.
More precisely, we only need to branch into possibilities in which at most $n_M$ majority-non-dirty pairs are not ordered according to their $2/3$-majorities.
In the next chapter, we will see that there is a heuristic that also guarantees that we will never fix more than $n_M$ majority-non-dirty pairs not ordered according to their $2/3$-majority.
This means, we do not have to implement an additional termination condition for Lemma~\ref{lemma_majbreak}.
Lemma~\ref{lemma_majnondirtycand} says that, under certain conditions, in every Kemeny consensus all pairs containing a majority-non-dirty candidate are ordered according to the $2/3$-majority.
Another possibility is to check whether there are Condorcet winners or Condorcet losers and remove them from the instance.
This leads to the following reduction rule.
\paragraph{Reduction rule ``Condorcet winner/loser''}~\\
Let $c$ be a non-dirty candidate.
If $c$ is most preferred (least preferred) in more than half of the votes, then delete $c$ and decrease the Kemeny score by the subscore of the set of candidate pairs containing $c$.
Note that using this reduction rule also covers the special case of Theorem~\ref{theorem_nomajdirty}.
After exhaustively applying the reduction rule on an instance without majority-dirty pairs, the number of candidates is zero.
Thus, we have solved that instance without branching.
The reduction rule works correctly, because Kemeny's voting scheme satisfies the Condorcet criterion.
It is trivial to see that applying the data reduction rule takes polynomial time:
it takes $O(m^2 \cdot n)$ time to determine all non-dirty pairs,
and thereafter $O(m)$ time per candidate to check whether it is most preferred or least preferred in more than half of the votes.
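A minimal Python sketch of this reduction rule follows (our own, literal rendition of the rule as formulated above; function names and the vote encoding are ours). It applies the rule exhaustively and returns the reduced votes together with the total score decrease.
\begin{verbatim}
# Sketch of the "Condorcet winner/loser" reduction rule: a non-dirty
# candidate that is ranked first (last) in more than half of the votes is
# removed, and the budget is decreased by the subscore of the pairs
# containing that candidate (with the candidate placed first or last).
def apply_reduction(votes):
    votes = [list(v) for v in votes]
    n, decrease = len(votes), 0
    changed = True
    while changed and len(votes[0]) > 1:
        changed = False
        for c in list(votes[0]):
            others = [d for d in votes[0] if d != c]
            non_dirty = all(len({v.index(c) < v.index(d) for v in votes}) == 1
                            for d in others)
            first = sum(v[0] == c for v in votes) > n / 2
            last = sum(v[-1] == c for v in votes) > n / 2
            if non_dirty and (first or last):
                if first:   # votes disagreeing with c placed at the top
                    decrease += sum(v.index(d) < v.index(c)
                                    for v in votes for d in others)
                else:       # votes disagreeing with c placed at the bottom
                    decrease += sum(v.index(c) < v.index(d)
                                    for v in votes for d in others)
                votes = [[d for d in v if d != c] for v in votes]
                changed = True
                break
    return votes, decrease

votes = [list("abc"), list("abc"), list("acb")]
print(apply_reduction(votes))   # ([['b', 'c'], ['b', 'c'], ['c', 'b']], 0)
\end{verbatim}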
In further studies, it should be possible to extend this reduction rule by searching for a ``set of winners'' (``set of losers'').
\section{Fixed-parameter tractability}
\label{Fixed-parameter tractability}
\index{Fixed-parameter tractability}
Many interesting problems in computer science are computationally hard problems.
The most famous class of such hard problems like \textsc{Kemeny Score} is the class of $\complNP$-hard problems.
The relation between $\complP$ (which includes the ``efficiently solvable problems'') and $\complNP$ is not completely clear at the moment\footnote{G.~Woeginger maintains a collection of scientific papers that try to settle the ``P versus NP'' question (in either way)~\cite{pnp}}.
Even if $\complP=\complNP$ it is not self-evident that we are able to design \textit{efficient} polynomial-time algorithms for each $\complNP$-hard problem.
But we have to solve $\complNP$-hard problems in practice.
Thus, according to the state of the art of computational complexity theory, $\complNP$-hardness means that we only have algorithms with exponential running times to solve the corresponding problems exactly.
This is a huge barrier for practical applications.
There are different ways to cope with this situation: heuristic methods, randomized algorithms, average-case analysis (instead of worst-case) and approximation algorithms.
Unfortunately, none of these methods provides an algorithm that computes an optimal solution in polynomial time in the worst case.
Since there are situations where we need both, another way out is needed.
Fixed-parameter algorithms provide a possibility to redefine problems with several input parameters.
The main idea is to analyse the input structure to find parameters that are ``responsible for the exponential running time''.
The aim is to find a parameter whose values are constant, ``logarithmic in the input size'', or ``usually small enough'' in the problem instances of the application at hand.
Thus, we can say something like ``if the parameter is small, we can solve our problem instances efficiently''.
\\
We will use the two dimensional parameterized complexity theory \cite{DF99,Nie06,FG06} for studying the computational complexity of \textsc{Kemeny Score}.
A \textit{parameterized problem} (or language) $L$ is a subset $L \subseteq \Sigma^* \times \Sigma^*$ for some finite alphabet $\Sigma$.
For an element $(x,k)$ of $L$, by convention $x$ is called \textit{problem instance}\footnote{Most parameterized problems originate from classical complexity problems.
You can see $x$ as the input of the original/non-parameterized problem.} and $k$ is the \textit{parameter}.
The two dimensions of parameterized complexity theory are the size of the input $n := |(x,k)|$ and the parameter value $k$, which is usually a non-negative integer.
A parameterized language is called \textit{fixed-parameter tractable} if we can determine in $f(k) \cdot n^{O(1)}$ time whether $(x,k)$ is an element of our language,
where $f$ is a computable function only depending on the parameter $k$. The class of fixed-parameter tractable problems is called $\complFPT$.
Summarizing, the intention of parameterized complexity theory is to confine the combinatorial explosion to the parameter.
The parameter can be nearly anything, so not every choice of parameter is helpful.
Thus, it is very important to find good parameters.
\\
In the following sections, we need two of the core tools in the development of parameterized algorithms~\cite{Nie06}: data reduction rules (kernelization) and search trees.
The idea of \textit{kernelization} is to transform any problem instance $x$ with parameter $k$ in polynomial time into a new instance $x'$ with parameter $k'$ such that the size of $x'$ is bounded from above by some function only depending on $k$ and $k' \leq k$, and $(x,k) \in L$ if and only if $(x',k') \in L$.
The reduced instance $(x',k')$ is called \textit{problem kernel}.
This is done by \textit{data reduction rules}, which are transformations from one problem instance to another.
A data reduction rule that transforms $(x,k)$ to $(x',k')$ is called \textit{sound} if $(x,k) \in L$ if and only if $(x',k') \in L$.
\\
Besides kernelization, we use (depth-bounded) \textit{search tree algorithms}.
A search algorithm takes a problem as input and returns a solution to the problem after evaluating a number of possible solutions.
The set of all possible solutions is called the search space.
Depth-bounded search tree algorithms organize the systematic and exhaustive exploration of the search space in a tree-like manner.
Let $(x,k)$ denote the instance of a parameterized problem.
The search tree algorithm replaces $(x,k)$ by a set $H$ of smaller instances $(x_i,k_i)$ with $|x_i| < |x|$ and $k_i < k$ for $1 \leq i \leq |H|$.
If a reduced instance $(x',k')$ does not satisfy one of the \textit{termination conditions}, the algorithm recursively applies the replacing procedure to $(x',k')$.
The algorithm terminates if at least one termination condition is satisfied or the replacing procedure is no longer applicable.
Each recursive call is represented by a search tree node.
The number of search tree nodes is governed by linear recurrences with constant coefficients.
There are established methods to solve these recurrences~\cite{Nie06}.
When the algorithm solves a problem instance of size $s$, it calls itself to solve problem instances of sizes $s - d_1, \dots, s - d_i$ for $i$ recursive calls.
We call $(d_1,\dots, d_i)$ the \textit{branching-vector} of this recursion.
So, we have the recurrence $T_s = T_{s - d_1} + \dots + T_{s - d_i}$ for the asymptotic size $T_s$ of the overall search tree.
The roots of the characteristic polynomial $z^{d}=z^{d - d_1} + \dots + z^{d - d_i}$ with $d=\max\{d_1, \dots, d_i\}$ determine the solution of the recurrence relation.
In our context, the characteristic polynomial always has a single root $\alpha$ of maximum absolute value.
With respect to the branching vector, $|\alpha|$ is called the \textit{branching number}.
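As a short worked example (ours, not taken from the cited literature): the branching vector $(1,2)$ yields the recurrence $T_s = T_{s-1} + T_{s-2}$ and, with $d=\max\{1,2\}=2$, the characteristic polynomial
\[
z^{2} = z + 1, \qquad \alpha = \frac{1+\sqrt{5}}{2} \approx 1.618,
\]
so the branching number is roughly $1.62$ and the resulting search tree has $O(1.62^{s})$ nodes for an instance of initial size $s$.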
In the next chapter, we will analyse search tree algorithms that solve \textsc{Kemeny Score}.
\section{Preliminaries}
\label{Preliminaries}
\index{Preliminaries}
Some basic definitions were already given in Section~\ref{Kemeny's voting scheme}. Now, we will define further terms that are fundamental for the next sections.
Let the \textit{position} of a candidate~$a$ in a vote~$v$ be the number of candidates who are better than~$a$ in~$v$.
Thus, the best (leftmost) candidate in~$v$ has position~$0$ and the rightmost candidate has position~$m-1$. Then~$\pos_v(a)$ denotes the position of candidate~$a$ in~$v$.
\begin{definition}
\label{dirty pair}
Let~$(V,C)$ be an election. Two candidates~$a,b \in C, a \neq b$, form a \textbf{dirty pair} if there exists one vote in~$V$ with~$a > b$ and there exists another vote in~$V$ with~$b > a$.
A candidate is called \textbf{dirty} if he is part of a dirty pair, otherwise he is called \textbf{non-dirty}.
\end{definition}
This definition is very important for the next sections. Later we will extend this concept of ``dirtiness'' to analyse the complexity of an algorithm.
We illustrate the definition by Example~\ref{ex_dirty}.
\begin{example}
\label{ex_dirty}
We have an election $(V,C)$ with $V=\{v_1,v_2,v_3\}$ and $C=\{a,b,c,d,y\}$.
\begin{center}
\begin{tabular}{ r l }
$v_1:$ & $a>b>y>c>d$\\
$v_2:$ & $b>a>y>c>d$\\
$v_3:$ & $a>b>y>d>c$\\
\end{tabular}
\end{center}
The relative orders of the pairs $\{a,c\}$, $\{a,d\}$, $\{b,c\}$, $\{b,d\}$ and $\{x,y\}$ for $x \in \{a,b,c,d\}$ are the same in all votes,
but there is at least one vote for each possible relative order of $\{a,b\}$ and $\{c,d\}$. Thus, we have two dirty pairs
$\{a,b\}$ and $\{c,d\}$ and one non-dirty candidate $y$. All other candidates are dirty.
\end{example}
In this work the terms ``preference list of candidates'' and ``permutation of candidates'' are used equivalently.
This means, for example, that the preference list $a>b>c>d$ is equivalent to the permutation $(a,b,c,d)$.
Later on, we will analyse algorithms that fix the relative order of some candidates.
We have to consider that not all combinations of fixed relative orders are consistent.
An example for an inconsistent combination of pairwise relative orders is as follows:
\begin{example}
\label{ex_consfam}
Take three candidates $a$, $b$, and $c$, where each pair is dirty. A consensus cannot have $a>b$, $b>c$, and $c>a$, because $a>b$ and $b>c$ imply $a>c$.
\end{example}
For the purpose of analysis we introduce a concept of \textit{consistence} for a set of ordered pairs:
\begin{definition}
\label{def_consistent}
Let $(V,C)$ denote an election and let $O$ denote a set of ordered pairs of candidates in $C$.
Furthermore, let $p$ denote a preference list of the candidates in $C$.
We say $O$ and $p$ \textbf{agree} if $x>y$ in $p$ for each $(x,y) \in O$.
If there exists a preference list $p$ that agrees with $O$, we call $O$ \textbf{consistent}.
We call $O$ the \textbf{relation set} of $p$ if $O$ agrees with $p$ and for each pair of candidates $\{x,y\}$ either $(x,y)$ or $(y,x)$ is in $O$.
Finally, let $X$ and $Y$ denote two sets of ordered pairs. We say $X$ and $Y$ \textbf{agree}, if there is no ordered pair $(k,l)$ in $X$ with $(l,k)$ in $Y$.
\end{definition}
Of course, the relation set of each preference list is uniquely determined.
\begin{example}
If we transfer the relations from Example~\ref{ex_consfam} to a relation set we get the inconsistent set $O_1:=\{(a,b),(b,c),(c,a)\}$.
Otherwise, the subset $O_2:=\{(a,b),(b,c)\}$ is consistent.
It agrees for example with $p:=a>b>c$.
The relation set of $p$ is $O_3:=\{(a,b),(b,c),(a,c)\}$.
Trivially $O_1$ and $O_2$ agree as well as $O_2$ and $O_3$, but $O_3$ and $O_1$ do not agree.
\end{example}
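Consistency can be checked mechanically: reading each ordered pair $(x,y)$ as a directed edge from $x$ to $y$, the set is consistent exactly when this directed graph is acyclic. The following sketch is our own illustration (not an algorithm of this work) and uses Kahn's topological sort:
\begin{verbatim}
from collections import defaultdict

def is_consistent(ordered_pairs):
    """True iff some preference list agrees with every pair in the set.

    ordered_pairs: iterable of (x, y), meaning x must be ranked above y.
    The set is consistent exactly when the induced directed graph is
    acyclic, which is tested here with Kahn's topological sort.
    """
    succ = defaultdict(set)
    indeg = defaultdict(int)
    nodes = set()
    for x, y in ordered_pairs:
        nodes.update((x, y))
        if y not in succ[x]:
            succ[x].add(y)
            indeg[y] += 1
    queue = [v for v in nodes if indeg[v] == 0]
    seen = 0
    while queue:
        v = queue.pop()
        seen += 1
        for w in succ[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return seen == len(nodes)

print(is_consistent([("a", "b"), ("b", "c"), ("c", "a")]))  # False, like O_1
print(is_consistent([("a", "b"), ("b", "c")]))              # True,  like O_2
\end{verbatim}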
\begin{observation}
\label{obs_notagreeincons}
Let $X$ and $Y$ denote two sets of ordered pairs. If $X$ and $Y$ do not agree, then $X \cup Y$ is not consistent.
\end{observation}
For later analysis purposes, we define the concept of the \textit{subscore} of a set $O$ of ordered pairs of candidates for an election $(V,C)$:
\begin{equation}
\subscore(O) = \sum_{v \in V}\sum_{\{c,d\} \subseteq C} d_v(c,d)
\end{equation}
where $d_v(c,d)$ is set to $1$ if $O$ contains an ordered pair on $\{c,d\}$ whose order is contradicted by the relative order of $c$ and $d$ in $v$, and $d_v(c,d)$ is set to $0$ otherwise.
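Under this reading of $d_v$ (a pair contributes only when $O$ orders it and the vote contradicts that order), the subscore can be computed directly; the following sketch and its vote representation are our own illustration:
\begin{verbatim}
def subscore(ordered_pairs, votes):
    """Sum over all votes of the number of pairs in O whose order
    the vote contradicts.

    ordered_pairs: iterable of (x, y), meaning x should be above y.
    votes: list of votes, each a list of candidates, best first.
    """
    O = set(ordered_pairs)
    total = 0
    for v in votes:
        pos = {c: i for i, c in enumerate(v)}
        for x, y in O:
            if pos[x] > pos[y]:       # vote v ranks y above x
                total += 1
    return total

votes = [list("abycd"), list("baycd"), list("abydc")]
print(subscore([("a", "b"), ("c", "d")], votes))  # 2 (v2: b>a, v3: d>c)
\end{verbatim}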
The following observation is trivial:
\begin{observation}
\label{obs_subscorelist}
Let $(V,C)$ be an election, let $p$ denote a preference list of candidates from $C$, and let $P=\{ (x,y) \mid \pos_p(x) < \pos_p(y) \}$.
Then, $\subscore(P') \le \score(p)$ for every subset $P' \subseteq P$.
\end{observation}
It follows from Observation~\ref{obs_subscorelist} that one can use the subscore to estimate the score of a preference list.
In the following, we will use this observation to discard some branching cases and improve the running time in practice.
\section{Introduction}
Swimming analytics are valuable for both swimmers and coaches in improving short and long term performance. They provide measurable methods of quantifying improvement and ability over the course of a swimmer's career. Such data can be used at the individual level, when making training plans to improve performance in places where data indicates weakness. It can also be used on the team level. Every swimmer's career comes down to making some kind of a team. Such teams range from simply making a collegiate team to making the country's Olympic National Team. Each team will have coaches whose job is to pick the best swimmers to maximize the team's performance. At all levels, the coaches seek the best swimmers for the present and the future. These decisions are never easy, as all teams have limited space and specific needs in terms of being competitive with other teams. Except in the case of relays, swimming is often perceived as an individual sport. But it is important to recognize that it is also a team sport, especially with the addition of the International Swim League (ISL), the world's first league where swimmers compete against other swimmers as a team, much like in the NBA, NHL, and NFL.
The motivation for this work is to help produce a system that will automate the collection of swimming analytics in swim competition videos across all participants, using image-based processing methods and tracking algorithms. This would save coaches and athletes many hours analyzing post-race videos manually. There is plenty of useful swimming data collected every day. For example, RaceTek \cite{raceteck} provides Canadian swimmers with racing data, as seen in Figure~\ref{fig:racetek}, for all major and some minor competitions; RaceTek has been doing so for many years now. There are other organizations that provide similar services in practice environments, such as Form Swim and TritonWear~\cite{form_swim,tritonwear}. These businesses have been very successful over the past few years.
\begin{figure}[!t]
\centering
\includegraphics[width=0.45\textwidth]{RaceTech.jpg}
\caption{A modified example report from RaceTek \cite{raceteck}, found under Video Race Analysis (VRA)}
\label{fig:racetek}
\end{figure}
The basic services provided by RaceTek are stroke rate analysis, swim velocity analysis and turn analysis collected from videos recorded at swim competitions. All calculations are done by hand using video footage to analyze swimming. These tasks are time-consuming and can be automated. The goal of this paper is to discuss the path towards building an automated swimming analytics engine to provide useful data to coaches and swimmers. Specifically, we address the need to create a dataset for training swimmer detection and tracking models, which could then support automatic generation of currently used analytics~\cite{victor}, as well as create new ones.
\section{Previous Work}
Much work has been done to collect swimming analytics in the training environment. For example,~\cite{zecha} used computer vision methods to estimate swimmer stroke counts and stroke rates on footage collected from a swim channel. The channel was used to control their environment; in addition, they also collected their footage underwater. Thus, in their case, only one swimmer is in the video at any given time. This, however, would be impractical in the case of multiple swimmers in a competition. Another example is analytics with accelerometers placed on the body of the swimmers~\cite{Mooney,tritonwear,form_swim}. For the task of collecting swimming analytics across all races, wearable devices are too impractical due to weight, the water dynamics of wearable devices, and the regulations regarding swimming with such devices.
This leaves analyzing overhead race video of all swimmers as the most widely-available and least intrusive way to collect swimming analytics. Little published research on automated swimming analytics of competition video is available. Some of the main contributions are~\cite{sha,sha_understanding,victor,victor_new}. In~\cite{sha}, the authors used a complex assortment of tracking algorithms to determine the current state of a swimmer and thus the best way to locate their current position. The main problem with their work was that, while it did produce good results, the results were only valid for the one pool their work was dedicated to. If such a system were to be employed in a different pool, it would have to be recalibrated. A more recent work~\cite{victor} gives the most practical results for collecting swimming analytics in a competition environment. Unfortunately, this work can only collect analytics using a video tracking a single swimmer, which means additional detection and video processing is needed to extract such segments from the common multi-swimmer video. The task of counting strokes relies heavily on proper tracking~\cite{victor_new}. In the above works, the main challenge is finding swimmers in a multi-swimmer video scene and tracking their movement using one or just a few camera sources. This is made difficult as swimmers start on a block, dive into the water, turn on a wall, and swim under the water.
There are few works on tracking humans in aquatic environments, the main ones being~\cite{sha,Chan}. In~\cite{Chan}, the authors performed detection in aquatic environments using a dense optical flow motion map and intensity information. These works use background subtraction to determine the location of swimmers based on a model of the aquatic environment. The methods of swimmer detection proposed in these papers are valid candidates to use to assist in tracking. However, they do not generalize well, as they are heavily dependent on the pools for which they were developed. Using their proposed methods at other pools would require recalibration and re-estimation of background parameters.
Fortunately, there is plenty of work on general object tracking in diverse environments, such as~\cite{kalal}. This work assumed that nothing was known about what was going to be tracked and notes that when some prior information is available about objects to be tracked, it is possible to build a more accurate and robust tracker. Due to the very structured nature of swimming competitions, a fairly accurate tracker seems possible. An example of another tracking system using this principle is~\cite{alvar}, which is built upon the You Only Look Once (YOLO) detection system~\cite{Redmon} to aid their tracking algorithm.
Unfortunately, training such systems requires annotated datasets of swimmers in swimming environments, which currently do not seem to be openly available.
Collecting annotated data is well researched. The creators of the PASCAL VOC challenge~\cite{everingham} spent time creating a well-defined procedure for collecting useful data from online footage. In summary, VOC strove to collect images that accurately represented all possible instances of objects to be detected. This means that all examples of objects have adequate variance in terms of object size, orientation, pose, illumination, position and occlusion. The data also needs to be annotated in a consistent, accurate and exhaustive manner. The VOC challenge used images from the website Flickr, which allowed for classes to be created fulfilling the variance requirements noted. No such set of picture examples exists for swimmers in a racing environment. Fortunately, there is an abundance of race video footage which, when processed, can be used as examples. This race footage can be found mainly on YouTube, as well as through private organizations such as the Canadian Sports Institute or USA Swimming Productions. Currently, there seems to be no open dataset of annotated swimmers.
\section{Swim Race Footage Variability}
In this section we provide an in-depth look at the pool environment and what one can expect when observing footage of swim competitions. It is important to know such information in order to collect a proper dataset of swimmers, because such knowledge aids in the collection of scenarios with sufficiently large variability of conditions. There are three main contributors to variance when it comes to capturing swimming races in pool environments. These contributors are: venue, camera angle, and swimming itself.
\subsection{Venue}
The venue refers to the place of competition, the pool in which the racing occurs. Each venue, or even the same venue, can vary according to many factors, which include the course of the pool, the number of lanes, the lighting, architecture, lane ropes, and the presence of flags.
The \textit{course of a pool} refers to the distance the swimmers must travel in order to complete one pool length. For example, in a long course meter (LCM) pool, swimmers must complete fifty meters before they encounter a wall. There are three main pool courses used competitively in swimming around the world: LCM, short course meters (SCM), and short course yards (SCY). The bulk of SCY competitions are held in the USA. The course of a pool has a big effect on where cameras are placed. In an SCM or SCY race, one camera can easily capture an entire race. In an LCM race, however, the pool is long enough to require multiple cameras to avoid reducing the relative size of the swimmers to a very small fraction of the frame. Another point worth considering is that one venue can host all three courses depending on how the pool is built. Less than optimal performance could be achieved if a model were to be trained on one configuration and then be tested on another, even in the same pool, without other similar training data to support it.
Different pools can have different numbers of \textit{lanes}. Typically, there is an even number of lanes, ranging from six to ten. However, in some situations, competitions can be held in LCM pools but raced as SCY or SCM. This can result in up to a twenty-lane race. Causing even more confusion, some competitions have swimmers racing in one half of the pool, but warming up in the other half. An example would be having lanes 0 to 9 with swimmers racing while having lanes 10 through 19 open to swimmers who are not racing.
Pool \textit{lighting} is usually kept as constant as possible in high-level competitions to avoid blinding, as swimmers must swim on their front and on their back. Competitions can be indoors or outdoors, resulting in a wide variety of lighting sources and conditions. In competitions where the pool is indoors, the lighting usually comes from the roof, the ceilings are high, and the lighting is even across an entire pool, but each pool can be more or less illuminated than others. In outdoor pools during daytime, light generally comes from the sun and this can result in very bright reflections and very drastic shadows. At nighttime in an outdoor pool, lighting comes from underwater and from above, with varying degrees of illumination. This results in three important factors to consider: illumination, shadow, and glare.
Another important consideration is pool \textit{architecture}, which includes the depth of the pool, what it is made from, the presence of bulkheads, the style of blocks, any markings that may be on the pool bottom, and deck space. Pool depth will affect the relative position of the observed markings at the bottom of the pool. Lane markings are always present and run down the center of each lane. Pools can be made with tile, metal, or other synthetic materials. The composition of materials is less relevant, but the way they absorb and reflect light will affect glare, lighting, and hue in the pool image. Bulkheads are floating walls in the pool that can be moved to change the course of the pool. They look different from the end of the pool mainly because there can be water on the other side of a bulkhead. There are many different types of blocks and they can look drastically different. This is important because a machine learning model could learn that a swimmer is ``on-blocks'' by learning the structure of the blocks rather than the position of the swimmer.
Besides lane markers, which are generally present in all competition pools, there may be other marks on the bottom of pools. For example, at TYR competitions there are large TYR logos at the bottom of the pool. Such big markings can make swimmers hard to identify when they are underwater. Lastly, one must consider the amount of deck space available in the pool and how it is being utilized. A pool with little deck space and many swimmers walking around it will result in many occlusions of the swimmers in the closest lanes, while a pool with more deck space and fewer people on deck will have fewer occlusions.
\textit{Lane ropes} are the division between lanes. They run parallel to the pool edges and stop a swimmer from getting too close to another swimmer while also dampening waves. On the world stage there are very strict rules on how lane ropes may look and what color pattern they must adhere to. In general competition, however, pools will use whatever lane ropes are available and thus their detailed structures are of little use for swimmer detection. They are useful in a broad sense as they are fairly straight and define areas where swimmers can and cannot be. In a training set, it would be desirable to have a wide variety of examples of pools with different styles of lane ropes.
\textit{Flags} are objects that are always above the swimmers in a competition setting. They are a very consistent source of occlusions in swim competition. The flags indicate to the swimmers when the wall is approaching; they are generally five meters or yards away from each end of the pool and span the width of the pool. In more prestigious competitions, when races that do not require flags occur, the flags are removed for better viewing of the swimmers. Thus, it is important to produce examples of races with and without flags in a training set.
This concludes the list of sources of venue variance. While not all sources of variance may be captured in a given training set, it is desirable to be aware of their existence when dealing with specific pool examples.
\subsection{Camera angle}
When collecting stroke data, the most useful angles are the ones where the swimmers are racing in a direction perpendicular to the view of the camera. A Cartesian coordinate system can be applied to the pool to describe the location of cameras and thus the camera angles that can be accomplished. The x-axis will be defined as the closest wall in the view of the camera such that the swimmers are swimming parallel to it. The z-axis of this coordinate system is parallel to the vertical direction relative to the pool, and the y-axis is parallel to the direction of the blocks or the wall that swimmers turn on.
In general, the position of the camera along the z-axis is usually at the \textit{pool level} or the \textit{viewing level}. Pool level is roughly the height of a standing person on deck and viewing level is a height at which all swimmers can be put in view by the camera. Ideally, all races are recorded at viewing level. In media designed for the viewer, most races have multiple cameras and thus, races are recorded with both pool-level footage and view-level footage. The advantage to having pool-level footage is that quick and complex movements such as dives can be more easily captured, but the disadvantage is not having a good view of all swimmers. The advantage of view-level footage is that all swimmers can be seen, but swimmers appear smaller in the scene and so it is more difficult to see what a swimmer is doing.
The camera position along the x-axis, i.e., along the length of the pool, usually captures one of the following three views: (1) the \textit{dive view}, which means the camera is anywhere before the ten-meter mark closest to the blocks; (2) the \textit{turn view}, which means the camera is past the ten-meter mark of the turn end of the race; and (3) a \textit{mid-pool view}, which is anywhere between the dive view and the turn view. Typically, three main (x,z) camera positions are used: pool-level at dive view, viewing-level at mid-pool view, and pool-level at turn view. It is easier to find footage for these three typical combinations than for the others. This is important when creating a training set. All typical combinations of camera positions must be included for a model to properly generalize the locations of swimmers in a race environment.
\subsection{Swimming}
What the swimmers are doing in the water is another variable to consider in swimming footage. A model will need to generalize how swimmers look in a pool when they are racing and so it will need examples of all circumstances. There are four different strokes and twenty-nine different races in swimming, not including the different flavors of race when comparing LCM and SCM. Many races are subsets of other races and so each race need not be exclusively collected to get a good representation of that pool. Both genders take part in the sport of swimming and thus variance can come from the different genders. Women race with more suit material than men and so in high definition footage women look different than men in the water. As swimmers age they tend to increase in size and strength and thus a swimmer who is very young will look drastically different in the water than someone who is older. In summary, it is important that all four strokes are included in training video, a roughly even distribution of male and female swimmers, and an even distribution of age groups of swimmers with different speeds.
\subsection{Summary}
We summarize this section by presenting an exhaustive list of sources of variance that should be considered when building a data set of swim race video footage.
\subsubsection{Adequate Variety of Venues}
\begin{itemize}
\item Pool lighting, i.e. big shadows or good lighting
\item Pools with and without bulkheads
\item Pools with different block styles
\item Pools with different depths
\item Pool courses (LCM, SCM, SCY)
\item Lanes, including unusual examples
\end{itemize}
The more combinations and variations of these items across the entire dataset, the better. As already mentioned, the same venue can have different pool courses and lane numbers at different competitions. Hence, we propose collecting a variety of footage for each venue as follows.
\subsubsection{Variety for Each Venue}
A collection of all of the following variants is required across the entire dataset.
\begin{itemize}
\item Presence/absence of occlusions from people and flags
\item Variety of camera angles, especially the three typical camera positions mentioned
\item Swimming stroke and race distance
\item Gender and age
\end{itemize}
For each venue it is important that examples of all four strokes are included and that races ranging from sprints (roughly thirty to sixty second races) to distance events (roughly eight to fifteen minute races) are collected. Each venue should have data captured with all available camera angles. For a given venue, there is no need to have examples from all of the twenty-nine different races with each gender and age group, so long as a variety of ages and genders is present across the entire dataset. It is also not imperative that every competition moves the flags.
\subsubsection{Footage Augmentation}
Some features created by pool venue and camera angle can be effectively simulated by augmenting the dataset. Such suggested augmentations are as follows.
\begin{itemize}
\item Flipping images across the y-axis
\item Shearing or perspective transformation of images to simulate different camera angles
\item Changing overall brightness, hue and contrast
\end{itemize}
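The augmentations listed above can be generated with standard image libraries. The following is a minimal OpenCV-based sketch of ours; the file name and jitter parameters are arbitrary placeholders, and any bounding-box annotations would of course have to be transformed consistently with the images:
\begin{verbatim}
import cv2
import numpy as np

def augment(image):
    """Yield simple augmented variants of a race frame (BGR array)."""
    h, w = image.shape[:2]

    # 1. Horizontal flip (mirror across the y-axis).
    yield cv2.flip(image, 1)

    # 2. Mild perspective warp to mimic a different camera position.
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = np.float32([[0.03 * w, 0.02 * h], [0.97 * w, 0],
                      [w, h], [0, 0.98 * h]])
    M = cv2.getPerspectiveTransform(src, dst)
    yield cv2.warpPerspective(image, M, (w, h))

    # 3. Brightness/contrast jitter: out = alpha * in + beta.
    yield cv2.convertScaleAbs(image, alpha=1.15, beta=-10)

    # 4. Small hue shift in HSV space.
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    hsv[..., 0] = (hsv[..., 0].astype(np.int32) + 5) % 180
    yield cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

# "race_frame.jpg" is a placeholder path for a single extracted frame.
for i, frame in enumerate(augment(cv2.imread("race_frame.jpg"))):
    cv2.imwrite("augmented_{}.jpg".format(i), frame)
\end{verbatim}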
\section{Proposed Method}
We now describe our proposed method of data collection and the creation of a dataset of swimmers racing. As outlined in~\cite{everingham}, it is important that every labelled swimmer is labelled consistently, so that a machine learning algorithm can learn effectively.
\subsection{Obtaining Examples of Swimmers}
The type of data collected is very important to creating a reliable detection system; ideally, there is adequate variance across the situations described in the section above. There are many potential sources for collecting this data, such as \cite{swim_usa,Aust_Institute_Sport,Can_Institute_Sport}. Swim USA's YouTube page alone has roughly five-hundred races published on it a year. As a first step, examples of swimmers were collected from race footage taken from only one pool, that being the 2019 TYR Pro Swim Series - Bloomington, posted on YouTube. Although a bigger dataset of pools and races from different sources is more desirable, it was decided that before more work was done, some preliminary work would be completed to understand how a model would perform with one pool.
\subsection{Classes of Swimmers}
Following the paper \cite{sha}, six classes of swimmers will be collected when creating a swimmer dataset. These classes are as follows: on-blocks, diving, underwater, swimming, turning, and finishing. These six classes, if detected correctly, are valuable in the automated collection of swimming metrics such as dive reaction speed and distance off the wall, as well as splits, race times, and more. Thus, these six classes will be used in the creation of a dataset. There are points in a race when a swimmer must transition from one class to the next; these transitions must be well defined so that annotations are collected consistently.
The following is a guide on how to choose the swimmer's class in a frame, given knowledge of the race and the distance completed in the race. It does not include illegal transitions such as underwater to turn. A small sketch summarizing the legal transitions is given after the list.
\begin{enumerate}
\item \textbf{``On-blocks''}
\begin{itemize}
\item ``On-blocks'' is always first
\item Class starts at the point when the swimmers are on the blocks at the start of the race
\item The transition to the next class will be defined as the point when the swimmer is no longer touching the blocks
\end{itemize}
\item \textbf{Diving}
\begin{itemize}
\item Diving is always after ``On-blocks''
\item Defined as the point when the swimmer is in mid air and not on the block or underwater or swimming yet
\item The transition out of diving will be defined as the point when the entire swimmer becomes occluded by the water and splash of the dive entry; in the case that a swimmer fails to completely submerge themselves, skip the underwater class completely and start annotating as swimming
\end{itemize}
\item \textbf{Underwater}
\begin{itemize}
\item Underwater can only happen after a turn or diving
\item Defined as any point in the race when the swimmer is completely submerged, not touching a wall and not swimming
\item The transition out of underwater will be defined as the point when the swimmer breaks the water with any part of their body to start swimming
\item Do not annotate a swimmer if they cannot be seen, i.e., 90\% of the swimmer is hidden due to the camera angle, lane ropes, and refraction of the water
\end{itemize}
\item \textbf{Swimming}
\begin{itemize}
\item Swimming comes after underwater, diving or turning
\item Defined as any point in the race when the swimmer is completing legal stroke cycles and not touching a wall
\item The transition out of swimming into turning can occur on a touch turn or on a flip turn.
\item When performing a touch turn, turning commences when the swimmer touches the wall
\item When performing a flip turn, the turn commences when the swimmer is on their front and the head is submerged due to the flip
\item The transition out of swimming into finishing is when the swimmer touches the wall and the race has concluded
\end{itemize}
\item \textbf{Turning}
\begin{itemize}
\item Turning only happens after swimming
\item Defined as any point in a race when the swimmer comes to a stop near enough to a wall in order to touch or push off the wall
\item The transition out of turning to underwater is when the swimmer's feet (or whichever body part leaves last) leave the wall
\item The point when a swimmer is completely straight can also signify the transition to underwater
\item In the case that a swimmer fails to completely submerge themselves after a touch, skip the underwater class completely and start annotating as swimming
\item There should be no point at which a turn should not be boxed unless it is cut off by the camera or camera angle as the swimmer is somewhere on the wall
\end{itemize}
\item \textbf{Finish}
\begin{itemize}
\item Finishing only happens after swimming
\item Defined as any point after the conclusion of the race distance
\item Finishing is always the final class of swimmer
\end{itemize}
\end{enumerate}
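The legal transitions above can be summarized in a small state machine; the sketch below (our own, purely illustrative) can be used to sanity-check a sequence of per-frame class labels. The class names and list encoding are our own choices:
\begin{verbatim}
# Allowed class transitions, as described in the guide above.
ALLOWED = {
    "on_blocks":  {"diving"},
    "diving":     {"underwater", "swimming"},  # swimming if never submerged
    "underwater": {"swimming"},
    "swimming":   {"turning", "finishing"},
    "turning":    {"underwater", "swimming"},
    "finishing":  set(),                       # always the final class
}

def is_legal_sequence(labels):
    """True iff consecutive (deduplicated) labels use allowed transitions."""
    if not labels or labels[0] != "on_blocks":
        return False
    prev = labels[0]
    for cur in labels[1:]:
        if cur == prev:
            continue                           # same class across frames
        if cur not in ALLOWED[prev]:
            return False
        prev = cur
    return True

print(is_legal_sequence(
    ["on_blocks", "diving", "underwater", "swimming", "turning",
     "underwater", "swimming", "finishing"]))  # True
print(is_legal_sequence(["on_blocks", "swimming"]))  # False
\end{verbatim}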
\subsection{Class Annotation Details}
This section outlines how to assign a bounding box to each example of a swimmer. In general, the box must be the smallest possible box containing the entire swimmer, ``except where the bounding box would have to be made excessively large to include a few additional pixels ($\leq$5\%)'' \cite{Annotation_Guidelines}. If 80\% - 90\% of a swimmer is cut off by the camera, do not give them a box. Put a box around a swimmer that can be identified in any way, unless it is cut off by the camera or camera angle. Because there are a variety of situations where this statement becomes ambiguous, there will be some general guidelines for specific classes.
\begin{figure*}[ht!]
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[height=1.2in]{on_block.png}
\captionsetup{justification=centering}
\caption[center]{Swimmers on blocks}
\label{fig:on_blocks}
\end{subfigure}
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[height=1.2in]{swimming.png}
\captionsetup{justification=centering}
\caption{Swimmers swimming}
\label{fig:swimming}
\end{subfigure}
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[height=1.2in]{under_water.png}
\captionsetup{justification=centering}
\caption{Swimmers underwater}
\label{fig:underwater}
\end{subfigure}
\caption{Examples of swimmers states}
\end{figure*}
\subsubsection{``On-blocks''}
For swimmers in the farther lanes and behind other swimmers, add the tightest box possible around all visible parts of the swimmer. If the tip of a foot is visible from behind another swimmer, for example, do not make the box excessively larger than the majority of the swimmer visible. An example would be the swimmer above the annotated swimmer in figure \ref{fig:on_blocks}.
\subsubsection{Swimming}
Stretch the box to include arms and feet. Center the end of the box with the swimmer's feet around the splash produced by the kick if the feet are not visible.
\subsubsection{Underwater}
When a swimmer is visible, create the smallest possible box that encompasses the swimmer; see Figure~\ref{fig:underwater}. When a swimmer becomes too difficult to box accurately, do not annotate the swimmer; see the top three swimmers in Figure~\ref{fig:underwater}.
\subsubsection{Turning}
The smallest box shall be made around the swimmer such that it encompasses the swimmer, for all swimmers, regardless of how visible the swimmer is in terms of occlusions. If more than ninety percent of the swimmer is out of camera view then do not annotate the swimmer.
\subsubsection{Finishing}
As a swimmer finishes, they generally look to the clock to see their time. As this happens, they transition from a horizontal body position to a vertical body position. Due to the refraction of water and bubbles formed by the swimmer, the body of the swimmer becomes invisible to the camera. Thus, a minimal box around what is visible is all that is required.
\subsubsection{Diving}
It can be difficult to determine exactly which swimmer is being annotated. This is because the minimal box including the entirety of one swimmer could also require that the swimmer below is included. Create a minimal box around the swimmer being annotated even if this means the box created also includes a large portion of the other swimmer.
\begin{figure*}[ht!]
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[height=1.2in]{turn.png}
\captionsetup{justification=centering}
\caption[center]{Swimmers turning}
\label{fig:turning}
\end{subfigure}
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[height=1.2in]{diving.png}
\captionsetup{justification=centering}
\caption{Swimmers diving}
\label{fig:diving}
\end{subfigure}
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[height=1.2in]{finish.png}
\captionsetup{justification=centering}
\caption{Swimmers finishing}
\label{fig:finishing}
\end{subfigure}
\caption{Additional examples of swimmers states}
\end{figure*}
\section{Experiments Using One Pool}
When annotating video, many frames are highly correlated, and thus are redundant for training a detection or tracking algorithm. For this reason, tests were performed to reduce the amount of redundant annotations collected. In these experiments, we examined how to limit annotation to avoid collecting redundant data while still capturing sufficient data variety to train a successful model.
\subsection{Collecting Swimmer Data for Tracking}
To illustrate the importance of efficiently and effectively collecting data, a simple and somewhat extreme example is considered. Consider a single race video of a fifteen-hundred freestyle, at thirty frames per second, with eight swimmers, and with a length of sixteen minutes (regarded as a slow men's time on the world stage). If annotated in full, the resulting video would contain more than 230,000 examples of swimmers, performing all six classes of swimming, across all its frames. When one considers the nature of a fifteen-hundred freestyle, it is obvious these examples do not contain the right proportion of swimmer examples. This is evident as the examples will not contain all four strokes, their respective turns, or both genders of swimmers, to say the least. There are other problems with collecting one single race, as mentioned in the summary of swim race footage variability. Regardless, using a custom-built labelling system, annotations of swimmers required an average of two seconds per bounding box. This means labelling the entirety of the aforementioned video would take over five days of continuous work. Because of the high redundancy in these images, such annotation would be an inefficient use of time. The following describes the experiments conducted in order to find a better annotation procedure.
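The figures above follow from a simple back-of-the-envelope calculation (the two seconds per box being the measured average of the labelling system):
\begin{verbatim}
frames_per_second = 30
race_minutes = 16
swimmers = 8
seconds_per_box = 2                              # measured labelling average

frames = race_minutes * 60 * frames_per_second   # 28,800 frames
boxes = frames * swimmers                        # 230,400 bounding boxes
days = boxes * seconds_per_box / 3600 / 24       # about 5.3 days of labelling
print(boxes, round(days, 1))                     # 230400 5.3
\end{verbatim}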
\subsection{Extraction of Swimming Video Features}
The method used to test which frames in race video to annotate and which frames to skip was as follows. Using footage found on Swim USA's YouTube page \cite{swim_usa}, data was collected from a few videos of one competition, the 2019 TYR Pro Swim Series - Bloomington. All possible strokes, turns, and dives were present in the data collected. One in every three frames of the footage was annotated, as suggested in \cite{victor}. An exception was made with footage containing diving, in which case video was annotated frame by frame. This exception was due to the large amount of movement a dive contains and its short duration in time. The result was three-thousand frames of data with roughly 25,000 examples of swimmers in various classes; the exact values can be seen in Table~\ref{tab:collected_data}.
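The frame-sampling step (every third frame in general, every frame for dive sequences) can be implemented along the following lines; this OpenCV-based sketch and the file names are our own placeholders, not the exact tooling used:
\begin{verbatim}
import cv2

def extract_frames(video_path, out_prefix, step=3):
    """Save every step-th frame of a race video for annotation.

    step=3 for ordinary racing footage, step=1 for dive sequences.
    """
    cap = cv2.VideoCapture(video_path)
    index, saved = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            cv2.imwrite("{}_{:06d}.jpg".format(out_prefix, index), frame)
            saved += 1
        index += 1
    cap.release()
    return saved

extract_frames("bloomington_200_free_heat3.mp4", "swim", step=3)
\end{verbatim}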
\begin{table}
\centering
\begin{tabular}{l|r|r}
Class & \# Annotations & Percent of Total\\
\hline \hline
``On-blocks'' & 2,344 \hspace{0.45cm} & 10\% \hspace{0.5cm} \\
Diving & 1,124 \hspace{0.45cm} & 5\%\hspace{0.6cm} \\
Swimming & 13,009 \hspace{0.45cm} & 53\%\hspace{0.6cm} \\
Underwater & 2,997 \hspace{0.45cm} & 12\%\hspace{0.6cm} \\
Turning & 1,558 \hspace{0.45cm} & 6\%\hspace{0.6cm} \\
Finishing & 3,534 \hspace{0.45cm} & 14\%\hspace{0.6cm} \\
\hline
Total & 24,566 \hspace{0.45cm} & 100\%\hspace{0.6cm} \\
\end{tabular}
\caption{The amount of collected data for each class}
\label{tab:collected_data}
\end{table}
After the collection of this data, multiple models were trained with different subsets of the collected data to find the amount and distribution of data that produces optimal results, optimal meaning reducing the amount of redundancy in the dataset while still obtaining detection results that are good enough. The first method of creating subsets was to randomly select a specified percentage of the three-thousand frames. The second method of subset creation was to randomly select a specified percentage of each class from the three-thousand frames. This method guaranteed that there would always be the same ``percent of total'' in all classes. Tests of the models using the second method's data will show whether a certain class should have had more annotations in the initial collection phase.
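The two subset-creation methods can be sketched as follows. The sketch is our own simplification: it assumes each annotated frame record carries a single dominant class label, whereas real frames contain several swimmers:
\begin{verbatim}
import random
from collections import defaultdict

def subset_random(frames, fraction, seed=0):
    """Method 1: keep a random fraction of all annotated frames."""
    rng = random.Random(seed)
    return rng.sample(frames, round(fraction * len(frames)))

def subset_per_class(frames, fraction, seed=0):
    """Method 2: keep the same fraction of frames within every class,
    preserving the class proportions of the full set."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for frame in frames:
        by_class[frame["class"]].append(frame)
    subset = []
    for items in by_class.values():
        subset.extend(rng.sample(items, round(fraction * len(items))))
    return subset
\end{verbatim}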
\section{Results}
The Darknet-53, YOLOv3-416 model \cite{yolov3} and the Darknet-15, YOLOv3-tiny-416 model \cite{tiny_yolo} were considered for testing. It was found that their results were almost identical, and so the following tests were completed with the tiny model. In total, fifteen models were trained with 1\%, 2\%, 5\%, 10\%, 25\%, 50\%, 75\% and 100\% of the data collected. For each test the models were given the exact same architecture and parameters; the only way they differed was in the datasets they were trained on. Their performance was tested against a dataset of five-hundred frames that were not used in training. Half of the test set was obtained from the same pool used for training and the other half was obtained from a different pool but with similar conditions. These pools are designated as Bloomington and Winter National, respectively. The performance was gauged using mean average precision (mAP) for each class; for more details, see \cite{everingham}. The mAP of tracking was also collected. Tracking disregards the classes; the mAP value of tracking represents how good the model is at identifying the position of a swimmer in a pool.
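For reference, the per-class numbers follow the usual PASCAL-VOC-style average precision computation. The following condensed sketch is our own simplification (a single IoU threshold of 0.5 and eleven-point interpolation), not the exact evaluation code used:
\begin{verbatim}
import numpy as np

def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def average_precision(detections, ground_truth, iou_thr=0.5):
    """AP for one class.

    detections: list of (frame_id, confidence, box), any order.
    ground_truth: dict mapping frame_id -> list of ground-truth boxes.
    """
    n_gt = sum(len(b) for b in ground_truth.values())
    used = {fid: [False] * len(b) for fid, b in ground_truth.items()}
    tp, fp = [], []
    for fid, _, box in sorted(detections, key=lambda d: -d[1]):
        best, best_iou = -1, iou_thr
        for i, gt_box in enumerate(ground_truth.get(fid, [])):
            overlap = iou(box, gt_box)
            if overlap >= best_iou and not used[fid][i]:
                best, best_iou = i, overlap
        if best >= 0:
            used[fid][best] = True
            tp.append(1)
            fp.append(0)
        else:
            tp.append(0)
            fp.append(1)
    tp, fp = np.cumsum(tp), np.cumsum(fp)
    recall = tp / max(n_gt, 1)
    precision = tp / np.maximum(tp + fp, 1e-9)
    # eleven-point interpolated area under the precision-recall curve
    ap = 0.0
    for r in np.linspace(0.0, 1.0, 11):
        ap += (precision[recall >= r].max() if np.any(recall >= r) else 0.0) / 11
    return ap
\end{verbatim}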
\begin{figure}[ht]
\centering
\includegraphics[width=8cm]{results_plot.png}
\caption{Results from the data collection test}
\label{fig:res_plot}
\end{figure}
Figure~\ref{fig:res_plot} shows a condensed breakdown of the results. The top plots represent testing of footage from the same pool, Bloomington. The bottom plots represent testing of footage from a different pool, Winter National. The x-axis represents the percent of data used for training and the y-axis represents the mAP value. Each line in a plot is either a different class or the tracking results. The plots on the left represent the first subset distribution and the plots on the right represent the second subset distribution.
The first thing to notice from this test is that, based on the top two graphs, using roughly twenty percent of the data collected was sufficient to produce comparable results. That corresponds to collecting data every fifteen frames, and every five frames for diving. After reducing the data collected to less than twenty percent, a steep decline in overall performance is observed.
The next thing to notice is the extremely poor performance of the model when predicting diving at the Winter National pool and the less than optimal performance on the turning and underwater classes. These results are due to the difference in camera angles from one pool to the other, as can be seen in Figure~\ref{fig:dive_comp}. This could have been partially fixed by flipping the training images horizontally, but the camera angle between pools is different even with the horizontal flip. That being said, the swimming in Winter National was captured at roughly the same position as in Bloomington. This is confirmed in Figure~\ref{fig:res_plot}, as the Bloomington and Winter National swimming plots have the same profile. This is in contrast to the rest of the Winter National plots, which have drastically different profiles than the Bloomington results.
\begin{figure}[ht]
\centering
\includegraphics[width=6cm]{Dive_comp.jpg}
\caption{Difference between dive camera angles: Winter National (top) and Bloomington (bottom)}
\label{fig:dive_comp}
\end{figure}
Lastly, there seems to be no significant difference across classes in the amount of data at which the mAP value sharply drops. This might indicate that collecting annotations once every fifteen frames (once every five for diving) is a good enough approximation. However, this conclusion is based on a relatively small test set. If more insight is to be gained on the distribution of classes collected from swimming footage, more tests need to be conducted.
\section{Summary}
In this paper we presented the first step in a project for the automation of swimming analytics. We began construction of a dataset by identifying the important aspects of data collection and annotation. The results suggest that data can be efficiently collected from a video by annotating frames at two or three frames a second (six frames a second for diving). Such analysis provided validation that, under optimal circumstances, a detection system can exist. Lastly, this experiment gave a general intuition of how deep learning detection models such as Darknet \cite{Redmon} respond to swimmer data. Specifically, a lighter detection model such as Darknet-15 performs roughly the same as Darknet-53 for the detection of swimmers \cite{yolov3,tiny_yolo}. Finally, there is no reason to believe that a system such as this one should not work in a general sense, once given more training examples of swimmers in different competitions.
With the tools put forth by this paper we are able to begin the next steps in automating swimming analytics. We will use the procedure presented here to collect more data from a variety of sources, creating an annotated dataset for swimming. Next, we will build better tracking solutions incorporating swimmer dynamics such as in~\cite{bewley2016simple}, and finally we will build metric collection solutions to automatically derive common swimming metrics such as stroke count and stroke length. The beauty of this work is that it is very modular and as such can be built upon once the ground work has been completed. Upon completion, we are confident that this project will greatly help simplify the collection and use of swimming analytics, assisting coaches and athletes across all levels of swimming and possibly even helping to increase viewer interest in swimming.
\section{Introduction}
Swimming analytics are valuable for both swimmers and coaches in improving short and long term performance. They provide measurable methods of quantifying improvement and ability over the course of a swimmers career. Such data can be used at the individual level, when making training plans to improve performance in places where data indicates weakness. It can also be used on the team level. Every swimmer's career comes down to making some kind of a team. Such teams range from simply making a collegiate team to making the country's Olympic National Team. Each team will have coaches whose job is to pick the best swimmers to maximize the team's performance. At all levels, the coaches seek the best swimmers for the present and the future. These decisions are never easy, as all teams have limited space and specific needs in terms of being competitive with other teams. Except in the case of relays, swimming is often perceived an individual sport. But it is important to recognize that it is also a team sport, especially with the addition of the International Swim League (ISL), the world's first league where swimmers compete against other swimmers as a team, much like in the NBA, NHL, and NFL.
The motivation for this work is to help produce a system that will automate the collection of swimming analytics in swim competition videos across all participants, using image-based processing methods and tracking algorithms. This would save coaches and athletes many hours analyzing post race videos manually. There is plenty of useful swimming data collected every day. For example, RaceTek \cite{raceteck} provides Canadian swimmers with racing data, as seen in Figure~\ref{fig:racetek}, for all major and some minor competitions; RaceTek has been doing so for many years now. There are other organizations that provide similar services in practice environments, such as Form Swim and TritonWare~\cite{form_swim,tritonwear}. These businesses have been very successful over the past few years.
\begin{figure}[!t]
\centering
\includegraphics[width=0.45\textwidth]{RaceTech.jpg}
\caption{A modified example report from RaceTek \cite{raceteck}, found under Video Race Analysis (VRA)}
\label{fig:racetek}
\end{figure}
The basic services provided by RaceTek are stroke rate analysis, swim velocity analysis and turn analysis collected from videos recorded at swim competitions. All calculations are done by hand using video footage to analyze swimming. Theses tasks are time-consuming and can be automated. The goal of this paper is to discuss the path towards building an automated swimming analytics engine to provide useful data to coaches and swimmers. Specifically, we address the need to create a dataset for training swimmer detection and tracking models, which could then support automatic generation of currently used analytics~\cite{victor}, as well as create new ones.
\section{Previous Work}
Much work has been done to collect swimming analytics in the training environment. For example,~\cite{zecha} used computer vision methods to estimate swimmer stroke counts and stroke rates on footage collected from a swim channel. The channel was used to control their environment; in addition, they also collected their footage underwater. Thus, in their case, only one swimmer is in the video at any given time. This, however, would be impractical in case of multiple swimmers in a competition. Another example is analytics with accelerometers placed on the body of the swimmers~\cite{Mooney,tritonwear,form_swim}. For the task of collecting swimming analytics across all races, wearable devices are too impractical due to weight, the water dynamics of wearable devices, and the regulations regrading swimming with such devices.
This leaves analyzing overhead race video of all swimmers as the most widely-available and least intrusive way to collect swimming analytics. Little published research on automated swimming analytics of competition video is available. Some of the main contributions are~\cite{sha,sha_understanding,victor,victor_new}. In~\cite{sha}, the authors used a complex assortment of tracking algorithms to determine the current state of a swimmer and thus the best way to locate their current position. The main problem with their work was that, while it did produce good results, the results were only valid for the one pool their work was dedicated to. If such a system were to be employed in a different pool, it would have to be recalibrated. A more recent work~\cite{victor} gives the most practical results for collecting swimming analytics in a competition environment. Unfortunately, this work can only collect analytics using a video tracking a single swimmer, which means additional detection and video processing is needed to extract such segments from the common multi-swimmer video. The task of counting strokes relies heavily on proper tracking~\cite{victor_new}. In the above works, the main challenge is finding swimmers in a multi-swimmer video scene and tracking their movement using one or just a few camera sources. This is made difficult as swimmers start on a block, dive into the water, turn on a wall, and swim under the water.
There are few works on tracking humans in aquatic environments, the main ones being~\cite{sha,Chan}. In~\cite{Chan}, the authors performed detection in an aquatic environments using dense optical flow motion map and intensity information. These works use background subtraction to determine the location of swimmers based on a model of the aquatic environment. The methods of swimmer detection proposed in these papers are valid candidates to use to assist in tracking. However, they do not generalize well, as they are heavily dependent on the pools for which they were developed. For their proposed methods to be used at other pools would require recalibration and re-estimating background parameters.
Fortunately, there is plenty of work on general object tracking in diverse environments, such as~\cite{kalal}. This work assumed that nothing was known about what was going to be tracked and notes that when some prior information is available about objects to be tracked, it is possible to build a more accurate and robust tracker. Due to the very structured nature of swimming competitions, a fairly accurate tracker seems possible. An example of another tracking system using this principle is~\cite{alvar}, which is built upon the You Only Look Once (YOLO) detection system~\cite{Redmon} to aid their tracking algorithm.
Unfortunately, training such systems requires annotated datasets of swimmers in swimming environments, which currently do not seem to be openly available.
Collecting annotated data is well researched. The creators of the PASCAL VOC challenge~\cite{everingham} spent time creating a well-defined procedure for collecting useful data from online footage. In summary, VOC strove to collect images that accurately represented all possible instances of objects to be detected. This means that all examples of objects have adequate variance in terms of object size, orientation, pose, illumination, position and occlusion. The data also needs to be annotated in a consistent, accurate and exhaustive manner. The VOC challenge used images from the website Flickr, which allowed for classes to be created fulfilling the variance requirements noted. No such set of picture examples exist for swimmers in a racing environment. Fortunately, there is an abundance of race video footage which, when processed, can be used as examples. This race footage can be found mainly on YouTube, as well as through private organizations such as the Canadian Sports Institute or USA Swimming Productions. Currently, there seems to be no open dataset of annotated swimmers.
\section{Swim Race Footage Variability}
In this section we provide an in-depth look at the pool environment and what one can expect when observing footage of swim competitions. It is important to know such information in order to collect a proper dataset of swimmers, because such knowledge aids in the collection of scenarios with sufficiently large variability of conditions. There are three main contributors to variance when it comes to capturing swimming races in pool environments. These contributors are: venue, camera angle, and swimming itself.
\subsection{Venue}
The venue refers to the place of competition, the pool in which the racing occurs. Each venue, or even the same venue, can vary according to many factors, which include the course of the pool, the number of lanes, the lighting, architecture, lane ropes, and the presence of flags.
The \textit{course of a pool} refers to the distance the swimmers must travel in order to complete one pool length. For example, in a long course meter (LCM) pool, swimmers must complete fifty meters before they encounter a wall. There are three main pool courses used competitively in swimming around the world: LCM, short course meters (SCM), and short course yards (SCY). The bulk of SCY competitions are held in the USA. The course of a pool has a big effect on where cameras are placed. In an SCM or SCY race, one camera can easily capture an entire race. In an LCM race, however, the pool is long enough to require multiple cameras to avoid reducing the relative size of the swimmers to a very small fraction of the frame. Another point worth considering is that one venue can host all three courses depending on how the pool is built. Less than optimal performance could be achieved if a model were to be trained on one configuration and then be tested on another, even in the same pool, without other similar training data to support it.
Different pools can have different numbers of \textit{lanes}. Typically, there is an even number of lanes, ranging from six to ten. However, in some situations, competitions can be held in LCM pools but raced as SCY or SCM. This can result in up to a twenty-lane race. Causing even more confusion, some competitions have swimmers racing in one half of the pool, but warming up in the other half. An example would be having lanes 0 to 9 with swimmers racing while having lanes 10 through 19 open to swimmers who are not racing.
Pool \textit{lighting} is usually kept as constant as possible in high-level competitions to avoid blinding, as swimmers must swim on their front and on their back. Competitions can be indoors or outdoors, resulting in a wide variety of lighting sources and conditions. In competitions where the pool is indoors, the lighting usually comes from the roof, the ceilings are high, and the lighting is even across an entire pool, but each pool can be more or less illuminated than others. In outdoor pools during daytime, light generally comes from the sun and this can result in very bright reflections and very drastic shadows. At nighttime in an outdoor pool, lighting comes from underwater and from above, with varying degrees of illumination. This results in three important factors to consider: illumination, shadow, and glare.
Another important consideration is pool \textit{architecture}, which includes the depth of the pool, what it is made of, the presence of bulkheads, the style of blocks, any markings that may be on the pool bottom, and deck space. Pool depth will affect the relative position of the observed markings at the bottom of the pool. Lane markings are always present and run down the center of each lane. Pools can be made with tile, metal, or other synthetic materials. The composition of materials is less relevant, but the way they absorb and reflect light will affect glare, lighting, and hue in the pool image. Bulkheads are floating walls in the pool that can be moved to change the course of the pool. They look different from the end of the pool mainly because there can be water on the other side of a bulkhead. There are many different types of blocks and they can look drastically different. This is important because a machine learning model could learn that a swimmer is ``on-blocks'' by learning the structure of the blocks rather than the position of the swimmer.
Besides lane markers, which are generally present in all competition pools, there may be other marks on the bottom of pools. For example, at TYR competitions there are large TYR logos at the bottom of the pool. Such big markings can make swimmers hard to identify when they are underwater. Lastly, one must consider the amount of deck space available in the pool and how it is being utilized. A pool with little deck space and many swimmers walking around it will result in many occlusions of the swimmers in the closest lanes, while a pool with more deck space and fewer people on deck will have fewer occlusions.
\textit{Lane ropes} are the division between lanes. They run parallel to the pool edges and stop a swimmer from getting too close to another swimmer while also dampening waves. On the world stage there are very strict rules on how lane ropes may look and what color pattern they must adhere to. In general competition, however, pools will use whatever lane ropes are available and thus their detailed structures are of little use for swimmer detection. They are useful in a broad sense as they are fairly straight and define areas where swimmers can and cannot be. In a training set, it would be desirable to have a wide variety of examples of pools with different styles of lane ropes.
\textit{Flags} are objects that are always above the swimmers in a competition setting. They are a very consistent source of occlusions in swim competition. The flags indicate to the swimmers when the wall is approaching; they are generally five meters or yards away from each end of the pool and span the width of the pool. In more prestigious competitions, when races that do not require flags occur, the flags are removed for better viewing of the swimmers. Thus, it is important to produce examples of races with and without flags in a training set.
This concludes the list of sources of venue variance. While not all sources of variance may be captured in a given training set, it is desirable to be aware of their existence when dealing with specific pool examples.
\subsection{Camera angle}
When collecting stroke data, the most useful angles are the ones where the swimmers are racing in a direction perpendicular to the view of the camera. A Cartesian coordinate system can be applied to the pool to describe the location of cameras and thus the camera angles that can be accomplished. The x-axis will be defined as the closest wall in the view of the camera such that the swimmers are swimming parallel to it. The z-axis of this coordinate system is parallel to the vertical direction relative to the pool, and the y-axis is parallel to the direction of the blocks or the wall that swimmers turn on.
In general, the position of the camera along the z-axis is usually at the \textit{pool level} or the \textit{viewing level}. Pool level is roughly the height of a standing person on deck and viewing level is a height at which all swimmers can be put in view by the camera. Ideally, all races are recorded at viewing level. In media designed for the viewer, most races have multiple cameras and thus, races are recorded with both pool-level footage and view-level footage. The advantage to having pool-level footage is that quick and complex movements such as dives can be more easily captured, but the disadvantage is not having a good view of all swimmers. The advantage of view-level footage is that all swimmers can be seen, but swimmers appear smaller in the scene and so it is more difficult to see what a swimmer is doing.
The camera position along the x-axis, i.e., along the length of the pool, usually captures one of the following three views: (1) the \textit{dive view}, which means the camera is anywhere before the ten-meter mark closest to the blocks; (2) the \textit{turn view}, which means the camera is past the ten-meter mark of the turn end of the race; and (3) a \textit{mid-pool view}, which is anywhere between the dive view and the turn view. Typically, three main (x,z) camera positions are used: pool-level at dive view, viewing-level at mid-pool view, and pool-level at turn view. It is easier to find footage for these three typical combinations than for the others. This is important when creating a training set. All typical combinations of camera positions must be included for a model to properly generalize the locations of swimmers in a race environment.
\subsection{Swimming}
What the swimmers are doing in the water is another variable to consider in swimming footage. A model will need to generalize how swimmers look in a pool when they are racing and so it will need examples of all circumstances. There are four different strokes and twenty-nine different races in swimming, not including the different flavors of race when comparing LCM and SCM. Many races are subsets of other races and so each race need not be exclusively collected to get a good representation of that pool. Both genders take part in the sport of swimming and thus variance can come from the different genders. Women race with more suit material than men and so in high definition footage women look different than men in the water. As swimmers age they tend to increase in size and strength and thus a swimmer who is very young will look drastically different in the water than someone who is older. In summary, it is important that the training video includes all four strokes, a roughly even distribution of male and female swimmers, and an even distribution of age groups of swimmers with different speeds.
\subsection{Summary}
We summarize this section by presenting an exhaustive list of sources of variance that should be considered when building a data set of swim race video footage.
\subsubsection{Adequate Variety of Venues}
\begin{itemize}
\item Pool lighting, e.g., big shadows or good lighting
\item Pools with and without bulkheads
\item Pools with different block styles
\item Pools with different depths
\item Pool courses (LCM, SCM, SCY)
\item Number of lanes, including unusual configurations
\end{itemize}
The more combinations and variations of these items across the entire dataset, the better. As already mentioned, the same venue can have different pool courses and lane numbers at different competitions. Hence, we propose collecting a variety of footage for each venue as follows.
\subsubsection{Variety for Each Venue}
A collection of all of the following variants is required across the entire dataset.
\begin{itemize}
\item Presence/absence of occlusions from people and flags
\item Variety of camera angles, especially the three typical camera positions mentioned
\item Swimming stroke and race distance
\item Gender and age
\end{itemize}
For each venue it is important that an example of all four strokes is included and that races ranging from sprints (roughly thirty- to sixty-second races) to distance events (roughly eight- to fifteen-minute races) are collected. Each venue should have data captured with all available camera angles. For a given venue, there is no need to have examples from all of the twenty-nine different races with each gender and age group, so long as a variety of ages and genders is present across the entire dataset. It is also not imperative that every competition moves the flags.
\subsubsection{Footage Augmentation}
Some features created by pool venue and camera angle can be effectively simulated by augmenting the dataset. The suggested augmentations are listed below; a minimal sketch of how they could be implemented follows the list.
\begin{itemize}
\item Flipping images across the y-axis
\item Shearing or perspective transformation of images to simulate different camera angles
\item Changing overall brightness, hue and contrast
\end{itemize}
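To make these suggestions concrete, the following sketch (our own illustration, not part of any existing annotation pipeline) shows how a horizontal flip, a mild shear and a brightness/contrast jitter could be applied to a frame and its bounding boxes with OpenCV. The shear range, the jitter limits and the box format ($[x_1,y_1,x_2,y_2]$ in pixels) are illustrative assumptions; a hue jitter could be added analogously by perturbing the H channel in HSV space.
\begin{verbatim}
import cv2
import numpy as np

def augment(image, boxes):
    """Apply one random augmentation; boxes are [x1, y1, x2, y2] pixel coords."""
    h, w = image.shape[:2]
    boxes = [list(b) for b in boxes]
    choice = np.random.choice(["flip", "shear", "photometric"])
    if choice == "flip":                      # mirror across the vertical axis
        image = cv2.flip(image, 1)
        boxes = [[w - x2, y1, w - x1, y2] for x1, y1, x2, y2 in boxes]
    elif choice == "shear":                   # mild horizontal shear, a rough
        s = np.random.uniform(-0.15, 0.15)    # stand-in for a camera-angle change
        M = np.float32([[1, s, 0], [0, 1, 0]])
        image = cv2.warpAffine(image, M, (w, h))
        def shear_box(b):
            x1, y1, x2, y2 = b
            xs = [x1 + s*y1, x1 + s*y2, x2 + s*y1, x2 + s*y2]
            return [min(xs), y1, max(xs), y2]
        boxes = [shear_box(b) for b in boxes]
    else:                                     # brightness / contrast jitter
        alpha = np.random.uniform(0.7, 1.3)   # contrast factor
        beta = np.random.uniform(-30, 30)     # brightness offset
        image = cv2.convertScaleAbs(image, alpha=alpha, beta=beta)
    return image, boxes
\end{verbatim}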
\section{Proposed Method}
We now describe our proposed method of data collection and the creation of a dataset of swimmers racing. As outlined in~\cite{everingham}, it is important that every labelled swimmer is labelled consistently, so that a machine learning algorithm can learn effectively.
\subsection{Obtaining Examples of Swimmers}
The type of data collected is very important to creating a reliable detection system; ideally, there is an adequate variety of the situations described in the section above. There are many potential sources for collecting this data, such as \cite{swim_usa,Aust_Institute_Sport,Can_Institute_Sport}. Swim USA's YouTube page alone has roughly five-hundred races published on it a year. As a first step, examples of swimmers were collected from race footage taken from only one pool, that being the 2019 TYR Pro Swim Series - Bloomington, posted on YouTube. Although a bigger dataset of pools and races from different sources is more desirable, it was decided that some preliminary work would first be completed to understand how a model would perform with one pool.
\subsection{Classes of Swimmers}
Following the approach of \cite{sha}, six classes of swimmers will be collected when creating a swimmer dataset. These classes are as follows: on-blocks, diving, underwater, swimming, turning, and finishing. These six classes, if detected correctly, are valuable for the automated collection of swimming metrics such as dive reaction speed and distance off the wall, as well as for collecting splits, race times, and more. Thus, these six classes will be used in the creation of a dataset. There are points in a race when a swimmer must transition from one class to the next; these transitions must be well defined so that annotations are collected consistently.
The following is a guide on how to choose a swimmer's class in a frame, given knowledge of the race and the distance completed in the race; a compact summary of the legal transitions is sketched after the list. This guide does not include illegal transitions such as underwater to turning.
\begin{enumerate}
\item \textbf{``On-blocks''}
\begin{itemize}
\item ``On-blocks'' is always first
\item Class starts at the point when the swimmers are on the blocks at the start of the race
\item The transition to the next class will be defined as the point when the swimmer is no longer touching the blocks
\end{itemize}
\item \textbf{Diving}
\begin{itemize}
\item Diving is always after ``On-blocks''
\item Defined as the point when the swimmer is in mid-air and not on the block, underwater, or swimming yet
\item The transition out of diving will be defined as the point when the entire swimmer becomes occluded by the water and splash of the dive entry. In the case that a swimmer fails to completely submerge themselves, skip the underwater class completely and start annotating the swimmer as swimming
\end{itemize}
\item \textbf{Underwater}
\begin{itemize}
\item Underwater can only happen after a turn or diving
\item Defined as any point in the race when the swimmer is completely submerged, not touching a wall and not swimming
\item The transition out of underwater will be defined as the point when the swimmer breaks the water with any part of their body to start swimming
\item Do not annotate a swimmer if they cannot be seen, e.g., if 90\% of the swimmer is hidden due to camera angle, lane ropes and the refraction of the water
\end{itemize}
\item \textbf{Swimming}
\begin{itemize}
\item Swimming comes after underwater, diving or turning
\item Defined as any point in the race when the swimmer is completing legal stroke cycles and not touching a wall
\item The transition out of swimming into turning can occur on a touch turn or on a flip turn
\item When performing a touch turn, turning commences when the swimmer touches the wall
\item When performing a flip turn, the turn commences when the swimmer is on their front and the head is submerged due to the flip
\item The transition out of swimming into finishing is when the swimmer touches the wall and the race has concluded
\end{itemize}
\item \textbf{Turning}
\begin{itemize}
\item Turning only happens after swimming
\item Defined as any point in a race when the swimmer comes to a stop near enough to a wall in order to touch or push off the wall
\item The transition out of turning to underwater is when the swimmer's feet, or possibly their last body part in contact, leave the wall
\item The point when a swimmer is completely straight can also signify the transition to underwater
\item In the case that a swimmer fails to completely submerge themselves after a touch, skip the underwater class completely and start annotating as swimming
\item A turn should always be boxed, unless the swimmer is cut off by the camera or the camera angle, since the swimmer is somewhere on the wall
\end{itemize}
\item \textbf{Finish}
\begin{itemize}
\item Finishing only happens after swimming
\item Defined as any point after the conclusion of the race distance
\item Finishing is always the final class of swimmer
\end{itemize}
\end{enumerate}
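The guide above can be condensed into a small table of legal transitions, which is also convenient for automatically checking that the sequence of per-frame labels assigned to one swimmer is consistent. The sketch below is our own illustration; the class identifiers are arbitrary.
\begin{verbatim}
# Legal class transitions implied by the annotation guide.
LEGAL_TRANSITIONS = {
    "on_blocks":  {"diving"},
    "diving":     {"underwater", "swimming"},  # swimming if never fully submerged
    "underwater": {"swimming"},
    "swimming":   {"turning", "finishing"},
    "turning":    {"underwater", "swimming"},  # swimming if never fully submerged
    "finishing":  set(),                       # always the final class
}

def first_illegal_transition(labels):
    """Index of the first illegal transition in a per-frame label sequence
    for one swimmer, or None if the sequence is consistent."""
    for i in range(1, len(labels)):
        prev, cur = labels[i - 1], labels[i]
        if cur != prev and cur not in LEGAL_TRANSITIONS[prev]:
            return i
    return None

# first_illegal_transition(["on_blocks", "diving", "underwater", "swimming"])
# -> None (a legal sequence)
\end{verbatim}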
\subsection{Class Annotation Details}
This section outlines how to assign a bounding box to each example of a swimmer. In general, the box must be the smallest possible box containing the entire swimmer, ``except where the bounding box would have to be made excessively large to include a few additional pixels ($\leq$5\%)'' \cite{Annotation_Guidelines}. If 80\%--90\% of a swimmer is cut off by the camera, do not give them a box. Put a box around a swimmer that can be identified in any way, unless it is cut off by the camera or camera angle. Because there are a variety of situations where this statement becomes ambiguous, some general guidelines are given below for specific classes.
\begin{figure*}[ht!]
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[height=1.2in]{on_block.png}
\captionsetup{justification=centering}
\caption[center]{Swimmers on blocks}
\label{fig:on_blocks}
\end{subfigure}
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[height=1.2in]{swimming.png}
\captionsetup{justification=centering}
\caption{Swimmers swimming}
\label{fig:swimming}
\end{subfigure}
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[height=1.2in]{under_water.png}
\captionsetup{justification=centering}
\caption{Swimmers underwater}
\label{fig:underwater}
\end{subfigure}
\caption{Examples of swimmer states}
\end{figure*}
\subsubsection{``On-blocks''}
For swimmers in the farther lanes and behind other swimmers, add the tightest box possible around all visible parts of the swimmer. If the tip of a foot is visible from behind another swimmer, for example, do not make the box excessively larger than the majority of the swimmer visible. An example would be the swimmer above the annotated swimmer in figure \ref{fig:on_blocks}.
\subsubsection{Swimming}
Stretch the box to include the arms and feet. If the feet are not visible, center the end of the box at the swimmer's feet around the splash produced by the kick.
\subsubsection{Underwater}
When a swimmer is visible, create the smallest possible box that encompasses the swimmer, see figure \ref{fig:underwater}. When a swimmer becomes too difficult to box accurately do not annotate the swimmer, see the top three swimmers in figure \ref{fig:underwater}.
\subsubsection{Turning}
Make the smallest box that encompasses the swimmer, regardless of how occluded the swimmer is. If more than ninety percent of the swimmer is out of camera view, then do not annotate the swimmer.
\subsubsection{Finishing}
As a swimmer finishes, they generally look to the clock to see their time. As this happens, they transition from a horizontal body position to a vertical one. Due to the refraction of the water and the bubbles formed by the swimmer, the body of the swimmer becomes invisible to the camera. Thus, a minimal box around what is visible is all that is required.
\subsubsection{Diving}
It can be difficult to determine exactly which swimmer is being annotated. This is because the minimal box including the entirety of one swimmer could also require that the swimmer below is included. Create a minimal box around the swimmer being annotated even if this means the box created also includes a large portion of the other swimmer.
\begin{figure*}[ht!]
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[height=1.2in]{turn.png}
\captionsetup{justification=centering}
\caption{Swimmers turning}
\label{fig:turning}
\end{subfigure}
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[height=1.2in]{diving.png}
\captionsetup{justification=centering}
\caption{Swimmers diving}
\label{fig:diving}
\end{subfigure}
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[height=1.2in]{finish.png}
\captionsetup{justification=centering}
\caption{Swimmers finishing}
\label{fig:finishing}
\end{subfigure}
\caption{Additional examples of swimmer states}
\end{figure*}
\section{Experiments Using One Pool}
When annotating video, many frames are highly correlated, and thus are redundant for training a detection or tracking algorithm. For this reason, tests were performed to reduce the amount of redundant annotations collected. In these experiments, we examined how to limit annotation to avoid collecting redundant data while still capturing sufficient data variety to train a successful model.
\subsection{Collecting Swimmer Data for Tracking}
To illustrate the importance of efficiently and effectively collecting data, a simple and somewhat extreme example is considered. Consider a single race video of a fifteen-hundred freestyle, at thirty frames per second, with eight swimmers, and with a length of sixteen minutes (regarded as a slow men's time on the world stage). If annotated in full, the resulting video would yield more than 230,000 examples of swimmers performing all six classes of swimming across its frames. When one considers the nature of a fifteen-hundred freestyle, it is obvious that these examples do not contain the right proportion of swimmer examples: they will not contain all four strokes, their respective turns, or both genders of swimmers, to say the least. There are other problems with collecting one single race, as mentioned in the summary of swim race footage variability. Regardless, using a custom-built labelling system, annotations of swimmers required an average of two seconds per bounding box. This means labelling the entirety of the aforementioned video would take over five days of continuous work. Because of the high redundancy in these images, such annotation would be an inefficient use of time. The following describes the experiments conducted in order to find a better annotation procedure.
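The figures quoted above follow from a simple back-of-the-envelope calculation, sketched below for completeness.
\begin{verbatim}
# Back-of-the-envelope check of the annotation effort quoted above.
frames      = 16 * 60 * 30         # 16 minutes at 30 fps  -> 28,800 frames
boxes       = frames * 8           # 8 swimmers per frame  -> 230,400 boxes
label_hours = boxes * 2 / 3600     # 2 seconds per box     -> 128 hours
print(boxes, label_hours, label_hours / 24)   # ~230,400 boxes, ~5.3 days
\end{verbatim}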
\subsection{Extraction of Swimming Video Features}
The method used to test which frames in race video to annotate and which frames to skip was as follows. Using footage found on Swim USA's YouTube page \cite{swim_usa}, data was collected from a few videos of one competition, the 2019 TYR Pro Swim Series - Bloomington. All possible strokes, turns and dives were present in the data collected. One in every three frames of the footage was annotated, as suggested in \cite{victor}. An exception was made with footage containing diving, in which case the video was annotated frame by frame. This exception was due to the large amount of movement a dive contains and its short duration. The result was three-thousand frames of data with roughly 25,000 examples of swimmers in various classes; the exact values can be seen in Table~\ref{tab:collected_data}.
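A minimal sketch of this sampling scheme is given below. It assumes that the dive segments have already been identified as frame-index ranges; the file names and paths are placeholders.
\begin{verbatim}
import cv2  # OpenCV

def extract_frames(video_path, out_dir, dive_ranges=(), step=3):
    """Save every `step`-th frame, but every frame inside a dive segment."""
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        in_dive = any(a <= idx <= b for a, b in dive_ranges)
        if in_dive or idx % step == 0:
            cv2.imwrite(f"{out_dir}/frame_{idx:06d}.png", frame)
        idx += 1
    cap.release()

# Example: one in three frames, frame by frame around the start (a dive).
# extract_frames("race.mp4", "frames", dive_ranges=[(0, 150)], step=3)
\end{verbatim}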
\begin{table}
\centering
\begin{tabular}{l|r|r}
Class & \# Annotations & Percent of Total\\
\hline \hline
``On-blocks'' & 2,344 & 10\% \\
Diving & 1,124 & 5\% \\
Swimming & 13,009 & 53\% \\
Underwater & 2,997 & 12\% \\
Turning & 1,558 & 6\% \\
Finishing & 3,534 & 14\% \\
\hline
Total & 24,566 & 100\% \\
\end{tabular}
\caption{The amount of collected data for each class}
\label{tab:collected_data}
\end{table}
After the collection of this data, multiple models were trained with different subsets of the collected data to find the amount and distribution of data that produces optimal results. Here, optimal means reducing the amount of redundancy in the dataset while still obtaining sufficiently good detection results. The first method of creating subsets was to randomly select a specified percentage of the three-thousand frames. The second method of subset creation was to randomly select a specified percentage of each class from the three-thousand frames. This method guaranteed that there would always be the same ``percent of total'' in all classes. Tests of the models using the second method's data will show whether a certain class should have had more annotations in the initial collection phase.
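The two subset-creation methods can be sketched as follows. For brevity the sketch samples whole frames and assumes that each frame is assigned to a single dominant class (a simplifying assumption of ours); sampling individual annotations per class would be analogous.
\begin{verbatim}
import random
from collections import defaultdict

def random_subset(frames, fraction, seed=0):
    """Method 1: keep a random fraction of all annotated frames."""
    rng = random.Random(seed)
    return rng.sample(frames, round(fraction * len(frames)))

def per_class_subset(frames, frame_class, fraction, seed=0):
    """Method 2: keep a random fraction of the frames of each class, so that
    the 'percent of total' of every class is preserved in the subset."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for f in frames:
        by_class[frame_class[f]].append(f)
    subset = []
    for members in by_class.values():
        subset.extend(rng.sample(members, round(fraction * len(members))))
    return subset
\end{verbatim}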
\section{Results}
The Darknet-53 YOLOv3-416 model \cite{yolov3} and the Darknet-15 YOLOv3-tiny-416 model \cite{tiny_yolo} were considered for testing. It was found that their results were almost identical and so the following tests were completed with the tiny model. In total, fifteen models were trained with 1\%, 2\%, 5\%, 10\%, 25\%, 50\%, 75\% and 100\% of the data collected. For each test the models were given the exact same architecture and parameters; the only way they differed was in the datasets they were trained on. Their performance was tested against a dataset of five-hundred frames that were not used in training. Half of the test set was obtained from the same pool used for training and the other half was obtained from a different pool but with similar conditions. These pools are designated as Bloomington and Winter National, respectively. The performance was gauged using mean average precision (mAP) for each class; for more details, see \cite{everingham}. The mAP of tracking was also collected. Tracking disregards the classes; the mAP value of tracking represents how good the model is at identifying the position of a swimmer in the pool.
\begin{figure}[ht]
\centering
\includegraphics[width=8cm]{results_plot.png}
\caption{Results from the data collection test}
\label{fig:res_plot}
\end{figure}
Figure \ref{fig:res_plot} shows a condensed breakdown of the results. The top plots represent testing on footage from the same pool, Bloomington. The bottom plots represent testing on footage from a different pool, Winter National. The x-axis represents the percent of data used for training and the y-axis represents the mAP value. Each line in a plot is either a different class or the tracking results. The plots on the left represent the first subset distribution and the plots on the right represent the second subset distribution.
The first thing to notice from this test is that, based on the top two graphs, using roughly twenty percent of the data collected was sufficient to produce comparable results; that is, collecting data every fifteen frames, and every five frames for diving. When the data collected is reduced to less than twenty percent, a steep decline in overall performance is observed.
The next thing to notice is the extremely poor performance of the model when predicting diving at the Winter National pool and the less than optimal performance in the turning and underwater classes. These results are due to the difference in camera angles from one pool to the other, as can be seen in Figure \ref{fig:dive_comp}. This could have been partially fixed by flipping the training images horizontally, but the camera angle between pools is different even with the horizontal flip. That being said, the swimming in Winter National was captured from roughly the same camera position as in Bloomington. This is confirmed in Figure \ref{fig:res_plot}, as the Bloomington and Winter National swimming plots have the same profile. This is in contrast to the rest of the Winter National plots, which have drastically different profiles than the Bloomington results.
\begin{figure}[ht]
\centering
\includegraphics[width=6cm]{Dive_comp.jpg}
\caption{Difference between dive camera angles: Winter National (top) and Bloomington (bottom)}
\label{fig:dive_comp}
\end{figure}
Lastly, there seems to be no significant difference in the amount of data at which the mAP value sharply drops when comparing across all classes. This might indicate that collecting annotations once every fifteen frames (once every five for diving) is a good enough approximation. However, this conclusion is based on a relatively small test set. If more insight is to be gained on the distribution of classes collected from swimming footage, more tests need to be conducted.
\section{Summary}
In this paper we presented the first step in a project for the automation of swimming analytics. We began construction of a dataset by identifying the important aspects of data collection and annotation. The results suggest that data can be efficiently collected from a video by annotating frames at two or three frames a second (six frames a second for diving). Such analysis provided validation that, under optimal circumstances, a detection system can exist. Lastly, this experiment gave a general intuition of how deep learning detection models such as Darknet \cite{Redmon} respond to swimmer data. Specifically, a lighter detection model such as Darknet-15 performs roughly the same as Darknet-53 for the detection of swimmers \cite{yolov3,tiny_yolo}. Finally, there is no reason to believe that a system such as this one should not work in a general sense, once given more training examples of swimmers in different competitions.
With the tools put forth by this paper we are able to begin the next steps in automating swimming analytics. We will use the procedure presented here to collect more data from a variety of sources, creating an annotated dataset for swimming. Next, we will build better tracking solutions incorporating swimmer dynamics such as in~\cite{bewley2016simple}, and finally we will build metric collection solutions to automatically derive common swimming metrics such as stroke count and stroke length. The beauty of this work is that it is very modular and as such can be built upon once the ground work has been completed. Upon completion, we are confident that this project will greatly help simplify the collection and use of swimming analytics, assisting coaches and athletes across all levels of swimming and possibly even helping to increase viewer interest in swimming.
\section{Introduction} \label{intro}
The engaging nature of the complexity related to team sport competition has historically captured the attention of academics
\cite{blasius2009zipf,clauset2015safe,holleczek2012particle,baek2015nash,ben2007efficiency,yamamoto2021preferential,neiman2011reinforcement,mukherjee2019prior,mandic2019trends,merritt2013environmental,gudmundsson2017spatio,fister2015computational}.
In recent years, the interest in studying this area has increased. Promoted by the demands of the thriving global sports market, the research community now sees the study of the phenomena related to these systems as a challenge to face in the context of a hyper-competitive twenty-first-century industry \cite{patel2020intertwine}.
However, most of the papers available in the literature use empirical data to perform statistical analysis aiming to generate key performance indicators \cite{drikos2009correlates,buldu2019defining,bransen2019measuring,cakmak2018computational,gama2016networks,garrido2020consistency}.
This approach can be handy for coaches using data-driven training systems, but it usually does not provide a deeper understanding of the phenomena.
From an alternative perspective, our view focuses on using empirical data to uncover the mechanisms that explain the emergence of global observables,
based on a multidisciplinary framework that connects physics with complexity in the social sciences \cite{jusup2022social}.
In previous works, we have successfully studied and modeled several aspects of the game of football \cite{chacoma2020modeling,chacoma2021stochastic}.
In this paper, we report a strikingly simple mechanism that governs the dynamics of volleyball.
To place the complexity of this sport in context, let us briefly discuss the basic concepts.
In volleyball games, two teams of six players, separated by a high net, try to score points by grounding the ball on the other's team court.
In possession of the ball, teams may hit the ball up to three times, but players must not hit the ball twice consecutively.
During a rally, namely the time between the serve and the end of the play, the teams compete for the point using specially trained hits to control the ball, set, and attack.
Blocks, likewise, are trained actions to try to stop or alter an opponent's attack, but unlike the others, the hit involved in blocks does not count as an ``official'' hit. It means that a player who hits the ball by blocking can consecutively touch the ball again without infringing the rules.
On the other hand, we want to point out that the emergence of complexity in the game is closely related to the players' ball control and accuracy.
Note that if one of the players is not able to handle the ball, their team loses the point; if one of the players cannot pass the ball accurately, it increases the probability that the teammates cannot control the ball; and if one of the players cannot attack accurately, it increases the probability that the opponents can handle the ball and eventually win the point.
In order to study this complex dynamic and unveil the mechanisms behind it, we proceed as follows.
First we visualized and collected data from 20 high-level international games.
Then we performed a data-driven analysis to get insight into the underlying mechanisms of the game.
With this, we proposed a parsimonious model and developed an analytical approach to obtain a closed-form expression able to capture, in remarkably good approximation, one of the most relevant observables of the dynamics: the probability that the players perform $n$ hits in a rally, $P_n$.
This variable is key to characterizing the system's complexity; it is also related to team performance \cite{sanchez2015analysis,link2019performance}, is involved in self-regulated phenomena \cite{sanchez2016dynamics}, and may affect the motor activities during a top-level match \cite{mroczek2014analysis}.
\section{Data collection}
To collect the data used in this work, we coded a visualization program that allows extracting information from video records of volleyball games.
The program is designed for a trained operator to record the most relevant events observed during rallies. The information gathered at each event includes:
\begin{enumerate}
\item An identification number for the player that hit the ball in the event.
\item The time of the event referred to the beginning of the game.
\item The position of the player that hit the ball, in two dimensions, referred to the court's floor.
\item The type of hit performed: pass, set, attack, block, etc.
\end{enumerate}
We visualized $10$ games of the 2018 FIVB Men's Volleyball Nations League and $10$ games of the 2019 FIVB Men's Volleyball Challenger Cup, collecting the information of $3302$ rallies in total.
The visualization program and an anonymized version of the collected data used for this work can be found in \cite{data}. For further information on the visualized games and details of the visualization process, please see the Supplementary Material, section S1.
\section{Data analysis}
We analyzed the collected data to understand why the players succeed or fail when they intend to score a point.
In Fig.~\ref{fi:insight} we show heatmaps indicating the probability that the players hit the ball in particular zones of the court. Panels (a)-(b) show the case of the 1st hit, (c)-(d) of the 2nd hit, and (e)-(f) the case of the 3rd hit.
At the left (colored in blue), we show the results for possessions in which the team scores the point. At the right (colored in red), the results for possessions in which the team misses the point.
Parameters $\rho_S$ and $\rho_F$ give the probability normalized by the maximum value and are linked to the color intensity of the heatmaps.
In the inset of panel (e), we show the court divided and numerated into zones. Notice that, in the following, we will use these references to discuss the results.
As a first observation, we can see that, in successful possessions, it is highly probable that the players hit the ball in their natural action zones: the first hit (reception) in zones 5-6-1, the second one (set up) around 2-3, and the attack in the zones 4-3-2 (at the front) and 6-1 (at the back).
In unsuccessful possessions, we can see that the players have to move out of the actions zones to perform the hit.
For instance, in panel (b), we can see that the probability of the players performing the 1st hit around zone 2 rises.
Note that this is not tactically convenient because this is the zone that the setter uses to perform the 2nd hit, and the close presence of other players may hinder them, producing a miss or a reduction in accuracy.
If the 1st hit is performed inaccurately, it produces what we see in panel (d), where the setter frequently has to go to the attackers' zones to handle the ball, causing the same hindering effect as in the previous case.
In this context, the attackers' options are limited.
In extreme cases, they will not attack.
Therefore, in these cases, the players' performance diminishes.
In conclusion, when the players have to move out of their action zones to handle the ball, the probability of both missing the ball and hitting it imprecisely increases.
In the following, we will use these observations to define our model.
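For completeness, normalized occupation maps such as $\rho_S$ and $\rho_F$ can be computed from the collected hit positions with a simple two-dimensional histogram, as in the sketch below (our own illustration; the $9\times9$ binning of the $9\,\mathrm{m}\times9\,\mathrm{m}$ half-court is an arbitrary choice).
\begin{verbatim}
import numpy as np

def hit_heatmap(xy, court=(9.0, 9.0), bins=(9, 9)):
    """Normalized 2-D histogram of hit positions on one half-court.
    xy is an (N, 2) array of positions in metres; the output is the hit
    probability per cell divided by its maximum (the normalized rho)."""
    H, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=bins,
                             range=[[0, court[0]], [0, court[1]]])
    P = H / H.sum()
    return P / P.max()

# rho_S = hit_heatmap(first_hits_successful)   # hypothetical input arrays
# rho_F = hit_heatmap(first_hits_failed)
\end{verbatim}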
\begin{figure}[t!]
\centering
\includegraphics[width=0.75\textwidth]{fig1.pdf}
\caption{
Probability that the players hit the ball in particular zones of the court.
Notice that in each panel, we show the court layout, including the position of the net at the top, the 3 meters line at the middle, the background line at the bottom, and the lateral lines at the sides.
Panels (a)-(b) show the case of the 1st hit, (c)-(d) of the 2nd hit, and (e)-(f) the case of the 3rd hit.
Plots at the left (colored in blue) show the results when the possession succeeds.
Plots at the right (colored in red) show the results when the team misses the point.
The inset in panel (e) shows the six zones of the court used as reference.
We use the information presented in these plots to highlight the lack of performance that the players exhibit when moving out of their action zones to perform a hit.
}
\label{fi:insight}
\end{figure}
\section{Model}
We have observed that, during rallies, when the players have to move out of their action zone to perform a hit, the probability of missing the ball increases, and the precision decreases.
In the light of these observations, to model the rallies' dynamics, we introduce two stochastic parameters, $p$ and $q$, as follows,
\begin{enumerate}
\item {\it The probability of performing the hit, $p$}.
If the players have to move out of their action zones to hit the ball, then, there is a probability $p$ of performing the hit, and $1-p$ of missing it.
\item {\it The probability of achieving precision, $q$}.
If the players have to move out of their action zones to perform a hit, then there is a probability $q$ of achieving precision in two situations:
\begin{enumerate}
\item Defending or setting, in the first and second hit passing the ball towards the teammate's action zone.
\item Attacking or serving, sending the ball out of the action zone of the opponent in charge of taking the first hit.
\end{enumerate}
With probability $1-q$, likewise, the opposite occurs in both situations.
\end{enumerate}
Following these rules, the players will move the ball around the court $n$ times until one of them misses a hit, ending the rally.
\section{Analytical approach to obtain $P_n$}
\label{se:analytical}
We now focus on developing an analytical approach to obtain the probability distribution that the players perform $n$ hits during rallies, $P_n$.
First, we introduce two approximations: (i) we will suppose that the teams always try to use the three allowed hits to score the point. In other words, they do not pass the ball to the other team at the first or second hit. Considering that these events are rare in the dataset ($<0.1\%$ of the observed cases), we understand that this is a reasonable approximation; and
(ii) we will not consider the blocks as hits. By rule, when a team blocks an attack, they still have the three hits to control, set and attack. Therefore, in practice, the effects of blocks can be absorbed as inefficiencies in the attack.
Let us start calculating the cases $P_1$, $P_2$ and $P_3$. We obtain,
\begin{subequations}
\begin{align}
P_1 &= q\,(1-p), \label{eq:P1}\\
P_2 &= q\,p\,(1-q)\,(1-p), \label{eq:P2}\\
P_3 &= q\,p\,(1-q)\,p\,(1-q)\,(1-p). \label{eq:P3}
\end{align}
\label{eq:1}
\end{subequations}
In eq.~(\ref{eq:P1}), the probability $q$ indicates that the service is performed outside the action zone of the player in charge of taking the first hit (a difficult service), and the probability $(1-p)$ indicates that this player cannot perform the hit. Consequently, in the rally only one hit is performed: the service.
In eq.~(\ref{eq:P2}), $q$ indicates a difficult service, $p$ indicates that the player can perform the second hit, $(1-q)$ indicates the pass is performed outside the action zone of the player in charge of taking the second hit (setter), and $(1-p)$ indicates that the later cannot perform the third hit. Consequently, in the rally two hits are performed: the service, and the first hit.
With a similar analysis, we can obtain the probability of performing three hits, $P_3$, that we show in eq.~(\ref{eq:P3}).
\begin{figure}[t!]
\centering
\includegraphics[width=0.8\textwidth]{fig2.pdf}
\caption{
With transition probabilities $P_{dd}$, $P_{de}$, $P_{ed}$, and $P_{ee}$, it is possible to evaluate the probability that the players perform $n$ hits in a rally, $P_n$, by using a decision tree. In this plot we show the
decision tree used to calculate $P_7$.
Nodes represent the number of hits performed in complete ball possession and edges transition probabilities.
Circles show the end of the paths used for the calculation of probability $D_3$ [see main text, Eq.~(\ref{eq:4})].
}
\label{fi:Decisiontree}
\end{figure}
To obtain $P_4$ and further values of $n$, it is useful to define the transition probabilities linked to the teams' chances of performing the three hits that define a complete possession,
\begin{equation}
\begin{split}
\centering
P_{dd} &= p\,q+ p\,(1-q)\,p\,q + p\,(1-q)\,p\,(1-q)\,p\,q, \\
P_{de} &= p\,(1-q)\,p\,(1-q)\,p\,(1-q),\\
P_{ed} &= 1, \\
P_{ee} &= 0,
\end{split}
\label{eq:2}
\end{equation}
where $P_{dd}$ is the probability that a team receives a difficult ball and, after the three hits, delivers to the other team a difficult one, $P_{de}$ is the probability that a team receives a difficult ball and delivers an easy one, $P_{ed}$ the probability that a team receives an easy ball and delivers a difficult one, and $P_{ee}$ the probability that a team receives and delivers an easy ball.
In this frame, $P_4$ can be written as follows,
\begin{equation}
\centering
P_4 = q\,P_{dd}\,(1-p) + (1-q)\, P_{ed}\,(1-p),
\label{eq:3}
\end{equation}
where the first term is the probability that a team receives a difficult ball, handles it, delivers a difficult ball, and the other team cannot achieve the first hit. The second term is the probability that a team receives an easy ball, delivers a difficult one, and the opponent team cannot achieve the first hit.
To calculate $P_n$ for higher values of $n$, we use a binary decision tree.
As an example, let us focus on calculating $P_7$.
In Fig.~\ref{fi:Decisiontree} we exhibit a three-level tree where the nodes indicate the number of hits performed and the edges the transition probabilities to the next level.
Note that the level of the tree, $l$, is related to the number of performed hits by the expression $n=3l-2$.
If we define the probability $D_3$ as,
\begin{equation}
\centering
D_3 =
q\,P_{dd} P_{dd} +
q\,P_{de} P_{ed} +
(1-q)\,P_{ed} P_{dd},
\label{eq:4}
\end{equation}
obtained from the sum of all the paths leading to the third-level leaves indicating that the team in possession is delivering a difficult ball (see circles in Fig.~\ref{fi:Decisiontree}),
then, to obtain $P_7$ we just multiply by $(1-p)$, which indicates that the team receiving the attack cannot perform the 8th hit. Therefore,
\begin{equation}
\centering
P_7 = D_3\,(1-p).
\label{eq:5}
\end{equation}
To calculate the probability for values of $n$ between two levels of the tree, for instance $P_8$ and $P_9$, we use Eq.~(\ref{eq:4}) as follows,
\begin{equation}
\centering
\begin{split}
P_8 &= D_3\,p\,(1-q)\,(1-p)\\
P_9 &= D_3\,p\,(1-q)\,p\,(1-q)\,(1-p).
\end{split}
\label{eq:6}
\end{equation}
As the reader may have noted, the calculation of $D_l$ is useful to obtain $P_n$. Therefore, in the following, we focus on calculating $D_l$ $\forall \, l\in \mathbb{N}$.
We can write the probability of ``achieving'' a level $l$ of the tree, $A_l$, as,
\begin{equation}
\centering
A_l = E_l + D_l,
\label{eq:7}
\end{equation}
where probability $E_l$ is the sum of all the paths leading to the $l\,th$ level leaves whose transition probabilities indicate that the team in possession is delivering an easy ball.
Similarly, we can write $D_{l+1}$ and $A_{l+1}$ as,
\begin{equation}
\begin{split}
\centering
D_{l+1} &= E_l\,P_{ed} + D_l\,P_{dd}\\
A_{l+1} &= E_l\,P_{ed} + D_l\,(P_{de}+P_{dd}).
\end{split}
\label{eq:8}
\end{equation}
Combining Eqs.~(\ref{eq:7}) and (\ref{eq:8}), and using $P_{ed}=1$ [see Eqs.~(\ref{eq:2})], we obtain the following system of mutually recursive linear sequences,
\begin{equation}
\centering
\begin{pmatrix}
D_{l+1}\\
A_{l+1}
\end{pmatrix}
=
\begin{pmatrix}
(P_{dd} -1) & 1 \\
(P_{de}+P_{dd}-1) & 1
\end{pmatrix}
\begin{pmatrix}
D_{l}\\
A_{l}
\end{pmatrix}.
\label{eq:9}
\end{equation}
Then, the roots of the characteristic polynomial linked to the $2\times2$ matrix of system (\ref{eq:9}),
\begin{equation}
\centering
\lambda_{1,2} =
\frac{P_{dd} \pm \sqrt{P_{dd}^2 + 4\,P_{de}}}{2},
\label{eq:10}
\end{equation}
can be used to express the general solution of $D_{l}$ as,
\begin{equation}
\centering
D_{l} = a\,\lambda_1^{l-1} + b\,\lambda_2^{l-1},
\label{eq:11}
\end{equation}
where constants $a$ and $b$ can be found by using Eqs.~(\ref{eq:10}) and (\ref{eq:11}), and the expressions for $D_{l=1}$ and $D_{l=2}$,
\begin{equation}
\centering
\begin{pmatrix}
a\\
b
\end{pmatrix}=
\begin{pmatrix}
\lambda_1^{0} & \lambda_2^{0}\\
\lambda_1^{1} & \lambda_2^{1}
\end{pmatrix}^{-1}
\begin{pmatrix}
q \\
q\,P_{dd}+ (1-q)\,P_{ed}
\end{pmatrix}.
\label{eq:12}
\end{equation}
In light of the above, we have formally found the exact solution for $P_n$. In the following set of equations we summarize our result,
\begin{equation}
\begin{split}
\centering
P_n &= D_{\frac{n+2}{3}}(1-p), \\
P_{n+1} &= D_{\frac{n+2}{3}}\,p\,(1-q)\,(1-p), \\
P_{n+2} &= D_{\frac{n+2}{3}}\,p\,(1-q)\,p\,(1-q)\,(1-p),
\end{split}
\label{eq:13}
\end{equation}
with $n=3l-2$, and $l\in\mathbb{N}$.
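For illustration purposes only, the closed form of Eqs.~(\ref{eq:2}) and (\ref{eq:10})--(\ref{eq:13}) can be evaluated numerically as in the following sketch.
\begin{verbatim}
import numpy as np

def pn_distribution(p, q, n_max=30):
    """P_n for n = 1..n_max, from Eqs. (2), (10)-(13) with P_ed = 1."""
    Pdd = p*q + p*(1-q)*p*q + p*(1-q)*p*(1-q)*p*q   # Eq. (2)
    Pde = (p*(1-q))**3
    root = np.sqrt(Pdd**2 + 4*Pde)                  # Eq. (10)
    lam1, lam2 = (Pdd + root)/2, (Pdd - root)/2
    # a, b from D_1 = q and D_2 = q*Pdd + (1-q), Eq. (12)
    a, b = np.linalg.solve([[1.0, 1.0], [lam1, lam2]], [q, q*Pdd + (1-q)])
    D = lambda l: a*lam1**(l-1) + b*lam2**(l-1)     # Eq. (11)
    P = np.zeros(n_max + 1)
    for l in range(1, n_max//3 + 2):                # Eq. (13), n = 3l-2
        n = 3*l - 2
        for k, w in enumerate([1.0, p*(1-q), (p*(1-q))**2]):
            if n + k <= n_max:
                P[n + k] = D(l) * w * (1-p)
    return P[1:]                                    # entry i -> n = i + 1

# Example with the fitted values p = 0.54, q = 0.46:
# P = pn_distribution(0.54, 0.46); print(np.round(P[:10], 4), P.sum())
\end{verbatim}
A routine of this kind can also be used to fit $p$ and $q$, for instance by minimizing the Jensen-Shannon divergence between the computed curve and the empirical one.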
\begin{figure}[t!]
\centering
\includegraphics[width=0.8\textwidth]{fig3.pdf}
\caption{
The probability that the players perform $n$ hits during rallies, $P_n$.
Colored in red with circle marks, the empirical case.
Colored in black with plus symbols, the analytical results.
We can observe that the analytical approach captures well the non-trivial behavior of the empirical distribution. The similarity can be quantified by calculating the Jensen-Shannon divergence between the curves. From this procedure, we obtain $D_{JS}=0.009$, which indicates a high similarity.
}
\label{fi:pn}
\end{figure}
In Fig.~\ref{fi:pn} we show the probability distribution $P_n$.
The distribution extracted from the data is colored in red and the theoretical one, calculated with equations (\ref{eq:13}), is colored in black.
The values of the parameters $p$ and $q$ were set by carrying out a minimization process using the Jensen-Shannon divergence ($D_{JS}$) between the curves as a metric.
From this procedure, we obtained $p=0.54$ and $q=0.46$ with $D_{JS}=0.009$, indicating a good similarity between the curves. Such correspondence can also be observed with the naked eye in the figure.
On the other hand, the mean value and the standard deviation for the empirical curve are $\avg{n}= 4.58(70)$, $\sigma(n)=3.40(45)$, and for the theoretical curve are $\avg{n^{TH}}= 4.74(83)$, $\sigma(n^{TH})=3.44(66)$.
We can see that the values are smaller in the empirical case but still very similar to the calculations.
Regarding the obtained values of $p$ and $q$, it is necessary to highlight that in a real game, each player on the court should have a different pair of values $p$ and $q$. This is because distinct players should perform differently.
Moreover, these pairs could depend on the set because players vary their performance during the game.
However, with these results, we show that the simplification proposed in our model allows us to represent each player as a single {\it average player}. In this context, since we construct the empirical probability $P_n$ with the results of several matches, teams, and players, it is expected that throughout the minimization process, we obtain values close to $0.5$.
The reader might note that the shape of the curves follows a zig-zag pattern with the peaks placed at the values $n=1,4,7,10,...$, which are the number of hits related to teams' complete ball possessions.
We can explain this non-trivial behavior by analyzing the natural dynamics of the game.
If a team can control an attack with the first hit, they will use the rest of the allowed hits to set the ball and attack back. As we have previously mentioned, it is unlikely that they attack at the first or second hit.
Because of this, it is more probable to find cases in the dataset with, for instance, $n=7$ than cases with $n=6$.
We want to highlight that despite the complex dynamics of the game, the analytical approach based on our parsimonious stochastic model succeeds in capturing the non-trivial behavior of this relevant observable.
\section{Multi-agent simulations}
We now apply the rules of our model to guide the dynamics of a minimalist 1-D self-propelled agent system designed to emulate volleyball rallies.
We aim to confirm, with this agent-based model, the analytical results obtained for the probability distribution $P_n$.
Moreover, we show that it is possible to combine the analytical results with the data to capture other empirical global observables related to spatiotemporal variables.
The agent model is based on the following elements (a minimal simulation sketch of one rally is given after the list),
\begin{enumerate}
\item {\it Teams and players.} In this parsimonious system, there are two teams with two players (agents) by team.
\item {\it The court.} We represent the game's court as a 1-D array with four sites.
The 1st and the 2nd sites form the first team's side, and the 3rd and the 4th sites form the second team's side.
The net is placed between sites 2 and 3, delimiting the teams' sides.
Players are allowed to occupy any site of their zones. They can also overlap, but they cannot invade the rival's side.
\item {\it The teammates' roles.} To emulate the players' tactical dependency in volleyball teams, we propose that one of the two players manages the 1st and 3rd hit (defender/attacker), and the other manages the 2nd hit (setter).
\item {\it Initial conditions}. The players' sites at the beginning of the rally are randomly set. The player at the 1st site serves the ball.
\item {\it Dynamics.} If the agents have to change their current site to hit the ball, then the parameters $p$ and $q$ give the probability of performing the hit and achieving precision, respectively (see the Model section).
Achieving precision in this context means, in the 1st hit, sending the ball to the partner's site, and in the 3rd hit, sending the ball to the site not occupied by the rival in charge of taking the 1st hit.
The rally ends when one of the players misses a hit.
\end{enumerate}
We define $T$ as the rallies' total time and $R$ as the length of the projection of the ball trajectory on the court's floor.
These two variables can be empirically measured from the collected data by simply adding the succession of temporal and spatial intervals, $\Delta t$ and $\Delta r$, observed between the events of a rally.
To obtain $T$ and $R$ from simulations we propose the following,
\begin{enumerate}
\setcounter{enumi}{5}
\item {\it Global observables $T$ and $R$}. At every step of the dynamics, the player hitting the ball draws from the empirical joint distribution P($\Delta t$,$\Delta r$), shown in Fig.~\ref{fi:simulations}~(a), a pair of values (${\Delta t}_i$, ${\Delta r}_i$) that we will link to the $i$th event.
Then, at the end of the rallies, we compute $T= \sum^n_i {\Delta t}_i$ and $R= \sum^n_i {\Delta r}_i$, where $n$ is the total number of events, to obtain the simulated values of these observables.
\end{enumerate}
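A minimal sketch of one simulated rally is given below. It is our own illustration of the rules above and makes explicit two assumptions that the text leaves open: the serving team's defender starts at site 1 and serves (with the serve precise with probability $q$, as in the analytical treatment), and a player who does not need to move always controls the ball and places it precisely. The sampling of $({\Delta t}_i, {\Delta r}_i)$ from the empirical joint distribution is omitted for brevity; it would be added at every performed hit.
\begin{verbatim}
import random

def simulate_rally(p, q):
    """One rally of the 1-D agent model; returns the number of hits n."""
    sides = {'A': (1, 2), 'B': (3, 4)}   # team A: sites 1-2, team B: sites 3-4
    other = {'A': 'B', 'B': 'A'}
    # roles: 'D' takes the 1st and 3rd hit, 'S' takes the 2nd hit
    pos = {('A', 'D'): 1,                            # assumed: server at site 1
           ('A', 'S'): random.choice(sides['A']),
           ('B', 'D'): random.choice(sides['B']),
           ('B', 'S'): random.choice(sides['B'])}
    n, rec = 1, 'B'                                  # the serve is the first hit
    free = next(s for s in sides[rec] if s != pos[(rec, 'D')])
    ball = free if random.random() < q else pos[(rec, 'D')]

    while True:
        for role, mate in (('D', 'S'), ('S', 'D'), ('D', None)):  # 1st, 2nd, 3rd hit
            if ball != pos[(rec, role)]:             # the player must move
                if random.random() > p:
                    return n                         # hit missed: rally ends
                pos[(rec, role)] = ball
                precise = random.random() < q
            else:                                    # assumed: certain control
                precise = True
            n += 1
            if mate is not None:                     # pass within the team
                good = pos[(rec, mate)]
                bad = next(s for s in sides[rec] if s != good)
                ball = good if precise else bad
            else:                                    # attack to the other side
                opp = other[rec]
                free = next(s for s in sides[opp] if s != pos[(opp, 'D')])
                ball = free if precise else pos[(opp, 'D')]
        rec = other[rec]

# rallies = [simulate_rally(0.54, 0.46) for _ in range(100000)]
\end{verbatim}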
Notice we sample $\Delta t$ and $\Delta r$ from the joint distribution because these variables are correlated.
This is evidenced in Fig.~\ref{fi:simulations}~(a), where we can see that the multimodal behavior of $P(\Delta r)$ and $P(\Delta t)$ results in a non-trivial joint distribution with several local maxima.
\begin{figure}[t!]
\centering
\includegraphics[width=1.0\textwidth]{fig4.pdf}
\caption{
In these plots, we summarize the results obtained from the multi-agent simulations.
Panel (a) shows the joint probability distribution that we used to draw, at every simulation step, a pair $(\Delta t_i, \Delta r_i)$.
Panel (b) exhibits the probability distribution $P(n)$. Here we show the empirical observations (DS), the theoretical calculation (TH), and the obtained from the agent-based model (MO).
Panel (c) shows the probability distribution $P(T)$ obtained from the dataset (DS) and from the model's outcomes.
Panel (d) exhibits the probability distribution $P(R)$ obtained from the dataset (DS) and from the model's outcomes.
Panel (e) shows the relation $\avg{T}$ vs. $n$ and $\avg{R}$ vs. $n$, a comparison between the empirical case (DS) and the result of the model (MO).
}
\label{fi:simulations}
\end{figure}
With the model defined, we performed $10^5$ rally simulations setting the parameters $p$ and $q$ to the values calculated in section \ref{se:analytical}.
In Fig.~\ref{fi:simulations}, panels (b) to (e), we compare the outcomes with the empirical data. Additionally, in panel (b), we plot the theoretical curve given by Eqs.~(\ref{eq:13}). In the following, we discuss these results.
In panel (b), we show the probability distribution $P(n)$. We can see that it agrees with the theoretical curve, confirming the calculations.
Panel (c) shows the probability distribution $P(T)$.
In this case, we can see that the model captures very well the empirical data.
The plot is in linear-log scale to emphasize the probability decay as an exponential curve.
The calculation for the mean value and the standard deviation for the empirical data gives $\avg{T^{DS}}=5.47~(s)$ and $\sigma(T^{DS})=4.00~(s)$, and for the simulations $\avg{T^{MO}}=3.52~(s)$ and $\sigma(T^{MO})=3.86~(s)$.
The similarity between the values of the first and second moment supports the existence of an exponential behavior.
In panel (d), the probability distribution $P(R)$ is shown. As in previous cases, we see that the model satisfactorily captures the global aspects of the empirical data.
The mean and standard deviation for both cases are
$\avg{R^{DS}}=34.42~(m)$, $\sigma(R^{DS})=18.23~(m)$, and $\avg{R^{MO}}=39.17~(m)$, $\sigma(R^{MO})=25.52~(m)$.
Lastly, in panel (e), we show the evolution of $\avg{T}$ and $\avg{R}$ with the number of hits $n$.
We can see a linear behavior in both cases.
For $\avg{T}$ the empirical observations agree well with the model's outcomes.
For $\avg{R}$, on the contrary, we observe a deviation at higher values of $n$.
This deviation may be related to the emergence of some type of complexity in the actual dynamics that our simple model cannot capture.
For instance, sometimes during the game the play becomes unstable, and the players hit the ball several times in a reduced space until one team is able to control the ball.
Notice we cannot capture this kind of ``burstiness effect'' through our simple model.
According to the stated rules, at every step, we have to randomly draw a pair $(\Delta t, \Delta r)$ which implies following a memoryless process.
In this frame, since we have a joint probability distribution that is bimodal in the variable $\Delta r$, the probability of obtaining a long sequence of pairs with small values of $\Delta r$ is low.
Consequently, for a given value of $n$, the calculated value of $\avg{R}$ in the model could tend to be slightly higher than in the empirical case, as we observe in Fig.~\ref{fi:simulations} panel (e).
\section{Conclusions}
Our investigation focused on studying the dynamics of volleyball's rallies.
We proposed a framework based on game visualization, the collection of relevant information, and data-driven analysis, aiming to obtain insights to define the rules of a mathematical model able to capture the underlying dynamics of volleyball games.
We found that the players are more likely to fail and become imprecise if they have to move out of their action zones to perform a hit.
With this in mind, we proposed a model based on two stochastic parameters: $p$ and $q$, where the first is the probability of performing the hit, and the latter is the probability of achieving precision.
Then we calculated a closed-form expression for the probability that players perform $n$ hits in a rally, $P_n$, which agrees remarkably well with the empirical observations.
Therefore, we understand that we have uncovered two stochastic variables able to partially generate the level of complexity observed in the volleyball dynamics.
In this regard, we consider that this work represents a new step towards a broad understanding of volleyball games as complex adaptive systems.
Moreover, the collected data that we make publicly available with this work is, as far as we know, the largest open collection of volleyball logs ever released, an invaluable resource for the research community that opens the door for further research in this area.
Finally, we want to point out that our findings provide new knowledge that should be actively taken into account for sports scientists and coaches.
Since it has been shown that the length of rallies may affect team behavior through many indirect variables \cite{sanchez2015analysis,link2019performance,sanchez2016dynamics,mroczek2014analysis}, our results can be handy for designing new efficient data-driven training systems aiming to enhance performance in competitive scenarios.
\section*{Data availability statement}
The data that support the findings of this study are publicly available in \cite{data}.
\section*{Acknowledgement}
We acknowledge enlightening discussions with Luc\'ia Pedraza.
This work was partially supported by CONICET under Grant number PIP 112 20200 101100; FonCyT under Grant number PICT-2017-0973; and SeCyT-UNC (Argentina).
\section{Introduction} \label{intro}
\textcolor{black}{The tourism industry} is one of the fast-growing sectors \textcolor{black}{in} the world. On the wave of digital transformation, \textcolor{black}{this sector} is experiencing a shift from mass tourism to personalized travel. Designing a \textcolor{black}{tailored} tourist trip is a rather complex and time-consuming \textcolor{black}{process}. \textcolor{black}{Therefore}, the \textcolor{black}{use} of expert and intelligent systems \textcolor{black}{can be beneficial}. Such systems typically appear in the form of ICT integrated solutions that perform \textcolor{black}{(usually on a hand-held device)} three main services: recommendation of attractions (Points of Interest, PoIs), route generation and itinerary customization \citep{gavalas2014survey}.
In this research work, we focus on route generation, known in the literature as \textcolor{black}{the \textit{Tourist Trip Design Problem}} (TTDP). The objective of the TTDP is to select PoIs that maximize tourist satisfaction, while taking into account a set of parameters (e.g., alternative transport modes, distances among PoIs) and constraints (e.g.\textcolor{black}{, the duration of each visit,} the opening hours of each PoI and \textcolor{black}{the time available daily for sightseeing)}. In the last \textcolor{black}{few} years there has been a flourishing of scholarly work on the TTDP \citep{ruiz2022systematic}. \textcolor{black}{Different variants of the TTDP have been studied in the literature, the main classification being made w.r.t. the mobility environment, which can be \textit{unimodal} or \textit{multimodal} \citep{ruiz2021tourist}}.
In this article, we focus on a variant of \textcolor{black}{the} TTDP in which a tourist can move from one PoI to the next one as a pedestrian or as a driver of a vehicle (like a car or a motorbike). \textcolor{black}{Under this hypothesis, one plan includes a car tour with a number of stops from which pedestrian subtours to attractions (each with its own time windows) depart.} We refer to this multimodal setting as a \textit{walk-and-drive} mobility environment. \textcolor{black}{Our research work was motivated by a project aiming to stimulate tourism in the Apulia region (Italy). Unfortunately, the public transportation system is not well developed in this rural area and most attractions can be conveniently reached only by car or scooter, as reported in a recent newspaper article \citep{alongrustyroads}: \textit{(in Apulia) sure, there are trains and local buses, but using them exclusively to cross this varied region is going to take more time than most travellers have.} Our research was also motivated by the need to maintain social distancing in the post-pandemic era \citep{li2020coronavirus}.}
\indent \textcolor{black}{The \textit{walk-and-drive} variant of the TTDP addressed in this article presents several peculiar algorithmic issues that we now describe}. The TTDP is a variant of the \textcolor{black}{\textit{Team Orienteering Problem with Time Windows}} (TOPTW), \textcolor{black}{which is known to be NP-hard \citep{GAVALAS201536}}. \textcolor{black}{We now review the state of the art of modelling approaches, solution methods and planning applications for tourism. A systematic review of all the relevant literature has been recently published in \cite{ruiz2022systematic}. The TTDP is a variant of the Vehicle Routing Problem (VRP) with Profits \cite{archetti2014chapter}, a generalization of the classical VRP where the constraint to visit all customers is relaxed. A known profit is associated with each demand node. Given a fixed-size fleet of vehicles, the VRP with profits aims to maximize the collected profit while minimizing the traveling cost. The basic version with only one route is usually presented as the Traveling Salesman Problem (TSP) with Profits \cite{feillet2005traveling}. Following the classification introduced in \cite{feillet2005traveling} for the single-vehicle case, we distinguish three main classes. The first class is composed of the Profitable Tour Problems (PTPs) \cite{dell1995prize}, where the objective is to maximize the difference between the total collected profit and the traveling cost. The capacitated version of the PTP is studied in \cite{archetti2009capacitated}. The second class is formed by the Prize-Collecting Traveling Salesman Problems (PCTSPs) \cite{balas1989prize}, where the objective is to minimize the total cost subject to a constraint on the collected profit. The Prize-Collecting VRP was introduced in \cite{tang2006iterated}. Finally, the last class is formed by the \textit{Orienteering Problems} (OPs) \cite{golden1987orienteering} (also called Selective TSPs \cite{laporte1990selective} or Maximum Collection Problems \cite{kataoka1988algorithm}), where the objective is to maximize the collected profit subject to a limit on the total travel cost. The \textit{Team Orienteering Problem} (TOP) proposed by \cite{chao1996team} is a special case of the VRP with profits; it corresponds to a multi-vehicle extension of the OP where a time constraint is imposed on each tour. \\
\indent For the TTDP, the most widely used modelling approach is the TOP. Several variants of the TOP have been investigated with the aim of obtaining realistic tourist plans. Typically PoIs have to be visited during opening hours; therefore, the best-known variant is the Team Orienteering Problem with Time Windows (TOPTW) (\cite{vansteenwegen2009iterated}, \cite{boussier2007exact}, \cite{RIGHINI20091191}, \cite{6004465}). In many practical cases, PoIs might have multiple time windows. For example, a tourist attraction may be open between 9 am and 2 pm and between 3 pm and 7 pm. In \cite{tricoire2010heuristics}, the authors devise a polynomial-time algorithm for checking the feasibility of multiple time windows. The size of the problem is reduced in a preprocessing phase if the PoI-based graph satisfies the triangle inequality.
The model closest to the one proposed in this work is the Multi-Modal TOP with Multiple Time Windows (MM-TOPMTW) \cite{ruiz2022systematic}. Few contributions deal with the TTDP in a multimodal mobility environment. Different physical networks and modes of transport are incorporated according to two different models. The former implicitly incorporates multi-modality by considering public transport. Due to the waiting times at boarding stops, the model is referred to as the Time-Dependent TOPTW (\cite{zografos2008algorithms}, \cite{garcia2013integrating}, \cite{gavalas2015heuristics}). Other models \textcolor{black}{incorporate} the choice of transport modes more explicitly, based on availability, preferences and time constraints. In particular, in the considered transport modes the tourist either walks or takes a vehicle as a passenger (e.g. bus, train, subway, taxi) \cite{RUIZMEZA2021107776}, \cite{ruiz2021tourist}, \cite{YU20171022}. To the best of our knowledge, this is the first contribution introducing the TTDP in a \textit{walk-and-drive} mobility environment. Other variants have been proposed to address realistic instances. Among others, they include: time-dependent profits (\cite{vansteenwegen2019orienteering}, \cite{YU2019488}, \cite{gundling2020time}, \cite{KHODADADIAN2022105794}), scores on arcs (\cite{VERBEECK201464}), tourist experiences (\cite{zheng2019using}, \cite{RUIZMEZA2021107776}, \cite{ruiz2021tourist}, \cite{ruiz2021grasp}), hotel selection (\cite{zheng2020using}, \cite{DIVSALAR2013150}), and clustered PoIs (\cite{exposito2019solving}, \cite{EXPOSITO2019210}).
\\
\indent In terms of solution methods, meta-heuristic approaches are most commonly used to solve the TTDP and its variants. As claimed in \cite{ruiz2022systematic}, Iterated Local Search (ILS) or some variation of it (\cite{vansteenwegen2009iterated}, \cite{gavalas2015heuristics}, \cite{gavalas2015ecompass}, \cite{doi:10.1287/trsc.1110.0377}) is the most widely applied technique. Indeed, the ILS provides fast and good-quality solutions and, therefore, has been embedded in several real-time applications. Other solution methods are: GRASP (\cite{ruiz2021grasp}, \cite{EXPOSITO2019210}), large neighbourhood search (\cite{amarouche2020effective}), evolution strategy approaches (\cite{karabulut2020evolution}), tabu search (\cite{TANG20051379}), simulated annealing (\cite{LIN201294}, \cite{LIN2015632}), particle swarm optimization (\cite{DANG2013332}), and ant colony optimisation (\cite{KE2008648}).\\
\indent We finally observe that algorithms solving the TTDP represent one of the main back-end components of expert and intelligent systems designed to support tourist decision-making. Among others, they include electronic tourist guides and advanced digital applications such as CT-Planner, eCOMPASS, Scenic Athens, e-Tourism, City Trip Planner, EnoSigTur, TourRec, TripAdvisor, DieToRec, Heracles, TripBuilder and TripSay. A more detailed review of these types of tools can be found in \cite{HAMID2021100337}, \cite{GAVALAS2014319} and \cite{borras2014intelligent}.}
\indent In this paper, we seek to go one step further with respect to the literature by devising insertion and removal operators tailored for a \textit{walk-and-drive} mobility environment. We then integrate the proposed operators into an Iterated Local Search. A computational campaign shows that the proposed approach can handle realistic instances with up to 3643 points of interest in a few seconds. The paper is organized as follows.
In Section \ref{sec:2} we provide the problem definition. In Section \ref{sec:3} we describe the structure of the algorithm used to solve the TTDP. Sections \ref{sec:4} and \ref{sec:5} introduce insertion and removal operators to tackle \textcolor{black}{the} TTDP in a \textit{walk-and-drive} mobility environment. Section \ref{sec:6} illustrates how we enhance the proposed approach in order to handle instances with thousands of PoIs.
In Section \ref{sec:8} we show the experimental results. Conclusions and further work are discussed in Section \ref{sec:9}.
\section{Problem definition}\label{sec:2}
Let $G=(V,A)$ denote a directed complete multigraph, where each vertex $i\in V$ represents a PoI. Arcs in $A$ are a PoI-based representation of two physical networks: pedestrian network and road network. \textcolor{black}{Moreover, let $m$ be the length (in days) of the planning horizon.} We denote with $(i,j,mode)\in A$ the connection from PoI $i$ to PoI $j$ with transport $mode\in\{Walk, Drive\}$. Arcs $(i,j,Walk)$ and $(i,j,Drive)$ represent the quickest paths from PoI $i$ to PoI $j$ on the pedestrian network and the road network, respectively. As far as the travel time durations \textcolor{black}{are} concerned, we denote with $t^w_{ij}$ and $t^d_{ij}$ the \textcolor{black}{durations} of the quickest paths from PoI $i$ to PoI $j$ with transport mode equal to $Walk$ and $Drive$, respectively.
A \textit{score} $P_i$ is assigned to each PoI $i\in V$. Such \textcolor{black}{a} score is determined by taking into account both the popularity of the attraction as well as preferences of \textcolor{black}{the} tourist. Each PoI $i$ is characterized by a time window $[O_i,C_i]$ and a visit duration $T_i$. We denote with $a_i$ the arrival time of the tourist at PoI $i$, with $i\in V$. If the tourist arrives before the opening hour $O_i$, then he/she can wait. \textcolor{black}{Hence,} the PoI visit \textcolor{black}{starts} at time $z_i=max(O_i,a_i)$. The arrival time is feasible if the visit of PoI $i$ can be started before the closing hour $C_i$, i.e. $z_i\leq C_i$. Multiple time windows are modelled as proposed in \cite{souffriau2013multiconstraint}: each PoI with more than one time window is replaced by a set of dummy PoIs (with the same location and the same profit), each with a single time window, and a \textit{``max-n type"} constraint is added for each set to guarantee that at most one PoI per set is visited.
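For illustration purposes only, the following Python sketch (our own, not part of the original formulation) shows how a PoI with several time windows could be expanded into dummy PoIs, together with the groups over which the \textit{``max-n type"} constraints are stated.
\begin{verbatim}
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PoI:
    name: str
    profit: float
    visit_duration: int
    time_windows: List[Tuple[int, int]]   # (opening, closing) pairs

def expand_multiple_time_windows(pois):
    """Replace each PoI having several time windows with dummy copies,
    one per time window.  Returns the expanded list and, for each original
    PoI, the group of dummy indices over which a 'max-n type' constraint
    (visit at most one of them) must be imposed."""
    expanded, groups = [], []
    for poi in pois:
        group = []
        for k, (o, c) in enumerate(poi.time_windows):
            dummy = PoI(f"{poi.name}#{k}", poi.profit,
                        poi.visit_duration, [(o, c)])
            group.append(len(expanded))
            expanded.append(dummy)
        groups.append(group)
    return expanded, groups

# A PoI open 9:00-14:00 and 15:00-19:00 (minutes from midnight)
museum = PoI("museum", 10.0, 60, [(540, 840), (900, 1140)])
pois, groups = expand_multiple_time_windows([museum])
print([p.name for p in pois], groups)  # ['museum#0', 'museum#1'] [[0, 1]]
\end{verbatim}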
\begin{figure}
\includegraphics[width=150mm ]{Soluzione.png}
\caption{Example of a \textcolor{black}{daily itinerary (weights on the arcs indicate travel times)}. }\label{esempiof}
\end{figure}
\indent In a \textit{walk-and-drive} mobility environment a TTDP solution consists in the selection of $m$ itineraries, starting and ending at a given initial tourist position. Each itinerary corresponds to a sequence of PoI visits and the transport mode selected for each pair of consecutive PoIs. \textcolor{black}{As an example, Figure~\ref{esempiof} depicts the itinerary followed by a tourist on a given day. The tourist drives from node $i^s_1$ to node $i_3$, parks, then follows pedestrian tour $i_3-i_4-i_5$ in order to visit the attractions in nodes $i_3$, $i_4$ and $i_5$. Hence he/she picks up the vehicle parked nearby PoI $i_3$ and drives to vertex $i_6$, parks, then follows pedestrian tour $i_6-i_7-i_8-i_9$ in order to visit the corresponding attractions. Finally the tourist picks up the vehicle parked nearby PoI $i_6$ and drives to the final destination $i^e_1$ (which may coincide with $i^s_1$).}
Two parameters model tourist preferences in transport mode selection: $MinDrivingTime$ and $MaxWalkingTime$. Given a pair of PoIs $(i,j)$, we denote with $mode_{ij}$ the transport mode preferred by the tourist. \textcolor{black}{In the following, we assume that a tourist selects the transportation mode $mode_{ij}$ with the following policy (see Algorithm \ref{alg:alg00}). If} $t^w_{ij}$ is strictly greater than $MaxWalkingTime$, the transport mode preferred by the tourist is $Drive$. Otherwise if $t^d_{ij}$ is not strictly greater than $MinDrivingTime$ (and $t^w_{ij}\leq MaxWalkingTime$), the preferred transport mode is $Walk$. In all remaining cases, the tourist prefers the quickest transport mode. \textcolor{black}{It is worth noting that our approach is not dependent on the mode selection mechanism used by the tourist (i.e., Algorithm \ref{alg:alg00})}. A solution is feasible if the selected PoIs are visited within their time windows and each itinerary duration is not greater than $C_{max}$. The TTDP aims to determine the feasible tour that maximizes the total score of the visited PoIs. Tourist preferences on transport mode selection have been modelled as soft constraints. Therefore, ties on total score are broken by selecting the solution with the minimum number of connections violating tourist preferences.\\
\begin{algorithm}[H]
\SetAlgoLined
\KwIn{PoI $i$, PoI $j$}
\KwOut{$mode_{ij} $}
\uIf{$t^w_{ij}>MaxWalkingTime$}{%
$mode_{ij}\gets Drive$\;
}\uElseIf{$t^d_{ij}\leq MinDrivingTime$}{
$mode_{ij}\gets Walk$\;
}\Else
{
\lIfElse{$t^w_{ij}\leq t^d_{ij}$}{$mode_{ij}\gets Walk$}{$mode_{ij}\gets Drive$}
}
\caption{SelectTransportMode}
\label{alg:alg00}
\end{algorithm}
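A direct Python transcription of Algorithm~\ref{alg:alg00} could read as follows (a hedged sketch; parameter and function names are ours).
\begin{verbatim}
def select_transport_mode(t_walk, t_drive, max_walking_time, min_driving_time):
    """Transport mode preferred by the tourist for a pair of PoIs
    (mirrors Algorithm SelectTransportMode)."""
    if t_walk > max_walking_time:
        return "Drive"                 # too far to walk
    if t_drive <= min_driving_time:
        return "Walk"                  # driving is not worthwhile
    return "Walk" if t_walk <= t_drive else "Drive"   # otherwise, the quickest

# Example with MaxWalkingTime = 30 and MinDrivingTime = 2 time units
print(select_transport_mode(t_walk=10, t_drive=12,
                            max_walking_time=30, min_driving_time=2))  # Walk
\end{verbatim}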
\subsection{Modelling transfer}
Transfer connections occur when the tourist switches from the road network to the pedestrian network \textcolor{black}{or} vice versa. Since we assume that \textcolor{black}{tourists always enter} a PoI as a pedestrian, travel time $t^d_{ij}$ has to be increased with the transfer times associated with the origin PoI $i$ and the destination PoI $j$. The former models the time required to pick up the vehicle parked nearby PoI $i$ (\textit{PickUpTime}). The latter models the time required to park and then reach PoI $j$ on foot (\textit{ParkingTime}). During a preprocessing phase we have increased travel time $t^d_{ij}$ by the (initial) \textit{PickUpTime} and the (final) \textit{ParkingTime}. It is worth noting that a transfer connection also occurs when PoI $i$ is the last PoI visited by a \textit{walking} subtour. In this case, the travel time from PoI $i$ to PoI $j$ corresponds to the duration of a \textit{walk-and-drive} path on the multigraph $G$: the tourist starts from PoI $i$, reaches on foot the first PoI visited by the walking subtour, then reaches PoI $j$ by driving. In Figure \ref{esempiof} an example of a \textit{walk-and-drive} path is $i_5-i_3-i_6$. We observe that the reference application context consists of thousands of daily visitable PoIs. Therefore, pre-computing the durations of the $(|V|-2)$ \textit{walk-and-drive} paths associated with each pair of PoIs $(i,j)$ is not an affordable option. For example, in our computational campaign the considered $3643$ PoIs would require more than $180$ GB of memory to store about $5 \cdot 10^{10}$ travel times. For these reasons we have chosen to significantly reduce the size of the instances by including in the PoI-based graph $G$ only the \textit{PickUpTime} and \textit{ParkingTime}. As illustrated in the following sections, \textit{walk-and-drive} travel scenarios are handled as a special case of the $Drive$ transport mode with travel time computed at run time.
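The preprocessing step that folds the transfer times into the driving travel times can be sketched as follows (an illustrative snippet with a dictionary-based matrix; names and values are ours).
\begin{verbatim}
def add_transfer_times(t_drive, pickup_time, parking_time):
    """Increase every driving travel time t_drive[i][j] by the time needed
    to pick up the vehicle near PoI i (PickUpTime) and to park it and reach
    PoI j on foot (ParkingTime)."""
    return {i: {j: t + pickup_time + parking_time for j, t in row.items()}
            for i, row in t_drive.items()}

t_drive = {"A": {"B": 12, "C": 20}, "B": {"A": 12, "C": 9}}
print(add_transfer_times(t_drive, pickup_time=3, parking_time=5)["A"]["B"])  # 20
\end{verbatim}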
\section{Problem-solving method}\label{sec:3}
Our solution approach is based on \textcolor{black}{the} \textit{Iterated Local Search (ILS)} \textcolor{black}{proposed in \cite{vansteenwegen2009iterated} for the TOPTW}. \textcolor{black}{To account for a \textit{walk-and-drive} mobility environment, we developed a number of extensions and adaptations, discussed in the corresponding sections. In our problem, the main decisions amount to determining the sequence of PoIs to be visited and the transport mode for each movement between pairs of consecutive PoIs}. The combination of \textit{walking} subtours and transport mode preferences is the new challenging part of a TTDP defined on a \textit{walk-and-drive} mobility environment. To handle these new features, our ILS contains new contributions compared to the literature. \textcolor{black}{Algorithm \ref{alg:alg0}} reports a general description of the ILS. The algorithm is initialized with an empty solution. Then, an improvement phase is carried out by combining a local search and a perturbation step, both described in the following subsections. The algorithm stops when one of the following thresholds is reached: the maximum number of iterations without improvement or a time limit.
The following subsections are devoted to illustrating the local search and the perturbation phases.
\subsection{Local Search} Given an initial feasible solution (\textit{incumbent}), the idea of \textit{local search} is to explore a neighbourhood of solutions \textit{close} to the incumbent one. Once the best neighbour is found, if it is better than the incumbent, then the incumbent is updated and \textcolor{black}{the} search restarts.
In our case the local search procedure is an \textit{insertion heuristic}, where the initial incumbent is the empty solution and neighbours are all solutions obtained from the incumbent by adding a single PoI. The neighbourhood is explored in a systematic way by considering all possible insertions in the current solution. As illustrated in Section \ref{sec:4}, the feasibility of neighbour solutions is checked in constant time. As far as the objective function is concerned, we evaluate each insertion as follows. For each itinerary of the incumbent we consider an (unrouted) PoI $j$, if it can be visited without violating both its time window and the corresponding \textit{max-n type} constraint. Then the itinerary and the corresponding position with the smallest time consumption are determined. We compute the ratio between the score of the PoI and the \textit{extra time} necessary to reach and visit the new PoI $j$. The ratio aims to model a trade-off between time consumption and score. As discussed in \cite{vansteenwegen2009iterated}, due to time windows the score is considered more relevant than the time consumption during the insertion evaluation. Therefore, the PoI $j^*$ with the highest $(score)^2/(extra$ $time)$ ratio is chosen for insertion. Ties are broken by selecting the insertion with the minimum number of violated soft constraints. After the PoI to be inserted has been selected and it has been determined where to insert it, the affected itinerary needs to be updated as illustrated in Section \ref{sec:5}. This basic insertion iteration is repeated until it is not possible to insert further PoIs due to the constraints imposed by the maximum duration of the itineraries and by the PoI time windows. At this point, we have reached a locally optimal solution and we proceed to diversify the search with a \textit{Solution Perturbation} phase. In Section \ref{sec:6}, we illustrate how we leverage clustering algorithms to identify and explore high-density neighbourhoods consisting of candidate PoIs with a `good' ratio value.
\subsection{Solution Perturbation}
The perturbation phase has the objective of diversifying the local search, preventing the algorithm from remaining \textit{trapped} in a local optimum of the solution landscape. The perturbation procedure aims to remove a set of PoIs occupying consecutive positions in the same itinerary. It is worth noting that the perturbation strategy is adaptive. As discussed in Section \ref{sec:4}, in a multimodal environment a removal might not satisfy the triangle inequality, generating a violation of time windows for PoIs visited later. Since time windows are modelled as hard constraints, the perturbation procedure adapts (in constant time) the starting and ending removal positions so that no time windows are violated. To this aim we relax a soft constraint, i.e. the tourist preferences about the transport mode connecting the remaining PoIs. The perturbation procedure finalizes (Algorithm \ref{alg:alg0} - line \ref{alg:alg0:lab1}) the new solution by decreasing the arrival times to values as close as possible to the start time of the itinerary, in order to avoid unnecessary waiting times.
Finally, we observe that the parameter defining the length of the perturbation ($\rho_d$ in Algorithm \ref{alg:alg0}) is a measure of the degree of search \textit{diversification}. For this reason $\rho_d$ is incremented by 1 for each iteration in which there has not been an improvement of the objective function. If $\rho_d$ is equal to the length of the longest route, to prevent the search from \textit{restarting from the empty solution}, the $\rho_d$ parameter is set equal to 50 $\%$ of the length of the smallest route in terms of number of PoIs. Conversely, if the solution found by the local search is the new \textit{best solution} $s_*$, then the search \textit{intensification} degree is increased and a small perturbation is applied to the current solution $s^{\prime}_{*}$, i.e. the perturbation length $\rho_d$ is set to 1.\\
\begin{algorithm}[H]
\SetAlgoLined
\KwData{\textit{MaxIter, TimeLimit}}
$\sigma_d \gets 1$, $\rho_d \gets1$, $s^{\prime}_{*} \gets\emptyset$, $NumberOfTimesNoImprovement \gets 0$\;
\While {NumberOfTimesNoImprovement $\leq$ MaxIter And ElapTime $\leq$ TimeLimit} {
$s^{\prime}_{*}\gets InsertionProcedure(s^{\prime}_{*})$\;
\uIf{ $s^{\prime}_{*}$ better than $s_*$}{
$s_* \gets s^{\prime}_{*}$\;
$\rho_d\gets1$\;
$NumberOfTimesNoImprovement \gets 0$\;
}
\Else{$NumberOfTimesNoImprovement \gets NumberOfTimesNoImprovement + 1$\;
$\rho_d\gets\rho_d + 1$\;}
\If {$\rho_d \geq$ \textit{Size of biggest itinerary}}{
$\rho_d\gets \max(1,\lfloor$ \textit{(Size of smallest itinerary})$/2\rfloor)$\;
}
$\sigma_d \gets \sigma_d+ \rho_d$\;
$\sigma_d \gets \sigma_d$ $mod$ \textit{(Size of smallest itinerary)}\;
$s^{\prime}_{*}\gets$\textit{PerturbationProcedure}($s^{\prime}_{*}$,$\rho_d$,$\sigma_d$)\;
\textit{Update ElapTime}\;\label{alg:alg0:lab1}
}
\caption{Iterated Local Search }\label{alg:alg0}
\end{algorithm}
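To make the control flow of Algorithm~\ref{alg:alg0} concrete, a Python scaffold could be structured as follows (illustrative only: the insertion and perturbation procedures of the next sections are passed in as functions, and a solution is represented as a list of itineraries).
\begin{verbatim}
import time

def biggest_itinerary(sol):
    return max((len(it) for it in sol), default=0)

def smallest_itinerary(sol):
    return min((len(it) for it in sol), default=0)

def iterated_local_search(empty_solution, insertion_procedure,
                          perturbation_procedure, score,
                          max_iter=150, time_limit=5.0):
    """Scaffold of the ILS: alternate the insertion-based local search and an
    adaptive perturbation until max_iter non-improving iterations or the
    time limit is reached."""
    start = time.time()
    best = current = empty_solution
    rho, sigma, no_improve = 1, 1, 0
    while no_improve <= max_iter and time.time() - start <= time_limit:
        current = insertion_procedure(current)
        if score(current) > score(best):
            best, rho, no_improve = current, 1, 0
        else:
            no_improve += 1
            rho += 1
        if rho >= biggest_itinerary(current):
            rho = max(1, smallest_itinerary(current) // 2)
        sigma = (sigma + rho) % max(1, smallest_itinerary(current))
        current = perturbation_procedure(current, rho, sigma)
    return best
\end{verbatim}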
\section{Constant time evaluation framework}\label{sec:4}
This section illustrates how to check in constant time the feasibility of a solution chosen in the neighbourhood of $s'_{*}$.
To this aim the encoding of the current solution has been enriched with additional information.
As illustrated in the following section, such information needs to be updated not in constant time, when
the incumbent is updated. However this is done much less frequently (once per iteration) than evaluating all solutions in the neighbourhood of the current solution.
\paragraph{Solution Encoding} We recall that, due to multimodality, a feasible solution has to prescribe for each itinerary a sequence of PoIs and the transport mode between consecutive visits. We encode each itinerary in the solution $s^\prime_*$ as a sequence of PoI visits. Figure \ref{esempio_enc} is a graphical representation of the solution encoding of itinerary of Figure \ref{esempiof}.
\begin{figure}
\includegraphics[width=150mm ]{Solution_Encoding.png}
\caption{Graphical representation of the solution encoding of the itinerary of Figure \ref{esempiof}. Red travel times refer to the durations of the \textit{walk-and-drive} paths $(i_5-i_3-i_6)$ and $(i_9-i_6-i^e_1)$. }\label{esempio_enc}
\end{figure}
Given two PoIs $i$ and $k$ visited consecutively, we denote with $mode^*_{ik}$ the transport mode prescribed by $s'_*$. We also denote with $t_{ik}$ the travel time needed to move from PoI $i$ to PoI $k$. If the prescribed transport mode is $Drive$, then the travel time $t_{ik}$ has to properly take into account the transfer time needed to switch from the pedestrian network to the road network at PoI $i$. In particular, a transfer connection starting at the origin PoI $i$ might generate a \textit{walking} subtour. For example in the itinerary of Figure \ref{esempiof}, in order to drive from PoI $i_5$ to PoI $i_6$, the tourist has to go on foot from PoI $i_5$ to PoI $i_3$ (\textit{transfer connection}), pick up the vehicle parked nearby PoI $i_3$, drive from PoI $i_3$ to PoI $i_6$ and then park the vehicle nearby PoI $i_6$. In this case we have that $t_{i_5i_6}=t^w_{i_5i_3}+t^d_{i_3i_6}$. To evaluate in constant time the insertion of a new visit between PoIs $i_5$ and $i_6$, we also need to encode subtours. Firstly we maintain two quantities for the $h$-th subtour of an itinerary: the index of the first PoI and the index of the last PoI, denoted $FirstPoI_h$ and $LastPoI_h$, respectively.
For example, the itinerary in Figure \ref{esempiof} has two subtours: the first subtour ($h = 1$) is defined by the PoI sequence $i_3-i_4-i_5$ ($FirstPoI_ {1} = i_3, LastPoI_ {1} = i_5$); the second subtour ($h = 2$) is defined by the PoI sequence $ i_6-i_7-i_8-i_9 $ ($FirstPoI_ {2} = i_6, LastPoI_ {2} = i_9$). We also maintain information for determining in constant time the subtour which a PoI belongs to. In particular, we denote with $S$ a vector of $|V|$ elements: if PoI $i$ belongs to subtour $h$, then $S_i=h$. For the example in Figure \ref{esempiof} we have that $ S_ {i_3} = S_ {i_4} = S_ {i_5} = 1 $, while $ S_ {i_6} = S_ {i_7} = S_ {i_8} = S_ {i_9} = 2$. To model that the remaining PoIs do not belong to any subtour we set $ S_ {i_1} = S_ {i_2} = - 1$. Given two PoIs $i$ and $k$ visited consecutively by solution $s'_*$, the arrival time $a_k$ is determined as follows:
\begin{equation}\label{sol_enc:0}
a_k=z_i+T_i+t_{ik},
\end{equation}
where the travel time $t_{ik}$ is computed by Algorithm \ref{alg:alg2}, according to the prescribed transport $mode$. If $S_i\neq-1$ and $mode=Drive$, then the input parameter $p$ denotes the first PoI of the subtour which PoI $i$ belongs to, i.e. $p=FirstPoI_{S_i}$. If $mode=Walk$, the input parameter $p$ is set to the default value $-1$. Parameter $Check$ is a boolean input, stating whether soft constraints are relaxed or not. If $Check$ is $true$, when $mode_{ik}$ violates soft constraints the travel time $t_{ik}$ is set to a large positive value $M$, making the arrivals at later PoIs infeasible wrt (hard) time-window constraints. In all remaining cases $t_{ik}$ is computed according to the following relationship:
\begin{equation}\label{sol_enc:1}
t_{ik}=t^w+t^d.
\end{equation}
In particular if the prescribed transport mode is \textit{``walk from PoI $i$ to PoI $k$"}, then $t^w=t^w_{ik}$ and $t^d=0$. Otherwise the prescribed transport mode is \textit{``walk from PoI $i$ to PoI $p$, pick-up the vehicle at PoI $p$ and then drive from PoI $p$ to PoI $k$"}, with $t^w=t^w_{ip}$ and $t^d=t^d_{pk}$. We abuse notation and when PoI $i$ does not belong to a subtour ($S_i=-1$) and $mode=Drive$, we set $p=i$ with $t^w_{ii}=0$ and $mode_{ii}=Walk$. A further output of Algorithm \ref{alg:alg2} is the boolean value $Violated$, exploited during PoI insertion/removal to update the number of violated soft constraints.\\
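For concreteness, the per-visit information described above could be stored as follows (field names mirror the notation of the paper; this is only an illustration, and the bookkeeping quantities introduced in the next subsections can be added as further fields).
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class VisitRecord:
    """Information stored for one visited PoI i of an itinerary."""
    poi: int             # PoI identifier
    mode_next: str       # mode^*_{ik}: 'Walk' or 'Drive' towards the next PoI
    subtour: int         # S_i: index of the walking subtour, -1 if none
    arrival: float       # a_i
    start_visit: float   # z_i = max(a_i, O_i)
    violated: bool       # does the connection to the next PoI violate preferences?

@dataclass
class Subtour:
    first_poi: int       # FirstPoI_h
    last_poi: int        # LastPoI_h
\end{verbatim}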
\indent The first six columns of Table \ref{tab:0} report the encoding of the itinerary of Figure \ref{esempio_enc}. The tourist position is represented by the dummy PoIs $i^s_1$ and $i^e_1$, with a visiting time equal to zero. The arrival time $a_i$ is computed according to (\ref{sol_enc:0}). Column $z_i+T_i$ reports the leaving time, with $z_i=max(a_i,O_i)$ and a visiting time $T_i$ equal to 5 time units. All start-of-visit times satisfy the time-window constraints, i.e. $z_i\leq C_i$. As far as the timing information associated with the starting and ending PoIs $i^s_1$ and $i^e_1$ is concerned, it models that the tourist leaves $i^s_1$ at a given time instant (i.e. $a_{i^s_1}=0$), that the itinerary duration is 224 time units, and that the time available for sightseeing is 320 time units. All connections satisfy the soft constraints, since we assume that $MaxWalkingTime$ and $MinDrivingTime$ are equal to 30 and 2 time units, respectively. The last four columns report details about the travel time computations performed by Algorithm \ref{alg:alg2}. Travel time information between PoI $i$ and the next one is reported on the row associated with PoI $i$. Thus these data are not provided for the last (dummy) PoI $i^e_1$. \\
\begin{table}[]\caption{Details of solution encoding for itinerary reported in Figure \ref{esempio_enc}}\label{tab:0}
\resizebox{\textwidth}{!}{%
\begin{tabular}{|c|cc|ccc|cc|cccc|}
\hline
\multicolumn{6}{|c|}{\textbf{Itinerary}} & \multicolumn{2}{c|}{\textbf{Time windows}} & \multicolumn{4}{c|}{\textbf{Travel Time Computation}} \\ \cline{1-12}
\textbf{PoI} & \textbf{Violated} & $\textbf{mode}^*_{ik}$ & $\textbf{S}_i$ & $\textbf{a}_i$ & $\textbf{z}_i\textbf{+T}_i$ & \textbf{$O_i$} & \textbf{$C_i$} & \textbf{p} & $\textbf{t}^w$ & $\textbf{t}^d$ & $\textbf{t}_{ik}$ \\ \hline
$i^s_1$ & False & Drive & -1 & 0 & 0 & 0 & 0 & $i^s_1$ & 0 & 25 & 25 \\$i_2$ & False & Drive & -1 & 25 & 30 & 0 & 75 & $i_2$ & 0 & 15 & 15 \\ \hline
$i_3$ & False & Walk & 1 & 45 & 55 & 50 & 115 & -1 & 20 & 0 & 20 \\
$i_4$ & False & Walk & 1 & 75 & 80 & 60 & 95 & -1 & 5 & 0 & 5 \\
$i_5$ & False & Drive & 1 & 85 & 90 & 60 & 115 & $i_3$ & 25 & 5 & 30 \\ \hline
$i_6$ & False & Walk & 2 & 120 & 125 & 80 & 135 & -1 & 10 & 0 & 10 \\
$i_7$ & False & Walk & 2 & 135 & 155 & 150 & 175 & -1 & 20 & 0 & 20 \\
$i_8$ & False & Walk & 2 & 175 & 180 & 90 & 245 & -1 & 7 & 0 & 7 \\
$i_9$ & False & Drive & 2 & 187 & 192 & 90 & 245 & $i_6$ & 27 & 5 & 32 \\ \hline
$i^e_1$ & - & - & -1 & 224 & 224 & 0 & 320 & - & - & - & - \\ \hline
\end{tabular}%
}
\end{table}
\begin{algorithm}[H]
\SetAlgoLined
\SetKwInput{KwInput}{Input}
\SetKwInput{KwOutput}{Output}
\KwData{M}
\KwInput{PoI $i$, PoI $k$, $mode$, $Check$, PoI $p$}
\KwOutput{$t_{ik}$, Violated }
Violated$\gets$ False\;
\uIf{$mode==Walk$}{%
$t^d\gets 0$\;
\lIfElse{($Check \wedge mode_{ik}\neq Walk$)} {$t^w\gets M$}{$t^w\gets t^w_{ik}$}
\lIf{ ($mode_{ik}\neq Walk$)}{Violated$\gets$ True}
} \uElse
{
\lIfElse{($Check \wedge mode_{ip}\neq Walk$)} {$t^w\gets M$}{$t^w\gets t^w_{ip}$}
\lIfElse{($Check \wedge mode_{pk}\neq Drive$)} {$t^d\gets M$}{$t^d\gets t^d_{pk}$}
\lIf{ ($mode_{ip}\neq Walk \vee mode_{pk}\neq Drive$)}{Violated$\gets$ True}
}
$t_{ik}=t^w+t^d$\;
\caption{Compute travel time }
\label{alg:alg2}
\end{algorithm}
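A Python transcription of Algorithm~\ref{alg:alg2} could look as follows (illustrative; when PoI $i$ does not belong to a subtour the caller passes $p=i$ with $t^w_{ii}=0$ and $mode_{ii}=Walk$, as explained above). The small example reproduces the value $t_{i_5i_6}=25+5=30$ of Figure~\ref{esempio_enc}.
\begin{verbatim}
M = 10**6   # large constant: makes later time windows infeasible

def compute_travel_time(i, k, mode, check, p, t_walk, t_drive, pref_mode):
    """Travel time t_ik and soft-constraint flag (mirrors Algorithm 3).
    t_walk/t_drive: travel time matrices; pref_mode[x][y]: mode preferred by
    the tourist for the pair (x, y), as returned by Algorithm 1."""
    violated = False
    if mode == "Walk":
        td = 0
        tw = M if (check and pref_mode[i][k] != "Walk") else t_walk[i][k]
        if pref_mode[i][k] != "Walk":
            violated = True
    else:
        tw = M if (check and pref_mode[i][p] != "Walk") else t_walk[i][p]
        td = M if (check and pref_mode[p][k] != "Drive") else t_drive[p][k]
        if pref_mode[i][p] != "Walk" or pref_mode[p][k] != "Drive":
            violated = True
    return tw + td, violated

t_walk = {"i5": {"i3": 25}}
t_drive = {"i3": {"i6": 5}}
pref_mode = {"i5": {"i3": "Walk"}, "i3": {"i6": "Drive"}}
print(compute_travel_time("i5", "i6", "Drive", True, "i3",
                          t_walk, t_drive, pref_mode))   # (30, False)
\end{verbatim}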
\subsection{Feasibility check}
In describing rules for feasibility checking, we will always consider inserting (unrouted) PoI $j$ between PoI $i$ and $k$. In the following we assume that PoI $j$ satisfies the \textit{max-n type} constraints, modelling multiple time windows. Feasibility check rules are illustrated in the following by distinguishing three main insertion scenarios. The first one is referred to as \textit{basic insertion} and assumes that the extra visit $j$ propagates a change only in terms of arrival times at later PoIs. The second one is referred to as \textit{advanced insertion} and generates a change on later PoIs in terms of both arrival times and (\textit{extra}) transfer time of subtour $S_k\neq-1$. The third one is referred to as a \textit{special case} of the advanced insertion, with PoI $k$ not belonging to any subtour (i.e. $S_k$ is equal to -1). A special case insertion generates a new subtour where PoI $k$ is the last attraction to be visited.\\
\indent Algorithm \ref{alg:alg3_1} reports the pseudocode of the feasibility check procedure, where the insertion type is determined by $(mode^*_{ik},S_k,mode_{ij},mode_{jk})$. To illustrate the completeness of our feasibility check procedure, we report in Table \ref{tab:1} all insertion scenarios, discussed in detail in the following subsections. It is worth noting that if $mode^*_{ik}$ is $Walk$ then there exists a \textit{walking} subtour consisting of at least PoIs $i$ and $k$, i.e. $S_k\neq-1$. For this reason we do not detail case 0 in Table \ref{tab:1}. \\
\begin{algorithm}[H]
\SetAlgoLined
\KwData{PoI $i$, PoI $j$,PoI $k$, incumbent solution $s^*$}
Compute $Shift_j$ and $Wait_j$\;
\uIf{$mode^*_{ik}=mode_{jk} \wedge (mode_{jk}=Drive \vee mode_{ij}=Walk)$}{
Check Feasibility with (\ref{Feas_ins_1}) and (\ref{Feas_ins_2})\tcp*[f]{Basic Insertion}\;\label{line:1}
}\uElseIf{$S_k\neq-1$}{
Compute $\Delta_k$ and $Shift_q$\;
Check feasibility with (\ref{Feas_ins_1_1}), (\ref{Feas_ins_3}) and (\ref{Feas_ins_2})\tcp*[f]{Advanced Insertion}\;\label{line:2}
}\Else{Compute $\Delta_k$ and $Shift_q$\;
Check feasibility with (\ref{Feas_ins_3.2.1}), (\ref{Feas_ins_3}) and (\ref{Feas_ins_2})\tcp*[f]{Special Case}\;\label{line:5}
}
\caption{Feasibility check procedure}
\label{alg:alg3_1}
\end{algorithm}
\begin{table}[]
\caption{Insertion scenarios and their relationships with feasibility check procedures. }\label{tab:1}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Case&$mode^*_{ik}$ & $S_k$ & $(mode_{ij},mode_{jk})$ & Insertion type \\ \hline
0& $Walk $ & $=-1$ &-&-\\ \cline{1-5}
\multirow{4}{*}{$1$}& \multirow{4}{*}{$Walk$}& \multirow{4}{*}{$\neq-1$} & (Walk,Walk) &Basic \\ \cline{4-5}
& & & (Drive,Drive) &\multirow{3}{*}{Advanced} \\ \cline{4-4}
& & & (Walk,Drive) & \\ \cline{4-4}
& & & (Drive,Walk) & \\ \hline
\multirow{4}{*}{$2$}&\multirow{4}{*}{$Drive$} & \multirow{4}{*}{$\neq-1$} & (Walk,Walk) &Advanced \\ \cline{4-5}
& & & (Drive,Drive) & \multirow{2}{*}{Basic} \\ \cline{4-4}
& & & (Walk,Drive) & \\ \cline{4-5}
& & & (Drive,Walk) & Advanced \\ \hline
\multirow{5}{*}{$3$}& \multirow{5}{*}{$Drive$} & \multirow{5}{*}{$-1$} & (Walk,Walk) & Special Case \\ \cline{4-5}
& & & (Drive,Drive) & \multirow{2}{*}{Basic} \\\cline{4-4}
& & & (Walk,Drive) & \\ \cline{4-5}
& & & (Drive,Walk) & Special Case \\ \hline
\end{tabular}%
\end{table}
\subsubsection{Basic insertion} We observe that in a unimodal mobility environment a PoI insertion is always \textit{basic} \cite{vansteenwegen2009iterated}.
In a \textit{walk-and-drive} mobility environment an insertion is checked as basic if one of the following conditions holds: either PoI $j$ is added to the walking subtour which PoIs $i$ and $k$ belong to, i.e. case 1 in Table \ref{tab:1} with $mode_{ij}=mode_{jk}=Walk$, or the insertion prescribes $Drive$ as the transport mode from $j$ to $k$ when $mode^*_{ik}=Drive$, i.e. cases 2 and 3 with $mode_{jk}=Drive$. Five out of the 12 scenarios of Table \ref{tab:1} refer to basic insertions. The condition
underlying the first three basic insertion scenarios is that $k$ belongs to a walking subtour (i.e. $S_k\neq-1$) and $FirstPoI_{S_k}$ is not updated after the insertion.
The remaining basic insertions of Table \ref{tab:1} refer to scenarios where before and after the insertion, PoI $k$ does not belong to a subtour.
All these five scenarios are referred to as basic insertions since the extra visit of PoI $j$ has an impact \textit{only} on the arrival times at later PoIs.
\paragraph{Examples}To ease the discussion, we illustrate two examples of basic insertions for the itinerary of Figure \ref{esempiof}. Other illustrative examples can be easily derived from Figure \ref{esempiof}.
\begin{itemize}
\item Insert PoI $j$ between PoI $i=i_3$ and PoI $k=i_4$, with $mode_{ij}=Walk$ and $mode_{jk}=Walk$. Before and after the insertion $FirstPoI_{S_k}$ is $i_3$ and, therefore, the insertion has no impact on later transfer connections.
\item Insert PoI $j$ between PoI $i=i^s_1$ and PoI $k=i_2$, with $mode_{ij}=Walk$ and $mode_{jk}=Drive$. Before and after the insertion PoI $i_2$ does not belong to a subtour. The insertion can change only the arrival times from PoI $i_2$ on.
\end{itemize}
\indent To achieve an O(1) complexity for the feasibility check of a basic insertion, we adopt the approach proposed in \cite{vansteenwegen2009iterated} for a unimodal mobility environment and reported in the following for the sake of completeness. We define two quantities for each PoI $i$ selected by the incumbent solution: $Wait_i$, $MaxShift_i$. We denote with $Wait_i$ the waiting time occurring when the tourist arrives at PoI $i$ before the opening hour:
$$Wait_i=max\{0, O_i-a_i\}.$$
$MaxShift_i$ represents the maximum increase of start visiting time $z_i$, such that later PoIs can be visited before their closing hour. $MaxShift_i$ is defined by (\ref{ins_1}), where for notational convenience PoI $i+1$ represents the immediate successor of a generic PoI $i$.
\begin{equation}\label{ins_1}
MaxShift_i=min\{\textcolor{black}{C_i-z_i},Wait_{i+1}+MaxShift_{i+1}\}.
\end{equation}
Table \ref{tab:2} reports values of $Wait$ and $MaxShift$ for the itinerary of Figure \ref{esempiof}.
It is worth noting that the definition of $MaxShift_i$ is a backward recursive formula, initialized with the difference ($C_{max}-z_{max}$), where $z_{max}$ denotes the duration of the itinerary.
To check the feasibility of an insertion of PoI $j$ between PoI $i$ and $k$, we compute extra time $Shift_j$ needed to reach and visit PoI $j$, as follows:
\begin{equation}\label{Shift_1}
Shift_j=t_{ij}+ Wait_j+T_j+t_{jk}-t_{ik}.
\end{equation}
It is worth noting that travel times are computed by taking into account soft constraints (i.e. input parameter \textit{Check} of Algorithm \ref{alg:alg2} is set equal to \textit{true}).
Feasibility of an insertion is checked in constant time at line \ref{line:1} of Algorithm \ref{alg:alg3_1} by inequalities (\ref{Feas_ins_1}) and (\ref{Feas_ins_2}).\\
\begin{equation}\label{Feas_ins_1}
Shift_j=t_{ij}+ Wait_j+T_j+t_{jk}-t_{ik}\leq Wait_k+MaxShift_k
\end{equation}
\begin{equation}\label{Feas_ins_2}
z_i+T_i+t_{ij}+ Wait_j\leq C_j.
\end{equation}
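A hedged Python sketch of this bookkeeping and of the two checks above is given below (our own notation: per-itinerary lists indexed by visit position). The final lines reproduce the first insertion of the numerical example of Section \ref{sec:5} ($Shift_j=30>20$, hence infeasible).
\begin{verbatim}
def compute_wait_maxshift(a, z, O, C, c_max, z_max):
    """Wait_i = max(0, O_i - a_i); MaxShift_i computed backwards as
    min(C_i - z_i, Wait_{i+1} + MaxShift_{i+1}), initialized with
    C_max - z_max at the end of the itinerary."""
    n = len(a)
    wait = [max(0.0, O[i] - a[i]) for i in range(n)]
    max_shift = [0.0] * n
    nxt = c_max - z_max
    for i in range(n - 1, -1, -1):
        max_shift[i] = min(C[i] - z[i], nxt)
        nxt = wait[i] + max_shift[i]
    return wait, max_shift

def shift_of_insertion(t_ij, wait_j, T_j, t_jk, t_ik):
    """Extra time Shift_j = t_ij + Wait_j + T_j + t_jk - t_ik."""
    return t_ij + wait_j + T_j + t_jk - t_ik

def basic_insertion_feasible(shift_j, wait_j, z_i, T_i, t_ij, C_j,
                             wait_k, max_shift_k):
    """Shift_j <= Wait_k + MaxShift_k  and  z_i + T_i + t_ij + Wait_j <= C_j."""
    return (shift_j <= wait_k + max_shift_k
            and z_i + T_i + t_ij + wait_j <= C_j)

# Candidate insertion between i_1^s and i_2: Shift_j = 30 > 20 -> infeasible
sj = shift_of_insertion(t_ij=25, wait_j=0, T_j=5, t_jk=25, t_ik=25)
print(sj, basic_insertion_feasible(sj, 0, 0, 0, 25, 300, 0, 20))  # 30 False
\end{verbatim}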
\subsubsection{Advanced insertion}
In advanced insertion, the feasibility check has to take into account that the insertion has an impact on later PoIs in terms of both arrival times and transfer times. Let us consider an insertion of a PoI $j$ between PoI $i_2$ and $i_3$ of Figure \ref{esempiof}, with $mode_{i_2j}=mode_{ji_3}=Walk$. The insertion has an impact on the travel time from PoI $i_5$ to PoI $i_6$, i.e. after the insertion the travel time $t_{i_5i_6}$ has to be updated to the new value $t^{new}_{i_5i_6}=t^w_{i_5i_2}+t^d_{i_2i_6}$. This implies that we have to handle two distinct feasibility checks. The former has a scope from PoI $i_3$ to $i_5$ and checks the arrival times with respect to $Shift_j$ computed according to (\ref{Shift_1}). The latter concerns PoIs visited after $i_5$ and checks arrival times with respect to $Shift_{i_5}$, computed by taking into account both $Shift_j$ and the new value of $t_{i_5i_6}$. For notational convenience, the first PoI reached by driving after PoI $k$ is referred to as PoI $b$. Similarly, we denote with $q$ the last PoI of the walking subtour which $k$ belongs to (i.e. if $S_k\neq-1$, then $q=LastPoI_{S_k}$). To check if the type of insertion is advanced, we have to answer the following question: does the insertion have an impact on the travel time $t_{qb}$? To answer, it is sufficient to check whether after the insertion the value of $FirstPoI_{S_k}$ will be updated, i.e. whether the insertion changes the first PoI visited by the walking subtour $S_k$.
Five out of the 12 scenarios of Table \ref{tab:1} refer to advanced insertions, that is scenarios where $k$ belongs to a \textit{walking} subtour (i.e. $S_k\neq-1$) and $FirstPoI_{S_k}$ is updated after the insertion.
Algorithm \ref{alg:alg3_1} handles such advanced insertions by checking if one of the following conditions holds.
The insertion of PoI $j$ splits the subtour which PoI $i$ and PoI $k$ belong to, i.e. case 1 in Table \ref{tab:1} with $mode_{ij}=Drive \vee mode_{jk}=Drive$. In all other cases the insertion is checked as advanced if PoI $j$ is \textit{appended} at the beginning of the subtour $S_k$, i.e. case 2 in Table \ref{tab:1} with $mode_{jk}=Walk$. \\
\paragraph{Examples}As we did for basic insertions, we illustrate two advanced insertions for the itinerary of Figure \ref{esempiof}. Other illustrative examples can be easily derived from Figure \ref{esempiof}.
\begin{itemize}
\item Insert PoI $j$ between PoI $i=i_7$ and PoI $k=i_8$, with $mode_{ij}=Drive$ and $mode_{jk}=Walk$. After the insertion $FirstPoI_{S_k}$ is $j$. The insertion changes $t_{i_9i^e_1}$ to the new value $t^{new}_{i_9i^e_1}=t^w_{i_9j}+t^d_{ji^e_1}$.
\item Insert PoI $j$ between PoI $i=i_5$ and PoI $k=i_6$, with $mode_{ij}=Walk$ and $mode_{jk}=Walk$. After the insertion $FirstPoI_{S_k}$ is $i_3$. The insertion changes $t_{i_9i^e_1}$ to the new value $t^{new}_{i_9i^e_1}=t^w_{i_9i_3}+t^d_{i_3i^e_1}$.
\end{itemize}
\indent To evaluate in constant time an advanced insertion, three further quantities are defined for each PoI $i$ included in solution $s'_*$ with $S_i\neq-1$: $\overline{MaxShift}_i$, $\overline{Wait}_i$ and $ME_i$. \\
$\overline{MaxShift}_i$ represents the maximum increase of the start visiting time $z_i$, such that later PoIs of subtour $S_i$ can be visited within their time windows. $\overline{MaxShift}_i$ is computed in a (backward) recursive manner as follows, starting with $\overline{MaxShift}_q=(C_q-z_q)$.
\begin{equation}\label{over_maxsh}
\overline{MaxShift}_i=min\{\textcolor{black}{C_i-z_i},Wait_{i+1}+\overline{MaxShift}_{i+1}\}.
\end{equation}
$\overline{Wait}_i$ corresponds to the sum of the waiting times of later PoIs of subtour $S_i$. We abuse notation by denoting with $i+1$ the direct successor of PoI $i$ such that $S_{i+1}=S_{i}$. Then we have that
\begin{equation}\label{over_wait}
\overline{Wait}_i=\overline{Wait}_{i+1}+Wait_{i},
\end{equation}
with $\overline{Wait}_{LastPoI_{S_i}}=Wait_{LastPoI_{S_i}}$.\\%\sum_{\ell: S_\ell=S_k: a_\ell>a_k}Wait_\ell.$$
It is worth recalling that in a multimodal mobility environment an insertion might propagate to later PoIs a decrease of the arrival times. The maximum decrease that a PoI $i$ can propagate is equal to $\max\{0, a_i-O_i\}$. $ME_i$ represents the maximum decrease of arrival times that can be propagated from PoI $i$ to $LastPoI_{S_i}$, that is
\begin{equation}\label{me}
ME_i=\min\{ME_{i+1}, \max\{0,a_i-O_i\}\}
\end{equation}
with $ME_{LastPoI_{S_i}}=\max\{0,a_{LastPoI_{S_i}}-O_{LastPoI_{S_i}}\}$.
If the extra visit of PoI $j$ generates an increase of the arrival times at later PoIs, i.e. $Shift_j\geq0$, then the arrival time of PoI $LastPoI_{S_k}$ is increased by the quantity $\max\{0,Shift_j-\overline{Wait}_k\}$. On the other hand, if $Shift_j<0$ then the arrival time of PoI $LastPoI_{S_k}$ is decreased by the quantity $\min\{ME_k,|Shift_j|\}$.
Let $\lambda_j$ be a boolean function stating when $Shift_j$ is non-negative:
$$\lambda_j = \left\{ \begin{array}{rcl}
1 & & Shift_j\geq0 \\
0 & & Shift_j<0 \\
\end{array}\right.$$
We quantify the impact of the extra visit of PoI $j$ on the arrival time of PoI $LastPoI_{S_k}$ by computing the value $\Delta_k$ as follows
$$\Delta_{k}=\lambda_j\times\max\{0,Shift_j-\overline{Wait}_{k}\}-(1-\lambda_j)\times\min\{ME_{k},|Shift_j|\}.$$
To check the feasibility of the insertion of PoI $j$ between PoI $i$ and $k$, along with $Shift_j$ we compute $Shift_q$ as the difference between the new arrival time at PoI $b$ and the old one, that is:
\begin{equation}\label{Sft_q}
Shift_q=t^{new}_{qb}+\Delta_k-t_{qb},
\end{equation}
where $t^{new}_{qb}$ would be the new value of $t_{qb}$ if the algorithm inserted PoI $j$ between PoIs $i$ and $k$.
Feasibility of the insertion of PoI $j$ between PoI $i$ and $k$ is checked in constant time at line \ref{line:2} of Algorithm \ref{alg:alg3_1} by (\ref{Feas_ins_1_1}), (\ref{Feas_ins_3}) and (\ref{Feas_ins_2}).
\begin{equation}\label{Feas_ins_1_1}
Shift_j\leq Wait_k+\overline{MaxShift}_k
\end{equation}
\begin{equation}\label{Feas_ins_3}
Shift_q\leq Wait_b+MaxShift_b.
\end{equation}
Table \ref{tab:2} reports values of $\overline{Wait}$, $\overline{MaxShift}$ and $ME$ for subtours of itinerary of Figure \ref{esempiof}. As we did for basic insertions, travel times are computed by taking into account soft constraints.
\paragraph{Special case} A special case of the advanced insertion is when PoI $k$ does not belong to a subtour (i.e. $S_k=-1$) in the solution $s'_*$, but it becomes the last PoI of a new subtour after the insertion. Feasibility check rules (\ref{Feas_ins_1_1}) and (\ref{Feas_ins_3}) do not apply since $\overline{MaxShift}_k$, $\overline{Wait}_k$ and $ME_k$ are not defined. In this case, $\Delta_k$ is computed as follows:
$$\Delta_k= \lambda_j\max(0,Shift_j-Wait_k)-
(1-\lambda_j)\min\{\max\{0,a_k-O_k\},|Shift_j|\}.$$
Then we set $q=k$ and compute $Shift_q$ according to (\ref{Sft_q}). Feasibility of the insertion of PoI $j$ between PoI $i$ and $k$ is checked in constant time by (\ref{Feas_ins_3.2.1}), (\ref{Feas_ins_3}) and (\ref{Feas_ins_2}).
\begin{equation}\label{Feas_ins_3.2.1}
Shift_j\leq Wait_k+(C_q-z_q).
\end{equation}
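Collecting the formulas of the advanced and special cases, the computation of $\Delta_k$ and $Shift_q$ could be transcribed as follows (a plain illustration of the expressions above, not the authors' code); the final lines reproduce the feasible insertion between $i_2$ and $i_3$ discussed in the numerical example at the end of Section \ref{sec:5}.
\begin{verbatim}
def delta_k_advanced(shift_j, bar_wait_k, me_k):
    """Delta_k when PoI k already belongs to a walking subtour."""
    if shift_j >= 0:
        return max(0.0, shift_j - bar_wait_k)
    return -min(me_k, abs(shift_j))

def delta_k_special(shift_j, wait_k, a_k, O_k):
    """Delta_k when PoI k becomes the last PoI of a new subtour."""
    if shift_j >= 0:
        return max(0.0, shift_j - wait_k)
    return -min(max(0.0, a_k - O_k), abs(shift_j))

def shift_q(t_qb_new, delta_k, t_qb):
    """Shift propagated after the last PoI q of the affected subtour."""
    return t_qb_new + delta_k - t_qb

# Insertion between i_2 and i_3 of the numerical example:
dk = delta_k_advanced(shift_j=1, bar_wait_k=5, me_k=0)   # = 0
print(shift_q(t_qb_new=36, delta_k=dk, t_qb=30))         # 6 <= Wait_b + MaxShift_b = 15
\end{verbatim}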
\section{\textcolor{black}{Updating an itinerary}}\label{sec:5}
During the local search, after a PoI to be inserted has been selected and it has been decided where to insert it, the affected itinerary needs to be updated. Similarly, during the perturbation phase, after a set of selected PoIs has been removed, the affected itineraries need to be updated. The following subsections detail how we update the information maintained to facilitate feasibility checking when a PoI is inserted and when a sequence of PoIs is removed.
\begin{algorithm}[!t]
\caption{Insertion Procedure }
\label{alg:alg4}
\SetAlgoLined
INIT: incumbent solution $s^{\prime}_*$\;
\For {POI j not visited by $s^{\prime}_*$}{\label{alg4:2}
Determine the best feasible insertion with minimum value of $Shift^\prime_j$\;
Compute $Ratio_j$\;
}
Select POI $j^{*}=arg\max\limits_{j}(Ratio_j)$\;\label{alg4:3}
Visit $j^{*}$: Compute $a_{j^{*}}$, $z_{j^*}$, $Wait_{j^{*}}$, $Shift_{j^*}$, $S_{j^*}$\;\label{alg4:5}
Update information of subtours $S_{i^*}$, $S_{k^*}$\; \label{alg4:4}
\lIfElse{Advanced Insertion}{$q^*\gets LastPoI_{S_{k^*}}$, Compute $Shift_{q^*}$}{$q^*\gets-1$}\label{alg4:1}
$\overline{j}\gets j^{*}$\;
\For (\tcp*[f]{Forward Update}){POI j visited later than $j^{*}$ (Until $Shift_j=0$ $\wedge$ $j \geq q^*$)}{ \label{alg4:6}
Update $a_j$, $z_j$, $Wait_j$,$S_j$\;
\lIf{$j\neq q^*$}{Update $Shift_j$}
\lIf{$Shift_j=0$ $\wedge$ $j \geq q^*$}{$\overline{j}\gets j$}\label{alg4:9}
}
\For (\tcp*[f]{Backward Update-Step 1}){POI j visited earlier than $\overline{j}$ (Until $j=FirstPoI_{S_{j^*}}$) }{\label{alg4:7}
Update $MaxShift_j$\;
\lIf{$S_j\neq-1$}{Update $\overline{Wait}_j$, $\overline{MaxShift}_j$, ${ME}_j$}
}
\For (\tcp*[f]{Backward Update-Step 2}){POI j visited earlier than $FirstPoI_{S_{i^*}}$}{
Update $MaxShift_j$\label{alg4:8}\;
}
Update the number of violated soft constraints\;
\end{algorithm}
\subsection{Insert and Update}
Algorithm \ref{alg:alg4} reports the pseudocode of the proposed insertion procedure. During a major iteration of the local search, we select the best neighbour of the current solution $s'_*$ as follows (Algorithm \ref{alg:alg4} lines \ref{alg4:2}-\ref{alg4:3}). For each (unrouted) PoI $j$ we select the insertion with the minimum value of $Shift^{\prime}_j=Shift_j+Shift_q$. Then we compute $Ratio_j=(P_j)^2/Shift^{\prime}_j$. The best neighbour is the solution obtained by inserting in $s'_*$ the PoI $j^*$ with the maximum value of $Ratio_{j^*}$, i.e. $j^*=arg\max\limits_{j}{(P_j)^2/Shift^{\prime}_j}$. Ties are broken by selecting the solution that best fits transport mode preferences, i.e. the insertion with the minimum number of violated soft constraints. The \textit{coordinates} of the best insertion of $j^*$ are denoted with $i^*$, $k^*$. The solution is updated in order to include the visit of $j^*$ (Algorithm \ref{alg:alg4}-lines \ref{alg4:5}-\ref{alg4:4}). If the type of insertion is advanced, we determine the value of $Shift_{q^*}$ according to (\ref{Sft_q}) (Algorithm \ref{alg:alg4}-line \ref{alg4:1}). Then, the solution encoding update consists of two consecutive main phases. The first phase is referred to as the \textit{forward update}, since it updates the information related to the visit of PoI $j^*$ and later PoIs. The \textit{forward update} stops when the propagation of the insertion of $j^*$ has been completely \textit{absorbed} by the waiting times of later PoIs (Algorithm \ref{alg:alg4}-lines \ref{alg4:6}-\ref{alg4:9}).
The second phase is initialized with the PoI $\overline{j}$ satisfying the stopping criterion of the \textit{forward update}. This final step is referred to as the \textit{backward update}, since it iterates on PoIs visited earlier than $\overline{j}$ (Algorithm \ref{alg:alg4}-lines \ref{alg4:7}-\ref{alg4:8}). We finally update the number of violated soft constraints. As illustrated in the following, new arcs do not violate tourist preferences and therefore, after the insertion of $j^*$, the number of violated soft constraints cannot increase.
\paragraph{Solution encoding update}
Once inserted the new visit $j^{*}$ between PoI $i^*$ and PoI $k^*$, we update solution encoding as follows:
\begin{equation}\label{update_1}
a_{j^{*}}=z_{i^*}+T_{i^*}+t_{i^*j^{*}}
\end{equation}
\begin{equation}\label{update_2}
Wait_{j^{*}}=\max\{0, O_{j^{*}}-a_{j^{*}}\}
\end{equation}
\begin{equation}\label{update_3}
Shift_{j^{*}}=t_{i^*j^{*}}+ Wait_{j^{*}}+T_{j^{*}}+t_{j^{*}k^*}-t_{i^*k^*}.
\end{equation}
If needed, we update $S_{j^*}$, $FirstPoI_{S_{k^*}}$ and $LastPoI_{S_{i^*}}$. The insertion of $j^*$ propagates a change of the arrival times at later PoIs only if $Shift_{j^*}\neq 0$. We recall that in a multimodal setting the triangle inequality might not hold. This implies that the $j^*$ insertion propagates either an increase (i.e. $Shift_{j^*}>0$) or a decrease (i.e. $Shift_{j^*}<0$) of the arrival times. The solution encoding of later PoIs is updated according to formulas (\ref{update_5})-(\ref{update_7}). For notational convenience we denote with $j$ the current PoI and with $j-1$ its immediate predecessor.
\begin{equation}\label{update_5}
a_{j}=a_{j}+Shift_{{j-1}}
\end{equation}
\begin{equation}\label{update_6}
Shift_{j} = \left\{ \begin{array}{rcl}
\max\{0,Shift_{{j-1}}-Wait_{j}\} & & Shift_{j-1}>0 \\
\max\{O_{j}-z_{j},Shift_{j-1}\} & & Shift_{j-1}<0 \\
\end{array}\right.
\end{equation}
\begin{equation}\label{update_4}
Wait_{j}=max\{0, O_{j}-a_{j}\}
\end{equation}
\begin{equation}\label{update_7}
z_{j}=z_{j}+Shift_{j}
\end{equation}
At the first iteration, $j$ is initialized with $k^*$ and $Shift_{j-1}=Shift_{j^*}$. In particular, (\ref{update_6}) states that, when $Shift_{j-1}>0$, the portion of $Shift_{j-1}$ exceeding $Wait_{j}$ is propagated beyond $j$. Otherwise, $Shift_{j}$ is strictly negative only if no waiting time occurs at PoI $j$ in solution $s'_*$, that is $z_{j}>O_{j}$. If the type of insertion is advanced, we do not update $Shift_{q^*}$, since it has been precomputed at line \ref{alg4:1} according to (\ref{Sft_q}).
The forward updating procedure stops before the end of the itinerary if $Shift_{j}$ is zero, meaning that waiting times have entirely \textit{absorbed} the initial increase/decrease of arrival times generated by the $j^*$ insertion. Then we start the backward update, consisting of two main steps. During the first step the procedure iterates on PoIs visited between the PoI $\overline{j}$, where the forward update stopped, and $FirstPoI_{S_{j^*}}$. We update $MaxShift_j$ according to (\ref{ins_1}) as well as the additional information needed for checking the feasibility of advanced insertions. Therefore, if PoI $j$ belongs to a subtour (i.e. $S_j\neq-1$), then we also update $\overline{Wait}_j$, $\overline{MaxShift}_{j}$ and $ME_j$ according to the backward recursive formulas (\ref{over_wait}), (\ref{over_maxsh}) and (\ref{me}). The second step iterates on PoIs $j$ visited earlier than $FirstPoI_{S_{j^*}}$ and updates only $MaxShift_j$.
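As an illustration, the forward update of formulas (\ref{update_5})-(\ref{update_7}) could be coded as follows (lists indexed by visit position within one itinerary; the special treatment of position $q^*$ in an advanced insertion is omitted for brevity).
\begin{verbatim}
def forward_update(pos, shift0, a, z, wait, O):
    """Propagate the shift generated just before position pos (value shift0).
    Updates a, z, wait in place and returns the last position whose timing
    changed (the PoI where the propagation was absorbed)."""
    shift, last = shift0, pos - 1
    for j in range(pos, len(a)):
        if shift == 0:
            break
        a[j] += shift
        if shift > 0:
            new_shift = max(0.0, shift - wait[j])   # waiting absorbs the delay
        else:
            new_shift = max(O[j] - z[j], shift)     # cannot start before O_j
        wait[j] = max(0.0, O[j] - a[j])
        z[j] += new_shift
        shift, last = new_shift, j
    return last
\end{verbatim}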
\subsection{Remove and Update}
The perturbation procedure aims to remove, from each itinerary of the incumbent solution, $\rho_d$ PoIs visited consecutively starting from position $\sigma_d$. Given an itinerary, we denote with $i$ the last PoI visited before and with $k$ the first PoI visited after the selected $\rho_d$ PoIs. Let $Shift_{i}$ denote the variation of total travel time generated by the removal and propagated to PoIs visited later, that is:
$$Shift_{i}=t_{ik}-(a_k-T_i-z_i).$$
In particular when we compute $t_{ik}$ we do not take into account tourist preferences, i.e. in Algorithm \ref{alg:alg2} the input parameter $Check$ is equal to false.
Due to multimodality, the triangle inequality might not be respected by the removal, since it can propagate either an increase (i.e. $Shift_{i}>0$) or a decrease of the arrival times (i.e. $Shift_{i}<0$). In order to guarantee that, after removing the selected PoIs, we obtain an itinerary feasible wrt hard constraints (i.e. time windows), we require that $Shift_{i}\leq0$. To this aim we adjust the starting and the ending removal positions so that it is not allowed to remove portions of multiple subtours. In particular, if $S_i$ is not equal to $S_k$, then we set the initial and ending removal positions respectively to $FirstPoI_{S_i}$ and the immediate successor of $LastPoI_{S_k}$. In this way we remove subtours $S_i$ and $S_k$ along with all the in-between subtours. For example in Figure \ref{esempiof}, if $i$ and $k$ are equal to PoI $i_2$ and $i_4$ respectively, then we adjust $k$ so that the entire first subtour is removed, i.e. we set $k$ equal to $i_6$. Once the selected PoIs have been removed, the solution encoding update steps are the same as those of a basic insertion. We finally update the number of violated soft constraints.
\begin{algorithm}[!t]
\caption{Perturbation Procedure }
\label{alg:alg5}
\SetAlgoLined
INIT: an itinerary of solution $s^{\prime}_*$, i, k\;
$mode=Drive$\;
\uIf{$S_i=S_k$}{
\lIf{$S_i\neq-1$}{$mode\gets Walk$}
} \Else
{
\lIf{$S_i\neq-1$}{$i\gets FirstPoI_{S_i}$}
\lIf{$S_k\neq-1$}{$k\gets$ immediate successor of $LastPoI_{S_k}$}
}
Remove PoIs visited between $i$ and $k$\;
$mode^*_{ik}=mode$\;
$Shift_i\gets t_{ik}-(a_k-z_i-T_i)$\;
Update $a_i$, $z_i$, $Wait_i$\;
\For (\tcp*[f]{Forward Update}){POI j visited later than $i$ (Until $Shift_j=0$)}{ \label{alg5:1}
Update $a_j$, $z_j$, $Wait_j$\;
\lIf{$Shift_j=0$}{$\overline{j}\gets j$}\label{alg5:2}
}
\For (\tcp*[f]{Backward Update-Step 1}){POI j visited earlier than $\overline{j}$ (Until $j=i$) }{\label{alg5:3}
Update $MaxShift_j$\;
\lIf{$S_j\neq-1$}{Update $\overline{Wait}_j$, $\overline{MaxShift}_j$, $ME_j$}
}
Update $MaxShift_i$\;
\For (\tcp*[f]{Backward Update-Step 2}){POI j visited earlier than $i$}{
Update $MaxShift_j$\label{alg5:4}
}
\end{algorithm}
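A possible transcription of the adjustment of the removal window and of the computation of $Shift_i$ is sketched below (purely illustrative; positions are indices within one itinerary and $S$ maps each position to its subtour index, $-1$ meaning no subtour). The example reproduces the adjustment from $i_4$ to $i_6$ discussed above.
\begin{verbatim}
def adjust_removal_window(i, k, S, first_pos, last_pos):
    """Adjust the removal window (i, k), exclusive at both ends, so that no
    subtour is removed only partially.  first_pos[h]/last_pos[h] give the
    visit positions of FirstPoI_h and LastPoI_h.  Returns the adjusted
    positions and the transport mode used between them after the removal."""
    if S[i] == S[k]:
        return i, k, ("Walk" if S[i] != -1 else "Drive")
    if S[i] != -1:
        i = first_pos[S[i]]
    if S[k] != -1:
        k = last_pos[S[k]] + 1     # immediate successor of LastPoI_{S_k}
    return i, k, "Drive"

def removal_shift(t_ik, a_k, z_i, T_i):
    """Shift_i = t_ik - (a_k - T_i - z_i), propagated after PoI i."""
    return t_ik - (a_k - T_i - z_i)

# Itinerary of Figure 1: positions 0..9, subtour index per position
S = [-1, -1, 1, 1, 1, 2, 2, 2, 2, -1]
first_pos, last_pos = {1: 2, 2: 5}, {1: 4, 2: 8}
print(adjust_removal_window(1, 3, S, first_pos, last_pos))  # (1, 5, 'Drive')
\end{verbatim}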
\subsection{A numerical example}
We provide a numerical example to illustrate the procedures described so far. We consider the itinerary of Figure \ref{esempiof}.
In particular we illustrate the feasibility check of the following three insertions for a PoI $j$, with $[O_j,C_j]=[0,300]$ and $T_j=5$. The durations of the arcs involved in the insertions are reported in Figures \ref{Infeas_Ins} and \ref{Feas_Ins}. As reported in Table \ref{tab:2}, the itinerary of Figure \ref{esempiof} is feasible with respect to both time windows and soft constraints. As aforementioned, during the feasibility check, all travel times are computed by Algorithm \ref{alg:alg2} with input parameter $Check$ set equal to true.\\
\begin{table}[h!]\caption{Solution encoding with additional information for itinerary of Figure \ref{esempio_enc}}\label{tab:2}
\resizebox{\textwidth}{!}{%
\begin{tabular}{|cccccccc|ccccc|}
\hline
\multicolumn{6}{|c|}{\textbf{Itinerary}} & \multicolumn{2}{c|}{\textbf{Time Windows}} & \multicolumn{5}{c|}{\textbf{Additional data}} \\ \hline
\multicolumn{1}{|c}{\textbf{PoI}} & \textbf{Violated}& $\textbf{mode}^*_{ik}$ & $\textbf{S}_i$ & $\textbf{a}_i$ & \multicolumn{1}{l|}{$\textbf{z}_i\textbf{+T}_i$} & $\textbf{O}_i$ & $\textbf{C}_i$ & $\textbf{Wait}_i$ & $\textbf{MaxShift}_i$ & $\overline{\textbf{Wait}_i}$ & $\overline{\textbf{MaxShift}_i}$ & $\textbf{ME}_i$ \\ \hline
$i_1^s$ &False & Drive & -1 & 0 & \multicolumn{1}{l|}{0} & 0 & 0 & 0 & 0 & - & - & - \\
$i_2$ &False & Drive & -1 & 25 & \multicolumn{1}{l|}{30} & 0 & 75 & 0 & 20 & - & - & - \\ \hline
$i_3$ & False & Walk & 1 & 45 & \multicolumn{1}{l|}{55} & 50 & 115 & 5 & 15 & 5 & 20 & 0 \\
$i_4$ & False & Walk & 1 & 75 & \multicolumn{1}{l|}{80} & 60 & 95 & 0 & 15 & 0 & 20 & 15 \\
$i_5$ & False & Drive & 1 & 85 & \multicolumn{1}{l|}{90} & 60 & 115 & 0 & 15 & 0 & 30 & 25 \\ \hline
$i_6$ & False & Walk & 2 & 120 & \multicolumn{1}{l|}{125} & 80 & 135 & 0 & 15 & 15 & 15 & 0 \\
$i_7$ & False & Walk & 2 & 135 & \multicolumn{1}{l|}{155} & 150 & 175 & 15 & 25 & 15 & 25 & 0 \\
$i_8$ & False & Walk & 2 & 175 & \multicolumn{1}{l|}{180} & 90 & 245 & 0 & 58 & 0 & 58 & 85 \\
$i_9$ & False & Drive & 2 & 187 & \multicolumn{1}{l|}{192} & 90 & 245 & 0 & 58 & 0 & 58 & 97 \\ \hline
$i_1^e$ & - & - & -1 & 224 & \multicolumn{1}{l|}{224} & 0 & 320 & 0 & 96 & - & - & - \\ \hline
\end{tabular}
}
\end{table}
\begin{figure}
\includegraphics[width=150mm ]{Infeas_Insert.png}
\caption{Example of infeasible insertions }\label{Infeas_Ins}
\end{figure}
\begin{figure}
\includegraphics[width=150mm ]{Feas_Insert_1.png}
\caption{Example of feasible insertion }\label{Feas_Ins}
\end{figure}
\paragraph{Insertion of PoI j between PoI $i^s_1$ and $i_2$ with $mode_{i_1j}=mode_{ji_2}=Drive$} We check feasibility by Algorithm \ref{alg:alg3_1}, with $i=i^s_1$, $k=i_2$. The type of insertion is basic since $mode^*_{ik}= mode_{jk}$ and $mode_{jk}=Drive$. The feasibility is checked by (\ref{Feas_ins_1}) and (\ref{Feas_ins_2}), that is:
$$Shift_j=t_{ij}+Wait_j+T_j+t_{jk}-t_{ik}=25+0+5+25-25=30\nleq 0+20=Wait_k+MaxShift_k,$$
$$z_i+T_i+t_{ij}+Wait_j=25\leq300=C_j,$$
where travel times $t_{ij}$ and $t_{jk}$ have been computed by Algorithm \ref{alg:alg2} with $p$ set equal to $i^s_1$ and $j$, respectively. The insertion violates the time window of PoI $i_4$. This infeasibility is detected through the violation of (\ref{Feas_ins_1}).
\paragraph{Insertion of PoI j between PoI $i_5$ and $i_6$ with $mode_{i_5j}=mode_{ji_6}=Walk$} We check feasibility by Algorithm \ref{alg:alg3_1}, with $i=i_5$, $k=i_6$. The type of insertion is advanced since $mode^*_{ik}\neq mode_{jk}$ and $S_k\neq-1$. We recall that the feasibility check consists of two parts. Firstly, we check feasibility with respect to (\ref{Feas_ins_1_1}) and (\ref{Feas_ins_2}), that is:
$$z_i+T_i+t_{ij}+Wait_j=118\leq300=C_j,$$
$$Shift_j=t_{ij}+Wait_j+T_j+t_{jk}-t_{ik}=15\leq 15=Wait_k+\overline{MaxShift}_k,$$
where travel times have been computed by Algorithm \ref{alg:alg2}, with $p=-1$.
However, the new visit of PoI $j$ is infeasible with respect to the soft constraints. As aforementioned, this case is encoded as a violation of time windows. Indeed, we compute $Shift_q$ according to (\ref{Sft_q}) with $q=i_9$, $b=i^e_1$, where the travel time $t^{new}_{qb}$ is computed by Algorithm \ref{alg:alg2} with $p=i_3$. Since the tourist would have to walk more than 30 time units to pick up the vehicle, i.e. $t^w_{i_9i_3}=92$, Algorithm \ref{alg:alg2} returns a value $t^{new}_{qb}$ equal to the (big) value $M$, which violates the time windows of all later PoIs.
\paragraph{Insertion of PoI j between PoI $i_2$ and $i_3$ with $mode_{i_2j}=Drive$ and $mode_{ji_3}=Walk$} We check feasibility by Algorithm \ref{alg:alg3_1}, with $i=i_2$, $k=i_3$. The type of insertion is advanced since $mode^*_{ik}\neq mode_{jk}$ and $S_k\neq-1$. The insertion violates neither the time window of PoI $j$ nor those of the PoIs belonging to subtour $S_k$. This is checked by verifying that conditions (\ref{Feas_ins_1_1}) and (\ref{Feas_ins_2}) are satisfied, that is:
$$Shift_j=t_{ij}+Wait_j+T_j+t_{jk}-t_{ik}=1\leq 20=Wait_k+\overline{MaxShift}_k,$$
$$z_i+T_i+t_{ij}+Wait_j=38\leq300=C_j,$$
where $t_{ij}$ and $t_{jk}$ are computed by Algorithm \ref{alg:alg2} with $p=-1$.
Then we check feasibility with respect to closing hours of remaining (routed) PoIs. In particular we compute $Shift_q$ with $q=i_5$, $b=i_6$. Travel time $t^{new}_{qb}$ is computed with $p=j$. We have that $t^{new}_{qb}=28+8$. Since $Shift_j>0$, then $\Delta_k=\max\{0,Shift_j-\overline{Wait}_k\}=0$.
$$Shift_q=t^{new}_{qb}+\Delta_k-t_{qb}=36+0-30=6\leq0+15=Wait_b+MaxShift_b.$$
The insertion is feasible since it satisfies also (\ref{Feas_ins_3}).\\
\indent Table \ref{tab:3} shows the details of the itinerary after the insertion of PoI $j$ between PoIs $i_2$ and $i_3$. It is worth noting that $Shift_k=0$, but the forward update stops only at $\overline{j}=i_7$ since $Shift_q=6$. There is no need to update the additional information of later PoIs.\\
\begin{table}[h!]\caption{Details of the itinerary after the insertion}\label{tab:3}
\resizebox{\textwidth}{!}{%
\begin{tabular}{|cccccccc|ccccc|}
\hline
\multicolumn{6}{|c|}{\textbf{Itinerary}} & \multicolumn{2}{c|}{\textbf{Time Windows}} & \multicolumn{5}{c|}{\textbf{Additional data}} \\ \hline
\multicolumn{1}{|c}{\textbf{PoI}} & \textbf{Violated}& $\textbf{mode}^*_{ik}$ & $\textbf{S}_i$ & $\textbf{a}_i$ & \multicolumn{1}{l|}{$\textbf{z}_i\textbf{+T}_i$} & $\textbf{O}_i$ & $\textbf{C}_i$ & $\textbf{Wait}_i$ & $\textbf{MaxShift}_i$ & $\overline{\textbf{Wait}_i}$ & $\overline{\textbf{MaxShift}_i}$ & $\textbf{ME}_i$ \\ \hline
$i_1^s$ &False & Drive & -1 & 0 & \multicolumn{1}{l|}{0} & 0 & 0 & 0 & 0 & - & - & - \\
$i_2$ &False & Drive & -1 & 25 & \multicolumn{1}{l|}{30} & 0 & 75 & 0 & 13 & - & - & - \\\hline
$j$ &False & Walk & 1 & 38 & \multicolumn{1}{l|}{43} & 0 & 300 & 0 & 13 & 4 & 24 & 0 \\
$i_3$ & False & Walk & 1 & 46 & \multicolumn{1}{l|}{55} & 50 & 115 & 4 & 9 & 4 & 20 & 0 \\
$i_4$ & False & Walk & 1 & 75 & \multicolumn{1}{l|}{80} & 60 & 95 & 0 & 9 & 0 & 20 & 15 \\
$i_5$ & False & Drive & 1 & 85 & \multicolumn{1}{l|}{90} & 60 & 115 & 0 & 9 & 0 & 30 & 25 \\ \hline
$i_6$ & False & Walk & 2 & 126 & \multicolumn{1}{l|}{131} & 80 & 135 & 0 & 9 & 9 & 9 & 0 \\
$i_7$ & False & Walk & 2 & 141 & \multicolumn{1}{l|}{155} & 150 & 175 & 9 & 25 & 9 & 25 & 0 \\
$i_8$ & False & Walk & 2 & 175 & \multicolumn{1}{l|}{180} & 90 & 245 & 0 & 58 & 0 & 58 & 85 \\
$i_9$ & False & Drive & 2 & 187 & \multicolumn{1}{l|}{192} & 90 & 245 & 0 & 58 & 0 & 58 & 97 \\ \hline
$i_1^e$ & - & - & -1 & 224 & \multicolumn{1}{l|}{224} & 0 & 320 & 0 & 96 & - & - & - \\ \hline
\end{tabular}
}
\end{table}
\begin{table}[h!]\caption{Details of the itinerary after the removal}\label{tab:4}
\resizebox{\textwidth}{!}{%
\begin{tabular}{|cccccccc|ccccc|}
\hline
\multicolumn{6}{|c|}{\textbf{Itinerary}} & \multicolumn{2}{c|}{\textbf{Time Windows}} & \multicolumn{5}{c|}{\textbf{Additional data}} \\ \hline
\multicolumn{1}{|c}{\textbf{PoI}} & \textbf{Violated}& $\textbf{mode}^*_{ik}$ & $\textbf{S}_i$ & $\textbf{a}_i$ & \multicolumn{1}{l|}{$\textbf{z}_i\textbf{+T}_i$} & $\textbf{O}_i$ & $\textbf{C}_i$ & $\textbf{Wait}_i$ & $\textbf{MaxShift}_i$ & $\overline{\textbf{Wait}_i}$ & $\overline{\textbf{MaxShift}_i}$ & $\textbf{ME}_i$ \\ \hline
$i_1^s$ &False & Drive & -1 & 0 & \multicolumn{1}{l|}{0} & 0 & 0 & 0 & 0 & - & - & - \\
$i_2$ &True & Drive & -1 & 25 & \multicolumn{1}{l|}{30} & 0 & 75 & 0 & 50 & - & - & - \\ \hline
$i_6$ & False & Walk & 2 & 32 & \multicolumn{1}{l|}{85} & 80 & 135 & 48 & 55 & 103 & 55 & 0 \\
$i_7$ & False & Walk & 2 & 95 & \multicolumn{1}{l|}{155} & 150 & 175 & 55 & 25 & 55 & 25 & 0 \\
$i_8$ & False & Walk & 2 & 175 & \multicolumn{1}{l|}{180} & 90 & 245 & 0 & 58 & 0 & 58 & 85 \\
$i_9$ & False & Drive & 2 & 187 & \multicolumn{1}{l|}{192} & 90 & 245 & 0 & 58 & 0 & 58 & 97 \\ \hline
$i_1^e$ & - & - & -1 & 224 & \multicolumn{1}{l|}{224} & 0 & 320 & 0 & 96 & - & - & - \\ \hline
\end{tabular}
}
\end{table}
\paragraph{Removal of PoIs between $i_2$ and $i_6$} Table \ref{tab:4} reports the details of the itinerary after the removal of the PoIs visited between $i_2$ and $i_6$. The travel time $t_{i_2i_6}$ is computed by Algorithm \ref{alg:alg2} with input parameter $Check$ set equal to false. We observe that driving from PoI $i_2$ to PoI $i_6$ violates the soft constraint on $MinDrivingTime$; therefore, after the removal, the algorithm increases the total number of violated soft constraints.
\section{Lifting ILS performance through \textcolor{black}{unsupervised learning}}\label{sec:6}
The insertion heuristic explores in a systematic way the neighbourhood of the current solution. Of course, the larger the set $V$, the worse the ILS performance. In order to reduce the size of the neighbourhood explored by the local search, we exploit two mechanisms. Firstly, given the tourist starting position $i^s_1$, we consider an unrouted PoI as a candidate for insertion only if it belongs to the set:
$$\mathcal{N}_r(i^s_1) = \{i \in V : d(i, i_1^s) \leq r\} \subseteq V$$
where $d: V \times V \rightarrow \mathbb{R}^+$ denotes a non-negative distance function and the radius $r$ is a non-negative scalar value. The main idea is that the lowest ratio values are likely to be associated with PoIs located very far from $i^s_1$.
We used the Haversine formula to approximate the shortest (orthodromic) distance between two geographical points along the Earth's surface.
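For illustration, the following Python sketch shows how such a neighbourhood filter can be implemented with the Haversine formula; the names \texttt{start}, \texttt{pois} and \texttt{radius\_km} are ours and do not refer to identifiers in the actual code base.
\begin{verbatim}
# A minimal sketch, assuming coordinates in decimal degrees.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Orthodromic (great-circle) distance in kilometres."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi, dlmb = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def neighbourhood(start, pois, radius_km):
    """Keep only the PoIs within radius_km of the starting position."""
    return [p for p in pois
            if haversine_km(start.lat, start.lon, p.lat, p.lon) <= radius_km]
\end{verbatim}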
The main drawback of this neighbourhood filtering is that a low value of the radius $r$ might compromise the degree of diversification during the search. To overcome this drawback, we adopt the strategy proposed in \cite{gavalas2013cluster}. It is worth noting that in \cite{gavalas2013cluster} the test instances are defined on a Euclidean space. Since we use a (more realistic) similarity measure representing the travel time of a quickest path, we cannot use the k-means algorithm to build the clustering structure. To overcome this limitation, we have chosen a hierarchical clustering algorithm.
Therefore, during a preprocessing step we cluster the PoIs. The adopted hierarchical clustering approach gives different partitionings depending on the level of resolution we are looking at. In particular, we exploited agglomerative clustering, which is the most common type of hierarchical clustering. The algorithm starts by considering each observation as a single cluster; then, at each iteration, the two most \emph{similar} clusters are merged into a new larger cluster, until all observations are grouped into a single cluster. The result is a tree called a dendrogram. The similarity between pairs of clusters is established by a linkage criterion: e.g. the maximum distance between all observations of the two sets, or the variance of the clusters being merged.
In this work, the metric used to compute linkage is the walking travel time between pairs of PoIs in the mobility environment: this with the aim of reducing the total driving time.
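As an illustration of this preprocessing step, the following sketch builds the cluster labels with \textit{scikit-learn}'s agglomerative clustering on a precomputed matrix of pairwise walking times; the variable names and the choice of \texttt{n\_clusters} are ours, and depending on the installed scikit-learn version the keyword may be \texttt{affinity} instead of \texttt{metric}.
\begin{verbatim}
# A minimal sketch: walk_time is assumed to be a symmetric (n x n) array of
# pairwise walking durations between PoIs; n_clusters is a tuning choice.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def cluster_pois(walk_time: np.ndarray, n_clusters: int) -> np.ndarray:
    """Return a cluster label C_i for every PoI i."""
    model = AgglomerativeClustering(
        n_clusters=n_clusters,
        metric="precomputed",   # pairwise walking times, not coordinates
        linkage="complete",     # max walking time between the merged sets
    )
    return model.fit_predict(walk_time)
\end{verbatim}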
Given a PoI $i \in V$, we denote with $\mathcal{C}_i$ the cluster label assigned to $i$. $\mathcal{C}_d$ is the cluster containing the tourist starting position.
We enhance the local search so as to ensure that a cluster (different from $\mathcal{C}_d$) is visited at most once in a tour. $\mathcal{C}_d$ can be visited at most twice in a tour: when departing from and when arriving at the depot, respectively.
A PoI $j\in \mathcal{N}_r(i_1^s)$ can be inserted between PoIs $i$ and $k$ in an itinerary $\mathfrak{p}$ only if at least one of the following conditions is satisfied:
\begin{itemize}
\item $\mathcal{C}_i = \mathcal{C}_j \vee \mathcal{C}_k = \mathcal{C}_j$, or
\item $\mathcal{C}_i = \mathcal{C}_k = \mathcal{C}_d \wedge |\mathcal{L}_\mathfrak{p}|=1$, or
\item $\mathcal{C}_i \neq \mathcal{C}_k \wedge \mathcal{C}_j \notin \mathcal{L}_\mathfrak{p} $,
\end{itemize}
where $\mathcal{L}_\mathfrak{p}$ denotes the set of all cluster labels of the PoIs belonging to itinerary $\mathfrak{p}$. At the first iteration of ILS, $\mathcal{L}_\mathfrak{p} = \{\mathcal{C}_d\}$;
subsequently, after each insertion of a PoI $j$, set $\mathcal{L}_\mathfrak{p}$ is enriched with $\mathcal{C}_j$.
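A compact sketch of this cluster-based filter is reported below; \texttt{cluster} plays the role of the labelling $\mathcal{C}$, \texttt{labels\_p} the role of $\mathcal{L}_\mathfrak{p}$ and \texttt{depot\_cluster} the role of $\mathcal{C}_d$ (all names are illustrative).
\begin{verbatim}
def insertion_allowed(i, j, k, cluster, labels_p, depot_cluster):
    """Cluster-based eligibility of inserting PoI j between PoIs i and k."""
    if cluster[i] == cluster[j] or cluster[k] == cluster[j]:
        return True    # j shares the cluster of one of its new neighbours
    if cluster[i] == cluster[k] == depot_cluster and len(labels_p) == 1:
        return True    # the itinerary still visits only the depot cluster
    if cluster[i] != cluster[k] and cluster[j] not in labels_p:
        return True    # a not yet visited cluster between two different ones
    return False

# After a feasible insertion of j, the label set is enriched:
# labels_p.add(cluster[j])
\end{verbatim}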
In the following section we thoroughly discuss the remarkable performance improvement obtained when such a cluster-based neighbourhood search is applied to (realistic) test instances with thousands of PoIs.
\section{Computational experiments}\label{sec:8}
This section presents the results of the computational experiments conducted to evaluate the performance of our method. We have tested our heuristic algorithm on a set of \textcolor{black}{instances} derived from the pedestrian and road \textcolor{black}{networks} of Apulia (Italy).
All experiments reported in this section \textcolor{black}{were} run on a standalone Linux machine
with an Intel Core i7 processor composed of 4 cores clocked at 2.5 GHz and
equipped with 16 GB of RAM. The machine learning component was implemented in Python (version 3.10). The agglomerative clustering implementation was taken from the \textit{scikit-learn} machine learning library. All other algorithms have been coded in Java. \\
Map data \textcolor{black}{were} extracted from the OpenStreetMap (OSM) geographic database of the world (publicly available at \url{https://www.openstreetmap.org}). We used the GraphHopper (\url{https://www.graphhopper.com/}) routing engine to precompute all quickest paths between PoI pairs, applying an ad-hoc parallel one-to-many Dijkstra algorithm for both moving modes (walking and driving). GraphHopper is able to assign a speed to every edge in the graph based on the road type extracted from OSM data for different vehicle profiles: on foot, hike, wheelchair, bike, racing bike, motorcycle and car.
\textcolor{black}{A fundamental assumption} in our work is that travel times on both the driving and pedestrian networks satisfy the triangle inequality. In order to satisfy this preliminary requirement, we run \textcolor{black}{the Floyd-Warshall} algorithm \cite{floyd1962algorithm,warshall1962theorem} as a post-processing step to enforce the triangle inequality where it is not met (due to roundings or detours).
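For completeness, a compact numpy sketch of this post-processing step is given below; \texttt{t} is assumed to be a dense matrix of PoI-to-PoI durations for one moving mode.
\begin{verbatim}
import numpy as np

def enforce_triangle_inequality(t: np.ndarray) -> np.ndarray:
    """Floyd-Warshall: replace t[i, j] by the shortest i->j duration."""
    t = t.copy()
    for k in range(t.shape[0]):
        # shortcut through k: t[i, j] <- min(t[i, j], t[i, k] + t[k, j])
        np.minimum(t, t[:, [k]] + t[[k], :], out=t)
    return t
\end{verbatim}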
The PoI-based graph consists of $3643$ PoIs.
Walking speed has been fixed to $5$ km/h, while the maximum walking distance is $2.5$ km: i.e. the maximum time that can be spent on foot is half an hour ($MaxWalkingTime$).
As stated before, we improved the removal and insertion operators of the proposed ILS in order to take into account the extra travel time spent by the tourist to switch from the pedestrian network to the road network. Assuming that the destination has a parking service, we increased the traversal time by car by a customizable constant amount fixed to $10$ minutes ($ParkingTime$). We set the time needed to switch from the pedestrian network to the road network equal to at least $5$ minutes ($PickUpTime$). Walking is the preferred mode whenever the traversal time by car is lower than or equal to $6$ minutes ($MinDrivingTime$).
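Purely for illustration, these settings and the stated mode-preference rule can be summarised as follows; the helper \texttt{preferred\_mode} and its interaction with $MaxWalkingTime$ are our own simplification of the rule embedded in Algorithm \ref{alg:alg2}.
\begin{verbatim}
# Illustrative parameter values of the experimental setup (minutes).
PARKING_TIME = 10       # overhead added to every driving traversal
PICK_UP_TIME = 5        # minimum time needed to reach the parked vehicle
MIN_DRIVING_TIME = 6    # at or below this driving time, walking is preferred
MAX_WALKING_TIME = 30   # 2.5 km at 5 km/h

def preferred_mode(walk_minutes: float, drive_minutes: float) -> str:
    """Preferred mode for a single arc (hypothetical helper)."""
    if drive_minutes <= MIN_DRIVING_TIME and walk_minutes <= MAX_WALKING_TIME:
        return "Walk"   # short hops are walked to respect the soft constraint
    return "Drive"      # otherwise drive, paying PARKING_TIME on arrival
\end{verbatim}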
The PoI score measures the popularity of an attraction. We recall that the research presented in this paper is part of \textcolor{black}{a project} aiming to develop technologies enabling territorial marketing and tourism in Apulia (Italy). The popularity of PoIs has been extracted from a tourism-related Twitter dataset presented in \cite{https://doi.org/10.48550/arxiv.2207.00816}.
ILS is stopped after $150$ consecutive iterations without improvement or when a time limit of one minute is reached. \\
Instances are defined by the following parameters:
\begin{itemize}
\item number of itineraries $m = 1, 2, 3, 4, 5, 6, 7$;
\item starting tourist position (i.e. its latitude and longitude);
\item a radius $r = 10, 20, 50, +\infty$ km for the spherical neighborhood $\mathcal{N}_r(i_1^s)$ around the starting tourist position.
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=100mm ]{apulia.png}
\caption{Starting positions.\label{fig:apulia}}
\end{figure}
We considered eight different starting positions across the Apulian territory, as shown in Figure \ref{fig:apulia}. The maximum itinerary duration $C_{max}$ has been fixed to $12$ hours. Every PoI has $0$, $1$ or $2$ opening time windows, depending on the weekday.
\begin{table}[!t]
\normalsize
\caption{Candidate PoIs set size.\label{tableCss}}
\centering
\begin{tabular}{|lc|cccccccc|}
\hline
$r$ & position & PoIs & $D_1$ & $D_2$ &$D_3$ &$D_4$ & $D_5$ & $D_6$ & $D_7$ \\ \hline
10 & 1 & 172 & 257 & 257 & 257 & 203 & 257 & 256 & 148 \\
& 2 & 62 & 91 & 91 & 91 & 89 & 91 & 90 & 71 \\
& 3 & 172 & 214 & 215 & 176 & 216 & 216 & 136 & 137 \\
& 4 & 109 & 118 & 120 & 109 & 120 & 120 & 99 & 99 \\
& 5 & 118 & 127 & 132 & 122 & 132 & 132 & 109 & 108 \\
& 6 & 79 & 108 & 108 & 107 & 97 & 108 & 106 & 73 \\
& 7 & 117 & 140 & 141 & 115 & 141 & 141 & 140 & 140 \\
& 8 & 81 & 65 & 82 & 82 & 82 & 82 & 80 & 80 \\ \hline
20 & 1 & 324 & 507 & 509 & 509 & 385 & 509 & 507 & 254 \\
& 2 & 117 & 174 & 172 & 174 & 169 & 174 & 174 & 129 \\
& 3 & 301 & 350 & 359 & 312 & 360 & 360 & 264 & 262 \\
& 4 & 245 & 266 & 280 & 251 & 280 & 280 & 223 & 222 \\
& 5 & 338 & 363 & 390 & 357 & 390 & 390 & 321 & 320 \\
& 6 & 262 & 359 & 359 & 346 & 305 & 359 & 354 & 228 \\
& 7 & 222 & 260 & 260 & 194 & 261 & 262 & 258 & 253 \\
& 8 & 263 & 296 & 329 & 328 & 287 & 329 & 324 & 240 \\ \hline
50 & 1 & 872 & 1260 & 1289 & 1286 & 1009 & 1289 & 1279 & 712 \\
& 2 & 779 & 1010 & 1017 & 928 & 1008 & 1022 & 926 & 776 \\
& 3 & 1194 & 1380 & 1437 & 1306 & 1437 & 1441 & 1198 & 1130 \\
& 4 & 1267 & 1394 & 1463 & 1311 & 1463 & 1466 & 1202 & 1179 \\
& 5 & 1083 & 1185 & 1252 & 1124 & 1254 & 1254 & 994 & 991 \\
& 6 & 883 & 1232 & 1230 & 1147 & 1090 & 1235 & 1225 & 832 \\
& 7 & 836 & 1083 & 1082 & 938 & 1031 & 1089 & 1081 & 860 \\
& 8 & 670 & 875 & 905 & 902 & 768 & 905 & 896 & 606 \\ \hline
$+\infty$ & * & 3643 & 4591 & 4570 & 4295 & 4521 & 4581 & 4297 & 3781\\
\hline
\end{tabular}
\end{table}
Table \ref{tableCss} summarizes for any radius-position pair:
\begin{itemize}
\item the number of PoIs in the spherical neighborhood $\mathcal{N}_r(i_1^s)$;
\item $D_i$: the number of PoIs open during day $i$ \textcolor{black}{($i=1, \dots, 7$, }from Monday to Sunday).
\end{itemize}
When $r$ is set equal to $+\infty$ (last table line) no filter is applied and all $3643$ PoIs in the dataset are candidates for insertion.
Computational results without PoI-clustering are shown in Table \ref{tableNoClustering}, while Table \ref{tableClustering} reports the results obtained with PoI-clustering enabled. Each row represents the average value over the eight instances, with the following headings:
\begin{itemize}
\item DEV: the percentage deviation of the total score of the solution from that of the best known solution;
\item TIME: execution time in seconds;
\item PoIs: number of PoIs;
\item $|S|$: number of walking subtours;
\item SOL: number of improved solutions;
\item IT: total number of iterations;
\item IT$_f$: number of iterations without improvements w.r.t. the incumbent solution;
\item $T^{d}$: total driving time divided by $m \cdot C_{max}$;
\item $T^{w}$: total walking time divided by $m \cdot C_{max}$;
\item $T$: total service time divided by $m \cdot C_{max}$;
\item $W$: total waiting time divided by $m \cdot C_{max}$.
\end{itemize}
Since the territory is characterized by a high density of PoIs, a radius $r=50$ km is sufficient to build high-quality tours.
Furthermore, we notice that the clustering-based ILS greatly improves the execution times of the algorithm without compromising the quality of the final solution. In particular, the results obtained for increasing $m$ show that, when clustering is enabled, the ILS is able to perform many more iterations, thus discovering new solutions and improving the quality of the final solution.
When the radius value $r$ is lower than or equal to $50$ km and PoI-clustering is enabled, the algorithm stops mainly due to the iteration limit when $m$ is not greater than $5$ itineraries.
The ILS approach is very efficient. The results confirm that the amount of time spent waiting is very small. Itineraries are well composed with respect to the total time spent travelling (without exhausting the tourist). On average, our approach builds itineraries with about $2$ \textit{walking} subtours per day. In particular, the total walking time and the total driving time correspond to about 6\% and 20\% of the available time, respectively.
On average, the visit time corresponds to about 70\% of the available time, whilst the waiting time is on average less than 1.5\%.
We further observe that, by increasing the value of $r$, the search execution times significantly increase both with and without PoI-clustering. With respect to tour quality, the clustered ILS is able to improve the degree of diversification across the territory, without remaining trapped in high-profit isolated areas.
\begin{table}[!t]
\small
\centering
\caption{Computational results\label{tableNoClustering}}
\begin{tabular}{|ll|ccccccccccc|}
\hline
$m$ & $r$ & DEV [\%] & TIME [s] & PoIs & $|S|$ & SOL & IT & IT$_f$ & $T^{d}$ [\%] & $T^{w}$ [\%] & $T$ [\%] & $W$ [\%] \\\hline
1 & 10 & 16.5 & 0.8 & 18.3 & 1.9 & 2.1 & 155.0 & 150.0 & 13.2 & 6.9 & 78.7 & 1.2 \\
& 20 & 9.3 & 1.7 & 19.4 & 2.5 & 2.4 & 157.3 & 150.0 & 17.0 & 7.4 & 74.9 & 0.7 \\
& 50 & 3.4 & 7.0 & 19.8 & 2.4 & 3.6 & 162.5 & 150.0 & 20.9 & 7.2 & 70.9 & 1.0 \\
& $+\infty$ & 2.7 & 37.5 & 19.5 & 1.3 & 2.6 & 157.1 & 150.0 & 22.8 & 8.2 & 68.1 & 0.9 \\ \hline
2 & 10 & 26.7 & 2.0 & 33.4 & 3.0 & 2.8 & 159.8 & 150.0 & 15.5 & 5.8 & 77.3 & 1.4 \\
& 20 & 16.8 & 5.3 & 35.0 & 5.1 & 4.5 & 165.8 & 150.0 & 19.5 & 6.0 & 73.2 & 1.3 \\
& 50 & 5.0 & 27.5 & 38.3 & 4.5 & 8.1 & 183.3 & 150.0 & 20.6 & 6.9 & 71.6 & 0.9 \\
& $+\infty$ & 1.8 & 60.0 & 38.6 & 4.3 & 5.3 & 89.5 & 74.0 & 24.3 & 6.8 & 67.8 & 1.0 \\\hline
3 & 10 & 31.6 & 3.4 & 46.8 & 5.0 & 4.0 & 176.3 & 150.0 & 14.2 & 5.4 & 78.6 & 1.8 \\
& 20 & 19.2 & 10.0 & 50.8 & 7.1 & 5.1 & 170.8 & 150.0 & 19.6 & 5.7 & 73.4 & 1.3 \\
& 50 & 3.2 & 50.5 & 56.0 & 7.8 & 9.6 & 178.3 & 134.3 & 22.3 & 6.5 & 69.9 & 1.4 \\
& $+\infty$ & 0.7 & 60.0 & 56.5 & 7.9 & 9.0 & 53.6 & 27.5 & 25.6 & 5.6 & 67.8 & 1.0 \\\hline
4 & 10 & 35.0 & 4.9 & 58.9 & 7.8 & 3.1 & 175.1 & 150.0 & 14.5 & 5.0 & 78.8 & 1.6 \\
& 20 & 21.7 & 16.0 & 65.5 & 9.5 & 7.1 & 190.1 & 150.0 & 20.1 & 5.1 & 73.2 & 1.5 \\
& 50 & 3.2 & 58.7 & 72.5 & 11.5 & 8.3 & 127.0 & 90.1 & 23.6 & 6.2 & 69.0 & 1.3 \\
& $+\infty$ & 1.2 & 60.0 & 72.6 & 10.6 & 8.9 & 36.5 & 13.8 & 26.7 & 6.0 & 66.1 & 1.2 \\\hline
5 & 10 & 38.4 & 5.6 & 70.4 & 8.5 & 5.4 & 166.5 & 150.0 & 14.7 & 4.3 & 79.1 & 1.9 \\
& 20 & 24.4 & 25.4 & 78.8 & 11.3 & 6.3 & 197.5 & 150.0 & 19.6 & 5.0 & 73.7 & 1.7 \\
& 50 & 2.6 & 60.0 & 89.3 & 13.1 & 7.6 & 88.9 & 59.8 & 23.0 & 6.0 & 69.7 & 1.3 \\
& $+\infty$ & 1.3 & 60.0 & 89.4 & 12.3 & 6.9 & 26.3 & 11.8 & 24.4 & 5.9 & 68.3 & 1.4 \\\hline
6 & 10 & 41.5 & 7.3 & 80.0 & 9.9 & 4.1 & 191.8 & 150.0 & 14.2 & 4.3 & 78.9 & 2.5 \\
& 20 & 27.2 & 27.6 & 90.8 & 13.9 & 5.6 & 184.6 & 150.0 & 20.4 & 4.8 & 73.0 & 1.8 \\
& 50 & 4.6 & 60.0 & 102.6 & 16.0 & 7.3 & 67.3 & 44.9 & 24.4 & 5.7 & 68.6 & 1.3 \\
& $+\infty$ & 2.0 & 60.0 & 103.9 & 15.5 & 7.6 & 21.8 & 8.4 & 26.7 & 5.8 & 65.9 & 1.5 \\\hline
7 & 10 & 44.1 & 8.0 & 88.1 & 12.9 & 4.5 & 194.4 & 150.0 & 14.1 & 3.9 & 78.4 & 3.6 \\
& 20 & 28.4 & 34.3 & 104.5 & 15.0 & 6.0 & 180.1 & 150.0 & 19.5 & 4.9 & 73.5 & 2.2 \\
& 50 & 4.2 & 60.0 & 118.0 & 18.1 & 8.0 & 56.1 & 23.5 & 24.8 & 5.5 & 68.1 & 1.6 \\
& $+\infty$ & 3.2 & 60.0 & 117.4 & 18.9 & 7.5 & 18.4 & 5.4 & 27.6 & 5.2 & 65.8 & 1.4 \\ \hline
\multicolumn{2}{|c|}{AVG} & \textbf{15.0} & \textbf{31.2} & \textbf{65.5} & \textbf{9.2} & \textbf{5.8} & \textbf{133.3} & \textbf{108.7} & \textbf{20.5} & \textbf{5.8} & \textbf{72.2} & \textbf{1.5}\\
\hline
\end{tabular}
\end{table}
\begin{table}[!t]
\small
\centering
\caption{Computational results with clustering\label{tableClustering}}
\begin{tabular}{|ll|ccccccccccc|}
\hline
$m$ & $r$ & DEV [\%] & TIME [s] & PoIs & $|S|$ & SOL & IT & IT$_f$ & $T^{d}$ [\%] & $T^{w}$ [\%] & $T$ [\%] & $W$ [\%] \\\hline
1 & 10 & 16.7 & 0.6 & 18.3 & 1.9 & 2.1 & 155.9 & 150.0 & 13.7 & 6.5 & 78.6 & 1.2 \\
& 20 & 9.7 & 0.9 & 19.4 & 2.4 & 2.8 & 157.3 & 150.0 & 16.5 & 6.9 & 75.7 & 0.8 \\
& 50 & 4.5 & 2.1 & 19.8 & 2.5 & 3.6 & 161.9 & 150.0 & 19.4 & 7.9 & 71.4 & 1.3 \\
& $+\infty$ & 2.7 & 9.7 & 19.5 & 1.4 & 2.3 & 154.6 & 150.0 & 22.5 & 8.6 & 67.7 & 1.2 \\ \hline
2 & 10 & 26.5 & 1.8 & 33.4 & 4.0 & 4.4 & 176.6 & 150.0 & 15.5 & 5.6 & 77.8 & 1.2 \\
& 20 & 16.8 & 2.5 & 35.4 & 4.3 & 3.3 & 164.8 & 150.0 & 18.0 & 6.8 & 73.6 & 1.6 \\
& 50 & 5.3 & 6.6 & 38.4 & 5.1 & 4.6 & 167.8 & 150.0 & 21.5 & 6.7 & 70.5 & 1.3 \\
& $+\infty$ & 1.5 & 35.5 & 38.9 & 4.3 & 5.4 & 176.5 & 150.0 & 22.0 & 7.6 & 69.5 & 1.0 \\ \hline
3 & 10 & 31.5 & 3.1 & 47.0 & 5.5 & 3.8 & 185.0 & 150.0 & 13.5 & 5.7 & 78.7 & 2.1 \\
& 20 & 19.2 & 4.9 & 51.3 & 7.1 & 4.4 & 192.0 & 150.0 & 19.2 & 5.8 & 73.5 & 1.5 \\
& 50 & 3.9 & 14.3 & 56.3 & 8.3 & 9.5 & 183.9 & 150.0 & 22.6 & 6.3 & 70.1 & 1.0 \\
& $+\infty$ & 1.5 & 59.8 & 56.4 & 6.8 & 7.5 & 156.8 & 111.4 & 23.9 & 6.4 & 68.5 & 1.2 \\ \hline
4 & 10 & 34.7 & 3.9 & 59.4 & 8.3 & 4.1 & 167.9 & 150.0 & 13.9 & 4.9 & 79.4 & 1.8 \\
& 20 & 22.0 & 7.6 & 64.9 & 9.1 & 5.1 & 191.6 & 150.0 & 19.8 & 5.3 & 73.3 & 1.6 \\
& 50 & 3.1 & 23.4 & 72.9 & 11.8 & 9.1 & 177.8 & 150.0 & 23.6 & 6.3 & 68.9 & 1.1 \\
& $+\infty$ & 1.0 & 60.0 & 72.9 & 10.8 & 7.9 & 96.1 & 66.0 & 25.1 & 5.9 & 67.8 & 1.2 \\ \hline
5 & 10 & 38.4 & 7.0 & 70.5 & 9.6 & 4.6 & 211.5 & 150.0 & 14.5 & 5.0 & 78.6 & 1.9 \\
& 20 & 24.7 & 10.0 & 78.9 & 10.6 & 5.1 & 187.9 & 150.0 & 19.3 & 5.2 & 73.9 & 1.6 \\
& 50 & 3.2 & 37.2 & 89.1 & 13.9 & 10.3 & 195.3 & 150.0 & 23.2 & 6.3 & 69.1 & 1.4 \\
& $+\infty$ & 1.3 & 60.0 & 88.4 & 13.4 & 7.9 & 66.6 & 40.9 & 27.2 & 5.4 & 66.4 & 1.1 \\ \hline
6 & 10 & 41.6 & 6.0 & 80.1 & 11.0 & 3.6 & 186.0 & 150.0 & 15.0 & 4.6 & 78.4 & 2.1 \\
& 20 & 27.1 & 14.0 & 91.5 & 12.6 & 6.5 & 212.8 & 150.0 & 19.9 & 5.1 & 72.9 & 2.1 \\
& 50 & 3.3 & 49.7 & 104.9 & 16.3 & 10.9 & 187.6 & 131.5 & 24.0 & 5.9 & 68.8 & 1.3 \\
& $+\infty$ & 1.3 & 60.0 & 105.6 & 17.0 & 10.3 & 51.0 & 29.8 & 26.5 & 5.7 & 66.5 & 1.3 \\ \hline
7 & 10 & 44.2 & 8.3 & 88.0 & 12.6 & 4.1 & 209.6 & 150.0 & 13.8 & 4.4 & 78.0 & 3.8 \\
& 20 & 28.4 & 14.4 & 104.3 & 16.6 & 4.8 & 169.3 & 150.0 & 19.5 & 4.7 & 73.8 & 2.0 \\
& 50 & 3.8 & 57.5 & 118.5 & 18.6 & 9.6 & 174.6 & 91.0 & 24.2 & 6.0 & 68.5 & 1.4 \\
& $+\infty$ & 1.0 & 60.0 & 119.8 & 19.0 & 9.4 & 42.5 & 17.5 & 26.8 & 5.3 & 66.3 & 1.7 \\ \hline
\multicolumn{2}{|c|}{AVG} & \textbf{15.0} & \textbf{22.2} & \textbf{65.8} & \textbf{9.4} & \textbf{6.0} & \textbf{162.9} & \textbf{129.9} & \textbf{20.2} & \textbf{6.0} & \textbf{72.4} & \textbf{1.5} \\
\hline
\end{tabular}
\end{table}
\section {Conclusions}\label{sec:9}
In this paper we have dealt with the tourist trip design problem in a \textit{walk-and-drive} mobility environment, where the tourist moves from one attraction to the next as a pedestrian or as the driver of a vehicle. Transport mode selection depends on the compromise between travel duration and tourist \textcolor{black}{preferences. We have modelled} the problem as a \textcolor{black}{\textit{Team Orienteering Problem}} with multiple time windows on a multigraph, where tourist preferences on transport modes \textcolor{black}{have been expressed} as soft constraints. The proposed model is novel in the literature. We \textcolor{black}{have also devised} an adapted ILS coupled with an innovative approach to evaluate \textcolor{black}{neighbourhoods} in constant time. To validate \textcolor{black}{our solution approach}, realistic instances with thousands of PoIs \textcolor{black}{have been} tested. The proposed approach \textcolor{black}{has succeeded} in calculating personalised trips of \textcolor{black}{up to 7 days} in real-time. Future research lines will \textcolor{black}{consider additional aspects, such as traffic congestion and PoI score dependency on visit duration}.
\section*{Acknowledgments}
This research was supported by Regione Puglia (Italy) (Progetto Ricerca e Sviluppo C\-BAS CUP B54B170001200007 cod. prog. LA3Z825). This support is gratefully acknowledged.
\section{Introduction}
Paired-comparison based ranking emerges in many fields of science such as social choice theory \citep{ChebotarevShamis1998a}, sports \citep{Landau1895, Landau1914, Zermelo1929, Radicchi2011, BozokiCsatoTemesi2016, ChaoKouLiPeng2018}, or psychology \citep{Thurstone1927}. Here a general version of the problem, allowing for different preference intensities (including ties) as well as incomplete and multiple comparisons between two objects, is addressed.
The paper contributes to this field by the formulation of an impossibility theorem: it turns out that two axioms, independence of irrelevant matches -- used, among others, in characterizations of Borda ranking by \citet{Rubinstein1980} and \citet{NitzanRubinstein1981} and recently discussed by \citet{Gonzalez-DiazHendrickxLohmann2013} -- and self-consistency -- a less known but intuitive property, introduced in \citet{ChebotarevShamis1997a} -- cannot be satisfied at the same time.
We also investigate domain restrictions and the weakening of the properties in order to get some positive results.
Our main theorem reinforces that, while the row sum (sometimes called Borda or score) ranking has favourable properties in the case of round-robin tournaments, its application becomes questionable when incomplete comparisons are present. A case in point is a Swiss-system tournament, where row sum seems to be a poor choice since players with weaker opponents can score the same number of points more easily \citep{Csato2013a, Csato2017c}.
The current paper can be regarded as a supplement to the findings of previous axiomatic discussions in the field \citep{AltmanTennenholtz2008, ChebotarevShamis1998a, Gonzalez-DiazHendrickxLohmann2013, Csato2018g} by highlighting some unknown connections among certain axioms.
Furthermore, our impossibility result gives mathematical justification for a comment appearing in the axiomatic analysis of scoring procedures by \citet{Gonzalez-DiazHendrickxLohmann2013}: 'when players have different opponents (or face opponents with different intensities), $IIM$\footnote{~$IIM$ is the abbreviation of independence of irrelevant matches, an axiom to be discussed in Section~\ref{Sec31}.} is a property one would rather not have' (p.~165). The strength of this property is clearly shown by our main theorem.
The study is structured as follows. Section~\ref{Sec2} presents the setting of the ranking problem and defines some ranking methods. In Section~\ref{Sec3}, two axioms are evoked in order to get a clear impossibility result.
Section~\ref{Sec4} investigates different ways to achieve possibility through the weakening of the axioms. Finally, some concluding remarks are given in Section~\ref{Sec5}.
\section{Preliminaries} \label{Sec2}
Consider a set of professional tennis players and their results against each other \citep{BozokiCsatoTemesi2016}. The problem is to rank them, which can be achieved by associating a score with each player. This section describes a possible mathematical model and introduces some methods.
\subsection{The ranking problem} \label{Sec21}
Let $N = \{ X_1,X_2, \dots, X_n \}$, $n \in \mathbb{N}$ be the \emph{set of objects} and $T = \left[ t_{ij} \right] \in \mathbb{R}^{n \times n}$ be a \emph{tournament matrix} such that $t_{ij} + t_{ji} \in \mathbb{N}$.
$t_{ij}$ represents the aggregated score of object $X_i$ against $X_j$, while $t_{ij} / (t_{ij} + t_{ji})$ can be interpreted as the likelihood that object $X_i$ is better than object $X_j$. $t_{ii} = 0$ is assumed for all $X_i \in N$.
Possible derivations of the tournament matrix can be found in \citet{Gonzalez-DiazHendrickxLohmann2013} and \citet{Csato2015a}.
The pair $(N,T)$ is called a \emph{ranking problem}.
The set of ranking problems with $n$ objects ($|N| = n$) is denoted by $\mathcal{R}^n$.
A \emph{scoring procedure} $f$ is an $\mathcal{R}^n \to \mathbb{R}^n$ function that gives a rating $f_i(N,T)$ for each object $X_i \in N$ in any ranking problem $(N,T) \in \mathcal{R}^n$. Any scoring method immediately induces a ranking $\succeq$ (a transitive and complete weak order on the set $N$) by $f_i(N,T) \geq f_j(N,T)$ meaning that $X_i$ is ranked weakly above $X_j$, denoted by $X_i \succeq X_j$. The symmetric and asymmetric parts of $\succeq$ are denoted by $\sim$ and $\succ$, respectively: $X_i \sim X_j$ if both $X_i \succeq X_j$ and $X_i \preceq X_j$ hold, while $X_i \succ X_j$ if $X_i \succeq X_j$ holds but $X_i \preceq X_j$ does not.
Every scoring method can be considered as a \emph{ranking method}. This paper discusses only ranking methods induced by scoring procedures.
A ranking problem $(N,T)$ has the skew-symmetric \emph{results matrix} $R = T - T^\top = \left[ r_{ij} \right] \in \mathbb{R}^{n \times n}$ and the symmetric \emph{matches matrix} $M = T + T^\top = \left[ m_{ij} \right] \in \mathbb{N}^{n \times n}$ such that $m_{ij}$ is the number of the comparisons between $X_i$ and $X_j$, whose outcome is given by $r_{ij}$. Matrices $R$ and $M$ also determine the tournament matrix as $T = (R + M)/2$.
In other words, a ranking problem $(N,T) \in \mathcal{R}^n$ can be denoted analogously by $(N,R,M)$ with the restriction $|r_{ij}| \leq m_{ij}$ for all $X_i,X_j \in N$, that is, the outcome of any paired comparison between two objects cannot 'exceed' their number of matches.
Although the description through results and matches matrices is not parsimonious, usually the notation $(N,R,M)$ will be used because it helps in the axiomatic approach.
The class of universal ranking problems has some meaningful subsets.
A ranking problem $(N,R,M) \in \mathcal{R}^n$ is called:
\begin{itemize}
\item
\emph{balanced} if $\sum_{X_k \in N} m_{ik} = \sum_{X_k \in N} m_{jk}$ for all $X_i,X_j \in N$. \\
The set of balanced ranking problems is denoted by $\mathcal{R}_{B}$.
\item
\emph{round-robin} if $m_{ij} = m_{k \ell}$ for all $X_i \neq X_j$ and $X_k \neq X_\ell$. \\
The set of round-robin ranking problems is denoted by $\mathcal{R}_{R}$.
\item
\emph{unweighted} if $m_{ij} \in \{ 0; 1 \}$ for all $X_i,X_j \in N$. \\
The set of unweighted ranking problems is denoted by $\mathcal{R}_{U}$.
\item
\emph{extremal} if $|r_{ij}| \in \{ 0; m_{ij} \}$ for all $X_i,X_j \in N$. \\
The set of extremal ranking problems is denoted by $\mathcal{R}_{E}$.
\end{itemize}
The first three subsets pose restrictions on the matches matrix $M$.
In a balanced ranking problem, all objects should have the same number of comparisons. A typical example is a Swiss-system tournament (provided the number of participants is even).
In a round-robin ranking problem, the number of comparisons between any pair of objects is the same. A typical example (of double round-robin) can be the qualification for soccer tournaments like UEFA European Championship \citep{Csato2018b}. It does not allow for incomplete comparisons.
Note that a round-robin ranking problem is balanced, $\mathcal{R}_{R} \subset \mathcal{R}_{B}$.
Finally, in an unweighted ranking problem, multiple comparisons are prohibited.
Extremal ranking problems restrict the results matrix $R$: the outcome of a comparison can only be a complete win ($r_{ij} = m_{ij}$), a draw ($r_{ij} = 0$), or a maximal loss ($r_{ij} = -m_{ij}$). In other words, preferences have no intensity, however, ties are allowed.
One can also consider any intersection of these special classes.
The \emph{number of comparisons} of object $X_i \in N$ is $d_i = \sum_{X_j \in N} m_{ij}$ and the \emph{maximal number of comparisons} in the ranking problem is $m = \max_{X_i,X_j \in N} m_{ij}$. Hence:
\begin{itemize}
\item
A ranking problem is balanced if and only if $d_i = d$ for all $X_i \in N$.
\item
A ranking problem is round-robin if and only if $m_{ij} = m$ for all $X_i,X_j \in N$.
\item
A ranking problem is unweighted if and only if $m = 1$.\footnote{~While $m_{ij} \in \{ 0; 1 \}$ for all $X_i,X_j \in N$ allows for $m=0$, it leads to a meaningless ranking problem without any comparison.}
\end{itemize}
Matrix $M$ can be represented by an undirected multigraph $G := (V,E)$, where the vertex set $V$ corresponds to the object set $N$, and the number of edges between objects $X_i$ and $X_j$ is equal to $m_{ij}$, so the degree of node $X_i$ is $d_i$.
Graph $G$ is said to be the \emph{comparison multigraph} of the ranking problem $(N,R,M)$, and is independent of the results matrix $R$. The \emph{Laplacian matrix} $L = \left[ \ell_{ij} \right] \in \mathbb{R}^{n \times n}$ of graph $G$ is given by $\ell_{ij} = -m_{ij}$ for all $X_i \neq X_j$ and $\ell_{ii} = d_i$ for all $X_i \in N$.
A ranking problem $(N,R,M) \in \mathcal{R}^n$ is called \emph{connected} or \emph{unconnected} if its comparison multigraph is connected or unconnected, respectively.
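Before turning to concrete methods, a short numerical sketch (in Python, purely for illustration) shows how the objects defined above are obtained from a tournament matrix $T$; the function name \texttt{decompose} is ours.
\begin{verbatim}
import numpy as np

def decompose(T: np.ndarray):
    """Results matrix, matches matrix, degrees and Laplacian from T."""
    R = T - T.T             # skew-symmetric results matrix
    M = T + T.T             # symmetric matches matrix
    d = M.sum(axis=1)       # number of comparisons d_i of each object
    L = np.diag(d) - M      # Laplacian of the comparison multigraph
    return R, M, d, L
\end{verbatim}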
\subsection{Some ranking methods} \label{Sec22}
In the following, some scoring procedures are presented. They will be used only for ranking purposes, so they can be called ranking methods.
Let $\mathbf{e} \in \mathbb{R}^n$ denote the column vector with $e_i = 1$ for all $i = 1,2, \dots ,n$.
Let $I \in \mathbb{R}^{n \times n}$ be the identity matrix.
The first scoring method does not take the comparison structure into account, it simply sums the results from the results matrix $R$.
\begin{definition} \label{Def21}
\emph{Row sum}: $\mathbf{s}(N,R,M) = R \mathbf{e}$.
\end{definition}
The following \emph{parametric} procedure has been constructed axiomatically by \citet{Chebotarev1989_eng} as an extension of the row sum method to the case of paired comparisons with missing values, and has been thoroughly analysed in \citet{Chebotarev1994}.
\begin{definition} \label{Def22}
\emph{Generalized row sum}: it is the unique solution $\mathbf{x}(\varepsilon)(N,R,M)$ of the system of linear equations $(I+ \varepsilon L) \mathbf{x}(\varepsilon)(N,R,M) = (1 + \varepsilon m n) \mathbf{s}(N,R,M)$, where $\varepsilon > 0$ is a parameter.
\end{definition}
Generalized row sum adjusts the row sum $s_i$ by accounting for the performance of objects compared with $X_i$, and adds an infinite depth to the correction as the row sums of all objects available on a path from $X_i$ appear in the calculation. $\varepsilon$ indicates the importance attributed to this modification.
Note that generalized row sum results in row sum if $\varepsilon \to 0$: $\lim_{\varepsilon \to 0} \mathbf{x}(\varepsilon)(N,R,M) = \mathbf{s}(N,R,M)$.
The row sum and generalized row sum rankings are unique and easily computable from a system of linear equations for all ranking problems $(N,R,M) \in \mathcal{R}^n$.
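A small computational sketch of Definition~\ref{Def22} is given below; it simply solves the defining linear system with numpy and is not meant as an efficient implementation.
\begin{verbatim}
import numpy as np

def generalized_row_sum(R: np.ndarray, M: np.ndarray, eps: float) -> np.ndarray:
    """Solve (I + eps*L) x = (1 + eps*m*n) s, where s = R e."""
    n = R.shape[0]
    s = R.sum(axis=1)                   # row sum ratings
    L = np.diag(M.sum(axis=1)) - M      # Laplacian of the comparison multigraph
    m = M.max()                         # maximal number of comparisons
    return np.linalg.solve(np.eye(n) + eps * L, (1 + eps * m * n) * s)
\end{verbatim}
For a small positive $\varepsilon$ the output is close to the row sum $\mathbf{s}$, in line with $\lim_{\varepsilon \to 0} \mathbf{x}(\varepsilon)(N,R,M) = \mathbf{s}(N,R,M)$.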
The least squares method was suggested by \citet{Thurstone1927} and \citet{Horst1932}.
It is known as logarithmic least squares method in the case of incomplete multiplicative pairwise comparison matrices \citep{BozokiFulopRonyai2010}.
\begin{definition} \label{Def23}
\emph{Least squares}: it is the solution $\mathbf{q}(N,R,M)$ of the system of linear equations $L \mathbf{q}(N,R,M) = \mathbf{s}(N,R,M)$ and $\mathbf{e}^\top \mathbf{q}(N,R,M) = 0$.
\end{definition}
Generalized row sum ranking coincides with least squares ranking if $\varepsilon \to \infty$ because $\lim_{\varepsilon \to \infty} \mathbf{x}(\varepsilon)(N,R,M) = mn \mathbf{q}(N,R,M)$.
The least squares ranking is unique if and only if the ranking problem $(N,R,M) \in \mathcal{R}^n$ is connected \citep{KaiserSerlin1978, ChebotarevShamis1999, BozokiFulopRonyai2010}.
The ranking of unconnected objects may be controversial. Nonetheless, the least squares ranking can be made unique if Definition~\ref{Def23} is applied to all ranking subproblems with a connected comparison multigraph.
An extensive analysis and a graph interpretation of the least squares method, as well as further references, can be found in \citet{Csato2015a}.
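Analogously, for a connected ranking problem the least squares ratings of Definition~\ref{Def23} can be computed by appending the normalisation $\mathbf{e}^\top \mathbf{q} = 0$ to the system $L \mathbf{q} = \mathbf{s}$, as in the following sketch (any other standard treatment of the rank deficiency of $L$ works equally well).
\begin{verbatim}
import numpy as np

def least_squares(R: np.ndarray, M: np.ndarray) -> np.ndarray:
    """Solve L q = s subject to e^T q = 0 (connected problems only)."""
    n = R.shape[0]
    s = R.sum(axis=1)
    L = np.diag(M.sum(axis=1)) - M
    A = np.vstack([L, np.ones((1, n))])          # append e^T q = 0
    b = np.concatenate([s, [0.0]])
    q, *_ = np.linalg.lstsq(A, b, rcond=None)
    return q
\end{verbatim}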
\section{The impossibility result} \label{Sec3}
In this section, a natural axiom of independence and a kind of monotonicity property is recalled.
Our main result illustrates the impossibility of satisfying the two requirements simultaneously.
\subsection{Independence of irrelevant matches} \label{Sec31}
This property appears as \emph{independence} in \citet[Axiom~III]{Rubinstein1980} and \citet[Axiom~5]{NitzanRubinstein1981} in the case of round-robin ranking problems. The name independence of irrelevant matches has been used by \citet{Gonzalez-DiazHendrickxLohmann2013}.
It deals with the effects of certain changes in the tournament matrix.
\begin{axiom} \label{Axiom31}
\emph{Independence of irrelevant matches} ($IIM$):
Let $(N,T),(N,T') \in \mathcal{R}^n$ be two ranking problems and $X_i,X_j,X_k, X_\ell \in N$ be four different objects such that $(N,T)$ and $(N,T')$ are identical but $t'_{k \ell} \neq t_{k \ell}$.
Scoring procedure $f: \mathcal{R}^n \to \mathbb{R}^n$ is called \emph{independent of irrelevant matches} if $f_i(N,T) \geq f_j(N,T) \Rightarrow f_i(N,T') \geq f_j(N,T')$.
\end{axiom}
$IIM$ means that 'remote' comparisons -- not involving objects $X_i$ and $X_j$ -- do not affect the order of $X_i$ and $X_j$.
Changing the matches matrix may lead to an unconnected ranking problem.
Property $IIM$ is meaningful only if $n \geq 4$.
Sequential application of independence of irrelevant matches can lead to any ranking problem $(N,\bar{T}) \in \mathcal{R}^n$, for which $\bar{t}_{gh} = t_{gh}$ if $\{ X_g,X_h \} \cap \{ X_i, X_j \} \neq \emptyset$, but all other paired comparisons are arbitrary.
\begin{lemma} \label{Lemma31}
The row sum method is independent of irrelevant matches.
\end{lemma}
\begin{proof}
It follows from Definition~\ref{Def21}.
\end{proof}
\subsection{Self-consistency} \label{Sec32}
The next axiom, introduced by \citet{ChebotarevShamis1997a}, may require an extensive explanation.
It is motivated by an example using the language of preference aggregation.
\begin{figure}[htbp]
\centering
\caption{The ranking problem of Example~\ref{Examp31}}
\label{Fig31}
\begin{tikzpicture}[scale=1, auto=center, transform shape, >=triangle 45]
\tikzstyle{every node}=[draw,shape=rectangle];
\node (n1) at (135:2) {$X_1$};
\node (n2) at (45:2) {$X_2$};
\node (n3) at (225:2) {$X_3$};
\node (n4) at (315:2) {$X_4$};
\foreach \from/\to in {n1/n2,n1/n3,n2/n4,n3/n4}
\draw [->] (\from) -- (\to);
\end{tikzpicture}
\end{figure}
\begin{example} \label{Examp31}
Consider the ranking problem $(N,R,M) \in \mathcal{R}_B^4 \cap \mathcal{R}_U^4 \cap \mathcal{R}_E^4$ with results and matches matrices
\[
R = \left[
\begin{array}{cccc}
0 & 1 & 1 & 0 \\
-1 & 0 & 0 & 1 \\
-1 & 0 & 0 & 1 \\
0 & -1 & -1 & 0 \\
\end{array}
\right] \text{ and }
M = \left[
\begin{array}{cccc}
0 & 1 & 1 & 0 \\
1 & 0 & 0 & 1 \\
1 & 0 & 0 & 1 \\
0 & 1 & 1 & 0 \\
\end{array}
\right].
\]
It is shown in Figure~\ref{Fig31}: a directed edge from node $X_i$ to $X_j$ indicates a complete win of $X_i$ over $X_j$ (and a complete loss of $X_j$ against $X_i$).
This representation will be used in further examples, too.
\end{example}
The situation in Example~\ref{Examp31} can be interpreted as follows. A voter prefers alternative $X_1$ to $X_2$ and $X_3$, but says nothing about $X_4$. Another voter prefers $X_2$ and $X_3$ to $X_4$, but has no opinion on $X_1$.
Although it is difficult to make a good decision on the basis of such incomplete preferences, sometimes it cannot be avoided. This leads to the question of which principles should be followed by the final ranking of the objects. It seems reasonable that $X_i$ should be judged better than $X_j$ if one of the following holds:
\begin{enumerate}[label=\ding{64}\arabic*]
\item \label{SC_con1}
$X_i$ achieves better results against the same objects;
\item \label{SC_con2}
$X_i$ achieves better results against objects with the same strength;
\item \label{SC_con3}
$X_i$ achieves the same results against stronger objects;
\item \label{SC_con4}
$X_i$ achieves better results against stronger objects.
\end{enumerate}
Furthermore, $X_i$ should have the same rank as $X_j$ if one of the following holds:
\begin{enumerate}[resume,label=\ding{64}\arabic*]
\item \label{SC_con5}
$X_i$ achieves the same results against the same objects;
\item \label{SC_con6}
$X_i$ achieves the same results against objects with the same strength.
\end{enumerate}
In order to apply these principles, one should measure the strength of objects. It is provided by the scoring method itself, hence the name of this axiom is \emph{self-consistency}.
Consequently, condition~\ref{SC_con1} is a special case of condition~\ref{SC_con2} (the same objects have naturally the same strength) as well as condition~\ref{SC_con5} is implied by condition~\ref{SC_con6}.
What does self-consistency mean in Example~\ref{Examp31}?
First, $X_2 \sim X_3$ due to condition~\ref{SC_con5}.
Second, $X_1 \succ X_4$ should hold by condition~\ref{SC_con1} since $r_{12} > r_{42}$ and $r_{13} > r_{43}$.
The requirements above can also be applied to objects which have different opponents.
Assume that $X_1 \preceq X_2$. Then condition~\ref{SC_con4} results in $X_1 \succ X_2$ because of $X_2 \succeq X_1$, $r_{12} > r_{21}$ and $X_3 \sim X_2 \succeq X_1 \succ X_4$, $r_{13} = r_{24}$. It is a contradiction, therefore $X_1 \succ (X_2 \sim X_3)$.
Similarly, assume that $X_2 \preceq X_4$. Then condition~\ref{SC_con4} results in $X_2 \succ X_4$ because of $X_1 \succ X_3$ (derived above), $r_{21} = r_{43}$ and $X_4 \succeq X_2 \sim X_3$, $r_{24} > r_{42}$. It is a contradiction, therefore $(X_2 \sim X_3) \succ X_4$.
To summarize, only the ranking $X_1 \succ (X_2 \sim X_3) \succ X_4$ is allowed by self-consistency.
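As a quick numerical illustration (not part of the above argument), the least squares ratings of Definition~\ref{Def23} -- a self-consistent method by Lemma~\ref{Lemma32} below -- can be computed for Example~\ref{Examp31} and indeed induce this ranking.
\begin{verbatim}
import numpy as np

R = np.array([[ 0,  1,  1,  0],
              [-1,  0,  0,  1],
              [-1,  0,  0,  1],
              [ 0, -1, -1,  0]], dtype=float)
M = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)

s = R.sum(axis=1)                     # row sums: [ 2, 0, 0, -2]
L = np.diag(M.sum(axis=1)) - M        # Laplacian of the comparison multigraph
A = np.vstack([L, np.ones((1, 4))])   # append the normalisation e^T q = 0
q = np.linalg.lstsq(A, np.concatenate([s, [0.0]]), rcond=None)[0]
# q is approximately [1, 0, 0, -1], so the induced ranking is
# X1 > (X2 ~ X3) > X4, matching the unique self-consistent ranking.
\end{verbatim}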
The above requirement can be formalized in the following way.
\begin{definition} \label{Def31}
\emph{Opponent set}:
Let $(N,R,M) \in \mathcal{R}_U^n$ be an unweighted ranking problem. The \emph{opponent set} of object $X_i$ is $O_i = \{ X_j: m_{ij} = 1 \}$.
\end{definition}
Objects of the opponent set $O_i$ are called the \emph{opponents} of $X_i$.
Note that $|O_i| = |O_j|$ for all $X_i, X_j \in N$ if and only if the ranking problem is balanced.
\begin{notation} \label{Not31}
Consider an unweighted ranking problem $(N,R,M) \in \mathcal{R}_U^n$ such that $X_i, X_j \in N$ are two different objects and $g: O_i \leftrightarrow O_j$ is a one-to-one correspondence between the opponents of $X_i$ and $X_j$, consequently, $|O_i| = |O_j|$.
Then $\mathfrak{g} : \{k: X_k \in O_i \} \leftrightarrow \{\ell: X_\ell \in O_j \}$ is given by $X_{\mathfrak{g}(k)} = g(X_k)$.
\end{notation}
In order to make judgements like an object has stronger opponents, at least a partial order among opponent sets should be introduced.
\begin{definition} \label{Def32}
\emph{Partial order of opponent sets}:
Let $(N,R,M) \in \mathcal{R}^n$ be a ranking problem and $f: \mathcal{R}^n \to \mathbb{R}^n$ be a scoring procedure.
Opponents of $X_i$ are at least as strong as opponents of $X_j$, denoted by $O_i \succeq O_j$, if there exists a one-to-one correspondence $g:O_i \leftrightarrow O_j$ such that $f_k(N,R,M) \geq f_{\mathfrak{g}(k)}(N,R,M)$ for all $X_k \in O_i$.
\end{definition}
For instance, $O_1 \sim O_4$ and $O_2 \sim O_3$ in Example~\ref{Examp31}, whereas $O_1$ and $O_2$ are not comparable.
Therefore, conditions~\ref{SC_con1}-\ref{SC_con6} never imply $X_i \succeq X_j$ if $O_i \prec O_j$ since an object with a weaker opponent set cannot be judged better.
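Checking whether such a one-to-one correspondence exists is straightforward once the ratings are known: a simple exchange argument shows that it suffices to compare the two rating lists sorted in non-increasing order, as in the sketch below (the names are ours).
\begin{verbatim}
def opponents_at_least_as_strong(O_i, O_j, rating) -> bool:
    """Does a one-to-one g: O_i <-> O_j with f_k >= f_{g(k)} exist?"""
    a = sorted((rating[x] for x in O_i), reverse=True)
    b = sorted((rating[x] for x in O_j), reverse=True)
    return len(a) == len(b) and all(x >= y for x, y in zip(a, b))
\end{verbatim}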
Opponent sets have been defined only in the case of unweighted ranking problems, but self-consistency can be applied to objects which have the same number of comparisons, too. The extension is achieved by a decomposition of ranking problems.
\begin{definition} \label{Def33}
\emph{Sum of ranking problems}:
Let $(N,R,M),(N,R',M') \in \mathcal{R}^n$ be two ranking problems with the same object set $N$. The \emph{sum} of these ranking problems is the ranking problem $(N,R+R',M+M') \in \mathcal{R}^n$.
\end{definition}
Summing of ranking problems may have a natural interpretation. For example, they can contain the preferences of voters in two cities of the same country or the paired comparisons of players in the first and second half of the season.
Definition~\ref{Def33} means that any ranking problem can be decomposed into unweighted ranking problems, in other words, it can be obtained as a sum of unweighted ranking problems.
However, while the sum of ranking problems is unique, a ranking problem may have a number of possible decompositions.
\begin{notation} \label{Not32}
Let $(N,R^{(p)},M^{(p)}) \in \mathcal{R}_U^n$ be an unweighted ranking problem.
The opponent set of object $X_i$ is $O_i^{(p)}$.
Let $X_i, X_j \in N$ be two different objects and $g^{(p)}: O_i^{(p)} \leftrightarrow O_j^{(p)}$ be a one-to-one correspondence between the opponents of $X_i$ and $X_j$.
Then $\mathfrak{g}^{(p)}: \{k: X_k \in O_i^{(p)} \} \leftrightarrow \{\ell: X_\ell \in O_j^{(p)} \}$ is given by $X_{\mathfrak{g}^{(p)}(k)} = g^{(p)}(X_k)$.
\end{notation}
\begin{axiom} \label{Axiom32}
\emph{Self-consistency} ($SC$) \citep{ChebotarevShamis1997a}:
A scoring procedure $f: \mathcal{R}^n \to \mathbb{R}^n$ is called \emph{self-consistent} if the following implication holds for any ranking problem $(N,R,M) \in \mathcal{R}^n$ and for any objects $X_i,X_j \in N$:
if there exists a decomposition of the ranking problem $(N,R,M)$ into $m$ unweighted ranking problems -- that is, $R = \sum_{p=1}^m R^{(p)}$, $M = \sum_{p=1}^m M^{(p)}$, and $(N,R^{(p)},M^{(p)}) \in \mathcal{R}_U^n$ is an unweighted ranking problem for all $p = 1,2, \dots ,m$ -- in a way that enables a one-to-one mapping $g^{(p)}$ from $O^{(p)}_i$ onto $O^{(p)}_j$ such that $r_{ik}^{(p)} \geq r_{j \mathfrak{g}^{(p)}(k)}^{(p)}$ and $f_k(N,R,M) \geq f_{\mathfrak{g}^{(p)}(k)}(N,R,M)$ for all $p = 1,2, \dots ,m$ and $X_k \in O_i^{(p)}$, then
$f_i(N,R,M) \geq f_{j}(N,R,M)$, furthermore, $f_i(N,R,M) > f_{j}(N,R,M)$ if $r_{ik}^{(p)} > r_{j \mathfrak{g}^{(p)}(k)}^{(p)}$ or $f_k(N,R,M) > f_{\mathfrak{g}^{(p)}(k)}(N,R,M)$ for at least one $1 \leq p \leq m$ and $X_k \in O_i^{(p)}$.
\end{axiom}
Self-consistency formalizes conditions~\ref{SC_con1}-\ref{SC_con6}: if object $X_i$ is obviously not worse than object $X_j$, then it is not ranked lower, furthermore, if it is better, then it is ranked higher.
Self-consistency can also be interpreted as a property of a ranking.
The application of self-consistency is nontrivial because of the various opportunities for decomposition into unweighted ranking problems.
However, it may restrict the relative ranking of objects $X_i$ and $X_j$ only if $d_i = d_j$ since there should exist a one-to-one mapping between $O_i^{(p)}$ and $O_j^{(p)}$ for all $p = 1,2, \dots ,m$.
Thus $SC$ does not fully determine a ranking, even on the set of balanced ranking problems.
\begin{figure}[htbp]
\centering
\caption{The ranking problem of Example~\ref{Examp32}}
\label{Fig32}
\begin{tikzpicture}[scale=1, auto=center, transform shape, >=triangle 45]
\tikzstyle{every node}=[draw,shape=rectangle];
\node (n1) at (240:3) {$X_1$};
\node (n2) at (180:3) {$X_2$};
\node (n3) at (120:3) {$X_3$};
\node (n4) at (60:3) {$X_4$};
\node (n5) at (0:3) {$X_5$};
\node (n6) at (300:3) {$X_6$};
\foreach \from/\to in {n1/n2,n2/n3,n4/n5,n5/n6}
\draw (\from) -- (\to);
\foreach \from/\to in {n1/n6,n3/n4}
\draw [->] (\from) -- (\to);
\end{tikzpicture}
\end{figure}
\begin{example} \label{Examp32}
Let $(N,R,M) \in \mathcal{R}_B^6 \cap \mathcal{R}_U^6 \cap \mathcal{R}_E^6$ be the ranking problem in Figure~\ref{Fig32}: a directed edge from node $X_i$ to $X_j$ indicates a complete win of $X_i$ over $X_j$ in one comparison (as in Example~\ref{Examp31}) and an undirected edge from node $X_i$ to $X_j$ represents a draw in one comparison between the two objects.
\end{example}
\begin{proposition} \label{Prop31}
Self-consistency does not fully characterize a ranking method on the set of balanced, unweighted and extremal ranking problems $\mathcal{R}_B \cap \mathcal{R}_U \cap \mathcal{R}_E$.
\end{proposition}
\begin{proof}
The statement can be verified by an example where at least two rankings are allowed by $SC$; we use Example~\ref{Examp32} for this purpose.
Consider the ranking $\succeq^1$ such that $(X_1 \sim^1 X_2 \sim^1 X_3) \succ^1 (X_4 \sim^1 X_5 \sim^1 X_6)$.
The opponent sets are $O_1 = \{ X_2, X_6 \}$, $O_2 = \{ X_1, X_3 \}$, $O_3 = \{ X_2, X_4 \}$, $O_4 = \{ X_3, X_5 \}$, $O_5 = \{ X_4, X_6 \}$ and $O_6 = \{ X_1, X_5 \}$, so $O_2 \succ (O_1 \sim O_3 \sim O_4 \sim O_6) \succ O_5$.
The results of $X_1$ and $X_3$ are $(0;1)$, the results of $X_2$ and $X_5$ are $(0;0)$, while the results of $X_4$ and $X_6$ are $(-1;0)$.
For objects with the same results, $SC$ implies $X_1 \sim X_3$, $X_4 \sim X_6$ and $X_2 \succ X_5$ (conditions~\ref{SC_con3} and \ref{SC_con6}), which hold in $\succeq^1$.
For objects with different results, $SC$ leads to $X_2 \succ X_4$, $X_3 \succ X_4$, and $X_3 \succ X_5$ after taking the strength of opponents into account (condition~\ref{SC_con2}). These requirements are also met by the ranking $\succeq^1$.
Self-consistency imposes no other restrictions, therefore the ranking $\succeq^1$ satisfies it.
Now consider the ranking $\succeq^2$ such that $X_2 \prec^2 (X_1 \sim^2 X_3) \prec^2 (X_4 \sim^2 X_6) \prec^2 X_5$.
The opponent sets remain the same, but their partial order is given now as $O_2 \prec (O_4 \sim O_6)$, $O_2 \prec O_5$, $(O_1 \sim O_3) \prec (O_4 \sim O_6)$ and $(O_1 \sim O_3) \prec O_5$ (the opponents of $X_1$ and $X_2$, as well as $X_4$ and $X_5$, cannot be compared).
For objects with the same results, $SC$ implies $X_1 \sim X_3$, $X_4 \sim X_6$ and $X_2 \prec X_5$ (conditions~\ref{SC_con3} and \ref{SC_con6}), which hold in $\succeq^2$.
For objects with different results, $SC$ leads to $X_1 \succ X_2$ after taking the strength of opponents into account (condition~\ref{SC_con2}). This condition is also met by the ranking $\succeq^2$.
Self-consistency imposes no other restrictions, therefore the ranking $\succeq^2$ also satisfies this axiom.
To conclude, rankings $\succeq^1$ and $\succeq^2$ are self-consistent. The ranking obtained by reversing $\succeq^2$ meets $SC$, too.
\end{proof}
\begin{lemma} \label{Lemma32}
The generalized row sum and least squares methods are self-consistent.
\end{lemma}
\begin{proof}
See \citet[Theorem~5]{ChebotarevShamis1998a}.
\end{proof}
\citet[Theorem~5]{ChebotarevShamis1998a} provide a characterization of self-consistent scoring procedures, while \citet[Table~2]{ChebotarevShamis1998a} gives some further examples.
\subsection{The connection of independence of irrelevant matches and self-consistency} \label{Sec33}
So far we have discussed two axioms, $IIM$ and $SC$. It turns out that they cannot be satisfied at the same time.
\begin{figure}[htbp]
\centering
\caption{The ranking problems of Example~\ref{Examp33}}
\label{Fig33}
\begin{subfigure}{.5\textwidth}
\centering
\subcaption{Ranking problem $(N,R,M)$}
\label{Fig33a}
\begin{tikzpicture}[scale=1, auto=center, transform shape, >=triangle 45]
\tikzstyle{every node}=[draw,shape=rectangle];
\node (n1) at (135:2) {$X_1$};
\node (n2) at (45:2) {$X_2$};
\node (n3) at (315:2) {$X_3$};
\node (n4) at (225:2) {$X_4$};
\foreach \from/\to in {n1/n2,n1/n4,n2/n3}
\draw (\from) -- (\to);
\draw [->] (n4) -- (n3);
\end{tikzpicture}
\end{subfigure
\begin{subfigure}{.5\textwidth}
\centering
\subcaption{Ranking problem $(N,R',M)$}
\label{Fig33b}
\begin{tikzpicture}[scale=1, auto=center, transform shape, >=triangle 45]
\tikzstyle{every node}=[draw,shape=rectangle];
\node (n1) at (135:2) {$X_1$};
\node (n2) at (45:2) {$X_2$};
\node (n3) at (315:2) {$X_3$};
\node (n4) at (225:2) {$X_4$};
\foreach \from/\to in {n1/n2,n1/n4,n2/n3}
\draw (\from) -- (\to);
\draw [->] (n3) -- (n4);
\end{tikzpicture}
\end{subfigure}
\end{figure}
\begin{example} \label{Examp33}
Let $(N,R,M), (N,R',M) \in \mathcal{R}_B^4 \cap \mathcal{R}_U^4 \cap \mathcal{R}_E^4$ be the ranking problems in Figure~\ref{Fig33} with the results and matches matrices
\[
R = \left[
\begin{array}{cccc}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & -1 & 0 \\
\end{array}
\right], \,
R' = \left[
\begin{array}{cccc}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & -1 \\
0 & 0 & 1 & 0 \\
\end{array}
\right], \text{ and }
M = \left[
\begin{array}{cccc}
0 & 1 & 0 & 1 \\
1 & 0 & 1 & 0 \\
0 & 1 & 0 & 1 \\
1 & 0 & 1 & 0 \\
\end{array}
\right].
\]
\end{example}
\begin{theorem} \label{Theo31}
There exists no scoring procedure that is independent of irrelevant matches and self-consistent.
\end{theorem}
\begin{proof}
The contradiction of the two properties is proved by Example~\ref{Examp33}.
The opponent sets are $O_1 = O_3 = \{ X_2, X_4 \}$ and $O_2 = O_4 = \{ X_1, X_3 \}$ in both ranking problems.
Assume to the contrary that there exists a scoring procedure $f: \mathcal{R}^n \to \mathbb{R}^n$, which is independent of irrelevant matches and self-consistent.
$IIM$ means that $f_1(N,R,M) \geq f_2(N,R,M) \iff f_1(N,R',M) \geq f_2(N,R',M)$.
\begin{enumerate}[label=\emph{\alph*})]
\item \label{Enum_a}
Consider the (identity) one-to-one mapping $g_{13}: O_1 \leftrightarrow O_3$, where $g_{13}(X_2) = X_2$ and $g_{13}(X_4) = X_4$. Since $r_{12} = r_{32} = 0$ and $0 = r_{14} > r_{34} = -1$, $g_{13}$ satisfies condition~\ref{SC_con1} of $SC$, hence $f_1(N,R,M) > f_3(N,R,M)$.
\item \label{Enum_b}
Consider the (identity) one-to-one mapping $g_{42}: O_4 \leftrightarrow O_2$, where $g_{42}(X_1) = X_1$ and $g_{42}(X_3) = X_3$. Since $r_{41} = r_{21} = 0$ and $1 = r_{43} > r_{23} = 0$, $g_{42}$ satisfies condition~\ref{SC_con1} of $SC$, hence $f_4(N,R,M) > f_2(N,R,M)$.
\item \label{Enum_c}
Suppose that $f_2(N,R,M) \geq f_1(N,R,M)$, implying $f_4(N,R,M) > f_3(N,R,M)$.
Consider the one-to-one correspondence $g_{12}: O_1 \leftrightarrow O_2$, where $g_{12}(X_2) = X_1$ and $g_{12}(X_4) = X_3$. Since $r_{12} = r_{21} = 0$ and $r_{14} = r_{23} = 0$, $g_{12}$ satisfies condition~\ref{SC_con3} of $SC$, hence $f_1(N,R,M) > f_2(N,R,M)$. It is a contradiction.
\end{enumerate}
Thus only $f_1(N,R,M) > f_2(N,R,M)$ is allowed.
Note that ranking problem $(N,R',M)$ can be obtained from $(N,R,M)$ by the permutation $\sigma: N \to N$ such that $\sigma(X_1) = X_2$, $\sigma(X_2) = X_1$, $\sigma(X_3) = X_4$ and $\sigma(X_4) = X_3$. The above argument results in $f_2(N,R',M) > f_1(N,R',M)$, contrary to independence of irrelevant matches.
To conclude, no scoring procedure can meet $IIM$ and $SC$ simultaneously.
\end{proof}
\begin{corollary} \label{Col31}
The row sum method violates self-consistency.
\end{corollary}
\begin{proof}
It is an immediate consequence of Lemma~\ref{Lemma31} and Theorem~\ref{Theo31}.
\end{proof}
\begin{corollary} \label{Col32}
The generalized row sum and least squares methods violate independence of irrelevant matches.
\end{corollary}
\begin{proof}
It follows from Lemma~\ref{Lemma32} and Theorem~\ref{Theo31}.
\end{proof}
A set of axioms is said to be \emph{logically independent} if none of them are implied by the others.
\begin{corollary} \label{Col33}
$IIM$ and $SC$ are logically independent axioms.
\end{corollary}
\begin{proof}
It is a consequence of Corollaries~\ref{Col31} and \ref{Col32}.
\end{proof}
\section{How to achieve possibility?} \label{Sec4}
Impossibility results, like the one in Theorem~\ref{Theo31}, can be avoided in at least two ways: by introducing some restrictions on the class of ranking problems considered, or by weakening one or more axioms.
\subsection{Domain restrictions} \label{Sec41}
Besides the natural subclasses of ranking problems introduced in Section~\ref{Sec21}, the number of objects can be limited, too.
\begin{proposition} \label{Prop41}
The generalized row sum and least squares methods are independent of irrelevant matches and self-consistent on the set of ranking problems with at most three objects $\mathcal{R}^n | n \leq 3$.
\end{proposition}
\begin{proof}
$IIM$ is vacuous on the set $\mathcal{R}^n | n \leq 3$ since the axiom involves four different objects, so any self-consistent scoring procedure is appropriate, and Lemma~\ref{Lemma32} provides the result.
\end{proof}
Proposition~\ref{Prop41} has some significance since ranking is not trivial if $n=3$.
However, if at least four objects are allowed, the situation is much more severe.
\begin{proposition} \label{Prop42}
There exists no scoring procedure that is independent of irrelevant matches and self-consistent on the set of balanced, unweighted and extremal ranking problems with four objects $\mathcal{R}_B^4 \cap \mathcal{R}_U^4 \cap \mathcal{R}_E^4$.
\end{proposition}
\begin{proof}
The ranking problems of Example~\ref{Examp33}, used for verifying the impossibility in Theorem~\ref{Theo31}, are from the set $\mathcal{R}_B^4 \cap \mathcal{R}_U^4 \cap \mathcal{R}_E^4$.
\end{proof}
Proposition~\ref{Prop42} does not cover the class of round-robin ranking problems, on which another possibility result emerges.
\begin{proposition} \label{Prop43}
The row sum, generalized row sum and least squares methods are independent of irrelevant matches and self-consistent on the set of round-robin ranking problems $\mathcal{R}_R$.
\end{proposition}
\begin{proof}
Due to axioms \emph{agreement} \citep[Property~3]{Chebotarev1994} and \emph{score consistency} \citep{Gonzalez-DiazHendrickxLohmann2013}, the generalized row sum and least squares ranking methods coincide with the row sum on the set of $\mathcal{R}_R$, so Lemmata~\ref{Lemma31} and \ref{Lemma32} provide $IIM$ and $SC$, respectively.
\end{proof}
Perhaps it is not by chance that characterizations of the row sum method were suggested on this -- or even more restricted -- domain \citep{Young1974, HanssonSahlquist1976, Rubinstein1980, NitzanRubinstein1981, Henriet1985, Bouyssou1992}.
\subsection{Weakening of independence of irrelevant matches} \label{Sec42}
For the relaxation of $IIM$, a property discussed by \citet{Chebotarev1994} will be used.
\begin{definition} \label{Def41}
\emph{Macrovertex} \citep[Definition~3.1]{Chebotarev1994}:
Let $(N,R,M) \in \mathcal{R}^n$ be a ranking problem.
Object set $V \subseteq N$ is called \emph{macrovertex} if $m_{ik} = m_{jk}$ for all $X_i, X_j \in V$ and $X_k \in N \setminus V$.
\end{definition}
Objects in a macrovertex have the same number of comparisons against any object outside the macrovertex. The comparison structure in $V$ and $N \setminus V$ can be arbitrary. The existence of a macrovertex depends only on the matches matrix $M$, or, in other words, on the comparison multigraph of the ranking problem.
\begin{axiom} \label{Axiom41}
\emph{Macrovertex independence ($MVI$)} \citep[Property~8]{Chebotarev1994}:
Let $V \subseteq N$ be a macrovertex in ranking problems $(N,T),(N,T') \in \mathcal{R}^n$ and $X_i, X_j \in V$ be two different objects such that $(N,T)$ and $(N,T')$ are identical but $t'_{ij} \neq t_{ij}$.
Scoring procedure $f: \mathcal{R}^n \to \mathbb{R}^n$ is called \emph{macrovertex independent} if $f_k(N,T) \geq f_\ell(N,T) \Rightarrow f_k(N,T') \geq f_\ell(N,T')$ for all $X_k, X_\ell \in N \setminus V$.
\end{axiom}
Macrovertex independence says that the order of objects outside a macrovertex is independent of the number and result of comparisons between the objects inside the macrovertex.
\begin{corollary} \label{Col41}
$IIM$ implies $MVI$.
\end{corollary}
Note that if $V$ is a macrovertex, then $N \setminus V$ is not necessarily another macrovertex. Hence the ``dual'' of property $MVI$ can be introduced.
\begin{axiom} \label{Axiom42}
\emph{Macrovertex autonomy ($MVA$)}:
Let $V \subseteq N$ be a macrovertex in ranking problems $(N,T),(N,T') \in \mathcal{R}^n$ and $X_k, X_\ell \in N \setminus V$ be two different objects such that $(N,T)$ and $(N,T')$ are identical but $t'_{k \ell} \neq t_{k \ell}$.
Scoring procedure $f: \mathcal{R}^n \to \mathbb{R}^n$ is called \emph{macrovertex autonomous} if $f_i(N,T) \geq f_j(N,T) \Rightarrow f_i(N,T') \geq f_j(N,T')$ for all $X_i, X_j \in V$.
\end{axiom}
Macrovertex autonomy says that the order of objects inside a macrovertex is not influenced by the number and result of comparisons between the objects outside the macrovertex.
\begin{corollary} \label{Col42}
$IIM$ implies $MVA$.
\end{corollary}
Similarly to $IIM$, changing the matches matrix -- as allowed by properties $MVI$ and $MVA$ -- may lead to an unconnected ranking problem.
\begin{figure}[htbp]
\centering
\caption{The comparison multigraph of Example~\ref{Examp41}}
\label{Fig41}
\begin{tikzpicture}[scale=1, auto=center, transform shape]
\tikzstyle{every node}=[draw,shape=rectangle];
\node[color=blue] (n1) at (240:3) {\textcolor{blue}{$\mathbf{X_1}$}};
\node[color=blue] (n2) at (180:3) {\textcolor{blue}{$\mathbf{X_2}$}};
\node[color=blue] (n3) at (120:3) {\textcolor{blue}{$\mathbf{X_3}$}};
\node (n4) at (60:3) {$X_4$};
\node (n5) at (0:3) {$X_5$};
\node (n6) at (300:3) {$X_6$};
\foreach \from/\to in {n1/n5,n2/n5,n3/n5}
\draw (\from) -- (\to);
\draw[transform canvas={xshift=0.3ex},color=red,semithick](n1) -- (n4);
\draw[transform canvas={xshift=-0.3ex},color=red,semithick](n1) -- (n4);
\draw[color=red,semithick](n1) -- (n5);
\draw[transform canvas={yshift=0.3ex},color=red,semithick](n2) -- (n4);
\draw[transform canvas={yshift=-0.3ex},color=red,semithick](n2) -- (n4);
\draw[color=red,semithick](n2) -- (n5);
\draw[transform canvas={yshift=0.3ex},color=red,semithick](n3) -- (n4);
\draw[transform canvas={yshift=-0.3ex},color=red,semithick](n3) -- (n4);
\draw[color=red,semithick](n3) -- (n5);
\draw[transform canvas={xshift=0.5ex},dashed,semithick](n2) -- (n3);
\draw[transform canvas={xshift=0ex},dashed,semithick](n2) -- (n3);
\draw[transform canvas={xshift=-0.5ex},dashed,semithick](n2) -- (n3);
\draw[transform canvas={xshift=0.5ex},dotted,thick](n5) -- (n6);
\draw[transform canvas={xshift=0ex},dotted,thick](n5) -- (n6);
\draw[transform canvas={xshift=-0.5ex},dotted,thick](n5) -- (n6);
\draw[transform canvas={yshift=0ex},dotted,thick](n4) -- (n5);
\end{tikzpicture}
\end{figure}
\begin{example} \label{Examp41}
Consider a ranking problem with the comparison multigraph in Figure~\ref{Fig41}.
The object set $V = \{ \mathbf{X_1,X_2,X_3} \}$ is a macrovertex as the number of (red) edges from any node inside $V$ to any node outside $V$ is the same (two to $X_4$, one to $X_5$, and zero to $X_6$). $V$ remains a macrovertex if comparisons inside $V$ (represented by dashed edges) or comparisons outside $V$ (dotted edges) are changed.
Macrovertex independence requires that the relative ranking of $X_4$, $X_5$, and $X_6$ does not depend on the number and result of comparisons between the objects $X_1$, $X_2$, and $X_3$.
Macrovertex autonomy requires that the relative ranking of $X_1$, $X_2$, and $X_3$ does not depend on the number and result of comparisons between the objects $X_4$, $X_5$, and $X_6$.
The implications of $MVI$ and $MVA$ are clearly different since object set $N \setminus V = \{ X_4, X_5, X_6 \}$ is not a macrovertex because $m_{14} = 2 \neq 1 = m_{15}$.
\end{example}
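To make the macrovertex condition easy to check computationally, the following Python sketch tests Definition~\ref{Def41} directly on a matches matrix. The matrix below is only loosely based on Figure~\ref{Fig41}: the comparison counts between $V$ and $N \setminus V$ follow the text of Example~\ref{Examp41}, while the comparisons inside $V$ and inside $N \setminus V$ are assumed values chosen purely for illustration.
\begin{verbatim}
import numpy as np

def is_macrovertex(M, V):
    # Macrovertex check: every object in V must have the same number of
    # comparisons against each object outside V.
    n = M.shape[0]
    outside = [k for k in range(n) if k not in V]
    return all(len({int(M[i, k]) for i in V}) == 1 for k in outside)

# Hypothetical matches matrix for X_1,...,X_6 (0-indexed). The counts
# between V = {X_1, X_2, X_3} and the rest follow the example above (two
# comparisons with X_4, one with X_5, none with X_6); the remaining
# entries are assumptions made for this illustration only.
M = np.zeros((6, 6), dtype=int)
for i in (0, 1, 2):                # X_1, X_2, X_3
    M[i, 3] = M[3, i] = 2
    M[i, 4] = M[4, i] = 1
M[1, 2] = M[2, 1] = 3              # assumed comparisons inside V
M[3, 4] = M[4, 3] = 1              # assumed comparisons outside V
M[4, 5] = M[5, 4] = 3

print(is_macrovertex(M, {0, 1, 2}))  # True:  V is a macrovertex
print(is_macrovertex(M, {3, 4, 5}))  # False: N \ V is not (m_14 != m_15)
\end{verbatim}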
\begin{corollary} \label{Col43}
The row sum method satisfies macrovertex independence and macrovertex autonomy.
\end{corollary}
\begin{proof}
It is an immediate consequence of Lemma~\ref{Lemma31} and Corollaries~\ref{Col41} and \ref{Col42}.
\end{proof}
\begin{lemma} \label{Lemma41}
The generalized row sum and least squares methods are macrovertex independent and macrovertex autonomous.
\end{lemma}
\begin{proof}
\citet[Property~8]{Chebotarev1994} has shown that generalized row sum satisfies $MVI$. The proof remains valid in the limit $\varepsilon \to \infty$ if the least squares ranking is defined to be unique, for instance, the sum of ratings of objects in all components of the comparison multigraph is zero.
Consider $MVA$.
Let $\mathbf{s} = \mathbf{s}(N,T)$, $\mathbf{s}' = \mathbf{s}(N,T')$, $\mathbf{x} = \mathbf{x}(\varepsilon)(N,T)$, $\mathbf{x}' = \mathbf{x}(\varepsilon)(N,T')$ and $\mathbf{q} = \mathbf{q}(N,T)$, $\mathbf{q}' = \mathbf{q}(N,T')$.
Let $V$ be a macrovertex and $X_i, X_j \in V$ be two arbitrary objects.
Suppose to the contrary that $x_i \geq x_j$, but $x_i' < x_j'$, hence $x_i' - x_i < x_j' - x_j$.
Let $x_k' - x_k = \max_{X_g \in V} (x_g' - x_g)$ and $x_\ell' - x_\ell = \min_{X_g \in V} (x_g' - x_g)$, therefore $x_k' - x_k > x_\ell' - x_\ell$ and $x_k' - x_k \geq x_g' - x_g \geq x_\ell' - x_\ell$ for any object $X_g \in V$.
For object $X_k$, definition~\ref{Def22} results in
\begin{equation} \label{eq1}
x_k = (1+\varepsilon m n)s_k + \varepsilon \sum_{X_g \in V} m_{kg} (x_g - x_k) + \varepsilon \sum_{X_h \in N \setminus V} m_{kh} (x_h - x_k).
\end{equation}
Apply \eqref{eq1} to object $X_\ell$. The difference of these two equations is
\begin{eqnarray} \label{eq2}
x_k - x_\ell & = & (1+\varepsilon m n) (s_k - s_\ell) + \varepsilon \sum_{X_g \in V} \left[ m_{kg} (x_g - x_k) - m_{\ell g} (x_g - x_\ell) \right] + \nonumber \\
& & + \varepsilon \sum_{X_h \in N \setminus V} \left[ m_{kh} (x_h - x_k) - m_{\ell h} (x_h - x_\ell) \right].
\end{eqnarray}
Note that $m_{kh} = m_{\ell h}$ for all $X_h \in N \setminus V$ since $V$ is a macrovertex, therefore \eqref{eq2} is equivalent to
\begin{eqnarray} \label{eq3}
\left( 1 + \varepsilon \sum_{X_h \in N \setminus V} m_{kh} \right) \left( x_k - x_\ell \right) & = & (1+\varepsilon m n) (s_k - s_\ell) + \nonumber \\
& & + \varepsilon \sum_{X_g \in V} \left[ m_{kg} (x_g - x_k) - m_{\ell g} (x_g - x_\ell) \right].
\end{eqnarray}
Apply \eqref{eq3} to the ranking problem $(N,T')$:
\begin{eqnarray} \label{eq4}
\left( 1 + \varepsilon \sum_{X_h \in N \setminus V} m_{kh}' \right) \left( x_k' - x_\ell' \right) & = & (1+\varepsilon m n) (s_k' - s_\ell') + \nonumber \\
& & + \varepsilon \sum_{X_g \in V} \left[ m_{kg}' (x_g' - x_k') - m_{\ell g}' (x_g' - x_\ell') \right].
\end{eqnarray}
Let $\Delta_{ij} = (x_i' - x_j') - (x_i - x_j)$ for all $X_i, X_j \in V$. Note that $m_{kh}' = m_{kh}$ for all $X_h \in N \setminus V$, $m_{kg}' = m_{kg}$ and $m_{\ell g}' = m_{\ell g}$ for all $X_g \in V$ as well as $s_k' = s_k$ and $s_\ell' = s_\ell$ since only comparisons outside $V$ may change. Take the difference of \eqref{eq4} and \eqref{eq3}:
\begin{equation} \label{eq5}
\left( 1 + \varepsilon \sum_{X_h \in N \setminus V} m_{kh} \right) \Delta_{k \ell} = \varepsilon \sum_{X_g \in V} \left( m_{kg} \Delta_{gk} - m_{\ell g} \Delta_{g \ell} \right).
\end{equation}
Due to the choice of the indices $k$ and $\ell$, $\Delta_{k \ell} > 0$, $\Delta_{gk} \leq 0$, and $\Delta_{g \ell} \geq 0$. Hence the left-hand side of \eqref{eq5} is positive, while its right-hand side is nonpositive, a contradiction. Therefore only $x_i' - x_i = x_j' - x_j$ can hold, which implies the condition required by $MVA$.
The same derivation can be implemented for the least squares method. With the notation $\Delta_{ij} = (q_i' - q_j') - (q_i - q_j)$ for all $X_i, X_j \in V$, we get -- analogously to \eqref{eq5} as $\varepsilon \to \infty$ --
\begin{equation} \label{eq6}
\sum_{X_h \in N \setminus V} m_{kh} \Delta_{k \ell} = \sum_{X_g \in V} \left( m_{kg} \Delta_{gk} - m_{\ell g} \Delta_{g \ell} \right).
\end{equation}
But $\Delta_{k \ell} > 0$, $\Delta_{gk} \leq 0$, and $\Delta_{g \ell} \geq 0$ are not enough for a contradiction now: \eqref{eq6} may hold if $\sum_{X_h \in N \setminus V} m_{kh} = 0$, that is, if $X_k$ is not connected to any object outside the macrovertex $V$, and, in addition, $\Delta_{gk} = 0$ and $\Delta_{g \ell} = 0$ whenever $m_{kg} = m_{\ell g} > 0$.
However, if there exists no object $X_g \in N \setminus V$ such that $m_{kg} = m_{\ell g} > 0$, then there is no connection between object sets $V$ and $N \setminus V$ since $V$ is a macrovertex, and we have two independent ranking subproblems, where the least squares ranking is unique according to the extension of definition~\ref{Def23}, so $MVA$ holds.
On the other hand, if there exists an object $X_g \in N \setminus V$ such that $m_{kg} = m_{\ell g} > 0$, then $\Delta_{gk} = 0$ and $\Delta_{g \ell} = 0$, but $\Delta_{k \ell} = \Delta_{g \ell} - \Delta_{gk} > 0$, which is a contradiction.
Therefore $q_i' - q_i = q_j' - q_j$ holds, which implies the condition required by $MVA$.
\end{proof}
Lemma~\ref{Lemma41} leads to another possibility result.
\begin{proposition} \label{Prop44}
The generalized row sum and least squares methods are macrovertex autonomous, macrovertex independent and self-consistent.
\end{proposition}
This statement turns out to be more general than the one obtained by restricting the domain to round-robin ranking problems in Proposition~\ref{Prop43}.
\begin{corollary} \label{Col44}
$MVA$ or $MVI$ implies $IIM$ on the domain of round-robin ranking problems $\mathcal{R}_R$.
\end{corollary}
\begin{proof}
Let $(N,T),(N,T') \in \mathcal{R}_R^n$ be two ranking problems and $X_i,X_j,X_k, X_\ell \in N$ be four different objects such that $(N,T)$ and $(N,T')$ are identical but $t'_{k \ell} \neq t_{k \ell}$.
Consider the macrovertex $V = \{ X_i,X_j \}$. Macrovertex autonomy means $f_i(N,T) \geq f_j(N,T) \Rightarrow f_i(N,T') \geq f_j(N,T')$, the condition required by $IIM$.
Consider the macrovertex $V' = \{ X_k,X_\ell \}$. Macrovertex independence means $f_i(N,T) \geq f_j(N,T) \Rightarrow f_i(N,T') \geq f_j(N,T')$, the condition required by $IIM$.
\end{proof}
\subsection{Weakening of self-consistency} \label{Sec43}
We think self-consistency is more difficult to dispute than independence of irrelevant matches, but, on the basis of the motivation of $SC$ in Section~\ref{Sec32}, there is an obvious way to soften it by being more tolerant with respect to opponents: $X_i$ is not required to be better than $X_j$ if it achieves the same result against stronger opponents.
\begin{axiom} \label{Axiom43}
\emph{Weak self-consistency} ($WSC$):
A scoring procedure $f: \mathcal{R}^n \to \mathbb{R}^n$ is called \emph{weakly self-consistent} if the following implication holds for any ranking problem $(N,R,M) \in \mathcal{R}^n$ and for any objects $X_i,X_j \in N$:
if there exists a decomposition of the ranking problem $(N,R,M)$ into $m$ unweighted ranking problems -- that is, $R = \sum_{p=1}^m R^{(p)}$, $M = \sum_{p=1}^m M^{(p)}$, and $(N,R^{(p)},M^{(p)}) \in \mathcal{R}_U^n$ is an unweighted ranking problem for all $p = 1,2, \dots ,m$ -- in a way that enables a one-to-one mapping $g^{(p)}$ from $O^{(p)}_i$ onto $O^{(p)}_j$ such that $r_{ik}^{(p)} \geq r_{j g^{(p)}(k)}^{(p)}$ and $f_k(N,R,M) \geq f_{g^{(p)}(k)}(N,R,M)$ for all $p = 1,2, \dots ,m$ and $X_k \in O_i^{(p)}$, then
$f_i(N,R,M) \geq f_{j}(N,R,M)$, furthermore, $f_i(N,R,M) > f_{j}(N,R,M)$ if $r_{ik}^{(p)} > r_{j g^{(p)}(k)}^{(p)}$ for at least one $1 \leq p \leq m$ and $X_k \in O_i^{(p)}$.
\end{axiom}
It can be seen that self-consistency (Axiom~\ref{Axiom32}) formalizes conditions \ref{SC_con1}-\ref{SC_con6}, while weak self-consistency only requires the scoring procedure to satisfy \ref{SC_con1}, \ref{SC_con2}, and \ref{SC_con4}-\ref{SC_con6}.
\begin{corollary} \label{Col45}
$SC$ implies $WSC$.
\end{corollary}
\begin{lemma} \label{Lemma42}
The row sum method is weakly self-consistent.
\end{lemma}
\begin{proof}
Let $(N,R,M) \in \mathcal{R}^n$ be a ranking problem such that $R = \sum_{p=1}^m R^{(p)}$, $M = \sum_{p=1}^m M^{(p)}$ and $(N,R^{(p)},M^{(p)}) \in \mathcal{R}_U^n$ is an unweighted ranking problem for all $p = 1,2, \dots ,m$.
Let $X_i,X_j \in N$ be two objects and assume that for all $p = 1,2, \dots ,m$ there exists a one-to-one mapping $g^{(p)}$ from $O^{(p)}_i$ onto $O^{(p)}_j$ such that $r_{ik}^{(p)} \geq r_{j g^{(p)}(k)}^{(p)}$ and $s_k(N,R,M) \geq s_{g^{(p)}(k)}(N,R,M)$ for all $X_k \in O_i^{(p)}$.
Since each $g^{(p)}$ is a bijection onto $O_j^{(p)}$, we obtain $s_i(N,R,M) = \sum_{p=1}^m \sum_{X_k \in O_i^{(p)}} r_{ik}^{(p)} \geq \sum_{p=1}^m \sum_{X_k \in O_i^{(p)}} r_{j g^{(p)}(k)}^{(p)} = s_j(N,R,M)$. Furthermore, $s_i(N,R,M) > s_j(N,R,M)$ if $r_{ik}^{(p)} > r_{j g^{(p)}(k)}^{(p)}$ for at least one $p = 1,2, \dots ,m$ and $X_k \in O_i^{(p)}$.
\end{proof}
The last possibility result comes immediately.
\begin{proposition} \label{Prop45}
The row sum method is independent of irrelevant matches and weakly self-consistent.
\end{proposition}
\begin{proof}
It follows from Lemmata~\ref{Lemma31} and \ref{Lemma42}.
\end{proof}
According to Lemma~\ref{Lemma42}, the violation of self-consistency by row sum (see Corollary~\ref{Col31}) is a consequence of condition~\ref{SC_con3}: the row sums of $X_i$ and $X_j$ are the same even if $X_i$ achieves the same result as $X_j$ against stronger opponents.
It is a crucial argument against the use of row sum for ranking in tournaments which are not organized in a round-robin format, supporting the empirical findings of \citet{Csato2017c} for Swiss-system chess team tournaments.
\section{Conclusions} \label{Sec5}
\begin{table}[htbp]
\centering
\caption{Summary of the axioms}
\label{Table1}
\begin{subtable}{\textwidth}
\centering
\begin{tabularx}{0.9\textwidth}{l CC} \toprule
Axiom & Abbreviation & Definition \\ \midrule
Independence of irrelevant matches & $IIM$ & Axiom~\ref{Axiom31} \\
Self-consistency & $SC$ & Axiom~\ref{Axiom32} \\
Macrovertex independence & $MVI$ & Axiom~\ref{Axiom41} \\
Macrovertex autonomy & $MVA$ & Axiom~\ref{Axiom42} \\
Weak self-consistency & $WSC$ & Axiom~\ref{Axiom43} \\ \bottomrule
\end{tabularx}
\end{subtable}
\vspace{0.25cm}
\begin{subtable}{\textwidth}
\begin{tabularx}{\textwidth}{l CCC} \toprule
& \multicolumn{3}{c}{Is it satisfied by the particular method?} \\
Axiom & Row sum (Definition~\ref{Def21}) & Generalized row sum (Definition~\ref{Def22}) & Least squares (Definition~\ref{Def23}) \\ \midrule
Independence of irrelevant matches & \textcolor{PineGreen}{\ding{52}} & \textcolor{BrickRed}{\ding{55}} & \textcolor{BrickRed}{\ding{55}} \\
Self-consistency & \textcolor{BrickRed}{\ding{55}} & \textcolor{PineGreen}{\ding{52}} & \textcolor{PineGreen}{\ding{52}} \\
Macrovertex independence & \textcolor{PineGreen}{\ding{52}} & \textcolor{PineGreen}{\ding{52}} & \textcolor{PineGreen}{\ding{52}} \\
Macrovertex autonomy & \textcolor{PineGreen}{\ding{52}} & \textcolor{PineGreen}{\ding{52}} & \textcolor{PineGreen}{\ding{52}} \\
Weak self-consistency & \textcolor{PineGreen}{\ding{52}} & \textcolor{PineGreen}{\ding{52}} & \textcolor{PineGreen}{\ding{52}} \\ \bottomrule
\end{tabularx}
\end{subtable}
\end{table}
The paper has discussed the problem of ranking objects in a paired comparison-based setting, which allows for different preference intensities as well as incomplete and multiple comparisons, from a theoretical perspective. We have used five axioms for this purpose, and have analysed three scoring procedures with respect to them. Our findings are presented in Table~\ref{Table1}.
However, our main contribution is a basic impossibility result (Theorem~\ref{Theo31}). The theorem involves two axioms: one -- called independence of irrelevant matches -- imposes a kind of independence on the order of two objects, while the other -- self-consistency -- requires objects with an obviously better performance to be ranked higher.
We have also aspired to obtain some positive results. Domain restriction is fruitful in the case of round-robin tournaments (Proposition~\ref{Prop43}), whereas limiting the intensity and the number of preferences does not eliminate the impossibility once at least four objects are involved (Proposition~\ref{Prop42}; but see Proposition~\ref{Prop41}). Self-consistency has a natural weakening that row sum satisfies together with independence of irrelevant matches (Proposition~\ref{Prop45}), although $SC$ seems to be a more plausible property than $IIM$.
Independence of irrelevant matches can be refined through the concept of macrovertex so that the relative ranking of two objects is required to be unaffected by an outside comparison only if the comparison multigraph has a special structure. The resulting possibility theorem (Proposition~\ref{Prop44}) is more general than the positive result for round-robin ranking problems (see Corollary~\ref{Col44}).
There remains an unexplored gap between our impossibility and possibility theorems since the latter allows for more than one scoring procedure. Actually, generalized row sum and least squares methods cannot be distinguished with respect to the properties examined here, as illustrated by Table~\ref{Table1}.\footnote{~Some of their differences are highlighted by \citet{Gonzalez-DiazHendrickxLohmann2013}.}
The loss of independence of irrelevant matches makes characterizations on the general domain complicated since self-consistency is not an easy axiom to work with. Despite these challenges, the axiomatic construction of scoring procedures is a natural continuation of the current research.
\section*{Acknowledgements}
\addcontentsline{toc}{section}{Acknowledgements}
\noindent
We thank \emph{S\'andor Boz\'oki} for useful advice. \\
Anonymous reviewers provided valuable comments and suggestions on earlier drafts. \\
The research was supported by OTKA grant K 111797 and by the MTA Premium Post Doctorate Research Program.
\vspace{-.75em}
In many competitions, a large number of entrants compete against each other and are then {ordered} based on performance. Prize money is distributed to the entrants based on their rank in the order, with higher ranks receiving more money than lower ranks.
The question that we are interested in is: how should prize money be distributed among the entrants? That is, how much money should
go to the winner of the contest? How much to 2nd place? How much to 1,128th place?
\vspace{-1em}
\subsection{Motivation}
\vspace{-.65em}
We became interested in this problem in the context of daily \emph{fantasy sports}\footnote{In fantasy sports, participants build a team of real-world athletes, and the team earns points based on the actual real-world performance of the athletes.
Traditional fantasy sports competitions run for an entire professional sports season; daily fantasy competitions typically run for just a single day or week.
\vspace{-.25em}}, a growing sector of online fantasy sports competitions where users pay a fee to enter and can win real prize money.
Daily fantasy sports were legalized in the United States in 2006 by the Unlawful Internet Gambling Enforcement Act, which classified them as \emph{games of skill}. Since then, the industry has been dominated by two companies,
FanDuel and DraftKings. In 2015, these companies collected a combined \$3 billion in entry fees, which generated an estimated \$280 million in revenue \cite{updatedrevenues}. Analysts project continued industry growth as daily fantasy attracts a growing portion of the 57 million people who actively play traditional fantasy sports \cite{eilersResearch,fstaDemo}.
Yahoo launched a daily fantasy sports product in July 2015. Work on the contest management portion of this product has led to interesting economic and algorithmic challenges involving player pricing, revenue maximization, fill-rate prediction and of course, payout structure generation.
\smallskip
\noindent \textbf{The Importance of Payout Structure.}
Payout structure has been identified as an important factor in determining how appealing a competition is to users.
Payouts are regularly discussed on forums and websites devoted to the daily fantasy sports industry, and these structures have a substantial effect on the strategies that contest entrants pursue (see, e.g., \cite{roto1,roto2,roto3,roto4,roto5}).
Furthermore, considerable attention has been devoted to payout structures in related contexts. For example, popular articles discuss the payout structures used in World Series of Poker (WSOP) events (see Section \ref{sec:poker} for details), and at least one prominent poker tournament director has said that ``payout structure could determine whether or not a player comes back to the game.'' \cite{poker2}.
\vspace{-1em}
\subsection{Payouts in Daily Fantasy Sports}
\vspace{-.6em}
For some types of fantasy sports contests, the appropriate payout structure is obvious. For example, in a ``Double Up'' contest, roughly half of the entrants win back twice the entry fee, while the other half wins nothing.
However, some of the most popular contests are analogous to real-world golf and poker tournaments, in which the winner should win a very large amount of money, second place should win slightly less, and so on. We refer to such competitions as \emph{tournaments}.
Manually determining a reasonable payout structure for each tournament offered is unprincipled and laborious, especially when prize pools and contestant counts vary widely across contests. Furthermore, given typical constraints, manually constructing even a single payout structure is difficult, even ``virtually impossible'' in the words of a WSOP Tournament Director \cite{poker2}.
The challenge is amplified for online tournaments where the number of contestants and prizes awarded can be orders of magnitude larger than in traditional gaming and sporting: FanDuel and DraftKings run contests with hundreds of thousands of entrants and up to \$15 million in prizes.
Accordingly, our goal is to develop efficient algorithms for automatically determining payouts.
\vspace{-1em}
\subsection{Summary of Contributions}
\vspace{-.6em}
Our contributions are two-fold. First, we (partially) formalize the properties that a payout structure for a daily fantasy tournament should satisfy.
Second, we present several algorithms for calculating such payout structures based on a general two stage framework. In particular, we present an efficient heuristic that scales to extremely large tournaments and is currently in production at Yahoo.
All methods are applicable beyond fantasy sports to any large tournament, including those for golf, fishing, poker, and online gaming (where very large prize pools are often crowd-funded \cite{crowdfunding}).
\section{Payout Structure Requirements}
\vspace{-.5em}
\label{sec:requirements}
We begin by formalizing properties that we want in a payout structure.
A few are self-evident.
\vspace{-.3em}
\begin{itemize}[itemsep=-.1em]
\item \textbf{(Prize Pool Requirement)} The total amount of money paid to the users must be equal to the Total Prize Pool. This is a hard requirement, for legal reasons: if a contest administrator says it will pay out \$1 million, it must pay out exactly \$1 million.
\item\textbf{(Monotonicity Requirement)} The prizes should satisfy \emph{monotonicity}. First place should win at least as much as second place, who should win at least as much as third, and so on.
\end{itemize}
\vspace{-.3em}
\noindent There are less obvious requirements as well.
\vspace{-.3em}
\begin{itemize}[itemsep=-.1em]
\item \textbf{(Bucketing Requirement)} To concisely publish payout structures, prizes should fall into a manageable set of ``buckets'' such that all users within one bucket get the same prize. It is not desirable to pay out thousands of distinct prize amounts.
\item \textbf{(Nice Number Requirement)} Prizes should be paid in amounts that are aesthetically pleasing. Paying a prize of $\$1,000$ is preferable to $\$1,012.11$, or even to $\$1,012$.
\item \textbf{(Minimum Payout Requirement)}: It is typically unsatisfying to win an amount smaller than the entry fee paid to enter the contest. So any place awarded a non-zero amount should receive at least some minimum amount $E$. Typically, we set $E$ to be 1.5 times the entry fee.
\end{itemize}
\vspace{-.3em}
\noindent Finally, the following characteristic is desirable:
\vspace{-.3em}
\begin{itemize}[itemsep=-.1em]
\item \textbf{(Monotonic Bucket Sizes)}: Buckets should increase in size when we move from higher ranks to lower ranks.
For example, it is undesirable for 348 users to receive a payout of \$20, 2 users to receive \$15, and 642 users to receive \$10.
\end{itemize}
Of course, it is not enough to simply find a payout structure satisfying all of the requirements above. For example,
a winner-take-all payout structure satisfies all requirements, but is not appealing to entrants.
Thus, our algorithms proceed in two stages. We first determine an ``initial'', or ideal, payout structure that
captures some intuitive notions of attractiveness and fairness. We then modify the initial payout structure \emph{as little as possible} to satisfy the requirements.
Before discussing this process, three final remarks regarding requirements are in order.
\medskip
\noindent \textbf{Small Contests.} In tournaments with few entrants, it is acceptable to pay each winner a distinct prize. In this case, the bucketing requirement is superfluous and choosing payouts is much easier.
\medskip
\noindent \textbf{Handling Ties.} In daily fantasy sports and other domains such as poker and golf, entrants who tie for a rank typically split all prizes due to those entrants equally. Accordingly, even if the initial payout structure satisfies the Nice Number Requirement, the actual payouts may not. However,
this is inconsequential: the purpose of the Nice Number Requirement is to ensure that aesthetically pleasing payouts are published, not to ensure that aesthetically pleasing payouts are received.
\medskip \noindent \textbf{What are ``Nice Numbers''?}
There are many ways to define nice numbers, i.e., the numbers that we deem aesthetically pleasing.
Our algorithms will work with any such definition, as long as it comes with an algorithm that, given a number $a \in \mathbb{R}$,
can efficiently return the largest nice number less than or equal to $a$.
Here we give one possible definition.
\begin{definition}[Nice Number]
\label{def:nice_numbers}
A nonnegative integer $X\in Z_+$ is a ``nice number'' if $X=A\cdot 10^K$, where $K, A \in Z_+$ with $A\le 1000$, and $A$ satisfies all of the following properties:
\vspace{-.3em}
\begin{enumerate}[itemsep=-.3em]
\item if $A\ge 10$ then $A$ is a multiple of $5$;
\item if $A\ge 100$ then $A$ is a multiple of $25$;
\item if $A\ge 250$ then $A$ is a multiple of $50$.
\end{enumerate}
\end{definition}
\vspace{-.3em}
\noindent Under Definition \ref{def:nice_numbers}, the nice numbers less than or equal to 1000 are:
\vspace{-.4em}
\begin{align*}
\{1, 2, 3, \dots, 10, 15, 20, \dots, 95,\allowbreak 100,\allowbreak 125, 150,
\dots, 225, 250, 300, 350, \dots, 950, 1000\}
\end{align*}
\vspace{-1.75em}
\noindent The nice numbers between 1000 and 3000 are
$\{1000, 1250, 1500, 1750, 2000, 2250, 2500, 3000\}$.
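For concreteness, a straightforward (and unoptimized) Python sketch of this definition, together with the rounding primitive assumed above, is given below; the function names are ours and purely illustrative.
\begin{verbatim}
def is_nice(x):
    # Strip factors of 10 until A <= 1000, then apply the divisibility
    # rules of the definition above to the remaining factor A.
    if x < 0 or x != int(x):
        return False
    a = int(x)
    while a >= 1000 and a % 10 == 0:
        a //= 10
    if a > 1000:
        return False
    if a >= 250:
        return a % 50 == 0
    if a >= 100:
        return a % 25 == 0
    if a >= 10:
        return a % 5 == 0
    return True

def largest_nice_leq(a):
    # Largest nice number <= a, found by scanning downward from floor(a).
    x = int(a)
    while x > 0 and not is_nice(x):
        x -= 1
    return x

print(largest_nice_leq(1012.11))                     # 1000
print([x for x in range(1000, 3001) if is_nice(x)])  # matches the list above
\end{verbatim}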
\subsection{Prior Work}
\vspace{-.5em}
\label{sec:prior_work}
While our exact requirements have not been previously formalized, observation of published payout structures for a variety of contests suggests that similar guiding principals are standard. Moreover, manually determining payouts to match these requirements is, anecdotally, not an easy task. For example, it is not hard to find partially successful attempts at satisfying nice number constraints.
\vspace{-1em}
\begin{figure}[h]
\centering
\includegraphics[width=.55\textwidth]{bassmaster_open_2015.eps}
\vspace{-1em}
\caption{Bassmaster Open payouts are not quite ``nice numbers'' \protect\cite{bassmasterInfo}.}
\label{fig:bmaster_failure}
\vspace{-.75em}
\end{figure}
For example, consider the 2015 Bassmaster Open fishing tournament, which paid out a top prize of \$51,400, instead of the rounder \$51,000 or \$50,000 (see Figure \ref{fig:bmaster_failure}).
The Bassmaster payout structure also violates our constraint on bucket size monotonicity.
Similar ``partially optimized'' payout structures can be found for poker \cite{pokerPayoutsFailure,poker2}, golf \cite{usopenprizes}, and other tournaments.
\smallskip \noindent
\textbf{Payout Structures for Poker.} \label{sec:poker}
Several popular articles describe the efforts of the World Series Of Poker (WSOP), in conjunction with Adam Schwartz of Washington and Lee University,
to develop an algorithm for determining payout structures for their annual ``Main Event'' \cite{poker1,poker2}.
Schwartz's solution, which was based on Newton's Method, was combined with manual intervention to determine the final payout structure in 2005.
WSOP still utilized considerable manual intervention when determining payouts until an update of the algorithm in 2009 made doing so unnecessary.
While a formal description of the algorithm is unavailable, it appears to be very different from ours; Schwartz has stated that their solution attempts to obtain payout structures with a ``constant second derivative'' (our solutions do not satisfy this property). Their work also departs qualitatively from ours in that they do not consider explicit nice number or bucket size requirements.
\smallskip \noindent
\textbf{Piecewise Function Approximation.}
As mentioned, our payout structure algorithms proceed in two stages. An initial payout curve is generated with a separate payout for every winning position. The curve is then modified to fit our constraints, which requires bucketing payouts so that a limited number of distinct prizes are paid. We seek the bucketed curve closest to our initial payout curve.
This curve modification task is similar to the well studied problem of optimal approximation of a function by a histogram (i.e., a piecewise constant function). This problem has received considerable attention, especially for applications to database query optimization \cite{Ioannidis:2003}. While a number of algorithmic results give exact and approximate solutions \cite{Jagadish:1998,Guha:2006}, unfortunately no known solutions easily adapt to handle our additional constraints beyond bucketing.
\vspace{-1em}
\section{Our Solution}
\vspace{-.5em}
Let $B$ denote the total prize pool, $N$ denote the number of entrants who should win a non-zero amount, and $P_i$ denote the prize that we decide to award to place $i$. In general, $B$, $P_1$ and $N$ are user-defined parameters.
$P_1$ can vary widely in fantasy sports contests, anywhere from $.05\cdot B$ to nearly $.5\cdot B$, but $.15 \cdot B$ is a standard choice (i.e., first place wins 15\% of the prize pool). Typically, $N$ is roughly 25\% of the total number of contest entrants, although it varies as well.
Our solution proceeds in two stages. We first determine an initial payout structure that does not necessarily satisfy Bucketing or Nice Number requirements. The initial payout for place $i$ is denoted $\pi_i$. We then modify the payouts to satisfy our requirements. In particular, we search for feasible payouts $P_1, \ldots, P_N$ that are nearest to our initial payouts in terms of sum-of-squared error.
\vspace{-1em}
\subsection{Determining the Initial Payout Structure}
\label{sec:initial_structure}
\vspace{-.5em}
First, to satisfy the Minimum Payout Requirement, we start by giving each winning place $E$ dollars. This leaves $B-N \cdot E$ additional dollars to disburse among the $N$ winners. How should we do this?
We have decided that it is best to disburse the remaining budget according to a \emph{power law}. That is, the amount of the budget that is given to place $i$ should be proportional to $1/i^{\alpha}$ for some fixed constant $\alpha > 0$. It is easy to see that for \emph{any} positive value of $\alpha$, the resulting payouts will satisfy the Monotonicity Requirement, but there is a unique value of $\alpha$ ensuring that the payouts sum to exactly the Total Prize Pool $B$.
Specifically, we need to choose the exponent $\alpha$ to satisfy
\vspace{-.5em}
$$B-N\cdot E=\sum_{i=1}^N\frac{P_1-E}{i^\alpha}.$$
\vspace{-1.25em}
\noindent We can efficiently solve this equation for $\alpha$ to additive error less than $.01$ via binary search. We then define the ideal payout to place $i$
to be $\pi_i := E + \frac{P_1-E}{i^\alpha}$. This definition ensures both that first place gets paid exactly $\$P_1$, and that the sum of all of the tentative payouts is exactly $B$.
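A minimal Python sketch of this computation is given below. It assumes only the quantities already defined ($B$, $N$, $E$, $P_1$), requires $P_1 - E \le B - N\cdot E \le N\,(P_1-E)$ so that a solution exists, and exploits the fact that the sum on the right-hand side above is strictly decreasing in $\alpha$. The value of $E$ in the sample call is a hypothetical minimum prize, not one fixed by the text.
\begin{verbatim}
def solve_alpha(B, N, E, P1, tol=0.01):
    # Binary search for alpha such that sum_{i=1..N} (P1 - E)/i^alpha
    # equals the remaining budget B - N*E; the sum decreases in alpha.
    target = B - N * E
    remaining = lambda a: sum((P1 - E) / i ** a for i in range(1, N + 1))
    lo, hi = 1e-6, 1.0
    while remaining(hi) > target:     # expand the bracket if necessary
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if remaining(mid) > target:   # sum still too large: increase alpha
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def initial_payouts(B, N, E, P1):
    alpha = solve_alpha(B, N, E, P1)
    return [E + (P1 - E) / i ** alpha for i in range(1, N + 1)]

# Illustration with the running example (E = 15 is a hypothetical minimum):
payouts = initial_payouts(B=1_000_000, N=10_000, E=15, P1=150_000)
\end{verbatim}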
\vspace{-1em}
\begin{figure}[h]
\centering
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=.75\textwidth]{power_law.eps}
\vspace{-.5em}
\caption{Ideal payouts using power law method.}
\label{fig1}
\end{subfigure}%
\hfill
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=.75\textwidth]{exp_law.eps}
\vspace{-.5em}
\caption{Ideal payouts using exponential distribution.}
\label{fig2}
\end{subfigure}
\vspace{-.25em}
\caption{Possible initial payout structure when $N=\text{10,000}$, $P_1=\text{150,000}$, and $B=\text{1 million}$.}
\label{examplefig}
\vspace{-1em}
\end{figure}
Why use a power law? Empirically, a power law ensures that as we move from 1st place to 2nd to 3rd and so on, payouts drop off at a nice pace: fast enough that top finishers are richly rewarded relative to the rest of the pack, but slow enough that users in, say, the 10th percentile still win a lot of money. A power law curve also encourages increased prize differences between higher places, a property cited as desirable by WSOP organizers \cite{poker2}.
For illustration, consider a tournament where 40,000 entrants vie for \$1 million.
If 1st place wins 15\% of the prize pool and 25\% of entrants should win a non-zero prize, then $P_1 =\$150,000$ and $N=10,000$.
Figure \ref{fig1} reveals the initial payouts determined by our power law method.
Figure \ref{fig2} reveals what initial payouts would be if we used an exponential distribution instead, with prizes proportional to $1/\alpha^i$ rather than $1/i^\alpha$. Such distributions are a popular choice for smaller tournaments, but the plot reveals that they yield much more top-heavy payouts than a power law approach.
In our example, only the top few dozen places receive more than the minimum prize.
\vspace{-.75em}
\begin{figure}[h]
\centering
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=.75\textwidth]{empirical_power_law_fantasy.eps}
\vspace{-.5em}
\caption{Large fantasy sports tournaments.}
\label{empirical_payouts:sub1}
\end{subfigure}%
\hfill
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=.75\textwidth]{empirical_power_law_nonfantasy.eps}
\vspace{-.5em}
\caption{Other well known tournaments.}
\label{empirical_payouts:sub2}
\end{subfigure}
\vspace{-.7em}
\caption{Log plots of existing tournament payout structures indicate linear structure.}
\label{fig:empirical_payouts}
\vspace{-.75em}
\end{figure}
As further justification, we check in Figure \ref{fig:empirical_payouts} that a power law roughly models payouts for existing fantasy sports contests and other large tournaments. Since $\log(i^\alpha) = \alpha\log(i)$, plotting payouts with both axes on a logarithmic scale will reveal a linear trend when the payout distribution approximates a power law. This trend is visible in all of the tournaments checked in Figure \ref{fig:empirical_payouts}.
Obtaining quantitative goodness-of-fit metrics for the power law can be difficult \cite{clauset2009power} especially given the heavy ``value binning'' present in payout structures \cite{virkar2014power}. Nevertheless, standard fitting routines \cite{powerLawCode} yield an average $p$-value of $.237$ for the Figure \ref{fig:empirical_payouts} tournaments. This value is in line with calculated $p$-values for well known empirical power law distributions with binned values \cite{virkar2014power}.
Overall, the power law's simplicity, empirical effectiveness, and historical justification make it an ideal candidate for generating initial prize values.
\vspace{-1em}
\subsection{Satisfying the Remaining Requirements}
\label{sub:rounding}
\vspace{-.5em}
Now that we have an initial payout structure $(\pi_1, \dots, \pi_N)$, our goal becomes to identify a related payout structure that is ``close'' to this ideal one, but satisfies our Bucketing and Nice Number requirements. To measure ``closeness'' of payout structures we have found that (squared) Euclidean distance works well in practice: we define the distance between two payout structures $(P_1, \dots, P_N)$ and $(Q_1, \dots, Q_N)$ to be $\sum_{i=1}^N (P_i - Q_i)^2$.
This metric is equivalent to the popular ``V-optimal'' measure for approximation of a function by a general histogram \cite{Ioannidis:1995}.
With a metric in hand, our task is formalized as an optimization problem, Problem \ref{main_problem}.
Specifically, suppose we are given a target number of buckets $r$. Then the goal is to partition the set $\{1,\ldots, N\}$ into $r$ buckets $S_1,\ldots,S_r$ (each containing a set of consecutive places), and to choose a set of payouts $\Pi_1, \dots, \Pi_r$ for each bucket, forming a final solution $(S_1, \dots, S_r, \Pi_1, \dots, \Pi_r)$.
\vspace{-.5em}
\begin{figure*}[h]
\begin{mymathbox}
\begin{problem}[Payout Structure Optimization]
\label{main_problem}
For a given set of ideal payouts $\{\pi_1, \ldots, \pi_N\}$ for $N$ contest winners, a total prize pool of $B$, and minimum payout $E$, find $(S_1, \dots, S_r, \Pi_1, \dots, \Pi_r)$ to optimize:
\vspace{-.75em}
\begin{eqnarray*}
\min \sum_{j=1}^r \sum_{i\in S_j}(\pi_i-\Pi_{j})^2&\text{subject to:}&\\
E \le \Pi_r<\Pi_{r-1}< \dots< \Pi_1,& & \hspace{-0mm} \text{(Monotonicity \& Min. Payout Requirements)}\\
\sum_{j=1}^r\Pi_j|S_j|=B,&& \text{(Prize Pool Requirement)}\\
\Pi_j \mbox{ is a ``nice number'' },& j \in [r]\footnote{For ease of notation we use $[T]$ to denote the set of integers $\{1,2,\ldots,T\}$.}&\text{(Nice Number Requirement)}\\
\sum_{j=1}^r |S_j|=N,&& \text{(Ensure Exactly $N$ Winners)}\\
0\leq |S_1| \leq |S_{2}| \leq \dots \leq |S_r|,&&\text{(Monotonic Bucket Sizes}\footnote{Setting $S_1 = \emptyset$, $S_2 = \emptyset$, etc. chooses a payout structure with fewer buckets than the maximum allowed.}\text{)}
\end{eqnarray*}
$\textit{where } S_j = \left\{\sum_{i < j} |S_i|+1, \sum_{i < j} |S_i|+2, \ldots, \sum_{i \leq j} |S_i|\right\} \text{ for } j \in [r]$.
\end{problem}
\end{mymathbox}
\vspace{-1.25em}
\end{figure*}
One advantage of our approach is that Problem \ref{main_problem} is agnostic to the initial curve $(\pi_1, \dots, \pi_N)$. Our algorithms could just as easily be applied to an ideal payout curve generated, for example, using the ``constant second derivative'' methodology of the World Series of Poker \cite{poker2}.
\smallskip
\noindent {\textbf{Problem Feasibility.}
Note that, for a given set of constraints, Problem \ref{main_problem} \emph{could} be infeasible as formulated: it may be that no assignment to $(S_1, \dots, S_r, \Pi_1, \dots, \Pi_r)$ satisfies the Nice Number and Bucket Size Monotonicity constraints while giving payouts that sum to $B$.
While the problem is feasible for virtually all fantasy sports tournaments tested (see Section \ref{sec:experiments}), we note that it is easy to add more flexible constraints that always yield a feasible solution.
For example, we can soften the requirement on $N$ by adding a fixed objective function penalty for extra or fewer winners.
With this change, Problem~\ref{main_problem} is feasible whenever $B$ is a nice number: we can award a prize of $B$ to first place and prizes of zero to all other players. Experimentally, there are typically many feasible solutions, the best of which are vastly better than the trivial ``winner-take-all'' solution.
\smallskip
\noindent {\textbf{Exact Solution via Dynamic Programming.}
While Problem \ref{main_problem} is complex, when feasible it can be solved exactly in pseudo-polynomial time via multi-dimensional dynamic programming.
The runtime of the dynamic program depends on the number of potential prize assignments, which includes all of the nice numbers between $E$ and $B$. Since many reasonable definitions (including our Definition \ref{def:nice_numbers}) choose nice numbers to spread out exponentially as they increase,
we assume that this value is bounded by $O(\log B)$.
\vspace{-.25em}
\begin{theorem}[Dynamic Programming Solution]
\label{thm:dynamic}
Assuming that there are $O(\log B)$ nice numbers in the range $[E,B]$, then Problem \ref{main_problem} can be solved in pseudo-polynomial time $O(rN^3B\log^2B)$.
\end{theorem}
\vspace{-.25em}
A formal description of the dynamic program and short proof of its correctness are included in Appendix \ref{app:dyno_proof}.
Unfortunately, despite a reasonable theoretical runtime, the dynamic program requires $O(rN^2 B \log B)$ space, which quickly renders the solution infeasible in practice.
\vspace{-1em}
\section{Integer Linear Program}
\label{sec:offtheshelf}
\vspace{-.5em}
Alternatively, despite its complexity, we show that it is possible to formulate Problem \ref{main_problem} as a standard integer linear program. Since integer programming is computationally hard in general, this does not immediately yield an efficient algorithm for the problem. However, it does allow for the application of off-the-shelf optimization packages to the payout structure problem.
\vspace{-.75em}
\begin{figure*}[h]
\begin{mymathbox}
\begin{problem}[Payout Structure Integer Program]\label{integer_program}
For a given set of ideal payouts $\{\pi_1, \ldots, \pi_N\}$, a total prize pool of $B$, a given set of acceptable prize payouts $\{p_1 > p_2 > \ldots > p_m\}$, and an allowed budget of $r$ buckets solve:
\vspace{-.5em}
\begin{eqnarray*}
\min \sum_{i\in[N],j\in[r],k\in[m]} x_{i,j,k}\cdot (\pi_i - p_k)^2&\text{subject to:}&\\
\textbf{Problem constraints:}\\
\sum_{k\in[m]} (k+1/2)\cdot\tilde{x}_{j,k} - k\cdot\tilde{x}_{j+1,k} \leq 0,&j\in[r-1], &\text{(Monotonicity Requirements)}\\
\sum_{i\in[N],j\in[r],k\in[m]} x_{i,j,k} \cdot p_k=B,&& \text{(Prize Pool Requirement)}\\
\sum_{i\in[N], k\in[m]} x_{i,j,k} - x_{i,j+1,k} \leq 0,&j\in[r-1],&\text{(Monotonic Bucket Sizes)}\\
\textbf{Consistency constraints:}\\
\sum_{j\in[r],k\in[m]} x_{i,j,k} = 1,&i\in[N],&\text{(One Bucket Per Winner)}\\
\sum_{k\in[m]} \tilde{x}_{j,k} \leq 1,&j\in[r],&\text{(One Prize Per Bucket)}\\
\tilde{x}_{j,k} - x_{i,j,k} \geq 0,&i\in[N], j\in[r], k\in[m], &\text{(Prize Consistency)}
\end{eqnarray*}
\end{problem}
\end{mymathbox}
\vspace{-1.25em}
\end{figure*}
To keep the formulation general, assume that we are given a fixed set of acceptable prize payouts, $\{p_1 > p_2 > \ldots > p_m\}$. These payouts may be generated, for example, using Definition \ref{def:nice_numbers} for nice numbers. In our implementation, the highest acceptable prize is set to $p_1 = P_1$, where $P_1$ is the pre-specified winning prize. Additionally, to enforce the minimum payout requirement, we chose $p_m \geq E$. Our integer program, formalized as Problem \ref{integer_program}, involves the following variables:
\vspace{-.5em}
\begin{itemize}[itemsep=-.3em]
\item $N\times r \times m$ \emph{binary} ``contestant variables'' $x_{i,j,k}$. In our final solution, $x_{i,j,k} = 1$ if and only if contestant $i$ is placed in prize bucket $S_j$ and receives payout $p_k$.
\item $r\times m$ \emph{binary} ``auxiliary variables'' $\tilde{x}_{j,k}$.
$\tilde{x}_{j,k} = 1$ if and only if bucket $S_j$ is assigned payout $p_k$. Constraints ensure that $x_{i,j,k}$ only equals $1$ when $\tilde{x}_{j,k} = 1$. If, for a particular $j$, $\tilde{x}_{j,k} = 0$ for all $k$ then $S_j$ is not assigned a payout, meaning that the bucket is not used.
\end{itemize}
\vspace{-.5em}
It is easy to extract a payout structure from any solution to the integer program. Showing that the payouts satisfy our Problem \ref{main_problem} constraints is a bit more involved. A proof is in Appendix \ref{app:ip_proof}.
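For illustration, the formulation can be written down almost verbatim in an off-the-shelf modelling library. The Python/PuLP sketch below mirrors Problem~\ref{integer_program} variable for variable; it is an illustrative reformulation only (the experiments in Section~\ref{sec:experiments} use GLPK through SCPSolver), and the instance data passed to it, as well as the function name, are placeholders.
\begin{verbatim}
import pulp

def solve_payout_ip(ideal, prizes, r, B):
    # ideal  : ideal payouts pi_1..pi_N;  prizes : p_1 > ... > p_m;
    # r      : maximum number of buckets; B      : total prize pool.
    N, m = len(ideal), len(prizes)
    I, J, K = list(range(N)), list(range(r)), list(range(m))
    prob = pulp.LpProblem("payout_structure", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (I, J, K), cat="Binary")  # contestant vars
    y = pulp.LpVariable.dicts("y", (J, K), cat="Binary")     # auxiliary vars
    # objective: squared distance to the ideal payouts
    prob += pulp.lpSum(x[i][j][k] * (ideal[i] - prizes[k]) ** 2
                       for i in I for j in J for k in K)
    # prize pool requirement
    prob += pulp.lpSum(x[i][j][k] * prizes[k]
                       for i in I for j in J for k in K) == B
    for i in I:   # one bucket (and prize) per winner
        prob += pulp.lpSum(x[i][j][k] for j in J for k in K) == 1
    for j in J:   # one prize per bucket, and prize consistency
        prob += pulp.lpSum(y[j][k] for k in K) <= 1
        for i in I:
            for k in K:
                prob += y[j][k] - x[i][j][k] >= 0
    for j in J[:-1]:   # monotone prizes and monotone bucket sizes
        prob += pulp.lpSum((k + 0.5) * y[j][k] - k * y[j + 1][k]
                           for k in K) <= 0
        prob += pulp.lpSum(x[i][j][k] - x[i][j + 1][k]
                           for i in I for k in K) <= 0
    prob.solve()
    # report the prize and size of every bucket that is actually used
    return [(prizes[k], sum(int(round(pulp.value(x[i][j][k]))) for i in I))
            for j in J for k in K if pulp.value(y[j][k]) > 0.5]
\end{verbatim}
Because the model contains $N \cdot r \cdot m$ binary contestant variables, such a direct formulation is only practical for small contests, which is consistent with the behaviour reported in Section~\ref{sec:experiments} and motivates the heuristic of the next section.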
\vspace{-1em}
\section{Heuristic Algorithm}
\label{sec:heuristic}
\vspace{-.5em}
Next we describe a heuristic algorithm for Problem \ref{main_problem} that is used in production at Yahoo. The algorithm is less rigorous than our integer program and can potentially generate payout structures that violate constraints. However, it scales much better than the IP and experimentally produces stellar payouts. The heuristic proceeds in four stages.
\smallskip
\noindent \textbf{Stage 1: Initialize Bucket Sizes.}
First the algorithm chooses tentative bucket sizes $|S_1|\le \ldots\le |S_r|$. We set $|S_1|=|S_2|=|S_3|=|S_4|=1$. The choice to use 4 ``singleton buckets'' by default is flexible: the algorithm can easily accommodate more. If $N-\sum_{i=1}^4|S_i|=1$, we define $|S_5|=1$ and stop. Otherwise we set $|S_t|=\lceil \beta\cdot |S_{t-1}|\rceil $ where $\beta\ge 1$ is a parameter of the heuristic.
The algorithm stops when $\lceil\beta^2 |S_t| \rceil + \lceil \beta |S_t| \rceil +\sum_{i=1}^t|S_i|>N$ and $\lceil \beta |S_t| \rceil+\sum_{i=1}^t|S_i|\le N$. We define
$$|S_{t+1}|=\left\lfloor \frac{N-\sum_{i=1}^t|S_i|}{2}\right\rfloor,\qquad
|S_{t+2}|=\left\lceil \frac{N-\sum_{i=1}^t |S_i|}{2}\right\rceil.$$
An initial value for $\beta$ can be determined by solving $\beta + \beta^2 + \ldots + \beta^{r-4} = N - 4$ via binary search. If the heuristic produces more than $r$ buckets using the initial $\beta$, we increase $\beta$ and iterate.
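A compact Python sketch of this stage is shown below. It assumes that $\beta$ has already been fixed as described above and that the parameters leave a nonnegative remainder for the final two buckets; the function name is illustrative.
\begin{verbatim}
import math

def initial_bucket_sizes(N, beta):
    # Stage 1 sketch: four singleton buckets, geometric growth by a
    # factor beta, then an even split of the remaining places into the
    # last two buckets. Assumes N > 5 and a suitable beta.
    sizes = [1, 1, 1, 1]
    if N - sum(sizes) == 1:
        return sizes + [1]
    while True:
        last, total = sizes[-1], sum(sizes)
        # stopping rule: two more geometric buckets would overshoot N
        if total + math.ceil(beta * last) + math.ceil(beta ** 2 * last) > N:
            break
        sizes.append(math.ceil(beta * last))
    remainder = N - sum(sizes)
    return sizes + [remainder // 2, remainder - remainder // 2]
\end{verbatim}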
\smallskip
\noindent \textbf{Stage 2: Initialize Prizes.}
Next, we begin by rounding the first tentative prize, $\pi_1$, down to the nearest nice number. The difference between $\pi_1$ and the rounded number is called the {\it leftover}, which we denote by $L$.
Iteratively, for each bucket $S_2, \ldots, S_t$ we sum all of the tentative prizes in the bucket with the leftover from the previous buckets. Let $R_t$ be this number and define $\Pi_t$ to equal $R_t/|S_t|$ rounded down to the nearest nice number. If $\Pi_t$ is greater than or equal to the prize in bucket $t-1$, $\Pi_{t-1}$, we simply merge all members of $S_t$ into $S_{t-1}$, assigning them prize $\Pi_{t-1}$.
At the end we may have some non-zero leftover $L$ remaining from the last bucket.
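The following Python sketch summarizes this stage; the exact leftover bookkeeping is our reading of the description (each bucket carries forward whatever it does not pay out), and \texttt{nice\_floor} stands for the rounding primitive of Section~\ref{sec:requirements}.
\begin{verbatim}
def initialize_prizes(ideal, sizes, nice_floor):
    # Stage 2 sketch. `ideal` holds the tentative per-place payouts,
    # `sizes` the Stage 1 bucket sizes, and nice_floor(x) returns the
    # largest nice number <= x. Each bucket's unspent amount is carried
    # forward as the leftover L (our reading of the description).
    buckets, start, leftover = [], 0, 0.0
    for size in sizes:
        pool = sum(ideal[start:start + size]) + leftover
        prize = nice_floor(pool / size)
        if buckets and prize >= buckets[-1][0]:
            # prize failed to decrease: merge into the previous bucket
            prize = buckets[-1][0]
            buckets[-1] = (prize, buckets[-1][1] + size)
        else:
            buckets.append((prize, size))
        leftover = pool - prize * size
        start += size
    return buckets, leftover   # [(prize, bucket size), ...] and leftover L
\end{verbatim}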
\smallskip
\noindent \textbf{Stage 3: Post-Process Non-monotonic Bucket Sizes.}
Although the initial buckets from Step 1 increase in size, potential bucket merging in Step 2 could lead to violations in the Monotonic Bucket Size constraint.
So, whenever $|S_t|$ is larger than $|S_{t+1}|$, we shift users to bucket $t+1$ until the situation is corrected. As a result we decrease the prizes for some users and increase the leftover $L$. We repeat this process starting from $S_1$ and ending at our lowest prize level bucket.
\smallskip
\noindent \textbf{Stage 4: Spend Leftover Funds.}
Finally, we modify payouts to spend any leftover $L$. We first spend as much as possible on singleton buckets 2 through 4. We avoid modifying first prize because it is often given as a hard requirement -- e.g. we want pay exactly \$1 million to the winner. In order from $i=2$ to $4$ we adjust $\Pi_i$ to equal $\min\{\Pi_i+L,(\Pi_{i-1}+\Pi_i)/2\}$, rounded down to a nice number. This spends as much of $L$ as possible, while avoiding too large an increase in each prize.
If $L$ remains non-zero, we first try to adjust only the final (largest) bucket, $S_k$.
If $L\ge |S_k|$ then we set $\Pi_k=\Pi_k+1$ and $L=L-|S_k|$, i.e. we increase the prize for every user in $S_k$ by $1$. Note that this could lead to nice number violations, which are not corrected. We repeat this process (possibly merging buckets) until $L<|S_k|$. If at this point $L$ is divisible by $\Pi_k$ we increase $|S_k|$ by $L/\Pi_k$ (thereby increasing the number of users winning nonzero prizes beyond $N$).
If $L$ is not divisible by $\Pi_k$, we rollback our changes to the last bucket and attempt to spend $L$ on the last \emph{two} buckets. Compute the amount of money available, which is the sum of all prizes in these buckets plus $L$. Fix the last bucket prize to be the minimal possible amount, $E$. Enumerate over possible sizes and integer prize amounts for the penultimate bucket, again ignoring nice number constraints. If the last bucket can be made to have integer size (with payout $E$), store the potential solution and evaluate a ``constraint cost'' to penalize any constraint violations. The constraint cost charges $100$ for each unit of difference if the number of winners is less than $N$, $1$ for each unit of difference if the number of winners is larger than $N$, and $10$ for each unit of violation in bucket size monotonicity. From the solutions generated return the one with minimal constraint cost.
\vspace{-1em}
\section{Experiments}
\vspace{-.5em}
\label{sec:experiments}
We conclude with experiments that confirm the effectiveness of both the integer program (Section \ref{sec:offtheshelf}) and our heuristic (Section \ref{sec:heuristic}). Both algorithms were implemented in Java and tested on a commodity laptop with a 2.6 GHz Intel Core i7 processor and 16 GB of 1600 MHz DDR3 memory.
For the integer program, we employ the off-the-shelf, open source GNU Linear Programming Kit (GLPK), accessed through the SCPSolver front-end \cite{GLPK,scpsolver}. The package uses a branch-and-cut algorithm for IPs with a simplex method for underlying linear program relaxations.
We construct experimental payout structures for a variety of daily fantasy tournaments from Yahoo, FanDuel, and DraftKings and test on non-fantasy tournaments as well.
For non-Yahoo contests, $P_1$ is set to the published winning prize or, when necessary, to a nearby nice number. The maximum number of buckets $r$ is set to match the number of buckets used in the published payout structure.
For fantasy sports, $E$ is set to the nearest nice number above $1.5$ times the entry fee. For all other contests (which often lack entry fees or have a complex qualification structure) $E$ is set to the nearest nice number above the published minimum prize.
\smallskip
\noindent \textbf{Quantitative Comparison.}
Our results can be evaluated by computing the Euclidean distance between our ideal payout curve, $\{\pi_1, \ldots, \pi_n\}$, and the bucketed curve $\{\Pi_1, \ldots, \Pi_m\}$. In heuristic solutions, if $m$ does not equal $n$, we extend the curves with zeros to compute the distance (which penalizes extra or missing winners). Our experimental data is included in Table \ref{tab:main_table}. Entries of ``--'' indicate that the integer program did not run to completion, possibly because no solution to Problem \ref{integer_program} exists. The cost presented is the sum of squared distances from the bucketed payouts to the ideal power law payouts, as defined in Problem \ref{main_problem}. Note that we do not provide a cost for the \emph{source} payouts since, besides the Yahoo contests, these structures were not designed to fit our proposed ideal payout curve and thus cannot be expected to achieve small objective function values.
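For reference, the cost column can be reproduced from a per-place payout list with a few lines of Python; the zero-padding implements the penalty for extra or missing winners mentioned above.
\begin{verbatim}
from itertools import zip_longest

def structure_cost(ideal, actual):
    # Sum of squared differences between the ideal per-place payouts and
    # the per-place payouts implied by a bucketed solution; the shorter
    # list is padded with zeros.
    return sum((a - b) ** 2
               for a, b in zip_longest(ideal, actual, fillvalue=0.0))
\end{verbatim}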
\vspace{-1.25em}
\begin{table*}[h]
\caption{Accuracy and Runtime (in milliseconds) for Integer Program (IP) vs. Heuristic (Heur.)\label{tab:main_table}}{%
\tiny
\centering
\vspace{-.75em}
\begin{tabular}{||c|c|c|c|c|c||c|c||c|c|c||}
\hhline{|t:======:t:==:t:===:t|}
Source&\specialcell{Prize Pool} & \specialcell{Top Prize} & \specialcell{Min. Prize} &\specialcell{\# of\\ Winners} & \specialcell{\# of\\ Buckets}& \specialcell{IP \\ Cost} & \specialcell{IP Time \\(ms)} & \specialcell{Heur. \\Cost} & \specialcell{Heur. \\Time (ms)} & \specialcell{Heur. \\ Extra \\ Winners} \\
\hhline{||-|-|-|-|-|-||-|-||---||}
Yahoo & 90 & 25 & 2& 30 & 7& .89 &7.6k &2.35 & 1 & 0 \\
Yahoo & 180 & 55 & 3& 30 & 10& 2.82 &725k &3.44 & 1 & 0\\
DraftKings & 500 & 100 & 8& 20 & 10& 6.15 &2.1k &9.21 & 1 &0 \\
Yahoo & 2250 & 650 & 150& 7 & 7& 32.4 &4.0k &187.4 & 1 &0 \\
Yahoo & 3000 & 300 & 2& 850 & 25& -- &-- &86.9 & 7 & 2 \\
FanDuel & 4000 & 900 & 50& 40 & 12& 20.7 &3716k &58.2 & 2 &1 \\
FanDuel & 4000 & 800 & 75& 16 & 7& 46.6 &2.9k &230.1 & 1 &4 \\
DraftKings & 5000 & 1250 & 150& 11 & 8& 52.5 &6.8k &123.5 & 1 &0 \\
Yahoo & 10000 & 1000 & 7& 550 & 25& -- &-- &97.3 & 8 &1 \\
DraftKings & 10000 & 1500 & 75& 42 & 12& 61.3 &1291k &173.7 & 2 &0 \\
FanDuel & 18000 & 4000 & 150& 38 & 10& 161.8 &131k &347.0 & 5 &0 \\
FanDuel & 100000 & 10000 & 2& 23000 & 25& -- &-- &3.1k & 152 &34 \\
Bassmaster & 190700 & 50000 & 2000& 40 & 15& -- &-- & 3.5k\textsuperscript{*} & 3 &0 \\
Bassmaster & 190000\textsuperscript{$\dagger$}\ & 50000 & 2000& 40 & 15& 2.5k &3462k & 2.8k & 1 &0 \\
FLW Fishing & 751588 & 100000 & 9000& 60 & 25& -- &-- & 6.0k\textsuperscript{*} & 3 &0 \\
FLW Fishing & 751500\textsuperscript{$\dagger$}\ & 100000 & 9000& 60 & 25& -- &-- & 6.0k & 2 &0 \\
FanDuel & 1000000 & 100000 & 15& 16000 & 25& -- &-- &5.3k & 203 &7 \\
DraftKings & 1000000 & 100000 & 5& 85000 & 40& -- &-- & 25.9k & 1.2k &0 \\
Bassmaster & 1031500 & 30000 & 10000& 55 & 25& -- &-- & 13.5k\textsuperscript{*} & 14 &0 \\
FanDuel & 5000000 & 1000000 & 40& 46000 & 30& -- &-- &44.3k & 1.0k &0 \\
PGA Golf & 9715981 & 1800000 & 20000& 69 & 69& -- &-- & 254.5k\textsuperscript{*} & 24 &0 \\
PGA Golf & 1000000\textsuperscript{$\dagger$}\ & 1800000 & 20000& 75 & 75& -- &-- & 215.9k\textsuperscript{*} & 23 &9 \\
DraftKings & 10000000 & 2000000 & 25& 125000 & 40& -- &-- & 78.7k & 1.7k &0 \\
Poker Stars& 10393400 & 1750000 & 15000& 160 & 25& -- &-- & 133.0k\textsuperscript{*} & 27 &0 \\
WSOP & 60348000 & 8000000 & 15000& 1000 & 30& -- &-- & 462.3k\textsuperscript{*} & 17 &0 \\
\hhline{|b:======:b:==:b:===:b|}
\multicolumn{11}{l}{\TstrutBig\specialcellleft{\textsuperscript{$\dagger$}\footnotesize{Contest is identical to the contest in the preceding row, but with the prize pool rounded to a nearby number in} \\ \footnotesize{an effort to force a solution involving only nice numbers to exist.}}}\\
\multicolumn{11}{l}{\Tstrut\textsuperscript{*}\footnotesize{Heuristic produced solution with nice number constraint violation for a single bucket.}}
\end{tabular}}
\vspace{-1em}
\end{table*}
As expected, when it succeeds in finding a solution, the integer program consistently outperforms the heuristic. However, the difference is rarely large, with heuristic cost typically below 5x that of the IP. Furthermore, the heuristic runs in less than 1.5 seconds for even the most challenging contests. Its ability to generate payouts when no solution to Problem \ref{integer_program} exists is also a substantial advantage: it always returns a solution, but potentially with a minor constraint violation.
\smallskip
\noindent \textbf{Constraint Violations.}
In our experiments, any such heuristic violation was a Nice Number violation, with no Bucket Size Monotonicity violations observed. 6 of the 7 Nice Number violations are unavoidable given the input minimum prize and prize pool. For example, the Fishing League Worldwide (FLW) fishing tournament has a prize pool of \$751,588 and a minimum prize of \$9,000. Since all nice numbers greater than or equal to \$9000 are multiples of \$5000, it is impossible to construct a fully satisfiable set of payouts summing to \$751,588. In all cases besides one (the PGA tournament with prize pool \$9,715,981) simply rounding the prize pool to a nearby number produced an input for which the heuristic output a solution with no constraint violations. However, in settings where the prize pool \emph{must} be a non-nice number (i.e., cannot be rounded up or down, for whatever reason), our heuristic's flexibility is an advantage over the more rigid integer program.
\vspace{-.5em}
\begin{figure*}[h]
\centering
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=.95\textwidth]{fanduel2.eps}
\caption{FanDuel, Baseball}
\end{subfigure}%
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=.92\textwidth]{fanduel1.eps}
\caption{FanDuel, Baseball}
\label{subfig:fduel2}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=.95\textwidth]{draftkings1.eps}
\caption{DraftKings, Baseball}
\label{subfig:dkings1}
\end{subfigure}
\vspace{-.5em}
\caption{Payout structures for small daily fantasy contests}
\label{fig:small_contests}
\vspace{-1.25em}
\end{figure*}
\medskip
\noindent \textbf{Qualitative Comparison.}
Beyond quantitative measures to compare algorithms for Problem \ref{main_problem}, evaluating our two stage framework as a whole requires a more qualitative approach. Accordingly, we include plots comparing our generated payout structures to existing published structures.
For small fantasy sports contests (Figure \ref{fig:small_contests}) both of our algorithms match payouts from FanDuel and DraftKings extremely well, often overlapping for large sections of the discretized payout curve. To our knowledge, FanDuel and DraftKings have not publicly discussed their methods for computing payouts; their methods may involve considerable manual intervention, so matching these structures algorithmically is encouraging.
In some cases, notably in Figures \ref{subfig:dkings1} and \ref{subfig:fduel2}, our payout curve is ``flatter'', meaning there is a smaller separation between the top prizes and lower prizes. Many poker and fantasy sports players prefer flatter structures due to reduced payout variance \cite{roto1,roto2}.
\vspace{-1em}
\begin{figure*}[h]
\centering
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=.75\textwidth]{fanduelLarge1.eps}
\vspace{-.5em}
\caption{FanDuel, Football}
\label{fig:large_contests_fd}
\end{subfigure}%
\hfill
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=.75\textwidth]{DraftKingsLarge1.eps}
\vspace{-.5em}
\caption{DraftKings, Football}
\label{fig:large_contests_dk}
\end{subfigure}
\vspace{-.75em}
\caption{Payout structures for large daily fantasy contests}
\label{fig:large_contests}
\vspace{-1em}
\end{figure*}
We compare larger fantasy contest payouts in Figure \ref{fig:large_contests}, plotting the $x$ axis on a log scale due to a huge number of winning places (16,000 and 125,000 for \ref{fig:large_contests_fd} and \ref{fig:large_contests_dk} respectively).
Again our results are very similar to those of FanDuel and DraftKings.
We also show that our algorithms can easily construct high quality payout structures for non-fantasy tournaments, avoiding the difficulties discussed in Section \ref{sec:prior_work}. Returning to the Bassmaster example from Figure \ref{fig:bmaster_failure}, we show more satisfying structures generated by our IP and heuristic algorithm in Figure \ref{fig:bmaster_fixed}. For the IP, we rounded the prize pool to \$190,000 from \$190,700 to ensure a solution to Problem \ref{integer_program} exists. However, even for the original prize pool, our heuristic solution has just one non-nice number payout of \$2,700 and no other constraint violations. Both solutions avoid the issue of non-monotonic bucket sizes exhibited by the published Bassmaster payout structure.
\vspace{-1em}
\begin{figure}[h]
\centering
\includegraphics[width=.6\textwidth]{bassmaster_fixed.eps}
\vspace{-.75em}
\caption{Alternative Bassmaster Open 2015 payouts}
\label{fig:bmaster_fixed}
\vspace{-1.25em}
\end{figure}
We conclude with an example from the popular World Series of Poker (WSOP). The 2015 Main Event paid out \$60,348,000 to 1000 winners in 31 buckets with far from ``nice number'' prizes \cite{wsop_events}.
However, several months prior to the tournament, organizers published a very different \emph{tentative} payout structure \cite{pokerPayoutsFailure}, which appears to have attempted to satisfy many of the constraints of Problem \ref{main_problem}: it uses mostly nice number prizes and nearly satisfies our Bucket Size Monotonicity constraint.
This tentative structure (visualized in Figure \ref{fig:wsop_tent}) suggests that WSOP organizers originally sought a better solution than the payout structure actually used. Perhaps the effort was abandoned due to time constraints: the WSOP prize pool is not finalized until just before the event.
We show in Table \ref{tab:wsop} that our heuristic can rapidly (in 17 milliseconds) generate an aesthetically pleasing payout structure for the final prize pool with the initially planned top prize of \$8 million, and just one minor nice number violation (our final bucket pays \$20,150). Our output and the actual WSOP payout structure are also compared visually in Figure \ref{fig:wsop_tent} to the power curve used in the first stage of our algorithm. In keeping with the WSOP tradition of paying out separate prizes to places 1-9 (the players who make it to the famous ``November Nine'' final table) we run our heuristic with 9 guaranteed singleton buckets instead of 4 (see Section \ref{sec:heuristic} for details).
\vspace{.5em}
\begin{minipage}{\textwidth}
\begin{minipage}[t]{0.49\textwidth}
\centering
\includegraphics[width=.79\textwidth]{wsop_2015_tentative2.eps}
\vspace{-.5em}
\includegraphics[width=.79\textwidth]{wsop_fix.eps}
\vspace{-.17em}
\captionof{figure}{WSOP 2015 tentative prizes and our alternative ``nice'' payout structure.}
\label{fig:wsop_tent}
\end{minipage}
\hfill
\begin{minipage}[t]{0.49\textwidth}
\centering
\tiny
\begin{tabular}{||c|c||c|c||}
\hhline{|t:==:t:==:t|}
\multicolumn{2}{||c||}{\specialcell{2015 WSOP \\Payouts}} & \multicolumn{2}{|c||}{\specialcell{Our Alternative \\Payouts}} \\
\hline
Place & Prize & Place & Prize \\
\hline
1 & \$7,680,021 & 1& \$8,000,000 \\
2 & \$4,469,171 & 2& \$4,000,000 \\
3 & \$3,397,103 & 3& \$2,250,000 \\
4 & \$2,614,558 & 4& \$1,750,000 \\
5 & \$1,910,971 & 5& \$1,250,000 \\
6 & \$1,426,072 & 6& \$1,000,000\\
7 & \$1,203,193 & 7 & \$950,000 \\
8 & \$1,097,009 & 8& \$850,000 \\
9 & \$1,001,020 & 9& \$700,000\\
10 & \$756,897 & \multirow{2}{*}{10 - 13}& \multirow{2}{*}{\$650,000} \\
11 - 12& \$526,778 & & \\
13 - 15& \$411,453 & \multirow{2}{*}{14 - 17}& \multirow{2}{*}{\$500,000} \\
16 - 18& \$325,034 & & \\
\multirow{2}{*}{19 - 27}& \multirow{2}{*}{\$262,574} & 18 - 23& \$300,000 \\
& & 24 - 29& \$225,000 \\
28 - 36& \$211,821 & 30 - 35& \$200,000 \\
37 - 45& \$164,086 & 36 - 42& \$150,000 \\
46 - 54& \$137,300 & 43 - 59& \$125,000\\
55 - 63& \$113,764 & \multirow{3}{*}{60 - 77}& \$95,000 \\
64 - 72& \$96,445& & \\
73 - 81& \$79,668 & & \\
82 - 90& \$68,624 & \multirow{2}{*}{78 - 99}& \multirow{2}{*}{\$75,000} \\
91 - 99& \$55,649& & \\
\multirow{2}{*}{100 - 162}& \multirow{2}{*}{\$46,890} & 100 - 128& \$60,000 \\
& & 129 - 164& \$55,000 \\
163 - 225& \$40,433 & 165 - 254& \$45,000 \\
226 - 288& \$34,157 & \multirow{2}{*}{255 - 345}& \multirow{2}{*}{\$35,000} \\
289 - 351& \$29,329 & & \\
352 - 414& \$24,622 & \multirow{2}{*}{346 - 441}& \multirow{2}{*}{\$25,000} \\
415 - 477& \$21,786 & &\\
478 - 549& \$19,500 &\multirow{2}{*}{442 - 710} & \multirow{2}{*}{\$22,500} \\
550 - 648& \$17,282 & & \\
649 - 1000& \$15,000 & 711 - 1000& \$20,150 \\
\hhline{|b:==:b:==:b|}
\end{tabular}
\captionof{table}{Our alternative payouts vs. actual WSOP payouts.\label{tab:wsop}}
\end{minipage}
\end{minipage}
\clearpage
\section{Introduction} \vspace{.1 in}
\subsection{Motivation}
Although matching models often assume that agents care only about their own allocation, there are many scenarios where people also care about the allocation received by their friends or family members. For example, couples entering residency may wish to be matched to programs in the same region, siblings may wish to attend the same school, and friends may want to share a hiking trip. Practitioners often employ ad-hoc solutions in an effort to accommodate these preferences.
This paper studies a special case of this problem, in which there are multiple copies of a homogeneous good. Each agent belongs to a group, and is successful if and only if members of her group receive enough copies for everyone in the group.
Examples of such settings include:
\begin{itemize}
\item {\em American Diversity Visa Lottery.} Each year 55,000 visas are awarded to citizens of eligible countries. Applicants are selected by lottery. Recognizing that families want to stay together, the state department grants visas to eligible family members of selected applicants.\footnote{Details of the 2022 Diversity Immigrant Visa Program are available at \url{https://travel.state.gov/content/dam/visas/Diversity-Visa/DV-Instructions-Translations/DV-2022-Instructions-Translations/DV-2022-Instructions-and-FAQs_English.pdf}.}
\item {\em Big Sur Marathon.} Many popular marathons limit the number of entrants and use a lottery to select applicants. The Big Sur Marathon uses several lotteries for different populations (i.e. locals, first-timers, and returning runners from previous years). One of these is a ``Groups and Couples" lottery which ``is open for groups of from 2-15 individuals, each of whom want to run the Big Sur Marathon but only if everyone in the group is chosen." In 2020, 702 tickets were claimed by 236 successful groups selected from 1296 applicants.
\footnote{More information about the 2020 Big Sur Marathon Drawing is available at \url{https://www.bigsurmarathon.org/random-drawing-results-for-the-2020-big-sur-marathon/}}
\item {\em Hiking Permits on Recreation.gov.} Many parks use a permit system to limit the number of hikers on popular trails. For example, the permits to hike Half Dome in Yosemite National Park are awarded through a pre-season lottery, as well as daily lotteries.\footnote{More information available at \url{https://www.nps.gov/yose/planyourvisit/hdpermits.htm}} To enable applicants to hike with friends and family, each applicant is allowed to apply for up to six permits.
\item {\em Discounted Broadway Tickets.} Many popular Broadway shows hold lotteries for discounted tickets. While some people may be happy going to a Broadway show alone, most prefer to share the experience with others. Recognizing this fact, theaters typically allow each applicant to request up to two tickets. On the morning of the show, winners are selected and given the opportunity to purchase the number of tickets that they requested.
\end{itemize}
Inspired by the last application, in the rest of the paper we will refer to a copy of the homogeneous good as a `ticket'.
The settings above present several challenges. First and foremost, the designer must prevent individuals from submitting multiple applications. In high-stakes environments such as the diversity visa lottery, this can be accomplished by asking applicants to provide government identification as part of their application. In applications with lower stakes, this is frequently accomplished by tying each application to an e-mail address, phone number, or social media account. The effectiveness of this approach will vary across settings. If the designer is concerned that individuals may be submitting multiple applications, then this concern should be addressed before anything else. In this paper, we assume that the designer has a way to identify each individual, and verify that nobody has submitted duplicate applications.
A second challenge is that designers do not know who belongs to each group. One solution is to ask applicants to identify members of their group in advance. While this is done for the diversity visa lottery and for affordable housing lotteries, it can be quite cumbersome. It requires additional effort from applicants, which may be wasted if their applications are not selected. In addition, to ensure that applicants do not submit false names, when awarding tickets the designer must verify that the identity of each recipient matches the information on the application form. Perhaps for these reasons, many designers opt for a simpler interface which allows applicants to specify how many tickets they wish to receive, but does not ask them to name who these tickets are for.
Motivated by these observations, we study two types of mechanisms: ``direct" mechanisms which ask applicants to identify members of their group, and mechanisms which only ask each applicant to specify a number of tickets requested. In the former case, the most natural approach is to place groups in a uniformly random order, and sequentially allocate tickets until no more remain. This procedure, which we refer to as the {\em Group Lottery}, is used, for example, to allocate affordable housing in New York City. In the latter case, an analogous procedure is often used: applicants are processed in a uniformly random order, with each applicant given the number of tickets that they requested until no tickets remain. We call this mechanism the {\em Individual Lottery}, and variants of it are used in all of the applications listed above.\footnote{
Recreation.gov goes into great detail about the algorithm used to generate a uniform random order of applicants (\url{https://www.recreation.gov/lottery/how-they-work}), while the FAQ for the Diversity Visa Lottery notes, ``a married couple may each submit a DV Lottery application and if either is selected, the other would also be entitled to a permanent resident card" (\url{https://www.dv-lottery.us/faq/}).}
\subsection{Concerns with existing approaches}
Although the Individual Lottery and the Group Lottery seem natural and are used in practice, they each have flaws. In the Individual Lottery, each member of a group can submit a separate application. This is arguably {\em unfair}, as members of large groups might have a much higher chance of success than individual applicants. In addition, the Individual Lottery may be {\em inefficient}. One reason for this is that there is no penalty for submitting a large request, so some individuals may ask for more tickets than their group needs.\footnote{Applicants are very aware of this. One of the authors received an e-mail from the organizer of a Half Dome trip who noted, ``It costs nothing extra to apply for 6 spots. If you do win, you might as well win big!" Meanwhile, a guide about the lottery for the Broadway show Hamilton advises, ``You can enter the lottery for either one or two seats. Always enter it for two. A friend you bring to Hamilton will be a friend for life" (\url{https://www.timeout.com/newyork/theater/hamilton-lottery}).} Even if this does not occur, multiple members of a group might apply and win tickets, resulting in some of these tickets going to waste.
Anecdotally, we see strong evidence of groups with multiple winners in the Big Sur Marathon lottery. Although the information page suggests that ``a single, designated group leader enters the drawing on behalf of the group," in 2019, the lottery winners included two teams titled ``Taylor's" (with leaders Molly Taylor and Amber Taylor, respectively), as well as a team titled ``What the Hill?" and another titled ``What the Hill?!"\footnote{More information about the drawing can be found at \url{https://web.archive.org/web/20200407192601/https://www.bigsurmarathon.org/drawing-info/}. The list of groups awarded in 2019 is available at \url{http://www.bigsurmarathon.org/wp-content/uploads/2018/07/Group-Winners-for-Website.pdf}}. These examples suggest that groups are (rationally) not abiding by the recommendation that only one member enter the lottery, and that some groups are receiving more tickets than needed.
The instruction that only one member of each group should apply to the Big Sur Marathon lottery suggests that the organizers intended to implement a Group Lottery. Although the Group Lottery overcomes some of the issues described above, it is also not perfectly fair nor perfectly efficient. This is because when only a few tickets remain, (i) small groups still have a chance of success while large groups do not, and (ii) these tickets may be wasted if the next group to be processed is large. Because these issues arise only at the end of the allocation process, one might hope that the resulting allocation will not be too unfair or inefficient.
Our first contribution is to quantify the unfairness and inefficiency of these mechanisms. While the fairness and efficiency of the Individual Lottery suffer when everyone from a group applies, how bad can the problem be? And is the intuition that the Group Lottery is approximately fair and efficient correct? Our answers to these questions are ``very bad", and ``yes," respectively. Although neither mechanism is perfectly fair or efficient, there is a large qualitative and quantitative difference between them. Our second contribution is to identify modifications to each algorithm which use the same user interfaces but offer improved fairness and/or efficiency. We elaborate on these contributions below.
\subsection{Overview of Model and Results}
We consider a model with $k$ identical tickets. The set of agents is partitioned into a set of groups, and agents have {\em dichotomous preferences}: an agent is successful if and only if members of her group receive enough tickets for everyone in the group. We treat the group structure as private information, unknown to the designer. Because there are only $k$ tickets, there can be at most $k$ successful agents. We define the efficiency of a lottery allocation to be the expected number of successful agents, divided by $k$. If this is at least $\beta$, then the allocation is {\em $\beta$-efficient}. A lottery allocation is {\em fair} if each agent has the same success probability, and {\em $\beta$-fair} if for any pair of agents, the ratio of their success probabilities is at least $\beta$.
Given these definitions, we seek lottery allocations that are both approximately efficient and approximately fair. Although this may be unattainable if groups are large, in many cases group sizes are much smaller than the total number of tickets. We define a family of instances characterized by two parameters, $\kappa$ and $\alpha$. The parameter $\kappa$ bounds the ratio of group size to total number of tickets, while $\alpha$ bounds the supply-demand ratio. For any $\kappa$ and $\alpha$, we provide worst-case performance guarantees in terms of efficiency and fairness.
We first consider a scenario where applicants can identify each member of their group. Here, the mechanism typically used is the Group Lottery. We show in Proposition~\ref{prop:GL-incentives} that this mechanism incentivizes agents to truthfully report their groups. Moreover, Theorem~\ref{thm:gl-is-good} establishes that the Group Lottery is $(1 - \kappa)$-efficient and \((1-2\kappa)\)-fair. It is not perfectly efficient, as tickets might be wasted if the size of the group being processed exceeds the number of remaining tickets. It is not perfectly fair, since once only a few tickets remain, a large group can no longer be successful, but a small group can. Proposition \ref{prop:gl-tightness} shows that this guarantee is tight.
Could there be a mechanism with stronger performance guarantees than the Group Lottery? Proposition~\ref{prop:badnews} establishes the limits of what can be achieved. Specifically, it says that there always exists an allocation \(\pi\) that is $(1-\kappa)$-efficient and fair, but for any $\epsilon > 0$, there are examples where any allocation that is $(1- \kappa + \epsilon)$-efficient is not even $\epsilon$-fair. To show the existence of the random allocation \(\pi\),
we use a generalization of the Birkhoff-von Neumann theorem proved by \citet{nguyen2016assignment}. By awarding groups according to the allocation \(\pi\), we can obtain a mechanism that attains the best possible performance guarantees. Therefore, the $2 \kappa$ loss in fairness in the Group Lottery can be thought of as the ``cost" of using a simple procedure that orders groups uniformly, rather than employing a Birkhoff-von Neumann decomposition to generate the allocation \(\pi\).
In many applications, developing an interface that allows applicants to list their group members may be too cumbersome. This motivates the study of a second scenario, where applicants are only allowed to specify the number of tickets they need. The natural mechanism in this setting is the Individual Lottery. Unfortunately, Theorem~\ref{thm:il-is-bad} establishes that the Individual Lottery may lead to arbitrarily inefficient and unfair outcomes. It is perhaps not surprising that the Individual Lottery will be inefficient if agents request more tickets than needed, or if each agent has a large chance of success. However, we show that the waste due to over-allocation may be severe even if all agents request only their group size and demand far exceeds supply. Furthermore, because the probability of success will be roughly proportional to group size, small groups are at a significant disadvantage.
Can we achieve approximate efficiency and fairness without asking applicants to identify each member of their group? We show that this is possible with a minor modification to the Individual Lottery which gives applicants with larger requests a lower chance of being allocated. This eliminates the incentive to inflate demand, and reduces the possibility of multiple winners from the same group. To make the allocation fair, we choose a particular method for biasing the lottery against large requests: sequentially select individuals with probability inversely proportional to their request. We call this approach the {\em \NameProposedMechanism}. In the {\NameProposedMechanism}, a group of four individuals who each request four tickets has the same chance of being drawn next as a group of two individuals who each request two tickets. As a result, outcomes are similar to the Group Lottery.
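For intuition, the core of this weighting is just a biased draw. The sketch below (illustrative only, and covering only the selection step rather than the full mechanism) samples the next applicant with probability inversely proportional to his or her request; a group of four agents each requesting four tickets then has the same total chance of being drawn next as a group of two agents each requesting two.
\begin{verbatim}
import random

def draw_next_applicant(requests, pool, rng=random):
    """Sample one applicant from `pool`, with probability inversely
    proportional to the number of tickets they requested."""
    weights = [1.0 / requests[i] for i in pool]
    return rng.choices(pool, weights=weights, k=1)[0]
\end{verbatim}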
We prove that the {\NameProposedMechanism} is (\(1-\kappa-\alpha/2\))-efficient and (\(1-2\kappa-\alpha/2\))-fair (in fact, Theorem~\ref{thm:spl-performance} establishes slightly stronger guarantees). Notice that these guarantees coincide with those of the Group Lottery when demand far exceeds supply ($\alpha$ is close to $0$).
Our main results are summarized in Table ~\ref{tab:main-results}. Our conclusion is that the Individual Lottery can be arbitrarily unfair and inefficient. These deficiencies can be mostly eliminated by using a Group Lottery. Perhaps more surprisingly, approximate efficiency and fairness can also be achieved while maintaining the Individual Lottery interface, by suitably biasing the lottery against agents with large requests.
\begin{table}[t]
\centering
\begin{tabular}{l c l l}
\hline
Mechanism & Action Set \hspace{.2 in} & Efficiency \hspace{.1 in} & Fairness \\ \hline
Benchmark & & $1 - \kappa$ & 1\\
Group Lottery & $2^\sN$ & $1 - \kappa$ & $1 - 2\kappa$ \\
Individual Lottery & $\{1, 2, \ldots, k\}$ & 0 & 0\\
Weighted Individual Lottery & $\{1, 2, \ldots, k\}$ & $1 - \kappa -\alpha/2$ & \(1-2\kappa-\alpha/2\) \\\hline
\end{tabular}
\caption{\label{tab:main-results}Summary of main results: worst-case guarantees for the efficiency and fairness of instances in \(I(\kappa, \alpha)\). These guarantees are established in Theorems~\ref{thm:gl-is-good},~\ref{thm:il-is-bad} and~\ref{thm:spl-performance}. Meanwhile, Proposition \ref{prop:badnews} establishes that the best one can hope for is a mechanism that is $(1 - \kappa)$ efficient and $1$-fair. }
\end{table}
\section{Related work}
Our high-level goal of allocating objects efficiently, subject to fairness and incentive compatibility constraints, is shared by numerous papers. The definitions of efficiency, fairness, and incentive compatibility differ significantly across settings, and below, we focus on papers that are closely related to our own.
If the group structure is known to the designer, then our problem simplifies to allocating copies of a homogeneous item to groups with multi-unit demand. This problem has received significant attention. \citet{benassy_1982} introduces the uniform allocation rule, in which each group requests a number of copies, and receives the minimum of its request and a quantity $q$, which is chosen so that every copy is allocated. \citet{sprumont_1991} and \citet{ching_1992} show that when preferences are single-peaked, this is the unique rule that is Pareto efficient, envy-free, and incentive compatible. \citet{ehlers2003probabilistic} extend this characterization to randomized allocation mechanisms. \citet{cachon1999capacity} consider the uniform allocation rule in a setting where groups have decreasing marginal returns from additional items.
In contrast to these papers, we assume that groups have dichotomous preferences, with no value for receiving only a fraction of their request. As a result, uniform allocation would be extremely inefficient. Instead, we propose the Group Lottery, which resembles the ``lexicographic" allocation rule from \citet{cachon1999capacity}. Dichotomous preferences have also been used to model preferences in kidney exchange \cite{roth2005pairwise}, two-sided matching markets \cite{bogomolnaia2004random}, and collective choice problems \cite{bogomolnaia2005collective}.
The all-or-nothing nature of preferences means that our work is related to the ``fair knapsack" problem introduced by \citet{patel2020group}, where a planner must choose a subset of groups to allocate, subject to a resource constraint. Groups are placed into categories, and the number of successful groups from each category must fall into specified ranges. Their model is fairly general, and if groups are categorized by size, then ranges can be chosen to make their fairness notion similar to ours. However, they do not quantify the cost of imposing fairness constraints. By contrast, we show that in our setting, fairness can be imposed with little or no cost to efficiency. Furthermore, approximate efficiency and fairness can be achieved in our setting using mechanisms that are much simpler than their dynamic-programming based algorithms.
Closer to our work is that of \citet{nguyen2016assignment}. They consider a setting in which each group has complex preferences over bundles of heterogeneous items, but only wants a small fraction of the total number of items. They find approximately efficient and fair allocations using a generalization of the Birkhoff-von Neumann theorem. Although their notion of fairness is different from ours, we use their results to prove Proposition~\ref{prop:badnews}. However, our papers have different goals: their work identifies near-optimal but complex allocation rules, while we study the performance of simple mechanisms deployed in practice, and close variants of these mechanisms.
An important difference between all of the aforementioned papers and our own is that we assume that the group structure is unknown to the designer. In theory, this can be solved by asking agents to identify the members of their group (as in the Group Lottery), but in many contexts this may be impractical. Hence, much of our analysis considers a scenario where agents are asked to report only a single integer (interpreted as the size of their group). We show what can be achieved is this setting, through our analysis of the Individual Lottery and {\NameProposedMechanism}. We are unaware of any prior work with related results.
We close by highlighting two papers with results that are used in our analysis. \citet{serfling1974probability} introduces a martingale associated with sampling without replacement from a finite population. This martingale is key to the proof of Proposition~\ref{lemma:mart-bounds}, which establishes bounds on the expected hitting time of the sample sum. This, in turn, is used to establish our fairness result for the Group Lottery. \citet{johnson2005univariate} state a simple bound on the probability that a Poisson random variable deviates from its expectation by at least a given amount. We use this result in our analysis of the {\NameProposedMechanism}, where we use a Poisson random variable to bound the probability that a group has at least \(r\) members awarded.
\section{The model}
\subsection{Agents, Outcomes, Utilities}
A designer must allocate $k\in\N$ indivisible identical tickets to a set of agents $\sN = \{1,...,n\}$. A \emph{feasible allocation} is represented by $x \in \{0,1,...,k\}^n$ satisfying $\sum_{i\in \sN}x_i \le k$, where $x_i$ indicates the number of tickets that agent $i$ receives. We let $\X$ be the set of all feasible allocations.
A \textit{lottery allocation} is a probability distribution $\ra$ over $\X$, with $\ra_x$ denoting the probability of allocation $x$. Let $\AllocationSpace$ be the set of all lottery allocations.
The set $\sN$ is partitioned into groups according to $\G$: that is, each $G \in \G$ is a subset of $\sN$, $\cup_{G\in\G}G = \sN$, and for each $G,G'\in\G$ either $G = G'$ or $G\cap G' = \emptyset$. Given agent $i\in \sN$, we let $G_i\in\G$ be the group containing $i$. Agents are successful if and only if the total number of tickets allocated to the members of their group is at least its cardinality. Formally, each $i\in \sN$ is endowed with a utility function $u_i: \X \to \{0,1\}$ given by
\begin{equation} u_{i}(x) = \ind{\sum_{j\in G_i} x_{j}\ge |G_i|}.\end{equation}
We say that agent $i$ is {\em successful} under allocation $x$ if $u_i(x) = 1$.
In a slight abuse of notation, we denote the expected utility of agent $i\in \sN$ under the lottery allocation $\ra$ by
\begin{equation} u_i(\ra) = \sum_{x \in \X} \ra_x u_i(x).\end{equation}
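As a point of reference, these dichotomous utilities are simple to compute. The Python sketch below (all names are illustrative; agents are indexed by integers, an allocation $x$ is a tuple of ticket counts, \texttt{groups[i]} lists the members of $G_i$, and a lottery allocation is a list of allocation--probability pairs) mirrors the two displays above.
\begin{verbatim}
def utility(i, x, groups):
    """u_i(x) = 1 iff the members of agent i's group jointly receive
    at least |G_i| tickets; x[j] is the number of tickets agent j gets."""
    group = groups[i]                      # the group G_i containing agent i
    return int(sum(x[j] for j in group) >= len(group))

def expected_utility(i, lottery, groups):
    """u_i(pi): sum over allocations x of pi_x * u_i(x)."""
    return sum(prob * utility(i, x, groups) for x, prob in lottery)
\end{verbatim}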
\subsection{Performance Criteria}
We define the \textit{expected utilization} of a lottery allocation $\ra$ to be
\begin{equation}\label{def:lot_utilization}
U(\ra) = \frac{1}{k}\sum_{i\in \sN}u_i(\ra).
\end{equation}
\begin{definition}[Efficiency]
{\em A lottery allocation $\ra$ is {\bf efficient} if $U(\ra) = 1.$ It is {\bf $\beta$-efficient} if $U(\ra)\ge \beta$.}
\end{definition}
\begin{definition}[Fairness]
{\it A lottery allocation $\ra$ is {\bf fair} if for every $i,i' \in \sN,$ $u_i(\ra) = u_{i'}(\ra)$. It is {\bf $\beta$-fair} if for every $i, i' \in \sN$, $u_i(\ra) \ge \beta u_{i'}(\ra)$.}
\end{definition}
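Given a vector of expected utilities \((u_1(\ra),\ldots,u_n(\ra))\), both performance measures reduce to one-line computations. The following sketch (illustrative only) computes the expected utilization and the largest \(\beta\) for which the allocation is \(\beta\)-fair.
\begin{verbatim}
def utilization(utils, k):
    """U(pi) = (1/k) * sum_i u_i(pi), given the expected utilities."""
    return sum(utils) / k

def fairness_level(utils):
    """Largest beta with u_i >= beta * u_j for every pair of agents,
    i.e. min_i u_i / max_j u_j (taken as 1 when all utilities are zero)."""
    lo, hi = min(utils), max(utils)
    return 1.0 if hi == 0 else lo / hi
\end{verbatim}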
\subsubsection*{Alternative Definitions of Fairness.}
With dichotomous preferences, our notion of efficiency seems quite natural. Our fairness definition states that agents in groups of different sizes should have similar expected utilities. There are other notions of fairness that one might consider. Two that arise in other contexts are {\em equal treatment of equals} and {\em envy-freeness}. Below we present the natural analogs of these notions in our setting, and discuss their relation to our definition.
\begin{definition}
{\it Lottery allocation $\ra$ satisfies {\bf equal treatment of equals} if for every pair of agents \(i,j\) such that \(|G_i| = |G_j|\), we have $u_i(\ra) = u_j(\ra)$.}
\end{definition}
This is clearly weaker than our fairness definition, and is an easy property to satisfy. In particular, the group request outcomes of the three mechanisms we study all satisfy equal treatment of equals.
To define envy-freeness, we introduce additional notation. For any \(x \in \X\), we let $N_G(x)$ be the number of tickets allocated to members of $G$. For any \(\ra \in \AllocationSpace\), let $N_G(\ra)$ be a random variable representing this number. Let $u_G(N) = \bP(N \geq |G|)$ be the expected utility of $G$ when the number of tickets received by members of $G$ is equal to $N$.
\begin{definition}
{\it Lottery allocation $\ra$ is {\bf group envy-free} if no group envies the allocation of another: $u_G(N_G(\ra)) \geq u_{G}(N_{G'}(\ra))$ for all $G, G' \in \G$.}
\end{definition}
This notion is neither stronger nor weaker than our fairness definition. To see this, suppose that there is a group of size $1$ and another of size $2$. The group of size $1$ gets one ticket with probability $\epsilon$, and otherwise gets zero tickets. The group of size $2$ gets two tickets with probability $\epsilon$ and otherwise gets one ticket. This is fair (according to our definition) but not even approximately group envy-free. Conversely, if both groups get one ticket with probability $\epsilon$ and zero tickets otherwise, then the allocation is group envy-free but not fair.
However, the conclusions we draw would also hold for this new fairness notion. The group request outcome of the Individual Lottery may not be even approximately group envy-free. The group request outcome of the Group Lottery is group envy-free. The group request outcome of the {\NameProposedMechanism} is approximately group envy-free, with the same approximation factor as in Theorem~\ref{thm:spl-performance}.
\subsection{Actions and Equilibria}
The designer can identify each agent (and therefore prevent agents from applying multiple times), but does not know the group structure a priori. Therefore, the designer must deploy a mechanism that asks individual agents to take actions. When studying incentives induced by a mechanism, however, we assume that members of a group can coordinate their actions.
Formally, an anonymous {\em mechanism} consists of an action set $A$ and an allocation function $\pi : A^{\sN} \rightarrow \AllocationSpace$, which specifies a lottery allocation $\pi(\a)$ for each possible {\em action profile} $\a \in A^{\sN}$.
\begin{definition}\label{def:dominant-strategy}
{\it The actions $\a_{G_i} \in A^{G_i}$ are {\bf dominant} for group $G_i$ if for any actions $\a_{-G_i} \in A^{\sN \backslash G_i}$,
\begin{equation}\a_{G_i} \in \arg\max_{\a'_{G_i}\in A^{G_i}} u_i(\pi(\a'_{G_i},\a_{-G_i})).\label{eq:opt}\end{equation}
The action profile $\a \in A^{\sN}$ is a {\bf dominant strategy equilibrium} if for each group $G\in\G$, actions $\a_{G}$ are dominant for $G$.}
\end{definition}
Note that although actions are taken by individual agents, our definition of dominant strategies allows all group members to simultaneously modify their actions. We believe that this reasonably captures many settings, where group members can coordinate but the mechanism designer has no way to identify groups a priori.
\section{Results}
In general, it might not be feasible to achieve approximate efficiency, even if the group structure is known. Finding an efficient allocation involves solving a knapsack problem where each item represents a group, and the knapsack's capacity is the total number of tickets. If it is not possible to select a set of groups whose sizes sum up to the number of tickets, then some tickets will always be wasted. This issue can be particularly severe if groups' sizes are large relative to the total number of tickets. Therefore, one important statistic will be the ratio of the maximum group size to the total number of tickets.
Additionally, one concern with the Individual Lottery is that tickets might be wasted if groups have multiple winners. Intuitively, this is more likely when the number of tickets is close to the number of agents. Therefore, a second important statistic will be the ratio of tickets to agents.
These observations motivate us to define a family of instances characterized by two parameters: \(I(\kappa,\alpha)\). The parameter $\kappa$ captures the significance of the ``knapsack" structure, and $\alpha$ captures the ``abundance" of the good. For any \(\kappa,\alpha \in (0,1)\), define the family of instances
\begin{equation}\label{alpha-kappa-family}
I(\kappa, \alpha) = \left\{(n,k,\G) :\frac{\max_{G\in\G} |G|-1}{k} \leq \kappa, \frac{k}{n} \leq \alpha \right\}.
\end{equation}
Therefore, when analyzing a mechanism we study the worst-case efficiency and fairness guarantees in terms of $\kappa$ and $\alpha$. Ideally, we might hope for a solution that is both approximately fair and approximately efficient. Theorem~\ref{thm:gl-is-good} shows that this is achieved by the Group Lottery, which asks agents to reveal their groups. By contrast, Theorem~\ref{thm:il-is-bad} establishes that the Individual Lottery may lead to arbitrarily inefficient and unfair outcomes. Finally, Theorem~\ref{thm:spl-performance} establishes that the {\NameProposedMechanism} is approximately fair and approximately efficient, and similar to the Group Lottery when there are many more agents than tickets ($\alpha$ is small).
\subsection{Group Lottery}\label{S:GL}
In this section, we present the Group Lottery ($GL$) and show that it is approximately fair and approximately efficient. In this mechanism, each agent is asked to report a subset of agents, interpreted as their group. We say that a group $S \subseteq \sN$ is valid if all its members declared the group $S$. Valid groups are placed in a uniformly random order and processed sequentially (agents that are not part of a valid group will not receive tickets). When a group is processed, if enough tickets remain then every member of the group is given one ticket. Otherwise, members of the group receive no tickets and the lottery ends.\footnote{There is a natural variant of this mechanism which skips over large groups when few tickets remain, and gives these tickets to the next group whose request can be accommodated. This variant may be arbitrarily unfair, as can be seen by considering an example with an odd number of tickets, one individual applicant, and many couples. Then the individual applicant is always successful, while the success rate of couples can be made arbitrarily small by increasing the number of couples.}
We now introduce notation that allows us to study this mechanism. For any finite set \(E\), we let \(\s_E\) be the set of finite sequences of elements of \(E\), and let \(\o_{E}\) be the set of sequences such that each element of \(E\) appears exactly once. We refer to an element \(\sigma \in \o_{E}\) as an {\em order} over \(E\), with \(\sigma_t \in E\) and \(\sigma_{[t]} = \bigcup_{t' \leq t} \sigma_{t'}\) denoting the subset of \(E\) that appears in the first \(t\) positions of \(\sigma\).
Next we provide a formal description of the mechanism. The action set is the power set of $\sN$. Given an action profile $\a$, we call a set of agents $S \subseteq \sN$ a \textit{valid group} if for every agent $i \in S$ we have that $a_i = S$. We define a function \(\tau\) that will let us characterize the number of valid groups that obtain their full request.
Fix a finite set \(E\) and a size function $| \cdot | : E \rightarrow \N$. For any $c \in \N$ and $\sigma \in \s_E$ satisfying $\sum_{t} |\sigma_t| \geq c$, define
\begin{equation}
\tau(c,\order) = \min\left\{T \in \N: \sum_{t=1}^T |\order_t| \ge c\right\}. \label{eq:def:tau}
\end{equation}
Fix an arbitrary action profile \(\a\) and let $V$ be the resulting set of valid groups. For any order $\order \in \o_V$, we let $\tau = \tau(k+1, \order)$ be as in \eqref{eq:def:tau} where the size of each valid group is its cardinality. Then the number of valid groups that are processed and obtain their full request is \(\tau - 1\). We define
\begin{equation}\label{eq:xgl}
\xgl_i(\a,\sigma) = \sum_{j=1}^{\tau-1} \ind{i \in \sigma_j}.
\end{equation}
For any \(x' \in \X\), the allocation function of the Group Lottery is
\[\GL_{x'}(\a) = \sum_{\order \in \o_V}\ind{x' = \xgl(\a,\order)}\frac{1}{|V|!}.\]
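The mechanism itself is easy to simulate. The Python sketch below (illustrative; it assumes every group follows the group request strategy, so the valid groups coincide with \(\G\) and can be summarized by their sizes) draws one realization of the Group Lottery; averaging success indicators over many draws gives a Monte Carlo estimate of each group's success probability.
\begin{verbatim}
import random

def group_lottery(group_sizes, k, rng=random):
    """One draw of the Group Lottery: order groups uniformly at random and
    award full requests until a group no longer fits, then stop."""
    order = list(range(len(group_sizes)))
    rng.shuffle(order)
    remaining, winners = k, set()
    for g in order:
        if group_sizes[g] > remaining:
            break                # this group cannot be served; the lottery ends
        winners.add(g)
        remaining -= group_sizes[g]
    return winners               # indices of fully served groups
\end{verbatim}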
\subsubsection{Incentives.}
In every mechanism that we study, there is one strategy that intuitively corresponds to truthful behavior. We refer to this as the \emph{group request strategy}. In the Group Lottery, this is the strategy in which each agent declares the members of his or her group.
\begin{definition}\label{def:GL-GR}
{\em In the Group Lottery, group $G \in \G$ follows the {\bf group request} strategy if $a_i = G$ for all $i \in G$.}
\end{definition}
\begin{proposition}\label{prop:GL-incentives}
In the Group Lottery, the group request strategy is the only dominant strategy.
\end{proposition}
The intuition behind Proposition \ref{prop:GL-incentives} is as follows. Potential deviations for group $G$ include splitting into two or more groups, or naming somebody outside of the group as a member. We argue that in both cases the group request is weakly better. First, neither approach will decrease the number of other valid groups. Second, if there are at least $|G|$ tickets remaining and a valid group containing a member of $G$ is processed, then under the group request $G$ gets a payoff of 1. This might not be true under the alternative strategies.
In light of Proposition \ref{prop:GL-incentives}, we will assume that groups follow the group request strategy when analyzing the performance of the Group Lottery.
\subsubsection{Performance.}
We next argue that the Group Lottery is approximately fair and approximately efficient.
Of course, the Group Lottery is not perfectly efficient, as it solves a packing problem greedily, resulting in an allocation that does not maximize utilization. Similarly, it is not perfectly fair, as once there are only a few tickets left, small groups still have a chance of being allocated but large groups do not. Thus, in the Group Lottery smaller groups are always weakly better off than larger groups. This is formally stated in Lemma~\ref{lemma:gl-small-groups-advantage}, located in Appendix~\ref{appendix:GL}.
\begin{theorem}[``GL is Good"]\label{thm:gl-is-good}
Fix $\kappa,\alpha \in (0,1)$. For every instance in $I(\kappa,\alpha)$, the group request equilibrium outcome of the Group Lottery is $(1-\kappa)$-efficient and $(1-2 \kappa)$-fair.
\end{theorem}
In Proposition~\ref{prop:gl-tightness} (located in Appendix~\ref{appendix:GL}), we construct instances where the fairness of the Group Lottery is arbitrarily close to the guarantee provided in Theorem~\ref{thm:gl-is-good}. These instances are fairly natural: groups all have size one or two, and the total number of tickets is odd. These conditions are met by the Hamilton Lottery, which we discuss in Section~\ref{sec:discussion}.
\subsubsection{Proof Sketch of Theorem~\ref{thm:gl-is-good}.}
The efficiency guarantee is based on the fact that for any order over groups, the number of tickets wasted can be at most $\max_G |G| - 1$. Therefore, the tight lower bound on the efficiency of the Group Lottery is $1-\frac{\max_G |G|-1}{k} \ge 1 -\kappa$.\\
We now turn to the fairness guarantee. We will show that for any pair of agents \(i,j\),
\begin{equation}\label{eq:gl-utility-ratio}
\frac{u_i(\GL(\a))}{u_j(\GL(\a))} \ge (1-2 \kappa).
\end{equation}
Because all groups are following the group request strategy, the set of valid groups is \(\G\). Fix an arbitrary agent \(i\). We construct a uniform random order over \(\G\) using Algorithm~\ref{alg:simultaneous}: first generate a uniform random order \(\rOrder^{-i}\) over \(\G\setminus G_i\), and then extend it to \(\G\) by uniformly inserting \(G_i\) in \(\rOrder^{-i}\). Moreover, if groups in \(\G\setminus G_i\) are processed according to \(\rOrder^{-i}\), then \(\tau(k - |G_i| + 1,\rOrder^{-i})\) represents the last step at which at least \(|G_i|\) tickets remain available. Therefore, if \(G_i\) is inserted in the first \(\tau(k - |G_i| + 1,\rOrder^{-i})\) positions, it will get a payoff of \(1\). This is formalized in the next lemma.
\begin{lemma}\label{lemma:gl-utility-bounds}
For any instance in \(I(\kappa,\alpha)\) and any agent \(i\), if we let \(\a\) be the group request strategy under the Group Lottery and \(\rOrder^{-i}\) be a uniform order over \(\G\setminus G_i\), then
\begin{equation}
u_i(\GL(\a)) = \frac{\E[\tau(k - |G_i| + 1,\rOrder^{-i})]}{m} \le\frac{k}{n} \left(1 + \kappa\right),
\end{equation}
where \(\tau(k - |G_i| + 1,\rOrder^{-i})\) is as in \eqref{eq:def:tau} using the cardinality of each group as the size function.
\end{lemma}
Lemma~\ref{lemma:gl-small-groups-advantage} in Appendix~\ref{appendix:GL} states that if two groups follow the group request strategy under the Group Lottery, then the utility of the smaller group is at least that of the larger group. Therefore, we assume without loss of generality that \(|G_i| \ge |G_j|\). From Lemma~\ref{lemma:gl-utility-bounds}, it follows that
\begin{align}
\frac{u_i(\GL(\a))}{u_j(\GL(\a))}
& = \frac{\E[\tau(k-|G_i|+1,\rOrder^{-i})]}{\E[\tau(k-|G_j|+1,\rOrder^{-j})]} \ge \frac{\E[\tau(k-|G_i|+1,\rOrder^{-i})]}{\E[\tau(k-|G_j|+1,\rOrder^{-i})]}.
\end{align}
To complete the proof, we express the denominator on the right hand side as the sum of the numerator and the difference
\[\E[\tau(k-|G_j|+1,\rOrder^{-i})] - \E[\tau(k-|G_i|+1,\rOrder^{-i})], \]
which reflects the advantage of the small group \(G_j\). We bound this ratio by taking a lower bound on the numerator $\E[\tau(k-|G_i|+1,\rOrder^{-i})]$ and an upper bound on the difference $\E[\tau(k-|G_j|+1,\rOrder^{-i})] - \E[\tau(k-|G_i|+1,\rOrder^{-i})]$. Both bounds follow from the proposition below.
\begin{proposition}\label{lemma:mart-bounds}
Given a sequence of numbers \(\{a_1,\ldots,a_n\}\) such that \(a_i \ge 1\) for all \(i\), define \(\mu = \sum_i a_i/n\) and \(\bar a = \max_i a_i\). Let \(\order\) be an order over \(\{1,\ldots,n\}\). For $k \in \{1, \ldots, \sum_i a_i\}$, we let \(\tau = \tau(k,\order)\) be as in \eqref{eq:def:tau} where the size of \(i\) is \(a_i\), that is, \(|\order_t| = a_{\order_t}\).
If \(\rOrder\) is a uniform random order of $\{1, \ldots, n\}$, then
\begin{equation}\label{eq:mart-bounds}
1 + \frac{k-\bar a}{\mu}\le \E[\tau(k,\rOrder)] \le \frac{k +\bar a -1}{\mu}.
\end{equation}
Furthermore, if \(k,k' \in \N\) are such that \(k + k' \le \sum_i a_i\) then
\begin{equation}\label{tau-diff}
\E[\tau(k',\rOrder)] + \E[\tau(k,\rOrder)] \geq \E[\tau(k' + k,\rOrder)].
\end{equation}
\end{proposition}
Equation \eqref{eq:mart-bounds} establishes that the expected time to reach $k$ is approximately $k$ divided by the average size $\mu$, while \eqref{tau-diff} establishes that hitting times are sub-additive. Both results are well known when the values \(a_{\rOrder_t}\) are sampled with replacement from \(\{a_1,\ldots,a_n\}\). Proposition \ref{lemma:mart-bounds} establishes the corresponding results when values are sampled {\em without} replacement. The proof of \eqref{eq:mart-bounds} employs a martingale presented in~\citet{serfling1974probability}, while the proof of \eqref{tau-diff} uses a clever coupling argument.\footnote{We thank Matt Weinberg for suggesting the appropriate coupling.} Although both statements are intuitive, we have not seen them proven elsewhere, and we view Proposition \ref{lemma:mart-bounds} as a statement whose importance extends beyond the setting in which we deploy it.
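The bounds in \eqref{eq:mart-bounds} are easy to probe numerically. The sketch below (illustrative only; it is not part of the proof) estimates \(\E[\tau(k,\rOrder)]\) by sampling uniformly random orders without replacement and compares the estimate to the two bounds.
\begin{verbatim}
import random

def hitting_time(values, k):
    """tau(k, sigma): first position at which the running sum reaches k."""
    total = 0
    for t, v in enumerate(values, start=1):
        total += v
        if total >= k:
            return t
    raise ValueError("k exceeds the sum of all values")

def check_bounds(a, k, trials=100000, rng=random):
    """Monte Carlo check of the proposition's bounds (illustrative)."""
    mu, a_bar = sum(a) / len(a), max(a)
    estimate = 0.0
    for _ in range(trials):
        order = a[:]
        rng.shuffle(order)
        estimate += hitting_time(order, k)
    estimate /= trials
    return 1 + (k - a_bar) / mu, estimate, (k + a_bar - 1) / mu
\end{verbatim}
For instance, \texttt{check\_bounds([1, 2, 3], 3)} returns roughly \((1.0, 1.67, 2.5)\), consistent with \eqref{eq:mart-bounds}.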
\subsubsection{A Fair Group Lottery.}
Theorem~\ref{thm:gl-is-good} establishes that the Group Lottery has strong performance guarantees. However, this mechanism is neither perfectly fair, as small groups have an advantage over large groups, nor perfectly efficient, as the last few tickets might be wasted. It is natural to ask whether there exists a mechanism that overcomes these issues. Proposition~\ref{prop:badnews} shows that the best we can hope for is a mechanism that is $(1-\kappa)$-efficient and fair. We then describe a fairer version of the Group Lottery which attains these performance guarantees. We conclude with a discussion of the advantages and disadvantages of this fair Group Lottery.
\begin{proposition}\label{prop:badnews}\hfill
\begin{enumerate}
\item
Fix $\kappa,\alpha \in (0,1)$. For every instance in $I(\kappa,\alpha)$, there exists a random allocation that is $(1-\kappa)$-efficient and fair.
\item For any $\epsilon > 0$, there exists \(\kappa,\alpha \in (0,1)\) and an instance in \(I(\kappa, \alpha)\) such that no random allocation is
$(1 - \kappa + \epsilon)$-efficient and $\epsilon$-fair.
\end{enumerate}
\end{proposition}
The first statement follows from a result in~\citet{nguyen2016assignment}, which implies that any utility vector such that (i) the sum of all agents' utilities is at most \(k - \max_{G \in \G}|G| + 1\), and (ii) members of the same group have identical utility, can be induced by a lottery over feasible allocations. To prove the second part of Proposition \ref{prop:badnews}, we construct an instance where a particular group must be awarded in order to avoid wasting a fraction \(\kappa\) of the tickets. Therefore, to improve beyond \((1-\kappa)\)-efficiency it is necessary to allocate that group more frequently. The complete proof of Proposition~\ref{prop:badnews} is located in Appendix~\ref{appendix:RA}.
Proposition~\ref{prop:badnews} establishes that the best guarantee we can hope for is a mechanism that is $(1-\kappa)$-efficient and fair. In fact, this can be achieved by first asking agents to identify their groups (as in the Group Lottery), and then allocating according to the random allocation referred to in the first part of Proposition~\ref{prop:badnews}. When using this mechanism, it is dominant for a group to truthfully report its members, as long as it cannot influence the total number of tickets awarded.
In the following discussion, we refer to this mechanism as the \emph{Fair Group Lottery}.
To conclude this section, we discuss the trade-offs between the Fair Group Lottery and the Group Lottery. In light of its stronger performance guarantees, one might conclude that the Fair Group Lottery is superior. However, we think that there are several practical reasons to favor the standard Group Lottery. First, the computation of the Fair Group Lottery outcome is not trivial: \citet{nguyen2016assignment} give two procedures, one which they acknowledge might be ``impractical for large markets,'' and the other which returns only an approximately fair allocation. By contrast, the Group Lottery is simple to implement in code, and can even be run physically by writing applicants' names on ping-pong balls or slips of paper. In some settings, a physical implementation that allows applicants to witness the process may increase their level of trust in the system. Even when implemented digitally, the ability to explain the procedure to applicants may provide similar benefits.
A final benefit of the Group Lottery is that it provides natural robustness. Although we assume that the number of tickets is known in advance and that all successful applicants claim their tickets, either assumption may fail to hold in practice. When using the Group Lottery, if additional tickets become available after the initial allocation, they can be allocated by continuing down the list of groups. This intuitive policy preserves the fairness and efficiency guarantees from Theorem \ref{thm:gl-is-good}. By contrast, if tickets allocated by the Fair Group Lottery go unclaimed, there is no obvious ``next group" to offer them to, and any approach will likely violate the efficiency and fairness guarantees that this mechanism purports to provide.
For these reasons, we see the Group Lottery as a good practical solution in most cases: the \(2\kappa\) loss of fairness identified in Theorem \ref{thm:gl-is-good} seems a modest price to pay for the benefits outlined above. There is a loose analogy to be drawn between the Fair Group Lottery and the Vickrey auction: although the Vickrey auction is purportedly optimal, practical considerations outside of the standard model prevent it from being widely deployed \citep{ausubel2006lovely}. Similarly, a Fair Group Lottery is only likely to be used in settings satisfying several specific criteria: the institution running the lottery is both sophisticated and trusted, fairness is a primary concern, and applicants are unlikely to renege.
\subsection{Individual Lottery}
As noted in the introduction, asking for (and verifying) the identity of each participant may prove cumbersome. In this section we consider the widely-used {\em Individual Lottery}, in which the action set is \(A=\{1,\ldots,k\}\).\footnote{In practice, agents are often limited to asking for $\ell < k$ tickets. We refer to this mechanism as the {\em Individual Lottery with limit \(\ell\)}, and discuss it briefly at the end of the section. Appendix~\ref{appendix:IL} provides a complete analysis of this mechanism, and demonstrates that like the Individual Lottery without a limit, it can be arbitrarily unfair and inefficient.} Agents are placed in a uniformly random order and processed sequentially. Each agent is given a number of tickets equal to the minimum of their request and the number of remaining tickets.\footnote{As for the Group Lottery, one might imagine using a variant in which agents whose request exceeds the number of remaining tickets are skipped. The negative results in Theorem~\ref{thm:il-is-bad} would still hold when using this variant.}
More formally, given an action profile \(\a\in A^{\sN}\) and an order over agents \(\sigma \in \o_{\sN}\), we let \(\xil(\a,\sigma)\in \X\) be the feasible allocation generated by the Individual Lottery:
\begin{equation}\label{eq:xil}
\xil_{\sigma_t}(\a,\sigma) = \min\left\{a_{\sigma_t}, \max\left\{k - \sum_{i \in \sigma_{[t-1]}} a_i,0\right\}\right\},
\end{equation}
for $t \in \{1,\ldots, n\}$.
For any \(x' \in \X\), the allocation function of the Individual Lottery is
\[\IL_{x'}(\a) = \frac{1}{n!}\sum_{\order \in \o_\sN}\ind{x' = \xil(\a,\order)}.\]
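As with the Group Lottery, one draw of this mechanism is straightforward to simulate. The sketch below (illustrative names; \texttt{requests[i]} is agent \(i\)'s report and \texttt{groups} is a list of agent-index lists forming the partition \(\G\)) is equivalent to \eqref{eq:xil}, tracking the number of tickets remaining, and returns the set of successful groups.
\begin{verbatim}
import random

def individual_lottery(requests, groups, k, rng=random):
    """One draw of the Individual Lottery: process agents in a uniformly
    random order, giving each the minimum of their request and the number
    of tickets still remaining."""
    order = list(range(len(requests)))
    rng.shuffle(order)
    remaining = k
    awarded = [0] * len(requests)
    for i in order:
        awarded[i] = min(requests[i], remaining)
        remaining -= awarded[i]
        if remaining == 0:
            break
    # a group succeeds if its members jointly receive at least |G| tickets
    return {g for g, members in enumerate(groups)
            if sum(awarded[i] for i in members) >= len(members)}
\end{verbatim}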
\subsubsection{Incentives.}
As in the Group Lottery, we refer to the strategy that corresponds to truthful behavior as the group request strategy. In the Individual Lottery, this is the strategy in which each agent declares his or her group size.
\begin{definition}\label{def:IL-GR}
\em In the Individual Lottery, we say that group \(G\) follows the {\bf group request strategy} if \(a_i = |G|\) for all \(i \in G\).
\end{definition}
Our next result establishes that because agents' requests do not affect the order in which they are processed, each agent should request at least his or her group size.
\begin{proposition}\label{prop:IL-incentives}
In the Individual Lottery, the set of actions \(\a_{G}\) is dominant for group \(G\) if and only if \(a_i \geq |G|\) for all \(i \in G\).
\end{proposition}
\subsubsection{Performance.}
Proposition~\ref{prop:IL-incentives} states that it is a dominant strategy to follow the group request strategy, but that there are other dominant strategies in which agents inflate their demand (select \(a_i > |G_i|\)).\footnote{Our model assumes that agents are indifferent between all allocations that allow all members of their group to receive a ticket. While we believe this to be a reasonable approximation, in practice, groups might follow the group request strategy if each ticket has a cost, or inflate their demand if tickets can be resold on a secondary market.} Our next result implies that the group request equilibrium Pareto dominates any other dominant strategy equilibrium.
\begin{proposition}\label{prop:IL-monotonicity}
Let \(i\) be any agent, fix any \(\a_{-i} \in \{1, 2, \ldots, k\}^{\sN \backslash \{i\}}\), and let \(a_i' > a_i \geq |G_i|\). Then for every agent $j \in \sN$,
\[u_j(\IL(a_i,\a_{-i})) \ge u_j(\IL(a_i',\a_{-i})).\]
\end{proposition}
Even when agents request only the number of tickets needed by their group, the outcome will be inefficient if there are multiple winners from the same group. One might expect that this is unlikely if the supply/demand ratio $\alpha$ is small. However, Theorem~\ref{thm:il-is-bad} shows that even in this case, the Individual Lottery can be arbitrarily unfair and inefficient.
\begin{theorem}[``IL is Bad'']\label{thm:il-is-bad}
For any \(\alpha, \kappa, \epsilon \in (0, 1)\), there exists an instance in \(I(\kappa,\alpha)\) such that any dominant strategy equilibrium outcome of the Individual Lottery is neither \(\epsilon\)-efficient nor \(\epsilon\)-fair.
\end{theorem}
\subsubsection{Proof Sketch of Theorem~\ref{thm:il-is-bad}.}
We will construct an instance in \(I(\kappa, \alpha)\) where the outcome of the Individual Lottery is arbitrarily unfair and arbitrarily inefficient. In this instance, there are \(n\) agents and \(k = \alpha n\) tickets. Furthermore, agents are divided into one large group of size \(n^{3/4}\) and \(n - n^{3/4}\) groups of size one. If \(n\) is large enough, then this instance is in \(I(\kappa, \alpha)\) and the following two things happen simultaneously:
\begin{enumerate}
\item The size of the large group \(n^{3/4}\) is small relative to the number of tickets \(k = \alpha n\).
\item The fraction of tickets allocated to small groups is insignificant.
\end{enumerate}
Hence, the resulting allocation is unfair as the large group has an advantage over small groups, and inefficient as a vanishing fraction of the agents get most of the tickets.
Formally, let agents \(i,j\) be such that \(|G_i| = 1\) and \(|G_j| = n^{3/4}\). We will start by proving the efficiency guarantee. By Proposition~\ref{prop:IL-monotonicity} it follows that the group request is the most efficient dominant action profile. Therefore, we assume without loss of generality that this action profile is being selected. The utilization of this system is
\begin{equation}\label{eq:il-bad-instance-utilization}
\frac{n^{3/4} u_j(\IL(\a))}{k} + \frac{(n - n^{3/4}) u_i(\IL(\a))}{k}.
\end{equation}
We now argue that both terms in~\eqref{eq:il-bad-instance-utilization} can be made arbitrarily small by making \(n\) sufficiently large. We begin by studying the first term in~\eqref{eq:il-bad-instance-utilization}. Using the fact that utilities are upper bounded by \(1\), it follows that
\[\frac{n^{3/4} u_j(\IL(\a))}{k} \le \frac{n^{3/4}}{k} = \frac{1}{\alpha n^{1/4}}.\]
Hence, to ensure that \((1)\) holds it suffices to have \(n\) growing to infinity. We now analyze the second term in~\eqref{eq:il-bad-instance-utilization}. Because the group request action profile is being selected, this term is equal to the fraction of tickets allocated to small groups.
Moreover, we show the following upper bound on utility of agent \(i\):
\begin{equation}\label{eq:il-large-group-ub-body}
u_i(\IL(\a)) \le \frac{k}{(n^{3/4})^2} = \frac{\alpha}{n^{1/2}}.
\end{equation}
The intuition behind this bound is as follows. If we restrict our attention only to agents in \(G_i\) and \(G_j\), then we know that \(i\) will get a payoff \(0\) unless it is processed after at most \(k/n^{3/4} - 1\) members of \(G_j\). Because the order over agents is uniformly distributed, this event occurs with probability
\[\frac{k/n^{3/4}}{n^{3/4} + 1} \le \frac{k}{(n^{3/4})^2}.\]
From the first inequality in~\eqref{eq:il-large-group-ub-body}, it follows that
\[\frac{(n - n^{3/4}) u_i(\IL(\a))}{k} \le \frac{n}{n^{3/2}} \le \frac{1}{n^{1/2}}.\]
Notice that the right hand side goes to \(0\) as \(n\) grows, so \((2)\) holds.
We now turn to the fairness guarantee. To this end, we use a trivial lower bound on the utility of agent \(j\), based on the fact that the first agent to be processed always obtains a payoff of \(1\). Thus,
\begin{equation}
u_j(\IL(\a)) \ge \frac{n^{3/4}}{n} = n^{-1/4}.
\end{equation}
Note that this lower bound is attained when all agents in small groups request \(k\) tickets. Combining the bound above and the second inequality in~\eqref{eq:il-large-group-ub-body}, we obtain
\begin{equation}
\frac{u_i(\IL(\a))}{u_j(\IL(\a))} \le \frac{\alpha n^{1/4}}{n^{1/2}}.
\end{equation}
We conclude by noting that the right side goes to \(0\) as \(n\) grows. The full proof of Theorem~\ref{thm:il-is-bad} is located in Appendix~\ref{appendix:IL}.
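Although not part of the formal argument, the construction above is easy to simulate. The sketch below (our own code; parameter defaults chosen only for illustration) builds the instance with one group of size \(\lceil n^{3/4}\rceil\) and the remaining agents in singleton groups, runs the Individual Lottery under the group request profile, and estimates the success probability of a fixed singleton and the share of tickets reaching small groups; both shrink as \(n\) grows, in line with the proof sketch.
\begin{verbatim}
import math, random

def simulate_bad_instance(n, alpha=0.5, trials=1000, seed=1):
    rng = random.Random(seed)
    k = int(alpha * n)
    big = math.ceil(n ** 0.75)                 # size of the large group
    requests = [big] * big + [1] * (n - big)   # group request profile
    singleton_wins, small_share = 0, 0.0
    for _ in range(trials):
        order = list(range(n))
        rng.shuffle(order)
        used, got = 0, [0] * n
        for i in order:
            got[i] = min(requests[i], max(k - used, 0))
            used += requests[i]
        singleton_wins += got[n - 1]           # last agent is a singleton
        small_share += sum(got[big:]) / k      # tickets reaching small groups
    return singleton_wins / trials, small_share / trials

# Example: simulate_bad_instance(2000)
\end{verbatim}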
\subsubsection{Limiting the Number of Tickets Requested.}
In many applications, a variant of the Individual Lottery is used where a limit is imposed on the number of tickets an agent can request. For example, in the Hamilton Lottery agents can request at most \(2\) tickets, and in the Big Sur Marathon groups can have at most 15 individuals. This motivates us to study the Individual Lottery with limit \(\ell\). Formally, the only difference between this and the standard Individual Lottery is the action set, which is \(A = \{1,\ldots,\ell\}\), with the limit \(\ell\) chosen by the designer.
The choice of limit must balance several risks. Imposing a limit of $\ell$ reduces the risk from inflated demand, but harms groups with more than $\ell$ members. The latter effect reduces fairness and may also reduce efficiency if there are many large groups. In fact, we show in Proposition~\ref{prop:il-ell-bad} that the Individual Lottery with limit \(\ell\) is still arbitrarily unfair and arbitrarily inefficient in the worst case.
\begin{proposition}\label{prop:il-ell-bad}
For any \(\alpha, \kappa, \epsilon \in (0, 1)\) and \(\ell \in \N\), there exists an instance in \(I(\kappa,\alpha)\) such that, regardless of the action profile selected, the outcome of the Individual Lottery with limit \(\ell\) is neither \(\epsilon\)-efficient nor \(\epsilon\)-fair.
\end{proposition}
The proof of Proposition~\ref{prop:il-ell-bad} is in Appendix~\ref{appendix:IL}. In the example considered in the proof, problems stem from the fact that most groups have more than $\ell$ members. However, even if group sizes are upper bounded by \(\ell\), the Individual Lottery with limit \(\ell\) still performs poorly in the worst case. In particular, Propositions~\ref{prop:il(ell)-eff} and~\ref{prop:il(ell)-fairness} show that, if no group has more than \(\ell\) members, every dominant strategy equilibrium outcome of the Individual Lottery with limit \(\ell\) is \(1/\ell\)-efficient and \(1/\ell\)-fair. Moreover, these guarantees are tight in the worst case. We give a complete analysis of this variant of the Individual Lottery in Appendix~\ref{appendix:IL}.
\subsection{\NameProposedMechanism}
The example presented in Theorem~\ref{thm:il-is-bad} is an extreme case that is unlikely to arise often in practice. However, it illustrates the major issues of the Individual Lottery. In this section, we show that minor modifications to the Individual Lottery yield strong performance guarantees even in these extreme cases.
We study the {\NameProposedMechanism} ($IW$), whose only departure from the Individual Lottery is the order in which agents are placed. Instead of using a uniform random order, the {\NameProposedMechanism} uses a random order biased against agents with large requests. Theorem~\ref{thm:spl-performance} shows that the {\NameProposedMechanism} is approximately fair and approximately efficient, and similar to the Group Lottery when there are many more agents than tickets.
Formally, each agent selects an action in $\{1,\ldots,k\}$. For each \(\order \in \o_{\sN}\), we let the random order over agents $\rOrder$ be such that
\begin{equation}\label{eq:size_proportional_lottery}
\bP(\rOrder=\order \vert \a) = \prod_{t=1}^{n}\frac{1/a_{\order_t}}{\sum \limits_{i \in \sN \backslash \order_{[t-1]} }1/a_{i}}.
\end{equation}
There are several ways to generate \(\rOrder\). This order can be thought of as the result of sequentially sampling agents without replacement, with probability inversely proportional to the number of tickets that they request. One property that motivates the study of the {\NameProposedMechanism} is that when agents declare their group size, every group that has not been drawn is equally likely to be drawn next.
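As an illustration, this sequential sampling description can be implemented directly; the sketch below is our own and assumes that requests are stored in a list indexed by agent.
\begin{verbatim}
import random

def weighted_order(requests, rng=None):
    """Sample an order: at each step a remaining agent i is drawn
    with probability proportional to 1 / requests[i]."""
    rng = rng or random.Random()
    remaining = list(range(len(requests)))
    order = []
    while remaining:
        weights = [1.0 / requests[i] for i in remaining]
        pick = rng.choices(remaining, weights=weights, k=1)[0]
        order.append(pick)
        remaining.remove(pick)
    return order
\end{verbatim}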
Let $\rOrder \in \o_{\sN}$ be distributed according to \eqref{eq:size_proportional_lottery}. For any \(x' \in \X\), the allocation function of the {\NameProposedMechanism} is
\[\SPL_{x'}(\a) = \sum_{\order \in \o_\sN}\ind{x' = \xil(\a,\order)}\bP(\rOrder = \order),\]
with $\xil$ defined as in~\eqref{eq:xil}.
We define the group request strategy as in Definition~\ref{def:IL-GR}: each agent $i$ requests $|G_i|$ tickets.
\subsubsection{Incentives.}
In this section, we will see that under the {\NameProposedMechanism}, there are instances where no strategy is dominant for every group. However, we will argue that if demand significantly exceeds supply, then it is reasonable to assume that groups will select the group request strategy.
We start by showing in Proposition~\ref{prop:SPL-incentives} that for groups of size three or less, the group request is the only dominant strategy.
\begin{proposition}\label{prop:SPL-incentives}
In the {\NameProposedMechanism}, if $G\in\G$ is such that $|G| \leq 3$, then the group request is the only dominant strategy for $G$.
\end{proposition}
The following example shows that for groups of more than three agents, deviating from the group request is potentially profitable.
\begin{example}\label{ex:SPL-incentives}
Consider an instance with \(n\) agents and \(n-1\) tickets. We divide the agents into one group of size $4$ and $n-4$ groups of a single agent. If \(n \ge 17\), then the optimal strategy for the large group will depend on the action profile selected by the small groups. In particular, if the small groups are following the group request strategy, then members of the larger group benefit from each requesting $2$ tickets instead of $4$. The analysis of this example is located in Appendix~\ref{app:spl}.
\end{example}
In the example above, when \(n \le 16\), it is actually optimal for the large group to play the group request. Thus, this deviation is only profitable when \(n \ge 17\), in which case the group success probability is greater than \(92\%\). In general, when agents request fewer tickets than their group size, their chance of being selected increases, but multiple agents from the group must then be drawn in order to achieve success. This should be profitable only if the chance of each agent being drawn is high.
We formalize this intuition in Conjecture~\ref{conj:spl-incentives}. Roughly speaking, we conjecture that in scenarios where the success probability of a group is below $1 - 1/e \approx 63\%$, the group request strategy maximizes its conditional expected utility. Proposition~\ref{prop:spl-gr-optimality-restricted} lends additional support to the conjecture: it establishes that the conjecture holds when restricted to a broad set of strategies. In order to present our conjecture, we first need to introduce some definitions.
In what follows, we fix an arbitrary group \(G\). Given any action profile \(\a\), we generate an order over agents $\rOrder$ using the following algorithm:
\begin{algorithm}[H]
\caption{\label{alg:exponentials}}
\begin{enumerate}
\item Draw $\{X_i\}_{i\in \sN}$ as i.i.d. exponentials, with $\bP(X_i > t) = e^{-t}$ for $t \geq 0$.
\item Place agents in increasing order of $a_i X_i$: that is, output $\Sigma$ such that
\[a_{\Sigma_1}X_{\Sigma_1} < \cdots < a_{\Sigma_n}X_{\Sigma_n}.\]
\end{enumerate}
\end{algorithm}
From Proposition~\ref{prop:spl-random-order} in Appendix~\ref{app:spl}, it follows that \(\rOrder\) is distributed according to~\eqref{eq:size_proportional_lottery} conditional on $\a$. We will refer to \(a_i X_i\) as the {\em score} obtained by agent \(i\). Note that a lower score is better, as it increases the chances of being awarded.
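In code, Algorithm~\ref{alg:exponentials} is nearly a one-liner; the sketch below (our own, using NumPy) draws the scores \(a_i X_i\) and sorts agents by them, which by Proposition~\ref{prop:spl-random-order} produces the same distribution over orders as the sequential sampling described earlier.
\begin{verbatim}
import numpy as np

def order_via_scores(requests, rng=None):
    """Return agents sorted by increasing score a_i * X_i, X_i ~ Exp(1)."""
    rng = rng or np.random.default_rng()
    requests = np.asarray(requests, dtype=float)
    scores = requests * rng.exponential(size=len(requests))
    return np.argsort(scores)        # lower score = earlier in the order
\end{verbatim}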
The usual way to study the incentives of group \(G\) is to find a strategy that maximizes its utility given the actions of other agents. Here, we will assume that \(G\) has additional information: the scores of other agents. Thus, we study the problem faced by \(G\) of maximizing its success probability given the actions and scores of everyone else. This problem seems high-dimensional and complex; however, we will show that all the information relevant to \(G\) can be captured by a sufficient statistic \(T\). Define,
\begin{equation}\label{eq:T-def}
T = \inf\left\{t \in \R: \sum_{j \not\in G} a_j \ind{a_j X_j < t} > k - |G|\right\}.
\end{equation}
We show in Lemma~\ref{lemma:threshold}, located in Appendix~\ref{app:spl}, that \(G\) gets a utility of \(1\) if and only if the sum of the requests of its members whose score is lower than \(T\) is at least \(|G|\). Therefore, we can formulate the problem faced by \(G\) as follows:
\begin{equation}\label{eq:spl-GT-problem}
\begin{array}{rlll}
\max &\bP(\sum_{i\in G} a_i\ind{a_i X_i < T} \ge |G|)\\
\subjectto &a_i \in \{1,\ldots,k\}\quad \forall i \in G.
\end{array}
\end{equation}
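To make the statistic \(T\) and the objective in~\eqref{eq:spl-GT-problem} concrete, the following sketch (our own construction, not taken from the paper's appendix) computes \(T\) from the requests and scores of agents outside \(G\), and then evaluates the success condition of Lemma~\ref{lemma:threshold} for a candidate strategy of \(G\).
\begin{verbatim}
import numpy as np

def threshold_T(outside_requests, outside_scores, k, group_size):
    """Smallest t at which outsiders with score below t request
    more than k - |G| tickets (cf. the definition of T)."""
    total = 0
    for i in np.argsort(outside_scores):
        total += outside_requests[i]
        if total > k - group_size:
            return outside_scores[i]
    return np.inf                     # supply never exhausted by outsiders

def group_succeeds(group_requests, group_scores, T, group_size):
    """Lemma (threshold): G wins iff its members with score below T
    request at least |G| tickets in total."""
    below = [a for a, s in zip(group_requests, group_scores) if s < T]
    return sum(below) >= group_size
\end{verbatim}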
Notice that under the group request strategy, the objective value in~\eqref{eq:spl-GT-problem} evaluates to
\begin{equation}\label{eq:spl-GT-problem-gr}
1 - e^{-T}.
\end{equation}
This follows because \(G\) will get a payoff of \(1\) if and only if at least one of its members has a score lower than \(T\), that is, \(\min_{i\in G}\{a_i X_i\} < T\). Moreover, since under the group request \(\sum_{i\in G} 1/a_i = 1\), the well-known properties of the minimum of exponential random variables give $\min_{i\in G}\{a_i X_i\} \sim Exp(1)$.
With these definitions in place, we can present our conjecture and the proposition supporting it.
\begin{conjecture}\label{conj:spl-incentives}If \(T \le 1\), then no other strategy yields a higher objective value in~\eqref{eq:spl-GT-problem} than the group request.
\end{conjecture}
From equation~\eqref{eq:spl-GT-problem-gr}, we see that the utility yielded by the group request is an increasing function of \(T\). Therefore, we can think of \(T\) as an indicator of how competitive the market is. Thus, the interpretation of Conjecture~\ref{conj:spl-incentives} is that if the market is moderately competitive ($G$ has success probability below $1 - 1/e \approx 63\%$), then the group request is optimal. While we have not proved the conjecture, we do have a proof that it holds for a broad subset of strategies. For \(r \in \{0,\ldots, |G|-1\}\), we define \(\B_r \subseteq A_G\) to be the set of strategies for which the sum of any \(r\) requests is less than \(|G|\), while the sum of any \(r+1\) requests is greater than or equal to \(|G|\). Let
\begin{equation}
\B = \bigcup_{r=0}^{|G|-1} \B_r.
\end{equation}
\begin{proposition}\label{prop:spl-gr-optimality-restricted}
If \(T \le 1\), then no other strategy in \(\B\) yields a higher objective value in~\eqref{eq:spl-GT-problem} than the group request.
\end{proposition}
Note that \(\B\) is rich enough that, for any group of size greater than \(3\), it contains a strategy that is better than the group request when \(T\) is large enough.
\proof[Proof sketch of Proposition~\ref{prop:spl-gr-optimality-restricted}]
In this proof, we will say that an agent is awarded if and only if it has a score lower than \(T\).
From~\eqref{eq:spl-GT-problem-gr}, it suffices to show that under any strategy in \(\B_r\), the objective value in~\eqref{eq:spl-GT-problem} is at most \(1-e^{-T}\).
We start by studying a relaxation of the problem defined in~\eqref{eq:spl-GT-problem}. In this relaxation, the number of times an agent \(i\) is awarded follows a Poisson distribution with rate \(T/a_i\), and the total number of times \(G\) is awarded follows a Poisson distribution with rate \(\sum_{i\in G}T/a_i\). Note that if the set of feasible strategies is \(\B_r\), then by Lemma~\ref{lemma:threshold} in Appendix~\ref{app:spl} it follows that \(G\) needs to be awarded at least \(r+1\) times. Finally, using a Poisson tail bound, we show that this event happens with probability at most \(1 -e^{-T}\). This bound can only be applied if the expected number of times \(G\) is awarded is at most \(r+1\), which follows because \(T\le 1\) and from Lemma~\ref{lemma:spl-reciprocals-ub} in Appendix~\ref{app:spl}, which establishes that for any \(\a_G' \in \B_r,~\sum_{i\in G}1/a'_i \le r+1\).
\endproof
\subsubsection{Performance.}
We now study the performance of the {\NameProposedMechanism}, under the assumption that groups are selecting the group request strategy. We think that this assumption is reasonable for two reasons: (i) for groups of size at most three, the group request is the only dominant strategy, and (ii) for larger groups, we conjecture that in scenarios where its success probability is moderate (at most \(63\%\)), the group request strategy is optimal. The main result of this section is Theorem~\ref{thm:spl-performance}, which establishes that the {\NameProposedMechanism} is approximately efficient and fair.
To state these guarantees, we define for any \(x>0\),
\begin{equation}
g(x) = \frac{1-e^{-x}}{x}.
\end{equation}
\begin{theorem}\label{thm:spl-performance}
Fix $\kappa,\alpha \in (0,1)$. For every instance in $I(\kappa,\alpha)$, the group request outcome of the {\NameProposedMechanism} is $(1 - \kappa)g(\alpha)$-efficient and \((1-2\kappa)g(\alpha)\)-fair.
\end{theorem}
These guarantees resemble the ones offered for the Group Lottery. Recall that Theorem~\ref{thm:gl-is-good} establishes that the Group Lottery is \((1-\kappa)\)-efficient and \((1-2\kappa)\)-fair. It is not perfectly efficient, as the last tickets might be wasted. Similarly, it is not perfectly fair, as once there are only a few tickets left, small groups still have a chance of being allocated but large groups do not. These issues persist under the {\NameProposedMechanism}. In addition, the {\NameProposedMechanism} has the further concern that multiple members of a group may be selected. This explains the multiplicative factor of \(g(\alpha)\) in the theorem statement. Because \(g(\alpha) \ge 1 - \alpha/2\), when \(\alpha\) is close to \(0\) the guarantees for the Group Lottery and the {\NameProposedMechanism} coincide. Although it is intuitive that a small supply/demand ratio implies a small chance of having groups with multiple winners, the previous section shows that this may not be the case under the standard Individual Lottery.
\subsubsection{Proof Sketch of Theorem~\ref{thm:spl-performance}.}
In order to prove the efficiency and fairness guarantees, we first introduce a new mechanism: the {\em Group Lottery with Replacement (GR).} This is a variant of the Group Lottery in which valid groups can be processed more than once. Formally, the set of actions, the set of valid groups \(V\), the group request strategy and the allocation rule \(\xgl\) are defined exactly as in the Group Lottery. However, the allocation function \(\GLR\) is different: this mechanism processes valid groups according to a sequence of \(k\) elements \(\sq \in \s_V\), where each \(\sq_t\) is independently and uniformly sampled with replacement from \(V\).
Hence, for any \(x' \in \X\), the allocation function of the Group Lottery with Replacement is
\[\GLR_{x'}(\a) = \sum_{\order \in \o_V}\ind{x' = \xgl(\a,\order)}\bP(\sq = \order),\]
with $\xgl$ defined as in~\eqref{eq:xgl}.
Having defined this new mechanism, we now present a lemma that will be key in proving both guarantees. This lemma establishes a dominance relation between the {\NameProposedMechanism}, the Group Lottery and the Group Lottery with Replacement, when the group request action profile is being selected. As we will see, every agent prefers the Group Lottery to the {\NameProposedMechanism}, and the {\NameProposedMechanism} to the Group Lottery with Replacement.
\begin{lemma}\label{prop:mech-dominance}
For any instance and any agent \(i\in \sN\), if \(\a\) denotes the corresponding group request action profile for each mechanism below, then
\begin{equation}\label{eq:mech-dominance}
u_i(\GLR(\a)) \le u_i(\SPL(\a)) \le u_i(\GL(\a)).
\end{equation}
\end{lemma}
The key idea to prove Lemma~\ref{prop:mech-dominance} is that the order or sequence used in each of these mechanisms can be generated from a common random sequence of agents \(\sq'\). Roughly speaking, each order or sequence is generated from \(\sq'\) as follows (a small sketch of this coupling appears after the list):
\begin{itemize}
\item Group Lottery with Replacement: replace every agent by its group.
\item \NameProposedMechanism: remove every agent that has already appeared in a previous position.
\item Group Lottery: replace every agent by its group, and then remove every group that has already appeared in a previous position.
\end{itemize}
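The sketch below (our own simplified rendering, which takes the random agent sequence \(\sq'\) as given and assumes a list \texttt{group\_of} mapping each agent to a group identifier) makes these three derivations explicit.
\begin{verbatim}
def derived_sequences(agent_sequence, group_of):
    """Derive the three processing sequences from one agent sequence."""
    # Group Lottery with Replacement: replace each agent by its group.
    glr = [group_of[a] for a in agent_sequence]

    # Weighted Individual Lottery: drop agents already seen.
    seen_a, wil = set(), []
    for a in agent_sequence:
        if a not in seen_a:
            seen_a.add(a)
            wil.append(a)

    # Group Lottery: replace agents by groups, then drop repeated groups.
    seen_g, gl = set(), []
    for g in glr:
        if g not in seen_g:
            seen_g.add(g)
            gl.append(g)
    return glr, wil, gl
\end{verbatim}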
Note that because in each mechanism the group request strategy is being selected, whenever a group or agent is being processed, it is given a number of tickets equal to the minimum of its group size and the number of remaining tickets.
This implies that, under the Group Lottery with Replacement, a group could be given more tickets than needed because one of its members appeared more than once in the first positions of \(\sq'\). This situation is avoided in the {\NameProposedMechanism}, making all agents weakly better off. Similarly, under the {\NameProposedMechanism}, a group could be given more tickets than needed because several of its members appeared in the first positions of \(\sq'\). This situation is avoided in the Group Lottery, again making all agents weakly better off. The full proof is located in Appendix~\ref{app:mech-dominance}.
We now turn to the efficiency guarantee. From Lemma~\ref{prop:mech-dominance}, it follows that for any instance the utilization under the {\NameProposedMechanism} is at least the utilization under the Group Lottery with Replacement. Therefore, it suffices to show that for any instance in \(I(\kappa,\alpha)\), the Group Lottery with Replacement is \((1 - \kappa)g(\alpha)\)-efficient. To this end, we present in Lemma~\ref{lemma:glr-utility-lb-body} a lower bound on the utility of any agent under the Group Lottery with Replacement.
\begin{lemma}\label{lemma:glr-utility-lb-body}
For any instance in \(I(\kappa,\alpha)\) and any agent \(i\), if we let \(\a\) be the group request under the Group Lottery with Replacement, then \vspace{-.1 in}
\begin{equation}\label{eq:glr-utility-lb}
u_i(\GLR(\a)) \ge \frac{k}{n}(1 - \kappa) g(\alpha).
\end{equation}
\end{lemma}
The proof of Lemma~\ref{lemma:glr-utility-lb-body} is in Appendix~\ref{app:glr}. This lemma immediately gives us the desired lower bound on the utilization of the Group Lottery with Replacement.
We now show the fairness guarantee. From Lemma~\ref{prop:mech-dominance}, we have that for any instance and any pair of agents \(i,j\),
\begin{equation}\label{eq:GL-utility-ratios}
\frac{u_i(\SPL(\a))}{u_j(\SPL(\a))} \ge \frac{u_i(\GLR(\a))}{u_j(\GL(\a))}.
\end{equation}
Hence, it suffices to show that the ratio on the right hand side is at least \((1-2\kappa)g(\alpha)\). In Lemma~\ref{lemma:gl-utility-bounds} we proved an upper bound on the utility of an agent under the Group Lottery, while in Lemma~\ref{lemma:glr-utility-lb-body} we established a lower bound on the utility of an agent under the Group Lottery with Replacement. Combining equation~\eqref{eq:GL-utility-ratios}, Lemma~\ref{lemma:gl-utility-bounds} and Lemma~\ref{lemma:glr-utility-lb-body} yields our fairness factor of \((1-2\kappa)g(\alpha)\).
\section{Discussion}\label{sec:discussion}
We consider a setting where groups of people wish to share an experience that is being allocated by lottery. We study the efficiency and fairness of simple mechanisms in two scenarios: one where agents identify the members of their group, and one where they simply request a number of tickets. In the former case, the Group Lottery is \((1-\kappa)\)-efficient and \((1-2\kappa)\)-fair. However, its natural and widespread counterpart, the Individual Lottery, suffers from deficiencies that can cause it to be arbitrarily inefficient and unfair. As an alternative, we propose the Weighted Individual Lottery. This mechanism uses the same user interface as the Individual Lottery, and Theorem~\ref{thm:spl-performance} establishes that it is \((1-\kappa)g(\alpha)\)-efficient and \((1-2\kappa)g(\alpha)\)-fair.
Although our bounds are based on worst-case scenarios, they can be combined with publicly available data to provide meaningful guarantees. In 2016 the Hamilton Lottery received approximately $n = 10,000$ applications daily for $k = 21$ tickets, with a max group size of $s = 2$.\footnote{Source: \url{https://www.bustle.com/articles/165707-the-odds-of-winning-the-hamilton-lottery-are-too-depressing-for-words}.} Hence, in this case \(\kappa \le .05\) and \(g(\alpha) \ge .99\). Therefore, by Theorem~\ref{thm:gl-is-good}, the Group Lottery outcome is at least $95\%$ efficient and $90\%$ fair. Furthermore, Theorem~\ref{thm:spl-performance} gives nearly identical guarantees for the {\NameProposedMechanism}. Meanwhile, the 2020 Big Sur Marathon Groups and Couples lottery received $1296$ applications for $k = 702$ tickets, with a maximum group size of $15$.\footnote{Source: \url{https://www.bigsurmarathon.org/random-drawing-results-for-the-2020-big-sur-marathon/}.} This yields \(\kappa = 14/702 \approx .02\), so Theorem \ref{thm:gl-is-good} implies $98\%$ efficiency and $96\%$ fairness for the Group Lottery. Determining $\alpha$ is trickier, as we do not observe the total number of agents $n$. Using the very conservative lower bound \(n \ge 1296\) (based on the assumption that every member of every group submitted a separate application) yields \(g(\alpha) \ge .77\). Thus, Theorem \ref{thm:spl-performance} implies $76\%$ efficiency and $74\%$ fairness for the {\NameProposedMechanism}. Its true performance would likely be much better, but accurate estimates would rely on understanding how many groups are currently submitting multiple applications.
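As a check on this arithmetic, the snippet below (our own; it takes \(\kappa = (s-1)/k\) with \(s\) the maximum group size, which matches the value \(14/702\) used above) reproduces the quoted guarantees from the publicly available numbers.
\begin{verbatim}
import math

def guarantees(n, k, max_group_size):
    kappa = (max_group_size - 1) / k     # assumption consistent with 14/702
    alpha = k / n
    g = (1 - math.exp(-alpha)) / alpha
    return {"group_lottery": (1 - kappa, 1 - 2 * kappa),
            "weighted_individual": ((1 - kappa) * g, (1 - 2 * kappa) * g)}

print(guarantees(10_000, 21, 2))    # Hamilton: about (0.95, 0.90) for both
print(guarantees(1_296, 702, 15))   # Big Sur: (0.98, 0.96) and (0.76, 0.74)
\end{verbatim}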
Our analysis makes the strong assumption of dichotomous preferences. In practice, the world is more complicated: groups may benefit from extra tickets that can be sold or given to friends, and groups that don't receive enough tickets for everyone may choose to split up and have a subset attend the event. Despite these considerations, we believe that dichotomous preferences capture the first-order considerations in several markets while maintaining tractability. The mechanisms we have proposed, while imperfect, are practical and offer improvements over the Individual Lottery (which is often the status quo). Furthermore, we conjecture that the efficiency and fairness of the Group Lottery would continue to hold in a model where the utility of members of group $G$ is a more general function of the number of tickets received by $G$, so long as this function is convex and increasing on $\{1, 2, \ldots, |G|\}$, and non-increasing thereafter.
One exciting direction for future work is to adapt these mechanisms to settings with heterogeneous goods. For example, while the daily lottery for Half Dome allocates homogeneous permits using the Individual Lottery, the pre-season lottery allocates 225 permits for each day. Before the hiking season, each applicant enters a number of permits requested (up to a maximum of six), as well as a ranked list of dates that would be feasible. Applicants are then placed in a uniformly random order, and sequentially allocated their most preferred feasible date. This is the natural extension of the Individual Lottery to a setting with heterogeneous goods, and it has many of the same limitations discussed in this paper. It would be interesting to study the performance of (generalizations of) the Group Lottery and {\NameProposedMechanism} in this setting.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
Playing in front of the home crowd is beneficial to team success across sports. Home fans use supportive chants to motivate their team and scream and shout at the opponent. Referees also appear to be influenced by the audience, as their decisions have repeatedly been shown to favour the home team \citep{dohmen2016referee}. These effects help explain why the chances of winning a matchup are typically higher in home than in away games. Such home advantage is also a well-known fact to bookmakers, who price in all available information and hence offer, on average, lower odds for bets on the home than on the away team.
With the outbreak of the COVID-19 pandemic in spring 2020, all professional and amateur sports had to be cancelled because public gatherings were prohibited. About two months later, the German Bundesliga was among the first to resume playing, still in the same stadiums as before, but with fans absent. With games being played in empty stadiums, the home advantage eroded immediately \citep{fischer2020does, reade2020echoes}. While it took some rounds for this change to become pronounced, it persisted until the season finished at the end of June 2020. With the German Bundesliga being a heavyweight in the betting market, the question emerges whether bookmakers adapted their odds to the now non-existent home advantage.
This paper analyses betting market (in-)efficiencies in the German Bundesliga for matches behind closed doors after the COVID-19 break in spring 2020. Our analysis focuses on the 83 matches\footnote{In addition to rounds 26-34, two postponed matches were played behind closed doors.} at the end of the season 2019/2020 in the highest German football division and compares them to previous seasons. Our analyses show that bookmakers did not adjust their pricing during the period of matches without attendance. This opened highly profitable strategies to bettors, with returns on investment of around fifteen percent.
The paper is organised as follows: Section 2 provides an overview of the literature on the home advantage in sports as well as the home bias in sports betting. It is followed by descriptive statistics of our data in Section 3. Section 4 analyses the efficiency of betting markets before and after the COVID-19 break and strategies generating positive returns for bettors. Finally, Section 5 concludes with a discussion of our results.
\section{The home advantage and the home bias}
Our analysis considers the bookmakers' evaluation of the home advantage in football. We hence first present the state of research on the home advantage in football, to be followed by the incorporation of the home advantage into betting odds.
\subsection{The advantage of playing at home}
It is well documented in the literature that teams enjoy an advantage when playing at home. In football, teams have higher chances of winning when playing at home than when facing the exact same opponent in an away game.\footnote{For meta-analyses not exclusive to football see, e.g., \citet{courneya1992home, jamieson2010home}.} While the magnitude of such home advantage is more pronounced in football than in other sports \citep{jamieson2010home} and decreases to some extent over time \citep{palacios2004structural}, the exact mechanism of why teams have better winning records at home is still not well understood.
Research agrees that it is not a single source that generates the home advantage; rather, it arises from a combination of different factors, which appear impossible to disentangle empirically \citep{pollard2014components}. Since the away team has to travel, their journey as well as their unfamiliarity with the venue/stadium could reduce winning chances \citep{schwartz1977home, courneya1992home}. Concerning tactical formation, home teams tend towards a more offensive style of play compared to away teams, which could again benefit their chances of winning \citep{schwartz1977home, carmichael2005home}. Also, referees have been shown to favour home teams, known as the \textit{home bias}. Such (unintentional) favouritism shows in longer extra time when the home team is trailing compared to when the home team is winning \citep{sutter2004favoritism, garicano2005favoritism}.\footnote{See \citet{dohmen2016referee} for a review of the literature.}
The recent COVID-19 pandemic opened the opportunity to determine whether the home advantage in football still exists when no spectators are present. The results are unanimous and find the home advantage to disappear once games are played behind closed doors. \citet{reade2020echoes} analyse various European football competitions including the German Bundesliga, while \citet{Dilger2020} and \citet{fischer2020does} focus on Germany only. The studies also agree that the referee bias towards the home teams disappeared, potentially due to less social pressure of home fans on the referee (see also \citealp{endrich2020home}).
\subsection{(Miss-)Pricing of the home advantage by bookmakers}
Following the concept of efficient markets, asset prices (equivalent to betting odds) should contain all available information \citep{fama1970}. Since bookmakers face uncertainty of events and have to adjust to new information \citep{deutscher2018betting}, they keep a risk premium, referred to as the margin. Such margins have decreased in recent years due to increasing competition between bookmakers (\citealp{vstrumbelj2010online}). Efficient betting markets imply that market participants (bettors) cannot use simple strategies to beat the market and make profits, given the margin kept by the bookmaker. Such simple strategies include systematically betting on, e.g., home teams, favourites, popular or recently promoted teams.\footnote{\citet{winkelmann2020} present an overview of studies on biases in European betting markets.}
As the location of the game has been shown to benefit the home team, the betting odds for home teams are on average lower than for away teams. In the betting context, the so-called \textit{home bias} refers to increased payouts (equivalent to higher odds) for the home team compared to fair odds. Such biased odds can arise from the bookmakers' inability to predict game outcomes, their knowledge that fans are somewhat unable to assess teams' strength \citep{na2019not}, or their goal to have a so-called balanced book, which is given if wagers on both outcomes (home and away win) level out such that the bookmakers secure a profit independent of the game outcome \citep{hodges2013fixed}. Since bettors tend to bet on underdogs rather than favourites, bookmakers offer favourable odds on home wins \citep{franke2020market}. If such a bias towards home wins is large enough to exceed the margin kept by the bookmaker, a profitable strategy would be to systematically bet on the home team. Supporting empirical evidence comes from, e.g., \citet{forrest2008sentiment} and \citet{vlastakis2009efficient}.
The COVID-19 pandemic offers a unique natural experiment: the direct effect of the disappearance of the home advantage in the German Bundesliga is strengthened by the bookmakers' imperfect information, due to the small number of comparable games behind closed doors prior to the pandemic. This leads to the question of whether and how fast bookmakers responded by adjusting their betting odds. From the bettors' perspective, this opens the question of whether mispricing by bookmakers created profitable strategies.
\section{Data}
Due to the spread of the COVID-19 pandemic in early 2020, the German Bundesliga prohibited the attendance of spectators after round 25 on March 9th 2020. By then, 223 matches had been played\footnote{25 rounds with nine games each; two games had been postponed due to other circumstances, one of which was played on March 12th behind closed doors.} and 82 had been rescheduled between the 16th of May and the 27th of June 2020. The Bundesliga was the first of the top European football leagues to restart matches, but without spectators. Match data retrieved from \url{www.football-data.co.uk} cover results and pre-game betting odds for all games of the German Bundesliga from season 2014/15 until 2019/20. We focus on data covering the current season while using the five preceding seasons as a reference to cover potential within-season dynamics. We split the 2019/20 season into two periods, considering matches with and without fans. For comparison, previous seasons are separated after round 25, corresponding to the COVID-19 break in season 2019/20.
\subsection{Descriptive match statistics}
Table \ref{tab:results} provides an overview of match results. Prior to the most recent season, home teams won nearly half of their matches, with the remainder split roughly equally between away wins (29.42\%) and draws (25.60\%). At the end of a season, the proportion of home wins increases by more than three percentage points, while draws and away wins become less likely.
The 2019/20 German Bundesliga season stands out due to the large proportion of away wins, which is nearly 20\% higher than in previous seasons even in rounds with spectators. The proportions of home wins and draws are smaller by 2 and about 3.5 percentage points, respectively. While the proportion of draws increases slightly after the COVID-19 break and equals the level observed at the end of previous seasons, we find a strong increase in away wins, which are more than 25\% higher than in the previous rounds of the season and about 65\% higher than at the end of previous seasons. At the same time, the proportion of home wins decreases and is about 25\% lower than before the COVID-19 break and about 35\% lower than in previous seasons (see Table \ref{tab:results}). These findings confirm previous results which revealed an eroded home advantage for the German Bundesliga after the COVID-19 break (see e.g.\ \citealp{Dilger2020}).
\begin{table}[h]
\centering
\scalebox{0.97}{
\begin{tabular}{c|c|ccc}
& Matches & Home wins & Draws & Away wins \\
\hline
Seasons 2014/15-2018/19 Round 1-25 & 1125 & 44.98\% & 25.60\% & 29.42\% \\
Seasons 2014/15-2018/19 Round 26-34 & 405 & 49.63\% & 23.46\% & 26.91\%\\
Season 2019/20 with spectators & 223 & 43.05\% & 21.97\% & 34.98\% \\
Season 2019/20 without spectators & 83 & 32.53\% & 22.89\% & 44.58\%\\
\end{tabular}}
\caption{Proportion of match outcomes before and after the COVID-19 break as well as for previous seasons.}
\label{tab:results}
\end{table}
Similar results occur regarding the average number of goals scored by the home and away team. In previous years, the total number of goals increased at the end of the season from 2.82 goals on average to 3.06 goals. However, in 2019/20 considerably more goals were already scored in games with spectators, which mainly originates from the increased number of away goals. While away teams scored even more goals after the COVID-19 break, the number of home goals decreased by nearly 20\%. For the first time, the number of away goals surpasses the number of home goals (see Table \ref{tab:goals}).
\begin{table}[h]
\centering
\scalebox{0.97}{
\begin{tabular}{c|cc|c}
& Home goals & Away goals & Total goals \\
\hline
Seasons 2014/15-2018/19 Round 1-25 & 1.58 & 1.24 & 2.82 \\
Seasons 2014/15-2018/19 Round 26-34 & 1.80 & 1.26 & 3.06 \\
Season 2019/20 with spectators & 1.74 & 1.51 & 3.25 \\
Season 2019/20 without spectators & 1.43 & 1.66 & 3.10 \\
\end{tabular}}
\caption{Average home and away goals before and after the COVID-19 break as well as for previous seasons.}
\label{tab:goals}
\end{table}
\subsection{Implied winning probabilities and bookmaker's margins}
To determine the precision of bookmakers' forecasts, we first turn our attention to the betting odds. As betting odds contain a margin, they have to be adjusted as follows to obtain the implied winning probability $\hat{\pi}_i$ given by the bookmaker (see e.g.\ \citealp{deutscher2013sabotage}):
$$
\hat{\pi}_{i}=\frac{1/O_i}{1/O_h+1/O_d+1/O_a}, \,\,\,\, i = h,d,a
$$
where $O_i$ represents the average odds over all bookmakers for home wins ($i=h$), away wins ($i=a$), and draws ($i=d$).\footnote{The data set covers odds of between 30 and 56 bookmakers for each match. The pairwise correlation between betting odds offered by different bookmakers is at least 0.969 for home wins and 0.945 for away wins.} The difference in the implied probabilities, $\textit{ImpProbDiff}=\hat{\pi}_h-\hat{\pi}_a$, indicates whether the bookmaker considers the home team to be the favourite ($\textit{ImpProbDiff}>0$), whereas $\textit{ImpProbDiff}<0$ coincides with a favoured away team. If the (absolute) difference in the implied winning probabilities between two teams in a specific match is large, one team can clearly be declared the favourite. In contrast, a small difference indicates that the bookmaker assigns nearly equal abilities to both teams.
Bookmakers use their margin to account for possible mispricing and to remain profitable. Margins for each match are calculated as $\sum\limits_{i\in\{h,d,a\}}O_{m,i}^{-1}-1$ for matches $m=1,\ldots,M$. Table \ref{tab:margins} compares margins for season 2019/20 to previous seasons, indicating that margins are lower in the most recent season. This confirms previous results and is in line with decreasing margins over time caused by higher market competition, as argued by \citet{forrest2005odds} and \citet{vstrumbelj2010online}.
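For clarity, both quantities can be computed directly from a triple of average decimal odds; the short Python sketch below is ours, with purely illustrative odds in the example.
\begin{verbatim}
def implied_probabilities(o_home, o_draw, o_away):
    """Return (pi_home, pi_draw, pi_away) and the bookmaker's margin."""
    inv = [1 / o_home, 1 / o_draw, 1 / o_away]
    margin = sum(inv) - 1
    probs = [v / sum(inv) for v in inv]
    return probs, margin

probs, margin = implied_probabilities(1.80, 3.60, 4.50)  # illustrative odds
imp_prob_diff = probs[0] - probs[2]                      # pi_h - pi_a
\end{verbatim}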
\begin{table}[h]
\centering
\begin{tabular}{c|cc|c}
& Margins \\
\hline
Seasons 2014/15-2018/19 Round 1-25 & 5.09\% \\
Seasons 2014/15-2018/19 Round 26-34 & 5.11\% \\
Season 2019/20 with spectators & 4.83\% \\
Season 2019/20 without spectators & 4.79\% \\
\end{tabular}
\caption{Bookmaker's margins before and after the COVID-19 break as well as for previous seasons.}
\label{tab:margins}
\end{table}
Furthermore, margins typically increase as the assessment of teams becomes more difficult, e.g.\ if the bookmaker expects two teams to have nearly equal abilities. Table \ref{tab:regmar} displays results of a regression model in which we explain the margin for a specific match by the absolute difference between the implied winning probabilities for a home and away win as well as the season. We find significantly lower margins if this difference increases, i.e.\ if one team can clearly be declared the favourite.
\begin{table}[!htbp] \centering
\caption{Margins explained by absolute difference in implied winning probabilities and season.}
\label{tab:regmar}
\scalebox{0.8}{
\begin{tabular}{@{\extracolsep{5pt}}lc}
\\[-1.8ex]\hline
\hline \\[-1.8ex]
& \multicolumn{1}{c}{\textit{Dependent variable:}} \\
\cline{2-2}
\\[-1.8ex] & Margin \\
\hline \\[-1.8ex]
\textit{Absolute difference in implied probabilities} & $-$0.002$^{***}$ \\
& (0.0003) \\
& \\
\textit{Season} & $-$0.001$^{***}$ \\
& (0.00004) \\
& \\
\textit{Constant} & 0.056$^{***}$ \\
& (0.0002) \\
& \\
\hline \\[-1.8ex]
Observations & 1,836 \\
R$^{2}$ & 0.468 \\
\hline
\hline \\[-1.8ex]
\textit{Note:} & \multicolumn{1}{r}{$^{*}$p$<$0.1; $^{**}$p$<$0.05; $^{***}$p$<$0.01} \\
\end{tabular}}
\end{table}
However, according to Table \ref{tab:margins}, there is only a slight increase in the margins at the end of previous seasons, while margins slightly decrease after the COVID-19 break. Increased margins at the end of previous seasons can be attributed to the difficulty of predicting outcomes of matches between teams that can neither qualify for international competition nor be relegated. The findings for season 2019/20, in contrast, are somewhat surprising: for matches behind closed doors, bookmakers had very little previous experience, so the assessment of teams should have been particularly difficult. One would expect bookmakers to account for this by increasing their margins.
\section{Analysing market inefficiencies}
Bookmakers had close to no experience with games behind closed doors, since only a small number of such games had been played in the past. Nevertheless, as indicated by our findings from the previous section, they did not account for this increased uncertainty by increasing their margins. Our descriptive analysis also reveals that the home advantage disappeared, reflected in more away wins after the COVID-19 break at the end of season 2019/20 in the German Bundesliga. This raises the question of whether bookmakers incorporated increased winning probabilities for away teams into their odds. Otherwise, bettors would have been able to gain positive returns by betting consistently on the away team. In analogy to the home bias described above, this phenomenon can be denoted as an away bias. In the following, we provide regression analyses to statistically test for increased chances of winning a bet when betting on away teams (especially in games played behind closed doors).
\subsection{Model}
Our analysis covers bets on home and away teams, since margins for draws vary only slightly in football \citep{pope1989information}. Due to the difference in winning probabilities when playing at home or away, we include the binary variable \textit{Away}, indicating whether we bet on the away team (\textit{Away=1}) or the home team (\textit{Away=0}). In addition, the variable \textit{COVID-19} equals one if the match was played behind closed doors. To control for possible dynamic changes during the course of the season, the variable $\textit{Betting after round 25}$ equals one if we bet after this round. We further include an interaction term between \textit{Away} and \textit{COVID-19}. This term allows for differences in the away bias between matches with and without spectators. In the second model, we additionally test for adjustments by the bookmaker during the period of matches without spectators, thus including the variable \textit{Round after round 25}, taking value 1 for round 26 up to value 9 for round 34, as well as an interaction term with the \textit{COVID-19} variable.
Efficient markets would imply that neither betting on away teams nor the \textit{COVID-19} variable significantly affects the chance of winning a bet. To test whether the market considered here is efficient, we run a logistic regression model to detect whether any variable beyond the implied probability has explanatory power for the dependent binary variable, which indicates whether a bet was \textit{Won}. The linear predictor $\eta_i$, with $\text{logit}(\Pr(Won_i = 1)) = \eta_i$, is given as:
\begin{equation*}
\begin{split}
\eta_i &=\beta_0+\beta_1 \text{\textit{Implied Probability}}_i+\beta_2 \text{\textit{Away}}_i + \beta_3 \text{\textit{Betting after round 25}}_i\\
&+ \beta_4 \text{\textit{COVID-19}}_i + \beta_5 \text{\textit{Away}}_i \cdot \text{\textit{COVID-19}}_i + \beta_6 \text{\textit{Round after round 25}}_i\\
&+ \beta_7 \text{\textit{Round after round 25}}_i \cdot \text{\textit{COVID-19}}_i.
\end{split}
\end{equation*}
We use maximum likelihood to fit the models to the data using the function \texttt{glm()} in R \citep{R}. This approach follows various further studies such as \citet{forrest2008sentiment}, \citet{franck2011sentimental}, and \citet{feddersen2017sentiment}.
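The models were estimated with \texttt{glm()} in R; for readers who prefer Python, an equivalent specification of model 1 could be sketched with \texttt{statsmodels} as below. The file and column names are our own assumptions about how the bet-level data (one row per bet on a home or away team) might be arranged.
\begin{verbatim}
import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns: won (0/1), implied_prob, away (0/1),
# after_round_25 (0/1), covid (0/1).
bets = pd.read_csv("bets.csv")

model1 = smf.logit(
    "won ~ implied_prob + away + after_round_25 + covid + away:covid",
    data=bets,
).fit()
print(model1.summary())
\end{verbatim}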
\subsection{Results}
Table \ref{tab:regmod} displays the estimated coefficients and standard errors of our two regression models. The \textit{Implied Probability} calculated from bookmakers' odds has strong explanatory power for the actual outcome of the game, which is intuitively plausible. The negative and significant effect of \textit{Away} reveals a home bias for matches with spectators in the German Bundesliga, which is in line with the existing literature. However, there is no significant change in the chances of winning a bet in the last quarter of the season, as indicated by the insignificant effect of the dummy variable \textit{Betting after round 25}. Model 1 allows for a comparison between a bet on a match without spectators and a bet on a match at the end of a previous season with equal implied probability by the bookmaker. Here we find a considerable decrease in the chances of winning a bet on the home team, while the chances of winning a bet on the away team are considerably increased.
Model 2 additionally tests for an adjustment of betting odds by the bookmaker during the period of matches behind closed doors. We do not find significant changes in the chances of winning a bet for subsequent rounds, thus concluding that the misassessment is not limited to the first rounds after the restart of the German Bundesliga but persisted for the remainder of the season.
\begin{table}[!htbp] \centering
\caption{Estimation results of our model fitted to the whole data set.}
\label{tab:regmod}
\scalebox{0.4}{
\begin{tabular}{@{\extracolsep{5pt}}lcc}
\\[-1.8ex]\hline
\hline \\[-1.8ex]
& \multicolumn{2}{c}{\textit{Dependent variable:}} \\
\cline{2-3}
\\[-1.8ex] & \multicolumn{2}{c}{Won} \\
\\[-1.8ex] & Model 1 & Model 2\\
\hline \\[-1.8ex]
\textit{Implied probability} & 4.530$^{***}$ & 4.530$^{***}$ \\
& (0.231) & (0.231) \\
& & \\
\textit{Away} & $-$0.162$^{**}$ & $-$0.162$^{**}$ \\
& (0.080) & (0.080) \\
& & \\
\textit{Betting after round 25} & 0.032 & 0.076 \\
& (0.089) & (0.174) \\
& & \\
\textit{COVID-19} & $-$0.606$^{**}$ & $-$0.916$^{**}$ \\
& (0.268) & (0.445) \\
& & \\
\textit{Away} $\cdot$ \textit{COVID-19} & 1.136$^{***}$ & 1.143$^{***}$ \\
& (0.358) & (0.359) \\
& & \\
\textit{Round after round 25} & & $-$0.009 \\
& & (0.030) \\
& & \\
\textit{Round after round 25} $\cdot$ \textit{COVID-19} & & 0.063 \\
& & (0.071) \\
& & \\
\textit{Constant} & $-$2.207$^{***}$ & $-$2.207$^{***}$ \\
& (0.118) & (0.118) \\
& & \\
\hline \\[-1.8ex]
Observations & 3,672 & 3,672 \\
Akaike Inf. Crit. & 4,322.426 & 4,325.643 \\
\hline
\hline \\[-1.8ex]
\textit{Note:} & \multicolumn{2}{r}{$^{*}$p$<$0.1; $^{**}$p$<$0.05; $^{***}$p$<$0.01} \\ \end{tabular}}
\end{table}
We use the results of model 1 to compare implied probabilities given by the bookmaker to winning probabilities as expected under our model (see Figure \ref{fig:ImpProb}). The left panel represents bets on home teams, while bets on the away team are illustrated in the right panel. Bets at the end of a previous season (with spectators) are depicted in red, while bets on a game behind closed doors are represented in blue. An efficient market would correspond to the dashed diagonal line. We find expected winning probabilities to deviate only slightly from the efficient market line for matches with spectators at the end of previous seasons. As indicated by our model, the figure confirms significantly higher expected probabilities of winning a bet when betting on away teams after the COVID-19 break, while these chances are significantly lower for bets on home teams (see Figure \ref{fig:ImpProb}). This underlines that bookmakers did not incorporate the disappearing home advantage for matches without spectators into their odds, potentially opening the possibility of positive returns for bettors who systematically bet on away wins.
\begin{figure}[!htb]
\centering
\includegraphics[scale=.7]{figures/plot_impprob.pdf}
\caption{Comparison between implied probabilities by the bookmaker and expected winning probability under the model for home teams (left panel) and away teams (right panel) for matches at the end of previous seasons (red) and matches behind closed doors (blue).}
\label{fig:ImpProb}
\end{figure}
\subsection{Returns}
Since the bookmakers take a margin, a significant impact of variables other than the implied probabilities does not necessarily translate into profitable strategies. Still, our model indicates significantly higher chances of winning a bet when betting on away teams after the COVID-19 break. We use this result to calculate returns on investment (ROIs) for simple strategies, i.e.\ betting consistently on home or away teams. For previous seasons, we find on average higher returns when betting on the home team (see Table \ref{tab:returns}). This confirms the significant negative effect of the \textit{Away} variable in our regression model. However, until round 25 the average return is still negative due to the bookmakers' margin. The increased number of home wins at the end of previous seasons (see Table \ref{tab:results}) leads to positive returns of about 6.24\% when betting on the home team in this period (Table \ref{tab:returns}).
Even though our regression model indicates a home bias over all matches between 2014 and round 25 of season 2019/20, the descriptive analysis shows a considerably higher proportion of away wins already at the beginning of season 2019/20 (see Table \ref{tab:results}). Bookmakers were not able to incorporate this shift into their odds, leading to positive returns of about 5.53\% when betting on away teams. As the home advantage disappeared after the COVID-19 break, leading to even more away wins, considerable positive returns of nearly 15\% can be generated with this strategy in matches behind closed doors. Meanwhile, betting on home teams generates an average return of -33.84\% (Table \ref{tab:returns}).
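The returns reported in Table~\ref{tab:returns} correspond to staking one unit per match; a minimal Python sketch of the always-bet-away strategy (our own code, with an assumed input format) is:
\begin{verbatim}
def roi_always_away(matches):
    """ROI of one unit staked on the away team in every match.
    matches: iterable of (away_odds, outcome), outcome in {'H','D','A'}."""
    staked = profit = 0.0
    for away_odds, outcome in matches:
        staked += 1.0
        profit += (away_odds - 1.0) if outcome == "A" else -1.0
    return profit / staked

# Example: roi_always_away([(4.50, "A"), (2.10, "H"), (3.20, "A")])
\end{verbatim}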
\begin{table}[h]
\centering
\begin{tabular}{c|cc}
& Betting on home win & Betting on away win \\
\hline
Seasons 2014/15-2018/19 Round 1-25 & -1.37\% & -11.69\% \\
Seasons 2014/15-2018/19 Round 26-34 & 6.24\% & -15.52\% \\
Season 2019/20 with spectators & -6.64\% & 5.53\% \\
Season 2019/20 without spectators & -33.84\% & 14.71\% \\
\end{tabular}
\caption{Returns on investment when betting on the home and away team before and after the COVID-19 break as well as for previous seasons.}
\label{tab:returns}
\end{table}
These results raise the question of whether the considerable returns are mainly driven by a few bets with high odds, i.e.\ wins by away teams that were clear underdogs, or by several wins of teams with low odds. As stated above, we denote the difference in percentage points between the implied probabilities given by the bookmaker for a home and away win by \textit{ImpProbDiff}. While positive values of this variable indicate higher implied probabilities for the home team, \textit{ImpProbDiff} takes negative values if the bookmaker denotes the away team to be the favourite. Table \ref{tab:odds} covers different ranges of this variable together with the number of matches, home wins, draws, and away wins in each group.
\begin{table}[h] \centering
\begin{tabular}{c|cccc}
ImpProbDiff & Matches & Home wins & Draws & Away wins \\
\hline \\[-1.8ex]
Heavy home favourite & & & & \\
$[\hspace{0.345cm} 0.90;\hspace{0.345cm} 0.75)$ & $3$ & $2$ & $1$ & $0$ \\
$[\hspace{0.345cm} 0.75;\hspace{0.345cm} 0.60)$ & $6$ & $3$ & $2$ & $1$ \\
$[\hspace{0.345cm} 0.60;\hspace{0.345cm} 0.45)$ & $7$ & $6$ & $1$ & $0$ \\
$[\hspace{0.345cm} 0.45;\hspace{0.345cm} 0.30)$ & $9$ & $5$ & $2$ & $2$ \\
$[\hspace{0.345cm} 0.30;\hspace{0.345cm} 0.15)$ & $12$ & $3$ & $4$ & $5$ \\
$[\hspace{0.345cm} 0.15;\hspace{0.345cm} 0.00)$ & $13$ & $4$ & $3$ & $6$ \\
Balanced match & & & & \\
$[\hspace{0.345cm} 0.00;-0.15)$ & $10$ & $1$ & $4$ & $5$ \\
$[-0.15;-0.30)$ & $4$ & $1$ & $0$ & $3$ \\
$[-0.30;-0.45)$ & $9$ & $2$ & $1$ & $6$ \\
$[-0.45;-0.60)$ & $6$ & $0$ & $1$ & $5$ \\
$[-0.60;-0.75]$ & $4$ & $0$ & $0$ & $4$ \\
Heavy away favourite & & & & \\
\hline \\[-1.8ex]
\end{tabular}
\caption{Match outcomes depending on difference in implied winning probabilities given by the bookmaker.}
\label{tab:odds}
\end{table}
In about 60\% of the matches behind closed doors in the German Bundesliga, the home team was denoted the favourite by the bookmaker, i.e.\ the implied winning probability for the home team was higher than for the away team (see Table \ref{tab:odds}).\footnote{On average, the home team had implied winning probabilities exceeding the value of the away team by 7.73 percentage points.} Considering matches with a large difference of more than 45 percentage points, we find that only 11 of 16 such heavy home favourites won their game, while nine out of ten away teams with considerably larger implied winning probabilities won theirs. Focusing on the 39 relatively close matches, i.e.\ those where the difference in the implied probabilities did not exceed 0.3, we find 19 away wins in contrast to only 9 home wins. Even when the bookmaker denoted the home team a slight favourite, we find 11 away wins compared to only seven home wins (see Table \ref{tab:odds}). Accordingly, bookmakers undervalued the strength of away teams in matches behind closed doors, especially in balanced matches.
\section{Discussion}
We analyse the home advantage and its evaluation by the bookmakers in Bundesliga games that were played behind closed doors. While the home advantage increased at the end of previous seasons, it disappeared in the matches played after the COVID-19 break, resulting in considerably more away wins at the end of season 2019/20. Furthermore, the number of home goals decreased while the number of away goals increased, confirming the disappearance of the home advantage. Our analysis shows the bookmakers' struggle to price this change. Their odds imply that the home advantage remained intact and opened opportunities for bettors to generate substantial profits of around 15\% when betting on away teams in games that were played behind closed doors. A more fine-grained analysis shows that especially in close competitions away teams won considerably more games than expected by the bookmakers, confirming the bookmakers' struggle to incorporate the impact of games without spectators on the home advantage. Returns of the presented magnitude are very rare in betting markets and can have a significant impact on the bookmakers' business as well as on other stakeholders. Since many Bundesliga teams are sponsored by bookmakers, these teams also rely on efficient betting odds and a profitable business for the bookmaker. Given the high turnover in football betting, the overall impact of inefficient markets is considerable.
While we clearly show that bookmakers were not able to incorporate the disappearance of the home advantage for matches behind closed doors in the German Bundesliga into their betting odds, it remains unclear why the home advantage had already weakened in the 2019/20 season prior to the COVID-19 break. Furthermore, the magnitude of the home advantage and, therefore, also of its decrease may not be the same for all teams, but may depend on the team's popularity or the stadium capacity. Thus, it would be interesting to extend the analysis to further top European football leagues such as the English Premier League, the Spanish La Liga or the Italian Serie A after their restart with matches behind closed doors some weeks after the German Bundesliga. Finally, it is expected that spectators will be able to return to the stadiums in autumn, albeit under strong restrictions. This opens the possibility for a comparison between matches behind closed doors, matches with a reduced stadium capacity, and matches with the full support of spectators.
\newpage
\bibliographystyle{apalike}
\section{Introduction}
Photography is commonplace during vacations. People enjoy capturing the best views at picturesque locations to mark their visit but the act of taking a photograph may sometimes take away from experiencing the moment. With the proliferation of wearable cameras, this paradigm is shifting. A person can now wear an egocentric camera that is continuously recording their experience and enjoy their vacation without having to worry about missing out on capturing the best picturesque scenes at their current location. However, this paradigm results in ``too much data" which is tedious and time-consuming to manually review. There is a clear need for summarization and generation of highlights for egocentric vacation videos.
The new generation of egocentric wearable cameras (e.g., GoPro, Google Glass) is compact, pervasive, and easy to use. These cameras contain additional sensors such as GPS, gyros, accelerometers and magnetometers. Because of this, it is possible to obtain large amounts of long-running egocentric video with the associated contextual meta-data in real-life situations. We seek to extract a series of aesthetic highlights from these egocentric videos in order to provide a brief visual summary of a user's experience.
\begin{figure}
\begin{centering}
\includegraphics[width=1\columnwidth]{pipeline}
\par\end{centering}
\caption{\label{fig:block-diagram}Our method generates picturesque summaries and vacation highlights from a large dataset of egocentric vacation videos.}
\vspace{-1.0em}
\end{figure}
Research in the area of egocentric video summarization has mainly focused on life-logging \cite{gemmell2004acm, doherty2008civr} and activities of daily living \cite{fathi2011iccv, pirsiavash2012cvpr, ut2012cvpr}. Egocentric vacation videos are fundamentally different from egocentric daily-living videos. In such unstructured ``in-the-wild" environments, no assumptions can be made about the scene or the objects and activities in the scene. Current state-of-the-art egocentric summarization techniques leverage cues such as people in the scene, position of the hands, objects that are being manipulated and the frequency of object occurrences \cite{fathi2011iccv, ut2012cvpr, fathi2011cvpr, ren2010cvpr, ren2009cvpr, pirsiavash2012cvpr}. These cues that aid summarization in such specific scenarios are not directly applicable to vacation videos where one is roaming around in the world. Popular tourist destinations may be crowded with many unknown people in the environment and contain ``in-the-wild" objects for which building pre-trained object detectors is non-trivial. This, coupled with the wide range of vacation destinations and outdoor and indoor activities, makes joint modeling of activities, actions, and objects an extremely challenging task.
A common theme that exists since the invention of photography is the desire to capture and store picturesque and aesthetically pleasing images and videos. With this observation, we propose to transform the problem of egocentric vacation summarization to a problem of finding the most picturesque scenes within a video volume followed by the generation of summary clips and highlight photo albums. An overview of our system is shown in Figure \ref{fig:block-diagram}. Given a large set of egocentric videos, we show that meta-data such as GPS (when available) can be used in an initial filtering step to remove parts of the videos that are shot at ``unimportant" locations. Inspired by research on exploring high-level semantic photography features in images \cite{luo2008eccv, gooch2001artistic, liu2010cgf,fang2014mm,luo2011iccv,yan2013cvpr}, we develop novel algorithms to analyze the composition, symmetry and color vibrancy within shot boundaries. We also present a technique that leverages egocentric context to extract images with a horizontal horizon by accounting for the head tilt of the user.
To evaluate our approach, we built a comprehensive dataset that contains 26.5 hours of 1080p HD egocentric video at 30 fps recorded from a head-mounted Contour cam over a 14 day period while driving more than 6,500 kilometers from the east coast to the west coast of the United States. Egocentric videos were captured at geographically diverse tourist locations such as beaches, swamps, canyons, caverns, national parks and at several popular tourist attractions.
\noindent\textbf{Contributions:} This paper makes several contributions aimed at automated summarization of video: (1) We introduce a novel concept of extracting highlight images using photograph quality measures to summarize egocentric vacation videos, which are inherently unstructured. We use a series of methods to find aesthetic pictures, from a large number of video frames, and use location and other meta data to support selection of highlight images. (2) We present a novel approach that accounts for the head tilt of the user and picks the best frame among a set of candidate frames. (3) We present a comprehensive dataset that includes 26.5 hours of video captured over 14 days. (4) We perform a large-scale user-study with 200 evaluators; and (5) We show that our method generalizes to non-egocentric datasets by evaluating on two state-of-the-art photo collections with 500 user-generated and 1000 expert photographs respectively.
\section{Related Work}
We review previous work in video summarization, egocentric analysis and image quality analysis, as these works provide the motivations and foundations for our work.
\noindent\textbf{Video Summarization:} Research in video summarization identifies key frames in video shots using optical flow to summarize a single complex shot \cite{wolf1996icassp}. Other techniques used low level image analysis and parsing to segment and abstract a video source \cite{zhang1997pr} and used a ``well-distributed" hierarchy of key frame sequences for summarization \cite{liu2002eccv}. These methods are aimed at the summarization of specific videos from a stable viewpoint and are not directly applicable to long-term egocentric video.
In recent years, summarization efforts have started focussing on leveraging objects and activities within the scene. Features such as ``informative poses" \cite{caspi2006vc} and ``object of interest", based on labels provided by the user for a small number of frames \cite{liu2010pami}, have helped in activity visualization, video summarization, and generating video synopsis from web-cam videos \cite{pritch2007iccv}.
Other summarization techniques include visualizing short clips in a single image using a schematic storyboard format \cite{goldman2006tog} and visualizing tour videos on a map-based storyboard that allows users to navigate through the video \cite{pongnumkul2008uist}. Non-chronological synopsis has also been explored, where several actions that originally occurred at different times are simultaneously shown together \cite{rav2006cvpr} and all the essential activities of the original video are showcased together \cite{pritch2008pami}. While practical, these methods do not scale to the problem we are addressing of extended videos spanning days of activities.
\noindent\textbf{Egocentric Video Analysis:} Research on egocentric video analysis has mostly focused on activity recognition and activities of daily living. Activities and objects have been thoroughly leveraged to develop egocentric systems that can understand daily-living activities. Activities, actions and objects are jointly modeled and object-hand interactions are assessed \cite{fathi2011iccv, pirsiavash2012cvpr} and people and objects are discovered by developing region cues such as nearness to hands, gaze and frequency of occurrences \cite{ut2012cvpr}. Other approaches include learning object models from egocentric videos of household objects \cite{fathi2011cvpr}, and identifying objects being manipulated by hands \cite{ren2010cvpr, ren2009cvpr}. The use of objects has also been extended to develop a story-driven summarization approach. Sub-events are detected in the video and linked based on the relationships between objects and how objects contribute to the progression of the events \cite{ut2013cvpr}.
Contrary to these approaches, summarization of egocentric vacation videos simply cannot rely on objects, object-hand interactions, or a fixed category of activities. Vacation videos are vastly different with respect to each other, with no fixed set of activities or objects that can be commonly found across all such videos. Furthermore, in contrast to previous approaches, a vacation summary or highlight must include images and video clips where the hand is not visible and the focus is on the picturesque environment.
Other approaches include detecting and recognizing social interactions using faces and attention \cite{fathi2012cvpr}, activity classification from egocentric and multi-modal data \cite{spriggs2009cvpr}, detecting novelties when a sequence cannot be registered to previously stored sequences captured while doing the same activity \cite{aghazadeh2011cvpr}, discovering egocentric action categories from sports videos for video indexing and retrieval \cite{kitani2011cvpr}, and visualizing summaries as hyperlapse videos \cite{kopf2014tog}.
Another popular, and perhaps more relevant, area of research is ``life logging.'' Egocentric cameras such as SenseCam \cite{gemmell2004acm} allow a user to capture continuous time-series images over long periods of time. Keyframe selection based on image quality metrics such as contrast, sharpness, and noise \cite{doherty2008civr} allows for quick summarization in such time-lapse imagery. In our scenario, we have a much larger dataset spanning several days and, since we are dealing with vacation videos, we go a step further than image metrics and look at higher-level artistic features such as composition, symmetry and color vibrancy.
\noindent\textbf{Image Quality Analysis:} An interesting area of research in image quality analysis is trying to learn and predict how memorable an image is. Approaches include training a predictor on global image features to predict how memorable an image will be \cite{isola2011cvpr} and feature selection to determine attributes that characterize the memorability of an image \cite{isola2011nips}. The aforementioned research shows that images containing faces are the most memorable. However, focusing on faces in egocentric vacation videos causes an unique problem. Since an egocentric camera is always recording, we end up with a huge number of face detections in most of the frames in crowded tourist attractions like Disneyland and Seaworld. To include faces in our vacation summaries, we will have to go beyond face detection and do face recognition and social network analysis on the user to recognize only the faces that the user actually cares about.
The other approach for vacation highlights is to look at the image aesthetics. These include high-level semantic features based on photography techniques \cite{luo2008eccv}, finding good composition for graphics image of a 3D object \cite{gooch2001artistic} and cropping and retargeting based on an evaluation of the composition of the image like the rule-of-thirds, diagonal dominance and visual balance \cite{liu2010cgf}. We took inspiration from such approaches and developed novel algorithms to detect composition, symmetry and color vibrancy for egocentric videos.
\section{Methodology}
Figure \ref{fig:block-diagram} gives an overview of our summarization approach. Let us look at each component in detail.
\subsection{Leveraging GPS Data}
Our pipeline is initiated by leveraging an understanding of the locations the user has traveled to throughout their vacation. The GPS data in our dataset is recorded every 0.5 seconds where it is available, for a total of 111,170 points. In order to obtain locations of interest from the data, we aggregate the GPS data by assessing the distance of a new point $p_{n}$ relative to the original point $p_{1}$ with which the node was created, using the haversine formula, which computes the distance between two GPS locations. When the distance is greater than a constant distance $d_{\mathit{max}}$ (defined as 10 km for our dataset) scaled by the speed $s_{p_{n}}$ at which the person was traveling at point $p_{n}$, we create a new node using the new point as the starting location. Lastly, we define a constant $d_{\mathit{min}}$ as the minimum distance that the new GPS point must be from the starting location in order to break off into a new node, which prevents creating multiple nodes at a single sightseeing location. In summary, a new node is created when $haversine(p_{1}, p_{n}) > s_{p_{n}} * d_{\mathit{max}} + d_{\mathit{min}}$. This formulation aggregates locations in which the user was traveling at a low speed (walking or standing) into one node and those in which the user was traveling at a high speed (driving) into equidistant nodes on the route of travel. The aggregation yields approximately 1,200 GPS nodes in our dataset.
In order to further filter these GPS nodes, we perform a search of businesses / monuments in the vicinity (through the use of Yelp's API) in order to assess the importance of each node using the wisdom of the crowd. The score for each GPS node, $N_{\mathit{score}}$, is given by $N_{\mathit{score}} = \frac{\sum_{l=1}^{L} R_{l} * r_{l}}{L}$, where $L$ is the number of places returned by the Yelp API in the vicinity of the GPS node $N$, $R_{l}$ is the number of reviews written for each location, and $r_{l}$ is the average rating of each location. This score can then be used as a threshold to disregard nodes with negligible scores and obtain a subset of nodes that represent ``important" points of interest in the dataset.
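A minimal Python sketch of this aggregation and scoring step is given below. The haversine implementation is standard; the value of $d_{\mathit{min}}$, the treatment of the speed term as a unitless scaling factor, and the field names of the local-search response are placeholder assumptions rather than the exact choices of our implementation.
\begin{verbatim}
import math

D_MAX = 10.0   # km, as described in the text
D_MIN = 0.5    # km, placeholder value

def haversine_km(p, q):
    # Great-circle distance in kilometres between two (lat, lon) points.
    lat1, lon1, lat2, lon2 = map(math.radians, (p[0], p[1], q[0], q[1]))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def aggregate_nodes(points):
    # points: chronologically ordered dicts with 'lat', 'lon' and 'speed'.
    # Returns the starting GPS point of every aggregated node.
    nodes, start = [], points[0]
    for p in points[1:]:
        dist = haversine_km((start["lat"], start["lon"]), (p["lat"], p["lon"]))
        if dist > p["speed"] * D_MAX + D_MIN:   # break off into a new node
            nodes.append(start)
            start = p
    nodes.append(start)
    return nodes

def node_score(places):
    # places: dicts with 'review_count' and 'rating' returned by a
    # local-search API (e.g. Yelp) for the vicinity of a node.
    if not places:
        return 0.0
    return sum(pl["review_count"] * pl["rating"] for pl in places) / len(places)
\end{verbatim}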
\subsection{Egocentric Shot Boundary Detection}
Egocentric videos are continuous and pose a challenge in detecting shot boundaries. In an egocentric video, the scene changes gradually as the person moves around in the environment. We introduce a novel GIST-based \cite{GIST} technique that looks at the scene appearance over a window in time. Given $N$ frames $I=<f_{1},f_{2},\ldots,f_{N}>$, each frame $f_{i}$ is assigned an appearance score $\gamma_{i}$ by aggregating the GIST distance scores of all the frames within a window of size $W$ centered at $i$.
\vspace{-1.0em}
\begin{equation}
\gamma_{i}=\frac{\sum_{p=i-\left\lfloor W/2\right\rfloor }^{i+\left\lceil W/2\right\rceil -2}\sum_{q=p+1}^{i+\left\lceil W/2\right\rceil -1}G(f_{p})\cdot G(f_{q})}{[W*(W-1)]/2}
\end{equation}
\vspace{-1.0em}
where $G(f)$ is the normalized GIST descriptor vector for frame $f_{i}$. The score calculation is done over a window to assess the appearances of all the frames with respect to each other within that window. This makes it robust against any outliers within the scene. Since $\gamma_{i}$ is the average of dot-products, its value is between 0 and 1. If consecutive frames belong to the same shot, then their $\gamma$-values will be close to 1. To assign frames to shots, we iterate over $i$ from 1 to $N$ and assign a new shot number to $f_{i}$ whenever $\gamma_{i}$ falls below a threshold $\beta$ (for our experiments, we set $\beta$ = 0.9).
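Assuming that an L2-normalised GIST descriptor has already been computed for every frame with an off-the-shelf implementation, the appearance score and the shot assignment can be sketched in a few lines of Python (boundary frames without a full window are simply assigned a score of 1 here):
\begin{verbatim}
import numpy as np

def appearance_scores(gist, W):
    # gist: (N, D) array of L2-normalised GIST descriptors, one row per frame.
    # Returns gamma[i], the mean pairwise dot product within a window of size W.
    N = gist.shape[0]
    gamma = np.ones(N)
    lo, hi = W // 2, (W + 1) // 2
    for i in range(lo, N - hi + 1):
        window = gist[i - lo:i + hi]        # W consecutive descriptors
        sims = window @ window.T            # pairwise dot products
        iu = np.triu_indices(W, k=1)        # W*(W-1)/2 unordered pairs
        gamma[i] = sims[iu].mean()
    return gamma

def assign_shots(gamma, beta=0.9):
    # Start a new shot whenever the appearance score falls below beta.
    shot_ids, shot = [], 0
    for g in gamma:
        if g < beta:
            shot += 1
        shot_ids.append(shot)
    return shot_ids
\end{verbatim}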
\subsection{Composition}
\label{subsec:composition}
\begin{figure}
\begin{centering}
\includegraphics[width=1\columnwidth]{segmentation_example}
\par\end{centering}
\caption{\label{fig:composition-segmentation} The left frame shows a highlight detected by our approach. The right frame illustrates the rule-of-thirds grid, overlayed on a visualization of the output of the segmentation algorithm for this particular frame.}
\vspace{-1.0em}
\end{figure}
Composition is one of the characteristics considered when assessing the aesthetics of a photograph \cite{obrador2010role}. Guided by this idea we model composition with a metric that represents the traits of what distinguishes a good composition from a bad composition. The formulation is weighted by a mixture of the average color of specific segments in an image and its distance to an ideal \textbf{rule-of-thirds} composition (see Figure \ref{fig:composition-segmentation}). Our overall results rely on this metric to obtain the highlights of a video clip (see Figure \ref{fig:sample-each} for examples).
\noindent\textbf{Video Segmentation:} The initial step in assessing a video frame is to decompose the frame into cohesive superpixels. In order to obtain these superpixels, we use the public implementation of the hierarchical video segmentation algorithm introduced by Grundmann et. al. \cite{grundmann2010efficient}. We scale the composition score by the number of segments that are produced at a high-level hierarchy (80\% for our dataset) with the intuition that a low number of segments at a high-level hierarchy parameterizes the simplicity of a scene. An added benefit of this parameterization is that a high level of segments can be indicative of errors in the segmentation due to the violation of color constancy which is the underlying assumption of optical flow in the hierarchical segmentation algorithm. This implicitly gets rid of blurry frames. By properly weighting the composition score with the number of segments produced at a higher hierarchy level, we are able to distinguish the visual quality of individual frames in the video.
\noindent\textbf{Weighting Metric:} The overall goal for our composition metric is to obtain a representative score for each frame. First we assess the average color of each segment in the LAB colorspace. We categorize the average color into one of 12 color bins based on their distance, which determines their importance as introduced by Obrador et al. \cite{obrador2010role}. A segment with diverse colors is therefore weighted more heavily than a darker, less vibrant segment. Once we obtain a weight for each segment, we determine the best rule-of-thirds point for the entire frame. This is obtained by computing the score for each of the four points, and simply selecting the maximum.
\noindent\textbf{Segmentation-Based Composition Metric:} Given $M$ segments for frame $f_{i}$, our metric can be succinctly summarized as the average of the score of each individual segment. The score of each segment is given by the product of its size $s_{j}$ and the weight of its average color $w(c_{j})$, scaled by the distance $d_{j}$ to the rule-of-thirds point that best fits the current frame. So, for frame $f_{i}$, the composition score $S_{\mathit{comp}}^{i}$ is given by:
\vspace{-0.5em}
\begin{equation}
S_{\mathit{comp}}^{i}=\frac{\sum_{j=1}^{M} \frac{s_{j} * w(c_{j})}{d_{j}}}{M}
\end{equation}
\vspace{-1.0em}
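The metric reduces to a few lines of Python once the segmentation and the colour weights are available. In the sketch below each segment is summarised by its pixel count, its colour weight $w(c_{j})$ and its centroid in normalised image coordinates; the additional scaling by the number of segments in the higher hierarchy level is omitted.
\begin{verbatim}
import math

# Rule-of-thirds points in normalised (x, y) image coordinates.
THIRDS = [(1/3, 1/3), (1/3, 2/3), (2/3, 1/3), (2/3, 2/3)]

def composition_score(segments):
    # segments: dicts with 'size' (pixel count), 'weight' (colour weight
    # w(c_j)) and 'centroid' ((x, y) in normalised coordinates).
    def score_for(point):
        total = 0.0
        for seg in segments:
            d = math.dist(seg["centroid"], point) + 1e-6  # avoid division by zero
            total += seg["size"] * seg["weight"] / d
        return total / len(segments)
    # The best-fitting rule-of-thirds point is used for the whole frame.
    return max(score_for(p) for p in THIRDS)
\end{verbatim}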
\subsection{Symmetry}
Ethologists have shown that preferences for symmetry may appear in response to biological signals, or in situations where there is no obvious signaling context, such as exploratory behavior and human aesthetic response to patterns \cite{enquist1994nature}. Thus, symmetry is the second key factor in our assessment of aesthetics. To detect symmetry in images, we detect local features using SIFT \cite{lowe2004ijcv}, select $k$ descriptors and look for self-similarity matches along both the horizontal and vertical axes. When a set of best matching pairs is found, such that the area covered by the matching points is maximized, we declare that maximal symmetry has been found in the image. For frame $f_{i}$, the percentage of the frame area that the detected symmetry covers is the symmetry score $S_{\mathit{sym}}^{i}$.
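One way to realise this idea with standard tools is to match SIFT descriptors of a frame against those of its mirrored copy; the OpenCV-based Python sketch below follows this interpretation (it checks the vertical axis only and uses the bounding box of the matched keypoints as a simple proxy for the covered area), so it should be read as an approximation of our detector rather than its exact implementation.
\begin{verbatim}
import cv2
import numpy as np

def symmetry_score(frame_bgr, k=200, ratio=0.75):
    # Fraction of the frame area spanned by keypoints whose SIFT descriptors
    # also match descriptors of the horizontally mirrored frame.
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create(nfeatures=k)
    kp1, des1 = sift.detectAndCompute(gray, None)
    kp2, des2 = sift.detectAndCompute(cv2.flip(gray, 1), None)  # mirrored frame
    if des1 is None or des2 is None:
        return 0.0
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < ratio * m[1].distance]
    if len(good) < 4:
        return 0.0
    pts = np.array([kp1[m.queryIdx].pt for m in good])
    area = ((pts[:, 0].max() - pts[:, 0].min())
            * (pts[:, 1].max() - pts[:, 1].min()))
    return float(area) / (gray.shape[0] * gray.shape[1])
\end{verbatim}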
\subsection{Color Vibrancy}
\begin{figure}
\begin{centering}
\includegraphics[width=0.8\columnwidth]{color_vibrancy}
\par\end{centering}
\caption{\label{fig:color_vibrancy}This visualization demonstrates the difference between a dark frame and a vibrant frame in order to illustrate the importance of vibrancy.}
\vspace{-0.5em}
\end{figure}
\begin{figure}
\begin{centering}
\includegraphics[width=1\columnwidth]{head_tilt}
\par\end{centering}
\caption{\label{fig:head-tilt}Image on left shows a frame with low score on head tilt detection whereas the image on the right has a high score.}
\vspace{-1.0em}
\end{figure}
The vibrancy of a frame is helpful in determining whether or not a given shot is picturesque. We propose a simple metric based on the color weights discussed in Section \ref{subsec:composition} to determine vibrancy. This metric is obtained by quantizing the colors of a single frame into twelve discrete bins and scaling them based on the average distance from the center of the bin. This distance represents the density of the color space for each bin which is best appreciated by the visualization in Figure \ref{fig:color_vibrancy}. The vibrancy score for frame $f_{i}$ is given by:
\vspace{-1.5em}
\begin{equation}
S_{\mathit{vib}}^{i}=\sum_{j=1}^{n_{b}} \frac{w(c_{j}) * b_{\mathit{size}}}{b_{\mathit{dist}}}
\end{equation}
\vspace{-1.0em}
where $n_{b}$ is the number of color bins (12 in our case), $w(c_{j})$ is the color weight, $b_{\mathit{size}}$ is the bin size (number of pixels in the bin) and $b_{\mathit{dist}}$ is the average distance of all the pixels to the actual bin color.
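A sketch of this computation is shown below. The twelve reference colours and their weights are random placeholders here, since the actual values follow the colour categories of \cite{obrador2010role}; pixels are assumed to be given in LAB space.
\begin{verbatim}
import numpy as np

# Placeholder reference colours (LAB) and importance weights for the 12 bins.
BIN_COLORS = np.random.RandomState(0).uniform([0, -60, -60], [100, 60, 60], (12, 3))
BIN_WEIGHTS = np.linspace(0.2, 1.0, 12)

def vibrancy_score(lab_pixels):
    # lab_pixels: (P, 3) array of (possibly subsampled) pixels of one frame.
    dists = np.linalg.norm(lab_pixels[:, None, :] - BIN_COLORS[None, :, :], axis=2)
    bins = dists.argmin(axis=1)            # nearest reference colour per pixel
    score = 0.0
    for j in range(len(BIN_COLORS)):
        members = dists[bins == j, j]
        if members.size == 0:
            continue
        b_size = members.size              # number of pixels in the bin
        b_dist = members.mean() + 1e-6     # average distance to the bin colour
        score += BIN_WEIGHTS[j] * b_size / b_dist
    return score
\end{verbatim}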
\subsection{Accounting For Head Tilt}
Traditional approaches on detecting aesthetics and photographic quality in images take standard photographs as input. However, when dealing with egocentric video, we also have to account for the fact that there is a lot of head motion involved. Even if we get high scores on composition, symmetry, and vibrancy, there is still a possibility that the head was tilted when that frame was captured. This diminishes the aesthetic appeal of the image.
While the problem of horizon detection has been studied in the context of determining vanishing points, determining image orientations and even using sensor data on phones and wearable devices \cite{wang2012ubicomp}, it still remains a challenging problem. However, in the context of egocentric videos, we approach this by looking at a time window around the frame being considered. The key insight is that while a person may tilt and move his head at any given point in time, the head remains straight \textit{on average}. With this, we propose a novel and simple solution to detect head tilt in egocentric videos. We look at a window of size $W$ around the frame $f_{i}$ and average all the frames in that window. If $f_{i}$ is similar to the average frame, then the head tilt is deemed to be minimal. For comparing $f_{i}$ to the average image, we use the SSIM metric \cite{hore2010image} as the score $S_{\mathit{head}}^{i}$ for frame $f_{i}$. Figure \ref{fig:head-tilt} shows two sample frames with low and high scores.
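Given the decoded frames of a video, the head-tilt score can be computed directly with a structural-similarity implementation such as the one in scikit-image; the sketch below assumes grayscale frames with values in $[0,1]$.
\begin{verbatim}
import numpy as np
from skimage.metrics import structural_similarity

def head_tilt_score(frames_gray, i, W):
    # frames_gray: list of grayscale frames (2-D float arrays in [0, 1]).
    # Compares frame i against the average frame of a window of size W around i.
    lo = max(0, i - W // 2)
    hi = min(len(frames_gray), i + (W + 1) // 2)
    avg = np.mean(np.stack(frames_gray[lo:hi]), axis=0)
    return structural_similarity(frames_gray[i], avg, data_range=1.0)
\end{verbatim}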
\subsection{Scoring and Ranking}
We proposed four different metrics (composition, symmetry, vibrancy, head tilt) for assessing aesthetic qualities in egocentric videos. Composition and symmetry are the foundation of our pipeline, and vibrancy and head tilt are metrics for fine-tuning our result for a picturesque output. The final score for frame $f_{i}$ is given by:
\vspace{-0.5em}
\begin{equation}
S_{\mathit{final}}^{i}=S_{\mathit{vib}}^{i}*(\lambda_{1}*S_{\mathit{comp}}^{i}+\lambda_{2}*S_{\mathit{sym}}^{i})
\end{equation}
\vspace{-1.0em}
Our scoring algorithm assesses all of the frames based on a vibrancy-weighted sum of composition and symmetry (empirically determined as ideal: $\lambda_{1} = 0.8$, $\lambda_{2} = 0.2$). This enables us to obtain the best shots for a particular video. Once we have obtained $S_{\mathit{final}}^{i}$, we look within its shot boundary to find the frame with the best $S_{\mathit{head}}^{i}$ that depicts a well-composed frame.
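The two-stage selection can be summarised by the short sketch below, in which the per-frame scores computed above are combined and, for each highly ranked shot, the frame with the least head tilt is kept; limiting each shot to a single highlight is a simplifying assumption of this sketch.
\begin{verbatim}
def select_highlights(frames, top_k=100, lam1=0.8, lam2=0.2):
    # frames: dicts with 's_vib', 's_comp', 's_sym', 's_head' and 'shot'.
    for f in frames:
        f["s_final"] = f["s_vib"] * (lam1 * f["s_comp"] + lam2 * f["s_sym"])
    ranked = sorted(frames, key=lambda f: f["s_final"], reverse=True)
    highlights, used_shots = [], set()
    for f in ranked:
        if f["shot"] in used_shots:
            continue
        # Within the winning shot, keep the frame with the least head tilt.
        best = max((g for g in frames if g["shot"] == f["shot"]),
                   key=lambda g: g["s_head"])
        highlights.append(best)
        used_shots.add(f["shot"])
        if len(highlights) == top_k:
            break
    return highlights
\end{verbatim}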
\section{Egocentric Vacation Dataset}
To build a comprehensive dataset for our evaluation, we drove from the east coast to the west coast of the United States over a 14 day period with a head-mounted Contour cam and collected egocentric vacation videos along with contextual meta-data such as the GPS, speed and elevation. Figure \ref{fig:dataset-map} shows a heatmap of the locations where data was captured. Hotter regions indicate availability of more data.
The dataset has over 26.5 hours of 1080p HD egocentric video (over 2.8 million frames) at 30 fps. Egocentric videos were captured at geographically diverse locations such as beaches, swamps, canyons, national parks and popular tourist locations such as the NASA Space Center, Grand Canyon, Hoover Dam, Seaworld, Disneyland, and Universal Studios. Figure \ref{fig:dataset-images} shows a few sample frames from the dataset. To the best of our knowledge, this is the most comprehensive egocentric dataset that includes both HD videos at a wide range of locations along with a rich source of contextual meta-data.
\begin{figure}
\begin{centering}
\includegraphics[width=1\columnwidth]{dataset_map}
\par\end{centering}
\caption{\label{fig:dataset-map}A heatmap showing the egocentric data collected while driving from the east coast to the west coast of the United States over a period of 14 days. Hotter regions on the map indicate the availability of larger amounts of video data.}
\vspace{-0.5em}
\end{figure}
\begin{figure}
\begin{centering}
\includegraphics[width=1\columnwidth]{dataset_images}
\par\end{centering}
\caption{\label{fig:dataset-images}Sample frames showing the diversity of our egocentric vacation dataset. The dataset includes over 26.5 hours of HD egocentric video at 30 fps.}
\vspace{-0.5em}
\end{figure}
\begin{figure*}
\begin{centering}
\includegraphics[width=1\textwidth]{good_overall}
\par\end{centering}
\caption{\label{fig:sample-all}10 sample frames that were ranked high in the final output. These are the types of vacation highlights that our system outputs.}
\vspace{-1.0em}
\end{figure*}
\section{Evaluation}
We performed tests on the individual components of our pipeline in order to assess the output of each individual metric. Figure \ref{fig:sample-each} shows three sample images that received high scores in composition alone and three sample images that received high scores in symmetry alone (both computed independent of other metrics). Based on this evaluation, which gave us an insight into the importance of combining frame composition and symmetry, we set $\lambda_1=0.8$ and $\lambda_2=0.2$. Figure \ref{fig:sample-all} depicts 10 sample images that were highly ranked in the final output album of 100 frames. In order to evaluate our results, which are inherently subjective, we conduct A/B testing on two baselines with a notable set of subjects on Amazon Mechanical Turk.
\begin{figure}
\begin{centering}
\includegraphics[width=1\columnwidth]{good_composition_symmetry}
\par\end{centering}
\caption{\label{fig:sample-each}Top row shows 3 samples frames that were ranked high in composition alone and the bottom row shows 3 sample frames that were ranked high in symmetry alone.}
\vspace{-1.0em}
\end{figure}
\subsection{Study 1 - Geographically Uniform Baseline}
\begin{figure}
\begin{centering}
\includegraphics[width=0.75\columnwidth]{results_baseline_1}
\par\end{centering}
\caption{\label{fig:results_baseline_1}This figure demonstrates the agreement percentage for the top k images of our pipeline. For instance, for the top 50\% images, we have an agreement percentage of 86.67\%. This represents the number of users in our study that believed that our images were more picturesque than the baseline.}
\vspace{-1.0em}
\end{figure}
Our first user study consists of 100 images divided over 10 Human Intelligence Tasks (HIT) for 200 users (10 image pairs per HIT). To get good quality, we required participants to have an approval rating of 95\% and a minimum of 1000 approved HITs. The HITs took an average time of 1 minute and 6 seconds to complete and the workers were all rewarded \$0.06 per HIT. Due to the subjective nature of the assessment, we opted to approve and pay all of our workers within the hour.
\noindent\textbf{Baseline:} For this baseline we select $x$ images that are equally distributed across the GPS data of the entire dataset. This was performed by uniformly sampling the GPS data and selecting the corresponding video for that point. After selecting the appropriate video we select the closest frame in time to the GPS data point. We were motivated to explore this baseline due to the nature of the dataset (data was collected from the East to the West coast of the United States). The main benefit of this baseline is that it properly represents the locations throughout the dataset and is not biased by the varying distribution of videos that can be seen in the heatmaps in Figure \ref{fig:dataset-map}.
\noindent\textbf{Experiment Setup:} The experiment had a very straightforward setup. The title of the HIT informed the user of their task, ``Compare two images, click on the best one.". The user was presented with 10 pairs of images for each task. Above each pair of images, the user was presented with detailed instructions, ``Of these two (2) images, click which one you think is better to include in a vacation album.". The left / right images and the order of the image pairs were randomized for every individual HIT in order to remove bias. Upon completion the user was able to submit the HIT and perform the next set of 10 image comparisons. Every image the user saw within a single HIT and the user study was unique and therefore not repeated across HITs. The image pair was always the same, so users were consistently comparing the same pair (albeit with random left / right placement). Turkers were incredibly pleased with the experiment and we received extensive positive feedback on the HITs.
\noindent\textbf{Results:} Figure \ref{fig:results_baseline_1} demonstrates the agreement percentage of the user study from the top five images to the top 100, with a step size of 5. For our top 50 photo album, we obtain an agreement percentage from 200 turkers of 86.67\%. However, for the top 5-30 photos, we obtain an agreement of greater than 90\%. We do note the inverse correlation between album size and agreement which is due to the increasing prevalence of frames taken from inside the vehicle while driving and the general subjectiveness of vacation album assessment.
\subsection{Study 2 - Chronologically Uniform Baseline }
\begin{figure}
\begin{centering}
\includegraphics[width=0.75\columnwidth]{results_baseline_2}
\par\end{centering}
\caption{\label{fig:results_baseline_2}This figure demonstrates the average agreement percentage among 50 master turkers for our top k frames. For instance, for our top 50 frames, we obtain an agreement percentage of 68.68\%.}
\vspace{-0.5em}
\end{figure}
Our second user study consists of 100 images divided over 10 HITs (10 per HIT) for 50 Master users (Turkers with demonstrated accuracy). These HITs took an average time of 57 seconds to complete and the workers were all rewarded \$0.10 per HIT.
\begin{figure}
\begin{centering}
\includegraphics[width=1\columnwidth]{comparison}
\par\end{centering}
\caption{\label{fig:comparison}Three sample highlights from the Egocentric Social Interaction dataset \cite{fathi2012cvpr}}
\vspace{-1.0em}
\end{figure}
\noindent\textbf{Baseline:} In this user study we developed a more challenging baseline in which we do not assume an advantage from the use of GPS data. Our pipeline and the chronologically uniform baseline are both given clips after the GPS data has been used to parse out the ``unimportant'' locations. The baseline uniformly samples in time across the entire subset of videos and selects those frames for comparison. We do note that the distribution of data is heavily weighted towards important regions of the dataset where a lot of data was collected, which adds to the bias of location interest and the challenging nature of this baseline.
\noindent\textbf{Experimental Setup:} The protocol for the chronologically uniform baseline was identical. Due to the difficult baseline, we increased the overall requirements for Mechanical Turk workers and allowed only ``Masters'' to work on our HITs. We decreased our sample size to 50 Masters due to the difficulty of obtaining turkers with the Masters certification. The title and instructions from the previous user study were kept identical, along with the randomization of the two images within a pair and the 10 pairs within a HIT.
\noindent\textbf{Results:} For the top 50 images, we obtain an agreement percentage of 68.67\% (see Figure \ref{fig:results_baseline_2}). We once again note the high level of agreement for the top 5 images: 97.7\% agreed that the images belong in a vacation photo album. These results reinforce our pipeline as a viable approach to determining quality frames from a massive dataset of video. We also note the decrease in accuracy beyond 50 images, where the agreement percentage between turkers drops to 51.42\% for the full set of top 100 images. We believe this is due to the difficulty of the baseline, and the hard constraint on the number of quality frames at interesting locations that are properly aligned and unoccluded.
\begin{figure}
\begin{centering}
\includegraphics[width=0.95\columnwidth]{single_assessment}
\par\end{centering}
\caption{\label{fig:assessment}Left: 95\% agreement between turkers that they would include this picture in their vacation album. Top Right: 62\% agreement. Bottom Right: 8\% agreement.}
\vspace{-1.5em}
\end{figure}
\subsection{Assessing Turker Agreement}
In Figure \ref{fig:assessment}, we can see three output images that had varying levels of agreement between turkers. The left image, with 95\% agreement between turkers, is a true positive, which is a good representation of a vacation image. The top-right and bottom-right images are two sample false positives that were deemed to be highlights by our system. These received 62\% and 8\% agreement respectively. We observe false positives when the user's hand breaches the rule-of-thirds region (like the top-right image), thereby producing erroneously high composition scores. Also, random brightly colored objects (like the red bag in front of the greenish-blue water in the bottom-right image) resulted in high scores on color vibrancy.
\subsection{Generalization on Other Datasets}
\noindent\textbf{Egocentric Approaches:} Comparing our approach to other egocentric approaches is challenging due to the limited applicability of those approaches to our dataset. State-of-the-art techniques on egocentric videos such as \cite{ut2012cvpr,ut2013cvpr} focus on activities of daily living and rely on detecting commonly occurring objects, while approaches such as \cite{fathi2011iccv,fathi2011cvpr} rely on detecting hands and their position relative to the objects within the scene. In contrast, we have in-the-wild vacation videos without any predefined or commonly occurring object classes. Other approaches, such as \cite{gygli2014eccv}, perform superframe segmentation on the entire video corpus, which does not scale to 26.5 hours of egocentric video. Further, \cite{fathi2012cvpr} uses 8 egocentric video feeds to understand social interactions, which is distinct from our dataset and research goal. However, we are keen to note that the Social Interactions dataset collected at Disneyland by \cite{fathi2012cvpr} was the closest dataset we could find that resembles a vacation dataset, due to its location. We ran our pipeline on this dataset, and our results can be seen in Figure \ref{fig:comparison}. The results are representative of vibrant, well-composed, symmetric shots, which reinforces the robustness of our pipeline. We do note that these results are obtained without GPS preprocessing, which was not available or applicable to that dataset.
\begin{figure}
\begin{centering}
\includegraphics[width=1\columnwidth]{improvements_4}
\par\end{centering}
\caption{\label{fig:improvement}Left: Percentage of images with an increase in the final score for both the Human-Crop dataset \cite{fang2014mm} and Expert-Crop dataset \cite{luo2011iccv,yan2013cvpr}. Right: Percentage of images in the Human-Crop dataset with an increase in the final score as a function of the composition and symmetry weights.}
\vspace{-0.5em}
\end{figure}
\begin{figure}
\begin{centering}
\includegraphics[width=1\columnwidth]{improved_symmetry}
\par\end{centering}
\caption{\label{fig:improved_symetry}Two examples of the original images and the images cropped by expert photographers. Note the improvement in the overall symmetry of the image.}
\vspace{-1.5em}
\end{figure}
\noindent\textbf{Photo Collections:} In order to analyze the external validity of our approach on non-egocentric datasets, we tested our methodology on two state-of-the-art photo collection datasets. The first dataset \cite{fang2014mm} consists of 500 user-generated photographs. Each image was manually cropped by 10 Master users on Amazon Mechanical Turk. We label this dataset the ``Human-Crop dataset". The second dataset \cite{luo2011iccv,yan2013cvpr} consists of 1000 photographs taken by amateur photographers. In this case, each image was manually cropped by three expert photographers (graduate students in art whose primary medium is photography). We label this dataset the ``Expert-Crop dataset". Both datasets have aesthetically pleasing photographs spanning a variety of image categories, including architecture, landscapes, animals, humans, plants and man-made objects.
To assess our metrics' effectiveness, we ran our pipeline (with $\lambda_1=0.8$ and $\lambda_2=0.2$) on both the original uncropped images and the cropped images provided by the human labelers. Since the cropped images are supposed to represent an aesthetic improvement, our hypothesis was that we should see an increase in our scoring metrics for the cropped images relative to the original shot. For each image in the dataset, we compare the scores of each of the cropped variants (where the crops are provided by the labelers) to the scores of the original image. The scores for that image are considered an improvement only if we see an increase in a majority of its cropped variants. Figure \ref{fig:improvement} (left) shows the percentage of images that saw an improvement in each of the four scores: composition, vibrancy, symmetry and the overall final score. We can see that the final score was improved for 80.74\% of the images in the Human-Crop dataset and for 63.28\% of the images in the Expert-Crop dataset.
We are keen to highlight that the traditional photography pipeline begins with the preparation and composition of the shot in appropriate lighting and finishes with post-processing the captured light using state-of-the-art software. Hence, the cropping of the photograph is a sliver of the many tasks undertaken by a photographer. This is directly reflected in the fact that we do not see a large increase in the composition and vibrancy scores for the images, as those metrics are largely unaffected by applying a crop window within a shot that has already been taken. The task of cropping the photographs has its most direct effect in making the images more symmetrical. This is reflected in the large increase in our symmetry scores. Two examples of this can be seen in Figure \ref{fig:improved_symetry}. To test this hypothesis further, we ran an experiment on the Human-Crop dataset where we varied the composition weight $\lambda_1$ between 0 and 1 and set the symmetry weight $\lambda_2=1-\lambda_1$. From Figure \ref{fig:improvement} (right), we can see that the percentage of images that saw an increase in the final score increases as $\lambda_1$ (the composition weight) decreases and $\lambda_2$ (the symmetry weight) increases. Also note that we see a larger improvement in our scores for the Human-Crop dataset when compared to the Expert-Crop dataset. This behavior is representative of the fact that the Expert-Crop dataset has professional photographs that are already very well-composed (and cropping provides only minor improvements) when compared to the Human-Crop dataset that has user-generated photographs where there is more scope for improvement with the use of a simple crop.
\section{Conclusion}
In this paper we presented an approach that identifies picturesque highlights from egocentric vacation videos. We introduce a novel pipeline that considers composition, symmetry and color vibrancy as scoring metrics for determining what is picturesque. We reinforce these metrics by accounting for head tilt using a novel technique to bypass the difficulties of horizon detection. We further demonstrate the benefits of meta-data in our pipeline by utilizing GPS data to minimize computation and better understand the places of travel in the vacation dataset. We exhibit promising results from two user studies and the generalizability of our pipeline by running experiments on two other state-of-the-art photo collection datasets.
\newpage
{
\small\bibliographystyle{ieee}
\section{Introduction}\label{sec:intro}
The hot-hand phenomenon generally refers to an athlete who has performed well in the recent past performing better in the present. Having a ``hot goalie'' is seen as crucial to success in the National Hockey League (NHL) playoffs. A goaltender who keeps all pucks out of the net for 16 games (4 series of 4 wins) will win his team the Stanley Cup---obviously. In this paper, we use data from the NHL playoffs to investigate whether goaltenders get hot, in the sense that if a goaltender has had a high recent save probability, then that goaltender will have a high save probability for the next shot that he faces.
NHL fans, coaches, and players appear to believe that goaltenders can get hot. A famous example is Scotty Bowman using backup Mike Vernon as the starting goaltender for the Detroit Red Wings during the 1997 playoffs, despite Chris Osgood having started in goal during most of the regular season, because Bowman believed Vernon was the hot goaltender \parencite{morrison1998takes}. The Red Wings won the Stanley Cup that year for the first time in 42 years.
NHL goaltenders let in roughly one in ten shots. More precisely, during the 2018-19 regular season, 93 goaltenders playing for 31 teams faced a total of 79,540 shots, which resulted in 7,169 goals, for an average save percentage of 91.0\%. Among goaltenders who played at least 20 games, the season-long save percentage varied from a high of 93.7\% (Ben Bishop, Dallas Stars) to a low of 89.7\% (Joonas Korpisalo, Columbus Blue Jackets)---a range from 1.3 percentage points (pps) below to 2.7 pps above the overall average save percentage. In the playoffs of the same year, the overall average save percentage was 91.6\%. Among goaltenders who played in two or more playoff games, the save percentage varied from 93.6\% (Robin Lehner, New York Islanders) to 85.6\% (Andrei Vasilevskiy, Tampa Bay Lightning)---a range from 6 pps below to 2 pps above the overall average save percentage.
It is crucial to determine whether the hot-hand phenomenon is real, for NHL goaltenders, in order to understand whether coaches are justified in making decisions about which of a team's two goaltenders should start a particular game based on perceptions or estimates of whether that goaltender is hot. If the hot hand is real, then appropriate statistical models could potentially be used to predict the likely performance of a team's two goaltenders in an upcoming game, or even in the remainder of a game that is in progress, during which the goaltender currently on the ice has performed poorly.
Our major finding is statistically significant \emph{negative} slope coefficients for the variable of interest measuring the influence of the recent save performance on the probability of saving the next shot on goal; in other words, we have demonstrated that contrary to the hot-hand theory, better past performance usually results in \emph{worse} future performance. This negative impact of recent good performance is robust, according to our analysis, to both varying window sizes and defining the window size based on either time or number of shots.
The remainder of the paper is organized as follows: in Section \ref{sec:LR}, we review related literature; in Section \ref{sec:data}, we describe our data set; in Section \ref{sec:method}, we specify our regression models; and in Section \ref{sec:results}, we present our results. Section \ref{sec:conclusion} concludes.
\section{Literature review}\label{sec:LR}
We summarize five streams of related work addressing the following: (1) whether the hot hand is a real phenomenon or a fallacy, (2) whether statistical methods have sufficient power to detect a hot hand, (3) whether offensive and defensive adjustments reduce the impact of a hot hand, (4) estimation of a hot-hand effect for different positions in a variety of sports, and (5) specification of statistical models to estimate the hot hand.
\emph{(1) Is the hot hand a real phenomenon or a fallacy?} The hot hand was originally studied in the 1980s in the context of basketball shooting percentages \autocite{gilovich1985hot,tversky1989hot,tversky1989cold}. These studies concluded that even though players, coaches, and fans all believed strongly in a hot-hand phenomenon, there was no convincing statistical evidence to support its existence. Instead, \textcite{gilovich1985hot} attributed beliefs in a hot hand to a psychological tendency to see patterns in random data; an explanation that has also been proposed for behavioral anomalies in various non-sports contexts, such as financial markets and gambling \autocite{miller2018surprised}. Contrary to these findings, recent papers by \textcite{miller2018surprised,miller2019bridge} demonstrate that the statistical methods used in the original studies were biased, and when their data is re-analyzed after correcting for the bias, strong evidence for a hot hand emerges.
\emph{(2) Do statistical methods have sufficient power to detect a hot hand?} The \textcite{gilovich1985hot} study analyzed players individually. This approach may lack sufficient statistical power to detect a hot hand, even if it exists \autocite{Wardrop1995,wardrop1999statistical}. Multivariate approaches that pool data for multiple players have been proposed to increase power \autocite{arkes2010revisiting}. We follow this approach, by pooling data for multiple NHL goaltenders over multiple playoffs.
\emph{(3) Do offensive and defensive adjustments reduce the impact of a hot hand?} A hot hand, even if it is real, might not result in measurable improvement in performance if the hot player adapts by making riskier moves or if the opposing team adapts by defending the hot player more intensively. For example, a hot basketball player might attempt riskier shots and the opposing team might guard a player they believe to be hot more closely. The extent to which such adjustments can be made varies by sport, by position, and by the game situation. For example, there is little opportunity for such adjustments for basketball free throws \autocite{gilovich1985hot} and there is less opportunity to shift defensive resources towards a single player in baseball than in basketball \autocite{green2017hot}, because the fielding team defends against a single batting team player at a time. An NHL goaltender must face the shots that are directed at him and thus has limited opportunities to make riskier moves if he feels that he is hot. Furthermore, the opposing team faces a single goaltender and therefore has limited opportunities to shift offensive resources away from other tasks and towards scoring on the goaltender. Therefore, NHL goaltenders provide an ideal setting in which to measure whether the hot-hand phenomenon occurs.
\emph{(4) Estimation of a hot-hand effect for different positions in a variety of sports.} In addition to basketball shooters, the list of sports positions for which hot-hand effects have been investigated includes baseball batters and pitchers \autocite{green2017hot}, soccer penalty shooters \autocite{otting2019regularized}, dart players \autocite{otting2020thehot}, and golfers \autocite{livingston2012hot}.
In ice hockey, a momentum effect has been investigated at the team level \autocite{kniffin2014within}. A hot-hand effect has been investigated for ice hockey shooters \autocite{vesper2015putting}, but not for goaltenders, except for the study by \textcite{morrison1998takes}. The latter study focused on the duration of NHL playoff series, noted a higher-than-expected number of short series, and proposed a goaltender hot-hand effect as a possible explanation. This study did not analyze shot-level data for goaltenders, as we do.
\emph{(5) Specification of statistical models to estimate the hot hand.} Hot-hand researchers have used two main approaches in specifying their statistical models: (1) Analyze success rates, conditional on outcomes of previous attempts \autocite{albright1993statistical,green2017hot} or (2) incorporate a latent variable or ``state'' that quantifies ``hotness'' \autocite{green2017hot,otting2019regularized}. We follow the former approach. With that approach, the history of past performance is typically summarized over a ``window'' that is defined in terms of a fixed number of past attempts---the ``window size.'' It is not clear how to choose the window size. We vary the window size over a range that covers the window sizes used in past work. Furthermore, in addition to shot-based windows, we also use time-based windows---an approach that complicates data preparation and has not been used by other investigators.
We contribute to the hot-hand literature by investigating NHL goaltenders, a position that has not been studied previously, and which provides a setting in which there are limited opportunities for either team to adapt their strategies in reaction to a perception that a goaltender is hot. In terms of methodology, we use multilevel logistic regression, which allows us to pool data across goaltender-seasons to increase statistical power, and we use a wide range of both shot-based and time-based windows to quantify a goaltender's recent save performance.
\section{Data and variables}\label{sec:data}
Our data set consists of information about all shots on goal in the NHL playoffs from 2008 to 2016. The season-level data is from \url{www.hockey-reference.com} \autocite{HockeyReference} and the shot-level data is from \url{corsica.hockey} \autocite{Corsica}. We have data for 48,431 shots, faced by 93 goaltenders, over 9 playoff seasons, with 91.64\% of the shots resulting in a save. We divided the data into 224 groups, containing from 2 to 849 shot observations, based on combinations of goaltender and playoff season. The data set includes 1,662 shot observations for which one or more variables have missing values. Removing those observations changes the average save proportion from 91.64\% to 91.61\% and the number of groups from 224 to 223. We exclude observations with missing values from our regression analysis but we include these observations when computing the variable of interest (recent save performance), as discussed in Subsection \ref{Saving}.
\subsection{Dependent variable: Shot outcome}
The dependent variable, $y_{ij}$, equals 1 if shot $i$ in group $j$ resulted in a save and 0 if the shot resulted in a goal for the opposing team. A shot that hits the crossbar or one of the goalposts is not counted as a shot on goal and is not included in our data set.
\subsection{Variable of interest: Recent save performance}\label{Saving}
The primary independent variable of interest, $x_{ij}$, is the recent save performance, immediately before shot $i$, in goaltender-season group $j$. It is not obvious how to quantify this variable and therefore we investigate several possibilities. In all cases, we define the recent save performance as the ratio of the number of saves to the number of shots faced by the goaltender, over some ``window''. We compare shot-based windows, over the last 5, 10, 15, 30, 60, 90, 120, and 150 shots faced by the goaltender and time-based windows, over the last 10, 20, 30, 60, 120, 180, 240, and 300 minutes played by the goaltender. We chose these window sizes to make the shot-based and time-based window sizes comparable, given that a goaltender in the NHL playoffs faces an average of $30$ shots on goal per 60 minutes of playing time. The largest window sizes (150 shots and 300 minutes) correspond, roughly, to 5 games, an interval length that \textcite{green2017hot} suggested was needed to determine whether a baseball player was hot.
A window could include one or more intervals during which the group $j$ goaltender was replaced by a backup goaltender. Shots faced by the backup goaltender are excluded from the computation of $x_{ij}$. A window could include time periods between consecutive games, which could last several days.
For a given window, we exclude shot observations for which the number of shots or the time elapsed prior to the time of the shot is less than the window size. This reduces the number of shots used in the analysis by $20\%$ to $50\%$, depending on the window size. As stated previously, we included shots with missing values in the other independent variables in the computation of $x_{ij}$ but we excluded those shots from the regression analysis that we describe in Section~\ref{sec:method}.
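For the shot-based windows, the computation amounts to a lagged rolling mean of the save indicator within each goaltender-season group. The pandas sketch below illustrates both variants; it indexes shots by cumulative playing time for the time-based window, and the column names are placeholders rather than the exact names in our data set.
\begin{verbatim}
import pandas as pd

def add_recent_save_performance(shots, n_shots=30, minutes=60):
    # shots: DataFrame with columns 'group' (goaltender-season), 'save' (0/1)
    # and 'play_time' (cumulative seconds played by the goaltender), ordered
    # chronologically within each group.
    def per_group(g):
        g = g.copy()
        # Shot-based window: mean over the previous n_shots shots
        # (NaN until the goaltender has faced at least n_shots shots).
        g[f"x_shots_{n_shots}"] = g["save"].shift(1).rolling(n_shots).mean()
        # Time-based window: mean over shots faced in the previous
        # `minutes` minutes of playing time, excluding the current shot.
        idx = pd.to_datetime(g["play_time"], unit="s")
        timed = pd.Series(g["save"].to_numpy(), index=idx)
        g[f"x_mins_{minutes}"] = timed.rolling(f"{minutes}min",
                                               closed="left").mean().to_numpy()
        return g
    return shots.groupby("group", group_keys=False).apply(per_group)
\end{verbatim}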
\subsection{Control variables: Other influential factors}
We include a vector, $Z_{ij}$, of six control variables for shot $i$ of group $j$ that we expect could have an impact on the shot outcome. The control variables are: {\sf Game score}, {\sf Position}, {\sf Home}, {\sf Rebound}, {\sf Strength}, and {\sf Shot type}. All of these variables are categorical, except {\sf Position}, which can be expressed either as one categorical variable or as two numerical variables. In what follows, we elaborate on each of the control variables.
{\sf Game score} indicates whether the goaltender's team is Leading (base case), Tied, or Trailing in the game. {\sf Home} is a binary variable indicating whether the goaltender was playing on home ice or not (base case). {\sf Rebound} is a binary variable indicating whether the shot occurred within 2 seconds of uninterrupted game time of another shot from the same team \autocite{Corsica} or not (base case). {\sf Strength} represents the difference between the number of players from the goaltender's team on the ice and the number of players from the other team on the ice. {\sf Strength} can take values of $+2, +1, 0, -1,$ and $-2$ (base case), and we treat this variable as categorical, to allow for nonlinear effects. {\sf Shot type} denotes the shot type: Backhand (base case), Deflected, Slap, Snap, Tip-in, Wrap-around, or Wrist.
The numerical specification for {\sf Position} is based on a line from the shot origin to the midpoint of the crossbar of the goal. We use $d$ (distance) for the length in feet of this line and $\alpha$ for the angle that this line makes with a line connecting the midpoints of the crossbars of the two goals (Figure \ref{CST}a). A limitation of this specification is that the save probability could depend on $d$ and $\alpha$ in a highly nonlinear manner. Rather than introduce nonlinear terms, we also investigate a categorical specification, which we describe next.
For the categorical specification of {\sf Position}, we divide the ice surface into three regions: Top, Slot, and Corner (base case) (Figure \ref{CST}b), and categorize each shot based on the region from which the shot originated. We expect the save probability to be lower for shots from the Slot region than from the Top or Corner regions.
A limitation of both specifications for {\sf Position} is that we do not have data on whether the shot originated from the left or right side of the ice.
\begin{figure}
\centering
\caption{Numerical and categorical specifications for {\sf Position} of shot origin.}\label{CST}
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\linewidth]{Court_Part1.ps}
\caption{{Numerical specification}}
\end{subfigure}%
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\linewidth]{Court_Part2.ps}
\caption{{Categorical specification}}
\end{subfigure}
\end{figure}
\section{Regression models}\label{sec:method}
We used multilevel logistic regression, with partial pooling, also referred to as mixed effects modelling. We center the variable of interest \autocite{enders2007centering, hox2011handbook} by subtracting the group mean, that is, we set $x_{ij}^\ast = x_{ij} - \bar{x}_j$, where $\bar{x}_j$ is the average of $x_{ij}$ in group $j$. The centered variable $x_{ij}^\ast$ represents the deviation in performance of the goaltender-season group $j$ from his average performance for the current playoffs. Our interest is in whether such deviations persist over time.
We allow the intercepts and the slope coefficients of the variables of interest to vary by group, but the control variable slope coefficients are the same for all groups, as shown in the following partial pooling specification:
\begin{align}
\Pr(y_{ij}=1) = \mbox{logit}^{-1} (\alpha_{j} + \beta_{j}x_{ij}^\ast + \gamma Z_{ij}),
\end{align}
\noindent where $\mbox{logit}(p) \equiv \ln(p/(1-p))$, $\alpha_{j}$ is the intercept for group $j$, $\beta_{j}$ is the slope for group $j$, and $\gamma$ is the global vector of coefficients for the control variables. We represent the intercept and slope of the variable of interest as the sum of a fixed effect and a random effect, that is: $\alpha_j = \bar{\alpha} + \alpha_j^\ast$ and $\beta_j = \bar{\beta} + \beta_j^\ast$.
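As a minimal illustration of how this specification maps a shot's covariates to a save probability, the short Python sketch below evaluates the partial-pooling model for a single shot; it is our own sketch, and the variable names are placeholders rather than quantities defined in our data set.
\begin{verbatim}
import numpy as np

def inv_logit(u):
    """Inverse of logit(p) = ln(p / (1 - p))."""
    return 1.0 / (1.0 + np.exp(-u))

def save_probability(alpha_bar, alpha_star_j, beta_bar, beta_star_j,
                     gamma, x_star_ij, z_ij):
    """Pr(y_ij = 1) under the partial-pooling specification.

    alpha_star_j, beta_star_j : group-level random effects for group j
    gamma                     : global coefficient vector for the controls
    x_star_ij                 : group-mean-centered recent save performance
    z_ij                      : encoded control-variable vector for the shot
    """
    alpha_j = alpha_bar + alpha_star_j
    beta_j = beta_bar + beta_star_j
    return inv_logit(alpha_j + beta_j * x_star_ij + np.dot(gamma, z_ij))
\end{verbatim}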
All results that we report in Section \ref{sec:results} were obtained using Markov chain Monte Carlo (MCMC), via the {\sf rstan} and {\sf rstanarm} R packages. We used the default prior distributions for the {\sf rstanarm} package. The default distributions are weakly informative---Normal distributions with mean $0$ and scale $2.5$ (for the coefficients) and $10$ (for the intercepts) \autocite{PriorDistributions}. We used default values for the number of chains ($4$), the number of iterations (2,000), and the warm-up period (1,000 iterations).
We also fitted the models by maximum likelihood (ML), using the {\sf lme4} R package, and found the MCMC and ML estimates to be nearly identical, except for a few instances where the ML estimation algorithm did not converge. The lack of ML estimation convergence in some instances is consistent with the findings in \textcite{eager2017mixed}. MCMC is considered a good surrogate in situations where an ML estimation algorithm has not been established \autocite{wollack2002recovery}.
\section{Results}\label{sec:results}
In this section, we first provide detailed results for the longest window sizes, using the categorical specification for {\sf Position}. Second, we investigate the robustness of our main finding to the window size, window type, individual goaltender-seasons, and which specification was used for {\sf Position}. Third, we illustrate the estimated impact of the control variables on the save probability for the next shot. Fourth, we provide diagnostics for the MCMC estimation.
\subsection{Baseline results}
Table \ref{table:Results} provides means and 95\% credible intervals for the intercept and slope fixed effects and for the control variable coefficients, for our baseline models: the models with 150-shot and 300-minute windows, and a categorical specification for {\sf Position}. The window sizes for the baseline models are comparable to those used by \textcite{green2017hot}.
\begin{table}
\centering
\begin{threeparttable}
\caption{Regression results for the 150-shot model and the 300-minute model}\label{table:Results}
\begin{tabular}{lrrrrrr}
&\multicolumn{3}{c}{\textbf{$k = 150$ shots}}
&\multicolumn{3}{c}{\textbf{$t = 300$ minutes}}\\
Variable
& Mean & 2.5\% & 97.5\%
& Mean & 2.5\% & 97.5\% \\
\hline
Intercept fixed effect
&$2.10$ &$1.64$ &$2.57$
&$2.13$ &$1.67$ &$2.62$
\\
Recent save performance ($x_{ij}^\ast$) fixed effect
&$-8.45$ &$-11.64$ &$-5.31$
&$-8.59$ &$-11.49$ &$-5.53$
\\
{\sf Game score: Tied}
&$-0.05$ &$-0.16$ &$0.06$
&$-0.03$ &$-0.15$ &$0.08$
\\
{\sf Game score: Trailing}
&$-0.12$ &$-0.25$ &$0.00$
&$-0.11$ &$-0.23$ &$0.02$
\\
{\sf Home}
&$-0.02$ &$-0.12$ &$0.09$
&$0.02$ &$-0.08$ &$0.12$
\\
{\sf Rebound}
&$-1.09$ &$-1.25$ &$-0.94$
&$-1.10$ &$-1.26$ &$-0.93$
\\
{\sf Strength: $+2$}
&$1.86$ &$-1.22$ &$5.61$
&$1.82$ &$-1.09$ &$5.68$
\\
{\sf Strength: $+1$}
&$0.76$ &$0.24$ &$1.28$
&$0.73$ &$0.22$ &$1.22$
\\
{\sf Strength: $0$}
&$0.94$ &$0.51$ &$1.36$
&$0.94$ &$0.52$ &$1.33$
\\
{\sf Strength: $-1$}
&$0.45$ &$0.00$ &$0.87$
&$0.45$ &$0.03$ &$0.85$
\\
{\sf Shot type: Deflected}
&$-0.98$ &$-1.34$ &$-0.63$
&$-1.07$ &$-1.41$ &$-0.72$
\\
{\sf Shot type: Slap}
&$-0.03$ &$-0.24$ &$0.18$
&$-0.12$ &$-0.34$ &$0.09$
\\
{\sf Shot type: Snap}
&$-0.05$ &$-0.25$ &$0.15$
&$-0.11$ &$-0.32$ &$0.10$
\\
{\sf Shot type: Tip-in}
&$-0.56$ &$-0.78$ &$-0.34$
&$-0.62$ &$-0.85$ &$-0.40$
\\
{\sf Shot type: Wrap-around}
&$0.40$ &$-0.10$ &$0.93$
&$0.31$ &$-0.19$ &$0.83$
\\
{\sf Shot type: Wrist}
&$0.02$ &$-0.15$ &$0.19$
&$-0.04$ &$-0.21$ &$0.13$
\\
{\sf Position: Top}
&$0.70$ &$0.51$ &$0.89$
&$0.68$ &$0.49$ &$0.86$
\\
{\sf Position: Slot}
&$-0.82$ &$-0.94$ &$-0.69$
&$-0.84$ &$-0.97$ &$-0.71$
\\
\end{tabular}
\end{threeparttable}
\end{table}
Our main finding from the baseline models is that a goaltender's recent save performance has a \emph{negative} and a statistically significant fixed effect value for both window sizes, which is \emph{contrary} to the hot-hand theory.
The two baseline models give consistent results for the control variables. The only two control variable coefficients that disagree in sign, {\sf Home} and {\sf Shot type: Wrist}, have 95\% credible intervals that contain zero. The posterior mean values for the significant control variables have the same sign, have similar magnitude, and are in the direction we expect, for both window types. Specifically, the posterior mean values indicate that a goaltender performs better when his team has more skaters on the ice, when facing a wrap-around shot, and when facing a shot from the top region. A goaltender performs worse when facing a rebound shot, a deflected shot, a slap shot, a snap shot, a tip-in shot, or a shot from the slot region.
\subsection{Robustness of main finding}
Our main finding, that recent save performance has a negative fixed effect value, holds for all window sizes and types (Figure \ref{XGraph}). (Although not shown in the figure, all of the fixed effects are statistically significant.) The fixed effect magnitude increases with the window size.
\begin{figure}
\centering
\caption{Recent save performance fixed effects, for all window types and sizes.}\label{XGraph}
\includegraphics[width=9cm]{CleanedXComparison.ps}
\end{figure}
The fact that the slope fixed effects, $\hat{\bar{\beta}}$, are negative leaves open the possibility that the slopes for some individual goaltender-seasons are positive, but Figures \ref{CoefComp}-\ref{Coef15vs30} show that this is not the case. These figures show that all of the individual-group slope coefficients, $\hat{\beta}_j$, for both the longest window sizes (Figure \ref{CoefComp}) and the shortest window sizes (Figure \ref{Coef15vs30}), are strongly negative. Figures \ref{CoefComp}-\ref{Coef15vs30} also show a positive correlation between the slope estimates from the shot-based and the time-based window models.
\begin{figure}
\centering
\caption{Estimated slopes ($\hat{\beta}_j$s) for all groups, for $k = 150$ window vs. $t = 300$ window.}\label{CoefComp}
\includegraphics[width=6.5cm]{CoefficientsComparison.ps}
\end{figure}
\begin{figure}
\centering
\caption{Estimated slopes ($\hat{\beta}_j$s) for all groups, for $k = 15$ window vs. $t = 30$ window.}\label{Coef15vs30}
\includegraphics[width=6.5cm]{k15versust30.ps}
\end{figure}
The coefficients of the significant control variables remained consistent in sign and similar in magnitude across all window sizes and types (Figure \ref{ControlGraph}). The control variable point estimates for all window types and sizes are within a $95\%$ Bayesian confidence interval (or a credible interval) for the 300-minute baseline model. Furthermore, changing the specification for {\sf Position} from categorical to numerical, for the $t = 300$ baseline model, resulted in a slope fixed effect that remained negative and was similar in magnitude. The signs of the coefficients for all statistically significant control variables in this model variant agreed with the $t = 300$ baseline model.
\begin{figure}
\centering
\caption{Control variable slope coefficient estimates, for all window types and sizes. The point estimates are represented by small circles. $95\%$ Bayesian confidence intervals from the $t = 300$-minute baseline model are included for comparison.}\label{ControlGraph}
\includegraphics[width=11cm]{ControlComparison.ps}
\end{figure}
\subsection{Magnitude of the impact of the independent variables on the save probability}
Figures \ref{ProbXVariable}--\ref{ControlProb} illustrate the impact of recent save performance and the control variables on the estimated save probability for the next shot, using the $t$ = 300-minute baseline model. In creating Figure \ref{ProbXVariable}, we set all control variables to their baseline values. In creating Figure \ref{ControlProb}, for each panel, we set $x_{ij}^\ast = 0$ and we set all control variables except the one being varied in the panel to their baseline values.
From Figure \ref{ProbXVariable}, we see that as the deviation of a goaltender's recent save performance from his current-playoff average increases from the 2.5th to the 97.5th percentile, his estimated save probability for the next shot decreases by 5 pps. For comparison, this range is larger than the 3.7 pp range of average save percentages during the 2018-19 regular season (as discussed in Section \ref{sec:intro}) but smaller than the 8 pp range of average save percentages during the playoffs of the same season.
Given that we define $x_{ij}^\ast$ to be the \emph{deviation} in performance of the goaltender-season group $j$ from his average performance for the current playoffs, the percentiles for $x_{ij}^\ast$ correspond to different recent save performances for different groups. To illustrate the effect in more concrete terms, consider the largest group: The 699 shots faced by Tim Thomas during the 2011 playoffs. The group average was $\bar{x}_j = 94$\%. We set each control variable category equal to its sample proportion in the group $j$ data. For shots where Thomas' recent save performance was at the 2.5th percentile, corresponding to $x_{ij} = 91.3$\%, his estimated save probability for the next shot was 95.2\%---a 3.9 pp increase. At the other extreme, for shots where the recent save performance was at the 97.5th percentile, corresponding to $x_{ij} = 97.2$\%, the estimated save probability for the next shot was 92.4\%---a 4.8 pp decrease.
\begin{figure}
\caption{Estimated save probability at the 2.5th, 50th, and 97.5th percentiles of $x_{ij}^\ast$, the recent save performance.}
\centering
\includegraphics[width=10cm]{EstimateProbabilities.ps}
\label{ProbXVariable}
\end{figure}
Figure \ref{ControlProb} depicts the save probability against different values for the control variables. {\sf Home} and {\sf Game score} have minimal impact on the estimated save probability. Different values for {\sf Position}, {\sf Rebound}, {\sf Strength}, and {\sf Shot type}, in contrast, have a substantial impact on the estimated save probability: a shot from the Slot is 16 pps less likely to be saved than a shot from the Top; a rebound shot is 15 pps less likely to be saved than a non-rebound shot; and a shot from a team that has a 2-man advantage is 9 pps less likely to be saved than a shot from a team that is 2 men short. For {\sf Shot type}, the save probability decreases by 18 pps when moving from wrap-around (the shot type least likely to result in a goal) to a deflected shot (the type most likely to result in a goal).
\begin{figure}
\caption{Control variables versus estimated save probability.}\label{ControlProb}
\centering
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\linewidth]{ReboundProb.ps}
\caption{{\sf Rebound}}
\end{subfigure}%
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\linewidth]{HomeProb.ps}
\caption{{\sf Home}}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\linewidth]{PositionProb.ps}
\caption{{\sf Position}}
\end{subfigure}%
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\linewidth]{LeadTrailProb.ps}
\caption{{\sf Game score}}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\linewidth]{StrengthProb.ps}
\caption{{\sf Strength}}
\end{subfigure}%
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\linewidth]{DetailProb.ps}
\caption{{\sf Shot type}}
\end{subfigure}
\end{figure}
\subsection{MCMC diagnostics}
We computed two diagnostic statistics: $\hat{R}$ and $n_{\rm{eff}}$. To check whether a chain has converged to the equilibrium distribution we can compare the chain's behavior to other randomly initialized chains. The potential scale reduction statistic, $\hat{R}$, allows us to perform this comparison, by computing the ratio of the average variance of draws within each chain to the variance of the pooled draws across chains; if all chains are at equilibrium, the two variances are equal and $\hat{R} = 1$, and this is what we found for all of our models.
The effective sample size, $n_{\rm{eff}}$, is an estimate of the number of independent draws from the posterior distribution for the estimate of interest. The $n_{\rm{eff}}$ metric computed by the {\sf rstan} package is based on the ability of the draws to estimate the true mean value of the parameter. As the draws from a Markov chain are dependent, $n_{\rm{eff}}$ is usually smaller than the total number of draws. \textcite{gelman2013bayesian} recommend running the simulation until $n_{\rm eff}$ is at least 5 times the number of chains, or $5 \times 4 = 20$. This requirement is met for all parameters in all of our models.
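For concreteness, a basic (non-split) version of $\hat{R}$ for a single scalar parameter can be computed as in the following Python sketch. This is our own illustration; the implementation in {\sf rstan} refines the statistic (for example, by splitting chains), so its values need not coincide exactly with this simplified version.
\begin{verbatim}
import numpy as np

def potential_scale_reduction(draws):
    """Basic (non-split) R-hat for one scalar parameter.

    draws : array of shape (m_chains, n_iterations) of post-warm-up draws.
    Returns the Gelman-Rubin potential scale reduction factor; values near
    1 indicate that the chains are consistent with a common distribution.
    """
    draws = np.asarray(draws, dtype=float)
    m, n = draws.shape
    chain_means = draws.mean(axis=1)
    chain_vars = draws.var(axis=1, ddof=1)
    between = n * chain_means.var(ddof=1)     # B: between-chain variance
    within = chain_vars.mean()                # W: average within-chain variance
    var_hat = (n - 1) / n * within + between / n
    return np.sqrt(var_hat / within)
\end{verbatim}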
\section{Discussion and conclusion}\label{sec:conclusion}
We used multilevel logistic regression to investigate whether the performance of NHL goaltenders during the playoffs is consistent with a hot-hand effect. We used data from the 2008--2016 NHL playoffs. We measured past performance using both shot-based windows (as has been done in past research) and time-based windows (which has not been done before). Our window sizes spanned a wide range: from, roughly, half a game to 5 games. We allowed the intercept and the slope with respect to recent save performance to vary across goaltender-season combinations.
We found a significant \emph{negative} impact of recent save performance on the next-shot save probability. This finding was consistent across all window types, window sizes, and goaltender-season combinations. This finding is inconsistent with a hot-hand effect and contrary to the findings for baseball in \textcite{green2017hot}, who used a similar window size and hypothesized that skilled activity would generally demonstrate a hot-hand effect.
If a goaltender's performance, after controlling for observable factors, was completely random, then we would expect a period of above-average or below-average recent save performance to be likely to be followed by a period of save performance that is closer to the average, because of regression to the mean. As we increase the sample size used to measure recent save performance (that is, increase the window size), we would expect the average amount by which performance moves toward the average to decrease. We observe the opposite (see Figure \ref{XGraph}), which argues against our finding being driven by regression to the mean.
A motivation effect provides one possible explanation for our finding. That is, if a goaltender's recent save performance has been below his average for the current playoffs, then his motivation increases, resulting in increased effort and focus, causing the next-shot save probability to be higher. Conversely, if the recent save performance has been above average, then the goaltender's motivation, effort, and focus could decrease, leading to a lower next-shot save probability. \textcite{belanger2013driven} find support for the first of these effects (greater performance after failure) for ``obsessively passionate individuals'' but did not find support for the second effect (worse performance after success) for such individuals. The study found support for neither effect for ``harmoniously passionate individuals.'' These findings are consistent with Hall-of-Fame goaltender Ken Dryden's (\citeyear{dryden2019life}) sentiment that ``if a shot beats you, make sure you stop the next one, even if it is harder to stop than the one before.'' The psychological mechanisms underlying our finding could benefit from further study.
Although the estimated recent save performance coefficient is consistently negative, its magnitude varies and in particular, the magnitude increases sharply with the window size. We expect to see more reliable estimates with longer window sizes, but the increase in magnitude is surprising, given that we define the recent save performance as a scale-free save percentage.
One limitation of our study is that, in defining windows, we ignore the time that passes between games. Past research, such as \textcite{green2017hot}, shares this limitation. This limitation could be particularly serious for backup goaltenders, for whom the interval between two successive appearances could be several days long.
\singlespacing
\newpage
\printbibliography
\end{document}
\section{Introduction}
The statistical analysis of competitive games based on data gathered from professional competitions is currently a growing area of research~\cite{petersen2020renormalizing,neiman2011reinforcement, mukherjee2019prior, merritt2013environmental, mandic2019trends,perotti2013innovation,schaigorodsky2014memory,almeira2017structure}.
In the case of team sports, these studies have a potentially high impact,
boosted by commercial interests but also by an intrinsic complexity that has caught the attention of basic
research~\cite{petersen2020renormalizing,neiman2011reinforcement, mukherjee2019prior, merritt2013environmental, mandic2019trends}.
In the context of team sports, the emergence of complex behaviour is often observed. It arises from the interplay of processes governed by well--defined spatiotemporal scales.
It is well known that these scales are important both for the individual interactions among athletes and for collective strategies~\cite{lebed2013complexity}.
Particularly interesting is the game of football, where data analytics has been successfully applied in recent years~\cite{lopes2019entropy, rossi2018effective, bransen2019measuring}.
For instance, in the field of complex systems, J. Buldú et al. used network theory to analyze the performance of Guardiola's {\it F.C.~Barcelona}~\cite{buldu2019defining}.
In that work, they consider a team as an organized social system in which players are nodes linked during the game through coordination interactions.
Despite these recent contributions, football analytics seems to lag behind that of other major team sports, such as basketball or baseball.
That is why football team management and strategy are still far from being recognized as analytics--driven.
The specific problem with football concerns data collection.
Usually, data collection in ball--based sports competitions focuses on what happens in the neighborhood of the ball (on--ball actions).
Nonetheless, in football an important part of the dynamics unfolds far from the ball (off--ball dynamics), and this information is required to analyze the performance of football teams~\cite{casal2017possession}.
Consequently, in the game of football, on--ball actions might provide less insight for strategy and player evaluation than off--ball dynamics.
In this context, a possible solution is to improve the data gathering, a possibility often limited by a lack of resources.
From an alternative perspective, we aim to define a framework based on state-of-the-art statistical tools and modeling techniques that allows us to characterize the global dynamics by studying the local information provided by the data.
Based on these ideas, and on previous studies~\cite{yamamoto2018examination, hunter2018modeling, cakmak2018computational},
in the present contribution we have surveyed, collected, and analyzed information from a novel database~\cite{pappalardo2019public}
to propose an innovative agent--based football model.
We emphasize that our goal is not to model the full complex dynamics of a football game, but the dynamics of ball possession intervals, defined as the consecutive series of actions carried out by the same team. We focus on studying the interactions underlying both on-ball and off-ball actions, considered the main feature for understanding a team's collective performance \cite{gama2016networks, gudmundsson2017spatio}.
This paper is organized in three parts: Material and Methods, Results and Discussion. In section Material and Methods, we firstly introduce the database. In particular, we describe the dataset {\ttfamily Events}, as well as other information regarding relevant fields.
Secondly, we discuss some interesting statistical patterns that we found in this dataset, which we use to propose the model's components.
Thirdly, we give a formal definition of the model and discuss in detail the key elements, the assumptions, and the dynamical parameters.
Lastly, we present a method to systematically search for a suitable set of parameters for the model.
The section Results is divided into two parts. Firstly we evaluate the results of the model.
To do so, we focus on analyzing three statistical observables (i) the distribution of possession time, (ii) the distribution of the distance traveled by the ball in passes (hereafter referred to as the passes length), and (iii) the distribution of the number of passes.
The idea is to assess the model's performance by comparing its outcomes with the data.
Secondly, we place our model in a theoretical framework. This allows, under certain approximations, an interpretation of the emergent spatiotemporal dynamics of the model.
Finally, our results are discussed in the last section.
\section{Material and Methods}
\label{se:metodos}
\subsection{The dataset}
\label{se:material}
In 2019, L. Pappalardo {\it et al.} published one of the largest football--soccer databases ever released \cite{pappalardo2019public}.
Within the information provided in this astounding work,
the dataset {\ttfamily Events} gathers all the spatiotemporal events recorded from each game of the season $2017$-$2018$ of the following five professional football leagues in Europe: Spain, Italy, England, Germany, and France.
A typical entry in this dataset bears information on,
\begin{itemize}
\item {\it Type of event.}
Namely, passes, duels, free kicks, fouls, etc.,
subdivided into other useful subcategories. This field allows us to evaluate in detail the correlation between particular actions and their consequences for the dynamics.
\item {\it Spatiotemporal data.}
Each event is tagged with temporal information, referring to the match period and to the elapsed time in seconds.
Spatial information, likewise, is given relative to the stadium's dimensions, as a percentage of the field length from the perspective of the attacking team.
\item {\it Unique identifications.}
Each event in the dataset is linked to an individual player in a particular team.
This allows us to accurately determine the ball possession intervals and, moreover, to perform a statistical analysis of the players involved.
\end{itemize}
In light of this information, we define a ball possession interval (BPI) as the set of consecutive events generated by the same team.
We gathered $3071395$ events and $625195$ BPIs from the dataset, totaling $1826$ games, involving $98$ teams,
and with the participation of $2569$ different players.
Since we aim to study a dynamical evolution, only BPIs with two or more events were collected.
On the other hand, since different games often take place in stadiums of varying sizes, to compare distances we normalized all the measured distances in a game by the average distance computed over the whole set of measurements in that game.
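As an illustration of how BPIs can be extracted from the event stream, the following Python sketch groups consecutive events generated by the same team; it is our own sketch, and the column names ({\ttfamily match\_id}, {\ttfamily team\_id}, {\ttfamily dist}) are assumptions about the layout of the {\ttfamily Events} dataset.
\begin{verbatim}
import pandas as pd

def extract_bpis(events: pd.DataFrame) -> pd.DataFrame:
    """Group consecutive events by the same team into ball possession intervals.

    `events` is assumed to be ordered in game time within each match and to
    contain the columns `match_id` and `team_id` (names are illustrative).
    A new BPI starts whenever the team generating the event changes; BPIs
    with fewer than two events are discarded, as in the text.
    """
    events = events.copy()
    new_bpi = (
        events.groupby("match_id")["team_id"]
        .transform(lambda s: s.ne(s.shift()))     # True at every possession change
    )
    events["bpi_id"] = new_bpi.cumsum()           # global running BPI label
    sizes = events.groupby("bpi_id")["team_id"].transform("size")
    # Per-game distance normalization could then be done with, e.g.:
    # events["dist_norm"] = events["dist"] / \
    #     events.groupby("match_id")["dist"].transform("mean")
    return events[sizes >= 2]
\end{verbatim}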
\subsection{Statistical patterns}
\label{se:insights}
The idea of this section is to present the statistical patterns that we have used to propose the main components of our football model.
Firstly, in Fig.~\ref{stats} panel A, we plot the frequency of events by type (blue bars) together with the frequency of events that trigger a ball possession change (BPC) (red bars).
Looking at the blue bars, we can see that the most common event is the {\it "Pass"}, with $1.56$ million entries.
Notice that passes almost double the second most frequent type of event, {\it "Duels"}, which is, in turn, the most frequent event triggering possession changes (see red bars).
Moreover, by comparing the two bars on {\it "Duels"}, we can see that $\approx 75\%$ of the duels produce possession changes, showing that this type of event is very effective at ending BPIs.
Secondly, in Fig.~\ref{stats} panel B, the main plot shows the number of different players involved per BPI.
As can be seen, the most common case is {\it two players}, with $0.27$ million observations, doubling the {\it three players} case, the second most commonly observed.
The inset shows the number of different types of events per BPI.
With $0.4$ million cases recorded, the case of two types of events is the most common.
Notice that the data show clear statistical regularities.
Despite the undoubted complexity of the game, some features dominate over others.
In the following section, we use these observations to propose the main components of a minimalist dynamical model.
\subsection{The model}
\label{se:rules}
We aim to build a model that captures the main features of football game dynamics during ball possession intervals.
The idea is to propose a system that is both simple and minimalist, but also effective in capturing global emergent properties of the dynamics.
To do so, we used the empirical observations made in the previous section.
Let us consider a system with three agents ({\it the players}), two in the same team having possession of the ball ({\it the teammates}), and one in the other ({\it the defender}).
The players in this system can move in two dimensions, and the teammates can perform passes to each other.
In this simulated game, the system evolves until the defender reaches the player with the ball and, emulating a {\it Duel}, ends the BPI.
Bearing these ideas in mind, in the following we propose the rules that govern the agents' motion and, consequently, define the model's dynamics.
Let $\Vec{r_i}(t)$ be a $2D$ position vector for an agent $i$ ($i=1,2,3$) at time $t$.
Considering discrete time steps $\Delta t=1$, at $t+1$ the agents will move as $\Vec{r_i}(t+1)=\Vec{r_i}(t)+ \Vec{\delta r_i}(t)$.
In our model, we propose
$\Vec{\delta r_i}(t) = (R \cos \Theta, R \sin \Theta)$,
where $R$ and $\Theta$ are two variables taken as follows,
\begin{enumerate}
\item {\it The displacement $R$}
The three agents randomly draw a displacement from an exponential distribution $P_a(r)=\frac{1}{a}e^{-r/a}$,
where $a$, the scale of the distribution, is the agent's action radius (see Fig.~\ref{diagram} A), i.e. the surroundings that each player controls.
\item {\it The direction $\Theta$}
\begin{enumerate}
\item {\it For the teammates}. The agents randomly draw an angle in $[0, 2\pi)$ from a uniform distribution.
\item {\it For the defender}. This agent takes the direction of the action line between itself and the agent with the ball.
\end{enumerate}
\end{enumerate}
Then, according to the roles in the game, the players decide to accept the changes proposed as follows,
\begin{enumerate}
\setcounter{enumi}{2}
\item {\it The player with the ball} evaluates whether the proposed displacement moves it away from the defender. If it does, the player changes position; otherwise, it remains in its current position.
\item {\it The free player} and {\it the defender} always accept the change.
\end{enumerate}
As we mentioned before, in this model we consider the possibility that the teammates perform passes to each other.
This decision is made as follows,
\begin{enumerate}
\setcounter{enumi}{4}
\item If the defender's action radius does not intercept the imaginary line joining the teammates, then the player with the ball plays a pass to the other teammate with probability $p$.
\end{enumerate}
Since in real football games the players' movements are confined, for instance, by the field limits, in the model we introduce two boundary parameters: the inner and external radii, $R_1$ and $R_2$, respectively (see Fig.~\ref{diagram} B).
\begin{enumerate}
\setcounter{enumi}{5}
\item The inner radius $R_1$ is used to set the initial conditions. At $t=0$, each one of the three agents is put at a distance $R_1$
from the center of the field, spaced with an angular separation of $120$ degrees (maximum possible distance between each other).
\item The external radius $R_2$ defines the size of the field. It sets the edge of the simulation.
If an agent proposes a new position $\Vec{x}(t+1)$ such that $||\Vec{x}(t+1)||\geq R_2$, then the change is forbidden and the agent keeps its current position -- note this overrules the decisions taken in $(3)$ and $(4)$.
\end{enumerate}
Lastly, a single realization of the model under the rules proposed above ends when the following condition is met (a code sketch of one full realization is given right after this listing),
\begin{enumerate}
\setcounter{enumi}{7}
\item The defender invades the action radius of the agent with the ball. That is, when the distance $d$ between the player with the ball and the defender satisfies $d<a$.
\end{enumerate}
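The following Python code sketches one full realization under rules (1)--(8). It is our own illustration rather than a reference implementation: the order of the sub-steps within a time step (all agents move, then a pass is attempted) and the convention that the possession time equals the number of elapsed steps (with $\Delta t = 1$) are assumptions not fixed by the rules above.
\begin{verbatim}
import numpy as np

def point_segment_distance(q, p1, p2):
    """Distance from point q to the segment joining p1 and p2."""
    v, w = p2 - p1, q - p1
    t = np.clip(np.dot(w, v) / np.dot(v, v), 0.0, 1.0)
    return np.linalg.norm(q - (p1 + t * v))

def run_realization(a=1.0, p=0.3, R1=2.25, R2=16.0, rng=None, max_steps=10_000):
    """One realization of the three-agent possession model (rules 1-8).

    Returns the possession time T (number of steps), the number of passes N,
    and the list of pass lengths.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Rule 6: agents start on a circle of radius R1, 120 degrees apart.
    angles = np.deg2rad([90.0, 210.0, 330.0])
    pos = R1 * np.column_stack([np.cos(angles), np.sin(angles)])  # rows: A, B, defender
    holder, free = 0, 1                                           # ball holder / free teammate
    passes, pass_lengths = 0, []

    for t in range(1, max_steps + 1):
        # Rule 8: the defender captures the ball when it enters the action radius.
        if np.linalg.norm(pos[2] - pos[holder]) < a:
            return t, passes, pass_lengths

        for i in range(3):
            r = rng.exponential(a)                                # rule 1
            if i == 2:                                            # rule 2b: defender chases the ball
                d = pos[holder] - pos[2]
                step = r * d / np.linalg.norm(d)
            else:                                                 # rule 2a: teammates move at random
                theta = rng.uniform(0.0, 2.0 * np.pi)
                step = r * np.array([np.cos(theta), np.sin(theta)])
            proposal = pos[i] + step
            if np.linalg.norm(proposal) >= R2:                    # rule 7: stay inside the field
                continue
            if i == holder:                                       # rule 3: accept only if it flees the defender
                if np.linalg.norm(proposal - pos[2]) <= np.linalg.norm(pos[i] - pos[2]):
                    continue
            pos[i] = proposal                                     # rule 4: others always accept

        # Rule 5: pass with probability p if the defender does not intercept the line.
        if point_segment_distance(pos[2], pos[holder], pos[free]) >= a and rng.random() < p:
            pass_lengths.append(np.linalg.norm(pos[holder] - pos[free]))
            holder, free = free, holder
            passes += 1

    return max_steps, passes, pass_lengths
\end{verbatim}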
Let us justify the election of the rules and the different elements of the model.
Firstly, it is well--known that football exhibits a complex dynamics.
Fig.\ref{stats} (A) shows that many events are possible in the context of a BPI. However, we can see that the events {\it "Pass"} and {\it "Duels"} dominate among the common events and among the events triggering a BPC, respectively.
Therefore, a reasonable simplification is to propose a model with only two possible events. This also agrees with the data shown in the inset of Fig.~\ref{stats} (B), regarding the number of different types of events observed during BPI.
Secondly, considering only three players for a football model could be seen as an oversimplification.
However, as we show in the main plot of Fig.~\ref{stats} (B), the number of players per BPI is, in most cases, two.
Therefore, a system with two teammates and a single defender triggering the BPCs is, presumably, a good approximation; ultimately, it is to be judged by the model's predictions of the observed statistics.
Thirdly, let us discuss the players' movement rules. In item (1) (see listing above), we propose that the agents draw their displacements from an exponential distribution, with an action radius $a$ as the scale.
The idea behind this is to use a memoryless distribution, since the players' displacements are commonly related to both evasion and distraction maneuvers, which are more effective without a clear motion pattern~\cite{bloomfield2007physical}.
The direction and the adoption of the new movement, on the other hand, are proposed as role--dependent.
The player with the ball takes a random direction and adopts the movement only if the new displacement moves it away from the defender; otherwise, it stays in its current position.
The idea here is to slow this player down, since it is well--known that players controlling the ball are slower than free players.
The free player, on the other hand, follows a random walk.
In this regard, our aim is to include in the model the possibility of performing passes of different lengths.
The defender's main role, in turn, is to capture the player with the ball.
Therefore, we consider rule (2.b) as the simplest strategy to choose in the frame of a minimalist model.
Lastly, the boundaries $R_1$ and $R_2$ are incorporated because football games take place in confined spaces.
In particular, $R_1$ brings into the model the possibility of capturing short--time ball possession intervals, emulating plays occurring in reduced spaces, such as fast attacks.
The incorporation of $R_2$, on the other hand, is straightforward, since real football fields are not limitless.
The main difference between the real and the model field's bounds is the shape. In this regard, we neglect any possible contribution from the fields' geometry.
We consider that our model offers an adequate balance between simplicity, accuracy, and, as we show in the following sections, empirical validation.
In the Supplementary Material at [URL will be inserted by publisher, S1 and S2], additionally, we show the evaluation of both alternative components and alternative strategies for the model.
In the following section, we propose a convenient method for tuning the main parameters ruling the model dynamics: (i) the action radius $a$, (ii) the probability of performing a pass $p$, and (iii) the confinement radii $R_1$ and $R_2$.
\subsection{On setting the model's parameters}
\label{se:performance}
The model's performance depends on the correct choice of four parameters: $a$, $p$, $R_1$ and $R_2$.
In this section, we propose a simple method to optimize this tuning procedure.
For the sake of simplicity, we decided to fix $a$ and refer the other radii to this scale, $R_1\to \frac{R_1}{a}$ and $R_2\to \frac{R_2}{a}$.
For the other parameters, we devised a fitting procedure based on the minimization of the sum of the Jensen--Shannon divergences between the observed and the predicted probability distributions of the studied stochastic variables.
To do so, we used the following statistical observables, (i) the distribution of ball possession time $P(T)$,
(ii) the distribution of passes length, $P(\Delta r,Y=\mathrm{Pass})$, and (iii) the distribution of the number of passes performed $P(N)$.
With this, we can evaluate the model's dynamics using three macroscopic variables that we can observe in the real data: a temporal, a combinatorial, and a spatial variable describing the interaction between {\it the teammates}.
The method follows the algorithm below,
\begin{enumerate}
\item Propose a set of parameters $\rho =(p,R_1,R_2)$;
\item Perform $10^5$ realizations and calculate $P(T)$, $P(\Delta r,Y=\mathrm{Pass})$ and $P(N)$;
\item Compare the three distributions obtained in step 2 with the real data, using the Jensen--Shannon divergence (JSD) \cite{wong1985entropy}.
\item Propose a new set of parameters $\rho$,
seeking to lower the sum of the JSD over the three distributions.
\item Go back to step 2 and repeat until the sum of the JSDs is minimized.
\end{enumerate}
Notice that our goal is not to perform a standard non--linear fit but to optimize the search for a realistic set of parameters that simultaneously fits the three distributions.
In this frame, the introduction of the JSD allows us to use a metric distance to compare and assess differences between probability distributions with different physical meanings.
In the last part of the Supplementary Material at [URL will be inserted by publisher, S1 c.f. FIG.~S4. ], we discuss in detail the implementation of this method.
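A sketch of the divergence computation used in steps 3--5 is given below in Python. The function {\ttfamily simulate} and the common binning of the histograms are placeholders (assumptions about how the outputs are organized), and whether one sums the divergences or the corresponding distances is a choice we leave open here.
\begin{verbatim}
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions.

    p and q are histograms evaluated on a common binning; they are
    renormalized here so that both sum to one.
    """
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)

    def kl(x, y):
        return np.sum(x * np.log(x / y))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def objective(params, simulate, empirical):
    """Sum of the three divergences for a candidate parameter set rho = (p, R1, R2).

    `simulate` is assumed to run the realizations and return the three model
    histograms (P(T), P(dr, pass), P(N)) on the same bins as `empirical`.
    """
    model = simulate(*params)
    return sum(js_divergence(m, e) for m, e in zip(model, empirical))
\end{verbatim}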
\section{Results}
\label{se:results}
\subsection{Statistical observables}
\label{se:results1}
The idea of this section is to describe the statistical observables that we extracted from the dataset, and that we use to evaluate the model performance.
The main plot in Fig.~\ref{model-exp} panel A, shows the distribution of possession times.
We measured a mean value of $\avg{T} = 13.72~s$.
In this case, we performed a non--linear fit with a function $P(T) \propto T^{-\gamma}$, from which we found $\gamma = 5.1 \pm 0.1$.
We conclude that, although the distribution seems to follow a power-law behavior, the exponent is too large to confirm it \cite{clauset2009power}.
The inset in that panel, in turn, shows the distribution $P(\Delta t)$ of the time between two consecutive events.
The same heavy--tailed behavior is observed, which seems to indicate that, in both plots, extreme events might not be linked to large values of $T$ but to large values of $\Delta t$. This is probably due to events such as interruptions of the match.
On the other hand, in panel B we show the distribution $P(\Delta r)$, the spatial distance between two consecutive events.
In this case, we divided the dataset to see the contribution of the event tagged as {\it "Pass"} since, as we show in Fig.~\ref{stats} A, these are the most recurrent entries.
Let us split $P(\Delta r)$ as follows,
$P(\Delta r) =
P(\Delta r, Y=\mathrm{Pass}) +
P(\Delta r, Y=\mathrm{Other}) $,
where $Y$ stands for the type of event, the first term is the contribution coming from passes and the second one from any other type of event.
Moreover, we divided the event pass, into two subtypes
$P(\Delta r, Y=\mathrm{Pass}) =
P(\Delta r, Y=\mathrm{Simple\, pass}) +
P(\Delta r, Y=\mathrm{Other\, pass}) $,
where the first term is the contribution of the sub--type {\it "Simple Pass"} and the second is the contribution of any other sub--type (for example {\it "High pass", "Cross", "Launch"}, etc. c.f. \cite{pappalardo2019public} for further details).
For the sake of simplicity, hereafter we refer to the type of event {\it "Pass"} and the subtypes {\it "Simple pass"} and {\it "Other pass"} as $X$, $X_1$ and $X_2$, respectively.
Notably, we can see a significant contribution of the event {\it "Pass"} to the distribution $P(\Delta r)$.
The peak at $\Delta r \approx 1$ (the mean value) and the hump around $\Delta r \approx 3$ are well explained by the contributions of $P(\Delta r,X)$ and $P(\Delta r,X_1)$, whereas $P(\Delta r,X_2)$ seems to contribute more to the tail.
This multi-modal behaviour, likewise, might evidence the presence of two preferential distances at which teammates are more likely to interact by performing passes.
Panel C shows the distribution $P(N)$ of the number of passes per BPI. We observe the presence of a heavy tail at the right.
The mean value, $\avg{N}= 3.1$, indicates that on average we observe $\approx 3$ passes per BPI.
Concerning this point, in panel D, we show the relation between the number of passes and the possession time. Interestingly, we observe a linear relation for values within $0<T< 60~(s)$ (see solid blue line in the panel).
From our best linear fit in this region, we obtain $\avg{N}(T)= \omega_p~T$ with $\omega_p=0.19 \pm 0.03$ ($R^2=0.99$).
This parameter can be thought of, in overall terms, as the rate of passes per unit of time. Therefore, we conclude that during ball possession intervals, $\approx 0.2$ passes per second are performed.
\subsection{Assessing the model performance}
In this section, we evaluate and discuss the model's outcomes.
The results are shown in Fig.~\ref{model-exp}. Panels A, B, C, and D show the comparison between the results obtained from the dataset (discussed above) and from the model's simulations (black solid lines).
We used the set of parameters $(p,a,R_1,R_2) = (0.3,~1,~2.25,~16)$.
For the distribution $P(T)$ in panel A, we obtain a Jensen-Shannon distance of $D_{JS}= 0.017$, which indicates a good similarity between the dataset and the model results.
However, we observe a shift in the mean of $\approx -20\%$, and a difficulty in capturing "the hump" of the curve around $T \approx 30~s$.
For the distribution of passes length, $P(\Delta r,X)$, shown in panel B, we observe a very good similarity, $D_{JS}= 0.008$. Moreover, we can see that the model succeeds in capturing the bimodality of the distribution, which seems to indicate that the proposed model rules are very effective at capturing both nearby and distant passes, i.e., the two interaction distances.
On the other hand, the model fails to capture the tail, possibly because these events are related to very long passes ({\it "Goal kicks"} or {\it "Cross passes"}) not generated by the simple dynamics of the model.
In panel C, we show the distribution of the number of passes $P(N)$.
The calculation for the Jensen--Shannon distance gives the value $D_{JS}=0.0007$, which indicates a very good similarity between the curves.
In this case, the value of $p$ seems to be crucial.
Note that the chosen value for $p$ is close to the rate $\omega_p = 0.19$ passes per second reported in the previous section.
Regarding the relation $\avg{N}$ vs. $T$ in panel D, the dataset shows that, on average, the number of passes cannot indefinitely grow with the possession time, which is likely a finite--size effect.
Our simple model, in turn, allows the unrealistic unbounded growth of $\avg{N}$.
Lastly, let us put the parameter values in the context of real football dimensions.
Regarding the action radius $a$, the literature includes reported estimations from kinetic and coordination variables~\cite{schollhorn2003coordination, lames2010oscillations}, where speed measurements~\cite{little2005specificity, loturco2019maximum}
show that professional players are able to move in a wide range within $1.1$ -- $4.8$ $m/s$.
Thus, it would be easy for a professional player to control a radius of $a \approx 2~m$.
If we set this value for $a$, we proportionally obtain for the internal and the external radius, the values $R_1\approx5~m$ and $R_2\approx 32~m$, respectively.
Consequently, in the frame of our model, the dynamics of the possession intervals takes place in areas ranging from $78~m^2$ (approximately a goal area) to $3200~m^2$ ($\approx 47 \%$ of the Wimbledon Greyhound Stadium).
Therefore, we conclude that the proposed parameters are of the order of magnitude of real football field dimensions, and we can confirm that the dynamics of the model is governed by a realistic set of parameter values.
\subsection{Mapping the model in a theoretical framework}
\label{se:results2}
We propose a theoretical framework to understand the distribution of possession times, $P(T)$, observed from the model's outcomes.
Every realization can be thought of as a process where the defender must capture a ball that, due to the movements and passes performed by the teammates, may follow a complicated path in the plane. However, since the defender always takes the direction towards the ball, the process can be reduced to a series of movements in one dimension.
To visualize this mapping we fix the origin of our 1D coordinate system at the ball position and define the coordinate $x$ of the defender as the radial distance $d$ between the ball and the defender. In this frame, the defender takes steps back and forth depending on whether the radial distance between the ball and defender is increasing or decreasing, respectively.
The step size $\Delta d$ of this random walk is variable, and the process ends when the coordinate $x$ of the defender reaches the interval $(-a, a) $ (c.f. Section II.~C, rule 8).
In this process, the step size distribution characterizes the random walk.
Let us define $\delta = \Delta d/d_0$ as the step size normalized to the initial distance between the players.
Then, in Fig.~\ref{fi:gt} A, we plot the distribution $P(\delta)$ analyzing two possible contributions for the steps, (i) the steps taken when the defender follows the player with the ball ($S_1$), (ii) those generated when a pass between teammates occurs ($S_2$). In order to visualize these contributions, we have plotted $P(\delta)$, and the joint probabilities $P(\delta,S_1)$ and $P(\delta,S_2)$, fulfilling $P(\delta)= P(\delta,S_1)+P(\delta,S_2)$.
From this perspective, we can see that $(S_2)$ explains the extreme events, whereas $(S_1)$ explains the peak.
On the other hand, if we measure the mean value of both contributions we obtain $\avg{\delta}_{P(\delta,S_1)} = -0.14$ and $\avg{\delta}_{P(\delta,S_2)} = 0.22$, which means that, on average, the first contribution brings the defender towards the ball and the second takes it away. However, notice that the full contribution is negative, $\avg{\delta}_{P(\delta)} = -0.07$, which indicates the presence of a drift leading the defender towards the ball.
In this light, we can map the dynamics to a random walk with drift in the presence of an absorbing barrier.
Moreover, in the approximation where $\delta$ is constant, the process described above is governed by the following Fokker--Planck equation,
\begin{equation}
\frac{\sigma^2}{2} \frac{\partial^2 p}{\partial x^2}- \mu \frac{\partial p}{\partial x} = \frac{\partial p}{\partial t}
\label{eq:f-p}
\end{equation}
subject to the boundary conditions,
\begin{align*}
p(d_0,x;0) &= \delta (x), \\
p(d_0,x_b;t) &= 0,
\end{align*}
where $p(d_0,x,t)$ is the probability of finding a walker that started at $d_0$ at position $x$ at time $t$.
The coefficients $\mu$ and $\sigma$ are the drift and the diffusion, and $x_b$ indicates the position where the absorbing barrier is placed. Additionally, it can be proved that the probability distribution of the first passage time $\tau$, for a walker reaching the barrier, is given by \cite{cox1977theory},
\begin{equation}
g(\tau) = \frac{x_b}{\sigma \sqrt{2\pi \tau^3}}
\exp \left(-\frac{(x_b -\mu \tau)^2}{2 \sigma^2 \tau} \right),
\label{eq:mpt}
\end{equation}
which can be straightforwardly linked to the distribution of possession times $P(T)$.
In this theoretical framework, we used eq. (\ref{eq:mpt}) to perform a non--linear fit of $P(T)$, via the parameters $\mu$ and $\sigma$.
We set $x_b= a$, as the action radius can be thought of as the barrier's position.
The result, presented in Fig.~\ref{fi:gt} B, shows that the fit is statistically significant, yielding a correlation coefficient $r^2 = 0.97$, with $\mu= 0.09 \pm 0.02$ and $\sigma= 0.39 \pm 0.03$. Moreover, notice that we achieve very good agreement, in magnitude, between the drift value and $\avg{\delta}_{P(\delta)}$.
Therefore, we can conclude that, in the context of the model, a random walk with a constant step $\delta$ and a drift $\mu$ is a good approximation for a walker drawing steps from $P(\delta)$. Furthermore, this approximation explains the long tail observed in $P(T)$ for both the outcomes of the model and the empirical observations.
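The density of Eq.~(\ref{eq:mpt}) and the corresponding fit can be reproduced with a short Python sketch such as the one below; the binned arrays in the usage comment are placeholders for the possession-time histogram, and the initial guesses are ours.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def first_passage_density(tau, mu, sigma, x_b=1.0):
    """First-passage-time density g(tau) of Eq. (2), with the barrier at x_b = a."""
    return (x_b / (sigma * np.sqrt(2.0 * np.pi * tau**3))
            * np.exp(-(x_b - mu * tau)**2 / (2.0 * sigma**2 * tau)))

# Hypothetical usage: T_centers and P_T hold the binned possession-time
# distribution obtained from the simulations (or from the data).
# popt, pcov = curve_fit(first_passage_density, T_centers, P_T, p0=[0.1, 0.4])
\end{verbatim}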
\section{Discussion}
In this contribution, we focused on analyzing the dynamics of ball possession intervals.
We have performed an empirical study of a novel dataset, detected relevant statistical patterns, and, on this basis, proposed a numerical agent--based model.
This model is simple, and it can be easily interpreted in terms of the features of the phenomenon under discussion.
Moreover, we proposed a theoretical interpretation of the numerical model in the frame of an even simpler but better--understood physical model: the Wiener process with drift and an absorbing barrier.
In this section, we extend the discussion regarding these results.
First, we fully characterize BPIs of the extensive dataset that compiles most of the events during the games, identifying the main contributions.
Four salient features were identified and used later as the input to devise a minimalist football model to study the dynamics of ball possession intervals.
Namely, (i) the most frequent type of events, (ii) events leading to a change in possession, (iii) the number of players participating in a BPI, and (iv) the different types of events during BPIs.
We found that the most frequent event is the {\it "Pass"}, which is about twice as frequent as the second most common event, {\it "Duels"}. The latter, in turn, is the most common type of event triggering ball possession changes.
In most cases, just two players are involved in a BPI, and during a BPI there are usually no more than two different types of events.
Prompted by these findings, we introduced a minimalist model composed of two {\it teammates} and a single {\it defender} that, following simple motion rules, emulates both on--ball and off--ball actions.
This model can be tuned by setting four independent parameters
$a$, $p$, $R_1$ and $R_2$, which control the action radius, the probability of making a pass, and the internal and external radii, respectively.
We evaluated the model's performance by comparing the outcomes with three statistical observables in the possession intervals, the distribution of possession time $P(T)$, the distribution of passes length $P(\Delta r,X)$, and the distribution of the number of passes $P(N)$.
To this end, we have introduced a simple method based on the evaluation of the Jensen--Shannon distances, as a criterion to fit the simulation's outcomes to the real data.
Remarkably, despite the simplicity of the model, it reproduces the empirical distributions very well.
Finally, to get a physical insight into the process behind ball possession dynamics, we map the model to a one--dimensional random walk in which the ball is fixed at the origin, and the defender moves taking non--uniform steps of length $\delta$.
We showed that since $\avg{\delta}_{P(\delta)}<0$ holds, the defender moves following a preferential direction towards the ball.
Then, we can use the theoretical framework of a Wiener process with drift and an absorbing barrier to describe the model's dynamics.
We evaluated this hypothesis by performing a non--linear fit to the distribution of possession times, $P(T)$, with the expression for the first-passage time of the Wiener process, finding very good agreement.
The mapping shows that the agents' dynamics in the numerical model can be understood in the frame of a simple physical system.
We can think of the game of football as a complex system where the interactions are based on cooperation and competition.
Competition is related to the teams' strategies; it concerns the problem of how to deal with the strengths and weaknesses of the opponent \cite{hewitt2016game}.
Strategies are usually planned in advance and are developed during the entire game, hence they can be associated with long--term patterns in the match.
Cooperation, on the other hand, can be linked to tactical aspects of the game,
where interactions confined to a reduced space on the field, over short periods of the match, and carried out by a reduced number of players
can be associated with short--term patterns.
Ball possession intervals are related to cooperative interactions.
Therefore, in this work, we are not studying the full dynamics of a football match but tactical aspects of the game.
In this frame, our work should be considered as a new step towards a better understanding of the interplay between the short-term dynamics and the emerging long-term patterns within the game of football when studied as complex systems with non-trivial interaction dynamics.
From a technical point of view, our model could be used as a starting point to simulate and analyze several tactical aspects of the game. Note that the main advantage of our simple numerical model is that it easily allows the introduction of complexity: more players, different types of interactions, etc.
For instance, simulations based on our model can be useful to design training sessions of small--sided games \cite{sangnier2019planning,eniseler2017high,reilly2005small},
in which coaches expose players to workouts under specific constraints: in reduced spaces, with a reduced number of players, with coordinated actions guided by different rules, etc. \cite{sarmento2018small}.
Moreover, by performing simulations it is possible to estimate the physical demands on the players, which is useful for session planning and post-hoc evaluation \cite{hodgson2014time}.
Lastly, as we said above, we consider that a full characterization of football dynamics should address both competitive and cooperative interactions.
In this work, we focused on the latter; a first step to address the former could be to analyze the spatiotemporal correlations between consecutive possession intervals.
In this regard, we leave the door open to future research in the area.
\begin{acknowledgments}
This work was partially supported by grants from CONICET (PIP 112 20150 10028), FonCyT (PICT-2017-0973), SeCyT–UNC (Argentina) and MinCyT Córdoba (PID PGC 2018).
\end{acknowledgments}
\section{Introduction}
\subsection{The betting market for the EPL}
Gambling on soccer is a global industry with revenues between \$700 billion and \$1 trillion a year (see "Football Betting - the Global Gambling Industry worth Billions." BBC Sport). Betting on the result of a soccer match is a rapidly growing market, and online real-time odds exists (Betfair, Bet365, Ladbrokes). Market odds for all possible score outcomes ($0-0, 1-0, 0-1, 2-0, ... $) as well as outright win, lose and draw are available in real time. In this paper, we employ a two-parameter probability model based on a Skellam process and a non-linear objective function to extract the expected scoring rates for each team from the odds matrix. The expected scoring rates then define the implied volatility of the game.
A key feature of our analysis is to use the real-time odds to re-calibrate the expected scoring rates instantaneously as events evolve in the game. This allows us to assess how market expectations change according to exogenous events such as corner kicks, goals, and red cards. A plot of the implied volatility provides a diagnostic tool to show how the market reacts to event information. In particular, we study the evolution of the odds implied final score prediction over the course of the game. Our dynamic Skellam model fits the scoring data well in a calibration study of 1520 EPL games from the 2012 - 2016 seasons.
The goal of our study is to show how a parsimonious two-parameter model can flexibly model the evolution of the market odds matrix of final scores. We provide a non-linear objective function to fit our Skellam model to instantaneous market odds matrix. We then define the implied volatility of an EPL game and use this as a diagnostics to show how the market's expectation changes over the course of a game.
One advantage of viewing market odds through the lens of a probability model is the ability to obtain more accurate estimates of winning probabilities. For example, a typical market ``vig'' (or liquidity premium for bookmakers to make a return) is $5-8\%$ in the win, lose, draw market. There is also extra information about the win odds contained in the final-score odds, and our approach helps to extract that information. Another application of the Skellam process is to model final score outcomes as a function of characteristics (see \cite{Karlis:2003ck, Karlis:2009dq}).
The rest of the paper is outlined as follows. The next subsection provides connections with existing research. Section 2 presents our Skellam process model for representing the difference in goals scored. We then show how to make use of an odds matrix while calibrating the model parameters. We calculate a dynamic implied prediction of any score and hence win, lose and draw outcomes, using real-time online market odds. Section 3 illustrates our methodology using an EPL game between Everton and West Ham during the 2015-2016 season. Finally, Section 4 discusses extensions and concludes with directions for future research.
\subsection{Connections with Existing Work}
There is considerable interest in developing probability models for the evolution of the score of sporting events.
\cite{Stern:1994hj} and \cite{Polson:2015ira} propose a continuous time Brownian motion model for the difference in scores in a sporting event and show how to calculate the implied volatility of a game.
We build on their approach by using a difference of Poisson processes (a.k.a. Skellam process) for the discrete evolution of the scores of an EPL game, see also
\cite{Karlis:2003ck, Karlis:2009dq} and \cite{Koopman2014}.
Early probabilistic models (\citealt{Lee:1997ct}) predicted the outcome of soccer matches using independent Poisson processes. Later models incorporate a correlation between the two scores and model the number of goals scored by each team using bivariate Poisson models (see \cite{Maher:1982hr} and \cite{Dixon:1997jc}). Our approach follows \cite{Stern:1994hj} by modeling the score difference (a.k.a. margin of victory), instead of modeling the number of goals and the correlation between scores directly.
There is also an extensive literature on soccer gambling and market efficiency. For example, \cite{Vecer2009} estimates the scoring intensity in a soccer game from betting markets. \cite{Dixon:2004gj} presents a detailed comparison of odds set by different bookmakers. \cite{Fitt:2009iv} uses market efficiency to analyze the mispricing of cross-sectional odds
and \cite{Fitt:2005bj} models online soccer spread bets.
Another line of research asks whether betting markets are efficient and, if not, how to exploit potential inefficiencies in the betting market. For example, \cite{Levitt2004} discusses the structural differences between the gambling market and financial markets. The study examines whether bookmakers are more skilled at game prediction than bettors and in turn exploit bettor biases by setting prices that deviate from the market clearing price. \cite{Avery:1999jg} examine the hypothesis that sentimental bettors act like noise traders and can affect the path of prices in soccer betting markets.
\section{Skellam Process for EPL scores}
To model the outcome of a soccer game between team A and team B, we let the difference in scores be $N(t)=N_A(t)-N_B(t)$, where
$N_A(t)$ and $N_B(t)$ are the team scores at time point $t$. Negative values of $N(t)$ indicate that team A is behind. The process begins at $N(0) = 0$ and ends at time one with $N(1)$ representing the final score difference. The probability $\mathbb{P}(N(1)>0)$ represents the ex-ante odds of team A winning.
Half-time score betting, which is common in Europe, is available for the distribution of $N(\frac{1}{2})$.
We develop a probabilistic model for the distribution of $N(1)$ given $N(t)=\ell$ where $\ell$ is the current lead. This model, together with the current market odds can be used to infer the expected scoring rates of the two teams and then to define the implied volatility of the outcome of the match. We let $ \lambda^A$ and $ \lambda^B $ denote the expected scoring rates for the whole game. We allow for the possibility that the scoring abilities (and their market expectations) are time-varying, in which case we denote the expected scoring rates after time $t$ by $ \lambda^A_t $ and $\lambda^B_t$ respectively, instead of $ \lambda^A(1-t) $ and $\lambda^B(1-t)$.
\subsection{Implied Score Prediction from EPL Odds}
The Skellam distribution is defined as the difference between two independent Poisson variables, see \cite{Skellam:1946kb}, \cite{Sellers:2012uy}, \cite{Alzaid:2010ua}, and \cite{BarndorffNielsen:2012tx}. \cite{Karlis:2009dq} shows how Skellam distribution can be extended to a difference of distributions which have a specific trivariate latent variable structure.
Following \cite{Karlis:2003ck}, we decompose the scores of each team as
\begin{equation}
\left\{
\begin{aligned}
N_A(t) &=& W_A(t)+W(t) \\
N_B(t) &=& W_B(t)+W(t)
\end{aligned}
\right.
\end{equation}
where $W_A(t)$, $W_B(t)$ and $W(t)$ are independent processes with
$W_A(t) \sim Poisson (\lambda^A t)$, $W_B(t) \sim Poisson (\lambda^B t) . $
Here $W(t)$ is a non-negative integer-valued process to induce a correlation between the numbers of goals scored.
By modeling the score difference, $N(t)$, we avoid having to specify the distribution of $W(t)$ as the difference in goals scored is independent of $W(t)$. Specifically, we have
a Skellam distribution
\begin{equation}
N(t) = N_A(t) - N_B(t) = W_A(t) - W_B(t) \sim Skellam(\lambda^A t,\lambda^B t).
\label{skellam}
\end{equation}
where $ \lambda^A t $ is the cumulative expected scoring rate on the interval $ [0,t]$.
At time $t$, we have the conditional distributions
\begin{equation}
\left\{
\begin{aligned}
W_A(1) - W_A(t) &\sim& Poisson (\lambda^A(1-t)) \\
W_B(1) - W_B(t) &\sim& Poisson (\lambda^B(1-t)) \\
\end{aligned}
\right.
\end{equation}
Now let $N^*(1-t)$ denote the score difference of the sub-game that starts at time $t$ and ends at time 1, so that its duration is $(1-t)$. By construction, $N(1) = N(t) + N^*(1-t)$. Since $N^*(1-t)$ and $N(t)$ are differences of two Poisson processes over two disjoint time periods, by the independent increments property of the Poisson process, $N^*(1-t)$ and $N(t)$ are independent.
Hence, we can re-express equation (\ref{skellam}) in terms of $N^*(1-t)$, and deduce
\begin{equation}
N^*(1-t) = W^*_A(1-t) - W^*_B(1-t) \sim Skellam(\lambda^A_t,\lambda^B_t)
\end{equation}
where $W^*_A(1-t) = W_A(1) - W_A(t)$, $\lambda^A = \lambda^A_0$ and $\lambda^A_t=\lambda^A(1-t)$. A natural interpretation of the expected scoring rates, $\lambda^A_t$ and $\lambda^B_t$, is that they reflect the "net" scoring ability of each team from time $t$ to the end of the game. The term $W(t)$
models a common strength due to external factors, such as weather. The "net" scoring abilities of the two teams are assumed to be independent of each other as well as of the common strength factor.
We can calculate the probability of any particular score difference, given by $\mathbb{P}(N(1)=x|\lambda^A,\lambda^B)$, at the end of the game where the $ \lambda$'s are estimated from the matrix of market odds. Team strength and "net" scoring ability can be influenced by various underlying factors, such as the offensive and defensive abilities of the two teams. The goal of our analysis is to only represent these parameters at every instant as a function of the market odds matrix for all scores.
To derive the implied winning probability, we use the law of total probability. The probability mass function of a Skellam random variable is the convolution of two Poisson distributions:
\begin{eqnarray}
\mathbb{P}(N(1)=x|\lambda^A,\lambda^B)
&=&\sum_{k=0}^\infty \mathbb{P}(W_B(1)=k-x|W_A(1)=k, \lambda^B) \mathbb{P}(W_A(1)=k|\lambda^A) \nonumber\\
&=&\sum_{k=max\{0,x\}}^\infty \left\{e^{-\lambda^B}\frac{(\lambda^B)^{k-x}}{(k-x)!}\right\}\left\{e^{-\lambda^A}\frac{(\lambda^A)^k}{k!}\right\}\nonumber\\
&=&e^{-(\lambda^A+\lambda^B)} \sum_{k=max\{0,x\}}^\infty\frac{(\lambda^B)^{k-x}(\lambda^A)^k}{(k-x)!k!}\nonumber \\
&=&e^{-(\lambda^A+\lambda^B)} \left(\frac{\lambda^A}{\lambda^B}\right)^{x/2}I_{|x|}(2\sqrt{\lambda^A\lambda^B})
\end{eqnarray}
where $I_r(x)$ is the modified Bessel function of the first kind (for full details, see \cite{Alzaid:2010ua}), which has the series representation
\[ I_r(x)=\left(\frac{x}{2}\right)^r \sum_{k=0}^{\infty} \frac{(x^2/4)^k}{k!\Gamma(r+k+1)}. \]
The probability of home team A winning is given by
\begin{equation}
\mathbb{P}(N(1)>0|\lambda^A,\lambda^B)=\sum_{x=1}^\infty \mathbb{P}(N(1)=x|\lambda^A,\lambda^B).
\end{equation}
In practice, we truncate the number of possible goals since the probability of an extreme score difference is negligible. Unlike the Brownian motion model for the evolution of the outcome in a sports game (\cite{Stern:1994hj}, \cite{Polson:2015ira}), the probability of a draw in our setting is not zero. Instead, $\mathbb{P}(N(1)=0|\lambda^A,\lambda^B)>0$ depends on the sum and product of two parameters $\lambda^A$ and $\lambda^B$ and thus the odds of a draw are non-zero.
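To make this computation concrete, the following R sketch evaluates the Skellam probability mass function through the Bessel-function form above and sums it over a truncated range of score differences to obtain win, draw and lose probabilities. The function names (\texttt{dskellam}, \texttt{outcome\_probs}) and the truncation bound \texttt{max\_goals} are our own illustrative choices rather than part of the model.
\begin{verbatim}
# Skellam pmf P(N(1) = x | lambda_A, lambda_B) via the modified Bessel
# function of the first kind (base R besselI).
dskellam <- function(x, lambda_A, lambda_B) {
  sapply(x, function(k)
    exp(-(lambda_A + lambda_B)) *
      (lambda_A / lambda_B)^(k / 2) *
      besselI(2 * sqrt(lambda_A * lambda_B), nu = abs(k)))
}

# Implied win/draw/lose probabilities, truncating extreme score differences.
outcome_probs <- function(lambda_A, lambda_B, max_goals = 15) {
  x <- -max_goals:max_goals
  p <- dskellam(x, lambda_A, lambda_B)
  c(win = sum(p[x > 0]), draw = p[x == 0], lose = sum(p[x < 0]))
}

# Illustration with rates close to the pre-game estimates of Section 3.
outcome_probs(2.33, 1.44)
\end{verbatim}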
For two evenly matched teams with $\lambda^A=\lambda^B=\lambda$, we have
\begin{equation}
\mathbb{P}(N(1)=0|\lambda^A=\lambda^B=\lambda)
= e^{-2\lambda}I_0(2\lambda)
= \sum_{k=0}^{\infty} \frac{1}{(k!)^2}\left(\frac{\lambda^k}{e^\lambda}\right)^2.
\end{equation}
Figure \ref{draw} shows that this probability is a monotone decreasing function of $\lambda$ and so two evenly matched teams with large $\lambda$'s are less likely to achieve a draw.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.5]{draw.pdf}
\caption{Left: Probability of a draw for two evenly matched teams. Right: Probability of score differences for two evenly matched teams. Lambda values are denoted by different colors.}
\label{draw}
\end{figure}
Another quantity of interest is the conditional probability of winning as the game progresses. If the current lead at time $t$ is $\ell$, with $N(t)=\ell=N_A(t)-N_B(t)$,
then the final score difference $(N(1)|N(t)=\ell)$ can be calculated by using the fact that $N(1)=N(t)+N^*(1-t)$, where $N(t)$ and $N^*(1-t)$ are independent. Specifically, conditioning on $N(t)=\ell$, we have the identity
\[ N(1)=N(t)+N^*(1-t)=\ell+Skellam(\lambda^A_t,\lambda^B_t).\]
We are now in a position to find the conditional distribution ($N(1)=x|N(t)=\ell$) for every time point $t$ of the game given the current score. Simply put, we have the time homogeneous condition
\begin{eqnarray}
\mathbb{P}(N(1)=x|\lambda^A_t,\lambda^B_t,N(t)=\ell)&=&\mathbb{P}(N(1)-N(t)=x-\ell |\lambda^A_t,\lambda^B_t,N(t)=\ell)\nonumber\\
&=&\mathbb{P}(N^* (1-t)=x-\ell |\lambda^A_t,\lambda^B_t)
\end{eqnarray}
where $\lambda^A_t$, $\lambda^B_t$, $\ell$ are given by market expectations at time $t$.
Two conditional probabilities of interest are the chances that the home team A wins,
\begin{eqnarray}
\mathbb{P}(N(1)>0|\lambda^A_t,\lambda^B_t,N(t)=\ell)&=&\mathbb{P}(\ell+ N^*(1-t)>0|\lambda^A_t,\lambda^B_t)\nonumber\\
&=&\mathbb{P}(Skellam(\lambda^A_t,\lambda^B_t)>-\ell |\lambda^A_t,\lambda^B_t)\nonumber\\
&=&\sum_{x>-\ell}e^{-(\lambda^A_t+\lambda^B_t)}\left(\frac{\lambda^A_t}{\lambda^B_t}\right)^{x/2}I_{|x|}(2\sqrt{\lambda^A_t\lambda^B_t}).
\end{eqnarray}
and the conditional probability of a draw at time $t$ is
\begin{eqnarray}
\mathbb{P}(N(1)=0|\lambda^A_t,\lambda^B_t,N(t)=\ell)&=&\mathbb{P}(\ell+N^*(1-t)=0|\lambda^A_t,\lambda^B_t)\nonumber\\
&=&\mathbb{P}(Skellam(\lambda^A_t,\lambda^B_t)=-\ell |\lambda^A_t,\lambda^B_t)\nonumber\\
&=&e^{-(\lambda^A_t+\lambda^B_t)}\left(\frac{\lambda^A_t}{\lambda^B_t}\right)^{-\ell/2}I_{|\ell |}(2\sqrt{\lambda^A_t\lambda^B_t}).
\end{eqnarray}
\noindent The conditional probability at time $t$ of home team A losing is
$ 1-\mathbb{P}(N(1)>0|\lambda^A_t,\lambda^B_t,N(t)=\ell) $.
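A short R sketch of these conditional calculations, reusing \texttt{dskellam} from the previous sketch, is given below; the lead \texttt{ell} and the remaining-game rates in the example call are illustrative values only.
\begin{verbatim}
# Conditional win/draw/lose probabilities at time t, given the current
# lead ell and the remaining-game rates lambda_A_t and lambda_B_t.
cond_outcome_probs <- function(ell, lambda_A_t, lambda_B_t, max_goals = 15) {
  x <- -max_goals:max_goals            # possible values of N*(1 - t)
  p <- dskellam(x, lambda_A_t, lambda_B_t)
  c(win  = sum(p[x > -ell]),
    draw = sum(p[x == -ell]),
    lose = sum(p[x < -ell]))
}

# Home team leading 1-0 with illustrative remaining rates 0.9 and 1.1.
cond_outcome_probs(ell = 1, lambda_A_t = 0.9, lambda_B_t = 1.1)
\end{verbatim}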
We now turn to the calibration of our model from given market odds.
\subsection{Market Calibration}
Our information set at time $t$, denoted by $\mathcal{I}_t$, includes the current lead $N(t) = \ell$ and the market odds for $\left\{Win, Lose, Draw, Score\right\}_t$, where
$Score_t = \{ ( i - j ) : i, j = 0, 1, 2, \ldots\}$. These market odds can be used to calibrate a Skellam distribution, which has only two parameters, $\lambda^A_t$ and $\lambda^B_t$. The best fitting Skellam model with parameters $\{\hat\lambda^A_t,\hat\lambda^B_t\}$ will then provide a better estimate of the market's information concerning the outcome of the game than any individual market (such as win odds), as each individual market is subject to a "vig" and liquidity. Suppose that the fractional odds for all possible final score outcomes are given by a bookmaker, and that the quoted odds for a final score of 2-1 are 3/1. In this case, the bookmaker pays out three times the amount staked by the bettor if the outcome is indeed 2-1. Fractional odds are used in the UK, while money-line odds are favored by American bookmakers, with $2:1$ ("two-to-one") implying that the bettor stands to make a \$200 profit on a \$100 stake. The market implied probability is the probability that makes the expected profit of a bet equal to 0. In this case, the implied probability is $p=1/(1+3)=1/4$ and the expected profit is $\mu=-1*(1-1/4)+3*(1/4)=0$. We denote these odds as $odds(2,1)=3$. To convert all the available odds to implied probabilities, we use the identity
\[ \mathbb{P}(N_A(1) = i, N_B(1) = j)=\frac{1}{1+odds(i,j)}. \]
The market odds matrix, $O$, with elements $o_{ij}=odds(i-1,j-1)$, $i,j=1,2,3...$ provides all possible combinations of final scores. Odds on extreme outcomes are not offered by the bookmakers. Since the probabilities are tiny, we set them equal to 0. The sum of the possible probabilities is still larger than 1 (see \cite{Dixon:1997jc} and \cite{Polson:2015ira}). This "excess" probability corresponds to a quantity known as the "market vig." For example, if the sum of all the implied probabilities is 1.1, then the expected profit of the bookmaker is 10\%. To account for this phenomenon, we scale the probabilities to sum to 1 before estimation.
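As a minimal sketch, the conversion from a quoted odds matrix to rescaled implied probabilities, and the aggregation to score-difference probabilities, can be written in R as follows; \texttt{odds\_matrix} is assumed to hold fractional odds with \texttt{NA} for unquoted extreme scores, and both function names are our own.
\begin{verbatim}
# Convert a matrix of fractional odds (rows: home goals 0,1,2,...;
# columns: away goals 0,1,2,...) into implied probabilities,
# rescaled so that they sum to one (removing the "market vig").
odds_to_probs <- function(odds_matrix) {
  raw <- 1 / (1 + odds_matrix)     # implied probability of each exact score
  raw[is.na(raw)] <- 0             # unquoted extreme scores get probability 0
  raw / sum(raw)
}

# Marginal implied probabilities of each score difference i - j.
score_diff_probs <- function(prob_matrix) {
  i <- row(prob_matrix) - 1
  j <- col(prob_matrix) - 1
  tapply(as.vector(prob_matrix), as.vector(i - j), sum)
}
\end{verbatim}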
To estimate the expected scoring rates, $\lambda^A_t$ and $\lambda^B_t$, for the sub-game $N^*(1-t)$, the odds from a bookmaker should be adjusted by $N_A(t)$ and $N_B(t)$. For example, if $N_A(0.5)=1$, $N_B(0.5)=0$ and $odds(2,1)=3$ at half time, these observations actually say that the odds for the second half score being 1-1 are 3 (the outcomes for the whole game and the first half are 2-1 and 1-0 respectively, thus the outcome for the second half is 1-1). The adjusted ${odds}^*$ for $N^*(1-t)$ are calculated using the original odds as well as the current scores and are given by
\begin{equation}
{odds}^*(x,y)=odds(x+N_A(t),y+N_B(t)).
\end{equation}
At time $t$ $(0\leq t\leq 1)$, we calculate the implied conditional probabilities of score differences using odds information
\begin{equation}
\mathbb{P}(N(1)=k|N(t)=\ell)=\mathbb{P}(N^*(1-t)=k-\ell)=\frac{1}{c}\sum_{i-j=k-\ell}\frac{1}{1+{odds}^*(i,j)}\end{equation}
where $c=\sum_{i,j} \frac{1}{1+{odds}^*(i,j)}$ is a scale factor, $\ell=N_A(t)-N_B(t)$, $i,j\geq 0$ and $k=0,\pm 1,\pm 2\ldots$.
Moments of the Poisson distribution make it straightforward to derive the moments of a Skellam random variable with parameters $\lambda^A$ and $\lambda^B$. The unconditional mean and variance are given by $$E[N(1)]=E[W_A(1)]-E[W_B(1)]=\lambda^A-\lambda^B,$$
$$V[N(1)]=V[W_A(1)]+V[W_B(1)]=\lambda^A+\lambda^B.$$ Therefore, the conditional moments are given by
\begin{equation}
\left\{
\begin{aligned}
E[N(1)|N(t)=\ell]&=\ell+(\lambda^A_t-\lambda^B_t),\\
V[N(1)|N(t)=\ell]&=\lambda^A_t+\lambda^B_t.
\end{aligned}
\right.
\end{equation}
A method of moments estimate of the $\lambda$'s is given by the solution to
\begin{equation}
\left\{
\begin{aligned}
\hat E[N(1)|N(t)=\ell]&=\ell+(\lambda^A_t-\lambda^B_t),\\
\hat V[N(1)|N(t)=\ell]&=\lambda^A_t+\lambda^B_t,
\end{aligned}
\right.
\end{equation}
where $\hat E$ and $\hat V$ are the expectation and variance calculated using the market implied conditional probabilities. Unless $|\hat E[N(1)|N(t)=\ell]-\ell| \leq \hat V[N(1)|N(t)=\ell]$, one of the resulting estimates would be negative. To address this issue, we define the residuals
\begin{equation}
\left\{
\begin{aligned}
D_E&=\hat E[N(1)|N(t)=\ell]-[\ell+(\lambda^A_t-\lambda^B_t)],\\
D_V&=\hat V[N(1)|N(t)=\ell]-(\lambda^A_t+\lambda^B_t).
\end{aligned}
\right.
\end{equation}
We then calibrate parameters by adding the constraints $\lambda^A_t\geq 0$ and $\lambda^B_t\geq 0$ and solving the following equivalent constrained optimization problem.
\begin{eqnarray}
\left(\hat\lambda^A_t,\hat\lambda^B_t\right) &=& \underset{\lambda^A_t,\lambda^B_t}{\arg\min} \left\{D_E^2+D_V^2\right\}\\
&\text{subject to} & \lambda^A_t\geq 0, \lambda^B_t\geq 0 \nonumber
\end{eqnarray}
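This box-constrained least-squares problem is simple enough to solve with a generic optimizer. The R sketch below uses \texttt{optim} with the L-BFGS-B method; \texttt{E\_hat} and \texttt{V\_hat} denote the mean and variance of the score difference computed from the market implied conditional probabilities, and the example values are roughly consistent with the pre-game estimates reported in Section 3.
\begin{verbatim}
# Calibrate (lambda_A_t, lambda_B_t) by minimizing D_E^2 + D_V^2
# subject to non-negativity constraints.
calibrate_skellam <- function(E_hat, V_hat, ell) {
  obj <- function(par) {
    D_E <- E_hat - (ell + (par[1] - par[2]))
    D_V <- V_hat - (par[1] + par[2])
    D_E^2 + D_V^2
  }
  fit <- optim(par = c(1, 1), fn = obj, method = "L-BFGS-B", lower = c(0, 0))
  c(lambda_A_t = fit$par[1], lambda_B_t = fit$par[2])
}

# Pre-game example (ell = 0): recovers rates close to 2.33 and 1.44.
calibrate_skellam(E_hat = 0.89, V_hat = 3.77, ell = 0)
\end{verbatim}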
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.5]{prob.pdf}
\caption{The Skellam process model for winning margin and game simulations. The top left panel shows the outcome distribution using odds data before the match starts. Each bar represents the probability of a distinct final score difference, with its color corresponding to the result of win/lose/draw. Score differences larger than 5 or smaller than -5 are not shown. The top right panel shows a set of simulated Skellam process paths for the game outcome. The bottom row has the two figures updated using odds data available at half-time.}
\label{prob}
\end{figure}
Figure \ref{prob} illustrates a simulated evolution of an EPL game between Everton and West Ham (March 5th, 2016) with their estimated parameters. It provides a discretized version of Figure 1 in \cite{Polson:2015ira}. The outcome distributions before the match and as updated at half-time are given in the two left panels. The top right panel illustrates a simulation-based approach to visualizing how the model works in the dynamic evolution of the score difference. In the bottom right panel, from half-time onwards, we also simulate a set of possible Monte Carlo paths to the end of the game. This illustrates the discrete nature of our Skellam process and how the scores evolve.
\subsection{Model Diagnostics}
To assess the performance of our score-difference Skellam model calibration for the market odds, we have collected data from {\tt ladbrokes.com} on the correct score odds of 18 EPL games (from October 15th to October 22nd, 2016) and plot the calibration results in Figure \ref{18games}. The Q-Q plot of $\log(odds)$ is also shown. On average, there are 13 different outcomes per game, i.e., $N(1) = -6, -5, ... 0, ..., 5, 6$. In total 238 different outcomes are used. We compare our Skellam implied probabilities with the market implied probabilities for every outcome of the 18 games. If the model calibration is sufficient, all the data points should lie on the diagonal line. \begin{figure}[ht!]
\centering
\includegraphics[scale=0.5]{18games.pdf}
\caption{Left: Market implied probabilities for the score differences versus Skellam implied probabilities. Every data point represents a particular score difference; Right: Market log(odds) quantiles versus Skellam implied log(odds) quantiles. Market odds (from {\tt ladbrokes.com}) of 18 games in EPL 2016-2017 are used (on average 13 score differences per game). The total number of outcomes is 238.}
\label{18games}
\end{figure}
The left panel of Figure \ref{18games} demonstrates that our Skellam model is calibrated to the market odds sufficiently well, except for the underestimated draw probabilities. \cite{Karlis:2009dq} describe this underestimation phenomenon in a Poisson-based model for the number of goals scored. Following their approach, we apply a zero-inflated version of the Skellam distribution to improve the fit on draw probabilities, namely
\begin{equation}
\left\{
\begin{aligned}
\tilde{P}(N(1) = 0) &= p + (1-p) P(N(1) = 0)\\
\tilde{P}(N(1) = x) &= (1-p) P(N(1) = x) \qquad \text{if }x\neq 0.
\end{aligned}
\right.
\end{equation}
Here $0<p<1$ is an inflation factor and $\tilde{P}$ denotes the inflated probabilities. We also consider another type of inflation here
\begin{equation}
\left\{
\begin{aligned}
\tilde{P}(N(1) = 0) &= (1+\theta) P(N(1) = 0)\\
\tilde{P}(N(1) = x) &= (1-\gamma) P(N(1) = x) \qquad \text{if }x\neq 0
\end{aligned}
\right.
\end{equation}
where $\theta$ is the inflation factor and $P(N(1) = 0) = \gamma/(\gamma+\theta)$.
Both types of inflation factors have a corresponding interpretation in terms of how the bookmakers set odds. With the first type of factor, the bookmakers generate two different sets of probabilities, one specifically for the draw probability (namely the inflation factor $p$) and the other for all the outcomes using the Skellam model. The ``market vig" for all the outcomes is a constant. With the second type, the bookmakers use the Skellam model to generate the probabilities for all the outcomes. Then they apply a larger ``market vig" for draws than for other outcomes. \cite{yates1982} also point out the ``collapsing" tendency in forecasting behavior, whereby the bookmakers are inclined to report forecasts of 50\% when they feel they know little about the event. In the right panel of Figure \ref{18games}, we see that the Skellam implied $\log(odds)$ has a heavier right tail than the market implied $\log(odds)$. This effect results from the overestimation of extreme outcomes, which in turn is due to a market microstructure effect, the market ``vig".
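A sketch of the two inflation adjustments, applied to a vector of Skellam probabilities \texttt{p} indexed by score differences \texttt{x}, is given below; in the second function, \texttt{gamma} is derived from \texttt{theta} so that the adjusted probabilities still sum to one, which is equivalent to the constraint $P(N(1)=0)=\gamma/(\gamma+\theta)$.
\begin{verbatim}
# Type 1: mix a point mass at a draw with the Skellam probabilities.
inflate_type1 <- function(p, x, pi0) {
  ifelse(x == 0, pi0 + (1 - pi0) * p, (1 - pi0) * p)
}

# Type 2: inflate the draw by (1 + theta) and deflate other outcomes by
# (1 - gamma), choosing gamma so the probabilities still sum to one.
inflate_type2 <- function(p, x, theta) {
  p0    <- p[x == 0]
  gamma <- theta * p0 / (1 - p0)
  ifelse(x == 0, (1 + theta) * p, (1 - gamma) * p)
}
\end{verbatim}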
\begin{figure}[ht!]
\centering
\includegraphics[width=7in, height=3.5in]{inflation2.pdf}
\caption{Left: Market implied probabilities of win and draw. The fitted curves are Skellam implied probabilities with fixed $\lambda^A\lambda^B = 1.8$. Right: Market odds and result frequency of home team winning. 1520 EPL games from 2012 to 2016 are used. The dashed line represents: Frequency = Market Implied Probability}
\label{inflation}
\end{figure}
To assess the out-of-sample predictive ability of the Skellam model, we analyze the market (win, lose, draw) odds for 1520 EPL games (from 2012 to 2016, 380 games per season). The sample covariance of the end-of-game scores, $N_A(1)$ and $N_B(1)$, is close to 0. If we assume the parameters stay the same across games, then the estimates are $\hat\lambda^A=1.5$ and $\hat\lambda^B=1.2$. Since the probabilities of win, lose and draw sum to 1, we only plot the market implied probabilities of win and draw. In the left panel of Figure \ref{inflation}, the draw probability is nearly a deterministic non-linear function of the win probability. To illustrate our model, we set the value of $\lambda^A\lambda^B = 1.5 \times 1.2 = 1.8$ and plot the curve of Skellam implied probabilities (red line). We further provide the inflated Skellam probabilities (blue line for the first type and green line for the second type). As expected, the non-inflated Skellam model (red line) underestimates the draw probabilities while the second type of inflated Skellam model (green line) produces the better fit. We also group games by the market implied winning probability of home teams $P(N(1)>0)$: (0.05,0.1], (0.1,0.15], $\cdots$, (0.8,0.85]. We calculate the frequency of home team winning for each group. In the right panel of Figure \ref{inflation}, the barplot of frequencies (the x-axis shows the scaled odds) shows that the market is efficient, i.e., the frequency is close to the corresponding market implied probability and our Skellam model is calibrated to the market outcome for this dataset.
\subsection{Time-Varying Extension}
One extension that is clearly warranted is allowing for time-varying $\{\lambda^A_t, \lambda^B_t\}$ where the Skellam model is re-calibrated dynamically through updated market odds during the game. We use the current $\{\lambda^A_t, \lambda^B_t\}$ to project possible results of the match in our Skellam model. Here $\{\lambda^A_t, \lambda^B_t\}$ reveal the market expectation of the scoring rates of both teams from time $t$ to the end of the game as the game progresses. Similar to the martingale approach of \cite{Polson:2015ira}, $\{\lambda^A_t, \lambda^B_t\}$ reveal the best prediction of the game result. From another point of view, this approach is the same as assuming homogeneous rates for the rest of the game.
An alternative approach to time-varying $\{\lambda^A_t, \lambda^B_t\}$ is to use a Skellam regression with conditioning information such as possession percentages, shots (on goal), corner kicks, yellow cards, red cards, etc. We would expect jumps in the $\{\lambda^A_t, \lambda^B_t\}$ during the game when some important events happen. A typical structure takes the form
\begin{equation}
\left\{
\begin{aligned}
\log(\lambda^A_t) &=& \alpha_A + \beta_A X_{A,t-1} \\
\log(\lambda^B_t) &=& \alpha_B + \beta_B X_{B,t-1},
\end{aligned}
\right.
\end{equation}
estimated using standard log-linear regression.
Our approach relies on the betting market being efficient so that the updating odds should contain all information of game statistics. Using log differences as the dependent variable is another alternative with a state space evolution. \cite{Koopman2014} adopt stochastically time-varying densities in modeling the Skellam process. \cite{Barndorff-Nielsen2012a} is another example of the Skellam process with different integer valued extensions in the context of high-frequency financial data. Further analysis is required, and this produces a promising area for future research.
\section{Example: Everton vs West Ham (3/5/2016) }
We collect the real-time online betting odds data from {\tt ladbrokes.com} for an EPL game between Everton and West Ham on March 5th, 2016. By collecting real-time online betting data at 10-minute intervals, we can show the evolution of the betting market's prediction of the final result. We do not account for stoppage time in either half and focus on a 90-minute game.
\subsection{Implied Skellam Probabilities}
\begin{table}[ht!]
\centering
\begin{tabular}{@{}ccccccc@{}}
\toprule
Everton \textbackslash West Ham & 0 & 1 & 2 & 3 & 4 & 5 \\ \midrule
0 & 11/1 & 12/1 & 28/1 & 66/1 & 200/1 & 450/1 \\
1 & 13/2 & 6/1 & 14/1 & 40/1 & 100/1 & 350/1 \\
2 & 7/1 & 7/1 & 14/1 & 40/1 & 125/1 & 225/1 \\
3 & 11/1 & 11/1 & 20/1 & 50/1 & 125/1 & 275/1 \\
4 & 22/1 & 22/1 & 40/1 & 100/1 & 250/1 & 500/1 \\
5 & 50/1 & 50/1 & 90/1 & 150/1 & 400/1 & \\
6 & 100/1 & 100/1 & 200/1 & 250/1 & & \\
7 & 250/1 & 275/1 & 375/1 & & & \\
8 & 325/1 & 475/1 & & & & \\ \bottomrule
\end{tabular}
\caption{Original odds data from Ladbrokes before the game started\label{Table1}}
\end{table}
Table \ref{Table1} shows the raw odds data right before the game started. We need to transform the odds data into probabilities. For example, for the outcome 0-0, odds of 11/1 are equivalent to a probability of 1/12. Then we can calculate the marginal probability of every score difference from -4 to 5. We neglect those extreme scores with small probabilities and rescale the sum of event probabilities to one.
\begin{figure}[htb!]
\centering
\includegraphics[scale=0.6]{comparison.pdf}
\caption{Market implied probabilities versus the probabilities estimated by the model at different time points, using the parameters given in Table \ref{lambda} \label{comparison}.}
\end{figure}
In Figure \ref{comparison}, the probabilities estimated by the model are compared with the market implied probabilities. As we see, during the course of the game, the Skellam assumption suffices to approximate the market expectation of the score difference distribution. This set of plots is evidence of the goodness-of-fit of the Skellam model.
\begin{table}[ht!]
\centering
\begin{tabular}{c c c c c c c c c c c}
\toprule
Score difference&-4&-3&-2&-1&0&1&2&3&4&5\\
\midrule
Market Prob. (\%)& 1.70 & 2.03 & 4.88 &12.33& 21.93 &22.06 &16.58 &9.82 &4.72 &2.23\\
Skellam Prob.(\%)& 0.78 & 2.50 & 6.47 & 13.02 & 19.50 & 21.08 & 16.96 & 10.61 & 5.37 & 2.27\\
\bottomrule
\end{tabular}
\caption{Market implied probabilities for the score differences versus Skellam implied probabilities before the game. The estimated parameters are $\hat\lambda^A=2.33$, $\hat\lambda^B=1.44.$\label{Table2}}
\end{table}
Table \ref{Table2} shows the model implied probability for the outcome of score differences before the game, compared with the market implied probability. As we see, the Skellam model appears to have longer tails. Unlike the independent Poisson modeling in \cite{Dixon:1997jc}, our model is more flexible in allowing for correlation between the two teams' scores. However, the trade-off of this flexibility is that we only know the probability of the score difference instead of the exact scores.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.45]{game2.png}
\caption{The betting market data for Everton and West Ham is from {\tt ladbrokes.com}. Market implied probabilities (expressed as percentages) for three different results (Everton wins, West Ham wins and draw) are marked by three distinct colors, which vary dynamically as the game proceeds. The solid black line shows the evolution of the implied volatility (defined in Section \ref{IV}). The dashed line shows significant events in the game, such as goals and red cards. Five goals in this game are 13' Everton, 56' Everton, 78' West Ham, 81' West Ham and 90' West Ham.\label{Figure2}}
\end{figure}
Finally, we can plot these probability paths in Figure \ref{Figure2} to examine the behavior of the two teams and represent the market predictions of the final result. Notably, we see the probability of win/draw/loss change at important events during the game: goals scored and a red card penalty. In such a dramatic game, the winning probability of Everton rises to 90\% before the first goal of West Ham in the 78th minute. The first two goals scored by West Ham in the space of 3 minutes completely reverse the probability of winning. The probability of a draw then rises to 90\% until we see the last-gasp goal of West Ham that decides the game.
\subsection{How the Market Forecast Adapts} \label{IV}
A natural question arises as to how the market odds (win, lose, draw and actual score) adjust as the game evolves. This is similar to option pricing, where the Black-Scholes model uses its implied volatility to show how market participants' beliefs change. Our Skellam model plays a similar role and shows how the market forecast adapts to changing situations during the game. See \cite{Merton:1976ge} for references on jump models.
Our work builds on \cite{Polson:2015ira} who define the implied volatility of a NFL game. For an EPL game, we simply define the implied volatility as $\sigma_{IV,t} = \sqrt{\lambda^A_t + \lambda^B_t}$. As the market provides real-time information about $\lambda^A_t$ and $\lambda^B_t$, we can dynamically estimate $\sigma_{IV,t}$ as the game proceeds. Any goal scored is a discrete Poisson shock to the expected score difference (Skellam process) between the teams, and our odds implied volatility measure will be updated.
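Operationally, the implied volatility path can be traced by re-calibrating the remaining-game rates at each odds snapshot, as in the sketch below; \texttt{snapshots} is assumed to be a list with one element per time point containing the market moments and the current lead, and the sketch reuses \texttt{calibrate\_skellam} from earlier.
\begin{verbatim}
# Implied volatility path: re-calibrate the remaining-game rates at each
# odds snapshot and take the square root of their sum.
implied_vol_path <- function(snapshots) {
  sapply(snapshots, function(s) {
    lam <- calibrate_skellam(s$E_hat, s$V_hat, s$ell)
    sqrt(sum(lam))
  })
}
\end{verbatim}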
Figure \ref{Figure2} plots the path of implied volatility throughout the course of the game. Instead of a downward sloping line, we see changes in the implied volatility as critical moments occur in the game. The implied volatility path provides a visualization of the conditional variation of the market prediction for the score difference. For example, when Everton lost a player to a red card penalty in the 34th minute, our estimates $\hat\lambda^A_t$ and $\hat\lambda^B_t$ change accordingly. There is a jump in implied volatility and our model captures the market expectation adjustment about the game prediction. The changes in $\hat\lambda^A_t$ and $\hat\lambda^B_t$ are consistent with the findings of \cite{Vecer2009}, where the scoring intensity of the penalized team drops while the scoring intensity of the opposing team increases. When a goal is scored in the 13th minute, we see an increase in $\hat\lambda^B_t$ and the market expects that the underdog team is pressing to come back into the game, an effect that has been well-documented in the literature. Another important effect that we observe at the end of the game is that as goals are scored (in the 78th and 81st minutes), the implied volatility increases again, as one might expect.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.5]{iv2.pdf}
\caption{Red line: the path of implied volatility throughout the game, i.e., $\sigma_{t}^{red} = \sqrt{\hat\lambda^A_t+\hat\lambda^B_t}$. Blue lines: the path of implied volatility with constant $\lambda^A+\lambda^B$, i.e., $\sigma_{t}^{blue} = \sqrt{(\lambda^A+\lambda^B)*(1-t)}$. Here $(\lambda^A+\lambda^B) = 1, 2, ..., 8$. \label{ivcompare}}
\end{figure}
\begin{table}[ht!]
\centering
\begin{tabular}{c c c c c c c c c c c c}
\toprule
t & 0 & 0.11 & 0.22 & 0.33 & 0.44 & 0.50 & 0.61 & 0.72 & 0.83 & 0.94 & 1\\
\midrule
$\hat\lambda^A_t/(1-t)$ & 2.33 & 2.51 & 2.53 & 2.46 & 1.89 & 1.85 & 2.12 & 2.12 & 2.61 & 4.61 & 0\\
$\hat\lambda^B_t/(1-t)$ & 1.44 & 1.47 & 1.59 & 1.85 & 2.17 & 2.17 & 2.56 & 2.90 & 3.67 & 5.92 & 0\\
\midrule
$(\hat\lambda^A_t+\hat\lambda^B_t)/(1-t)$ & 3.78 & 3.98 & 4.12 & 4.31 & 4.06 & 4.02 & 4.68 & 5.03 & 6.28 & 10.52 &0\\
\midrule
$\sigma_{IV,t}$ & 1.94 & 1.88 & 1.79 & 1.70 & 1.50 & 1.42 & 1.35 & 1.18 & 1.02 & 0.76 & 0\\
\bottomrule
\end{tabular}
\caption{The calibrated $\{\hat\lambda^A_t, \hat\lambda^B_t\}$ divided by $(1-t)$ and the implied volatility during the game. $\{\lambda^A_t, \lambda^B_t\}$ are the expected goals scored for the rest of the game. The less the remaining time, the fewer goals are likely to be scored. Thus $\{\hat\lambda^A_t, \hat\lambda^B_t\}$ decrease as $t$ increases to 1. Dividing them by $(1-t)$ produces an updated version of $\hat\lambda_{0}$'s for the whole game, which are in general time-varying (but not necessarily decreasing).\label{lambda}}
\end{table}
Figure \ref{ivcompare} compares the updating implied volatility of the game with implied volatilities of fixed $(\lambda^A+\lambda^B)$. At the beginning of the game, the red line (updating implied volatility) is under the ``$\lambda^A+\lambda^B=4$" blue line, while at the end of the game, it is above the ``$\lambda^A+\lambda^B=8$" blue line. As we expect, the value of $(\hat\lambda^A_t + \hat\lambda^B_t)/(1-t)$ in Table \ref{lambda} increases throughout the game, implying that the game became more and more intense and the market continuously updated its belief in the odds.
\section{Discussion}
The goal of our analysis is to provide a probabilistic methodology for calibrating real-time market odds for the evolution of the score difference in a soccer game. Rather than directly using game information, we use the current odds market to calibrate a Skellam model to provide a forecast of the final result. To our knowledge, our study is the first to offer an interpretation of the betting market and to show how it reveals the market expectation of the game result through an implied volatility. One area of future research is studying index betting. For example, a soccer game includes total goals scored in the match and margin of superiority (see \cite{Jackson:1994gj}). The latter is the score difference in our model, and so the Skellam process directly applies.
Our Skellam model is also valid for low-scoring sports such as baseball, hockey or American football with a discrete series of scoring events. For NFL score prediction, \cite{baker2013} propose a point process model that performs as well as the betting market. On the one hand, our model has the advantage of implicitly considering the correlation between goals scored by both teams, but on the other hand, it ignores the sum of goals scored. For high-scoring sports, such as basketball, the Brownian motion adopted by \cite{Stern:1994hj} is more applicable. \cite{Rosenfeld:1000a} provides an extension of the model that addresses concerns of non-normality and uses a logistic distribution to estimate the relative contribution of the lead and the remaining advantage. Another avenue for future research is to extend the Skellam model to allow for the dependent jumpiness of scores, which is somewhere in between these two extremes (see \cite{Glickman:2012dt}, \cite{Polson:2015ira} and \cite{Rosenfeld:1000a} for further examples).
Our model allows the researcher to test the inefficiency of EPL sports betting from a statistical arbitrage viewpoint. More importantly, we provide a probabilistic approach for calibrating dynamic market-based information. \cite{Camerer:1989dc} shows that the market odds are not well-calibrated and that an ultimate underdog during a long losing streak is underpriced on the market. \cite{Golec:1991cd} test the NFL and college betting markets and find bets on underdogs or home teams win more often than bets on favorites or visiting teams. \cite{Gray:1997gz} examine the in-sample and out-of-sample performance of different NFL betting strategies by the probit model. They find the strategy of betting on home team underdogs averages returns of over 4 percent, over commissions. In summary, a Skellam process appears to fit the dynamics of EPL soccer betting very well and produces a natural lens to view these market efficiency questions.
\newpage
\hypertarget{sec:intro}{%
\section{Introduction}\label{sec:intro}}
Athlete's career trajectory is a popular topic of discussion in the
media nowadays. Questions regarding whether a player has reached their
peak, is past their prime, or is good enough to remain in their
respective professional league are often seen in media outlets such as
news articles, television debate shows, and podcasts. The average
performance of players by age throughout their careers is visually
represented by an \emph{aging curve}. This graph typically consists of a
horizontal axis representing a time variable (usually age or season) and
a vertical axis showing a performance metric at each time point in a
player's career.
One significant challenge associated with the study of aging curves in
sports is \emph{survival bias}, as pointed out by Lichtman (2009),
Turtoro (2019), Judge (2020a), and Schuckers et al. (2021). In
particular, the aging effects are not often determined from a full
population of athletes in a given league. That is, only players that are
good enough to remain are observed; whereas those might be involved, but
do not actually participate or not talented enough to compete, are being
completely disregarded. This very likely results in an overestimation in
the construction of aging curves.
As such, player survivorship and dropout can be viewed in the context of
missing data. There are different cases of player absence from
professional sport at different points in their careers. At the
beginning, teams may elect to assign their young prospects to their
minor/development league affiliates for several years of nurture. Many
of those players would end up receiving a call-up to join the senior
squad, when the team believes they are ready. During the middle of one's
career, a nonappearance could occur due to various reasons. Injury is
unavoidable in sports, and this could cost a player at least one year of
their playing time. Personal reasons such as contract situation and more
recently, concerns regarding a global pandemic, could also lead to
athletes sitting out a season. Later on, a player might head for
retirement because they cannot perform at a level like they used to.
The primary aim of this paper is to apply missing data techniques to the
estimation of aging curves. In doing so, we focus on baseball and pose
one research question: What would the aging curve look like if players
competed in every season from a fixed range of age? In other words, what
would have happened if a player who was forced to retire from their
league at a certain age had played a full career? The manuscript
continues with a review of existing literature on aging curves in
baseball and other sports in Section \ref{sec:lit}. Next, we describe
our data and methods used to perform our analyses in Section
\ref{sec:meth}. After that, our approach is implemented through
simulation and analyses of real baseball data in Sections \ref{sec:sim}
and \ref{sec:app}. Finally, in Section \ref{sec:discuss}, we conclude
with a discussion of the results, limitations, and directions for future
work.
\hypertarget{sec:lit}{%
\section{Literature Review}\label{sec:lit}}
To date, we find a considerable amount of previous work related to aging
curves and career trajectory of athletes. This body of work consists of
several key themes, a wide array of statistical methods, and
applications in many sports besides baseball such as basketball, hockey,
and track and field.
A typical notion in the baseball aging curves literature is the
assumption of a quadratic form for modeling the relationship between
performance and age. Morris (1983) looks at Ty Cobb's batting average
trajectory using parametric empirical Bayes and uses shrinkage methods
to obtain a parabolic curve for Cobb's career performance. Albert (1992)
proposes a quadratic random effects log-linear model for smoothing a
batter's home run rates throughout their career. A nonparametric method
is implemented to estimate the age effect on performance in baseball,
hockey, and golf by Berry et al. (1999). However, Albert (1999) weighs
in on this nonparametric approach and questions the assumptions that the
peak age and periods of growth and decline are the same for all players.
Albert (1999) ultimately prefers a second-degree polynomial function for
estimating age effect in baseball, which is a parametric model.
Continuing his series of work on aging trajectories, Albert (2002)
proposes a Bayesian exchangeable model for modeling hitting performance.
This approach combines quadratic regression estimates and assumes
similar careers for players born in the same decade. Fair (2008) and
Bradbury (2009) both use a fixed-effects regression to examine age
effects in the MLB, also assuming a quadratic aging curve form.
In addition to baseball, studies on aging curves have also been
conducted for other sports. Early on, Moore (1975) looks at the
association between age and running speed in track and field and
produces aging curves for different running distances using an
exponential model. Fair (1994) and Fair (2007) study the age effects in
track and field, swimming, chess, and running, in addition to their
latter work in baseball, as mentioned earlier. In triathlon, Villaroel
et al. (2011) assume a quadratic relationship between performance and
age, as many have previously considered. As for basketball, Page et al.
(2013) use a Gaussian process regression in a hierarchical Bayesian
framework to model age effect in the NBA. Additionally, Lailvaux et al.
(2014) use NBA and WNBA data to investigate and test for potential sex
differences in the aging of comparable performance indicators. Vaci et
al. (2019) apply Bayesian cognitive latent variable modeling to explore
aging and career performance in the NBA, accounting for player position
and activity levels. In tennis, Kovalchik (2014) studies age and
performance trends in men's tennis using change point analysis.
Another convention in the aging curve modeling literature is the
assumption of discrete observations. Specifically, most researchers use
regression modeling and consider a data measurement for each season
played throughout a player's career. In contrast to previous approaches,
Wakim \& Jin (2014) take a different route and consider functional data
analysis as the primary tool for modeling MLB and NBA aging curves. This
is a continuous framework which treats the entire career performance of
an athlete as a smooth function.
A subset of the literature on aging and performance in sports provides
answers to the question: At what age do athletes peak? Schulz \& Curnow
(1988) look at the age of peak performance for track and field,
swimming, baseball, tennis, and golf. A follow-up study to this work was
done by Schulz et al. (1994), where the authors focus on baseball and
find that the average peak age for baseball players is between 27 and
30, considering several performance measures. Later findings on baseball
peak age also show consistency with the results in Schulz et al. (1994).
Fair (2008) determines the peak-performance age in baseball to be 28,
whereas Bradbury (2009) determines that baseball hitters and pitchers
reach their career apex at about 29 years old. In soccer, Dendir (2016)
determines that the peak age for footballers in the top leagues falls
within the range of 25 to 27.
Last and most importantly, the idea of player survivorship is only
mentioned in a small number of articles. To our knowledge, not many
researchers have incorporated missing data methods into the estimation
of aging curves to account for missing but observable athletes. Schulz
et al. (1994) and Schell (2005) note the selection bias problem with
estimating performance averages by age in baseball, as better players
tend to have longer career longevity. Schall \& Smith (2000) predict
survival probabilities of baseball players using a logit model, and
examine the link between first-year performance and career length.
Lichtman (2009) studies different aging curves for different eras and
groups of players after correcting for survival bias, and shows that
survival bias results in an overestimation of the age effects.
Alternatively, Judge (2020a) concludes that survivorship bias leads to
an underestimation, not overestimation, of the aging curves. In
analyzing NHL player aging, Brander et al. (2014) apply their quadratic
and cubic fixed-effects regression models to predict performance for
unobserved players in the data.
Perhaps the most closely related approach to our work is that by
Schuckers et al. (2021), which considers different regression and
imputation frameworks for estimating the aging curves in the National
Hockey League (NHL). First, they investigate different regression
approaches including spline, quadratic, quantile, and a delta plus
method, which is an extension to the delta method previously studied by
Lichtman (2009), Turtoro (2019), and Judge (2020b). This paper also
proposes an imputation approach for aging curve estimation, and
ultimately concludes that the estimation becomes stronger when
accounting for unobserved data, which addresses a major shortcoming in
the estimation of aging curves. However, it appears that the aging
curves are constructed without taking into account the variability as a
result of imputing missing data. This could be improved by applying
multiple imputation rather than considering only one imputed dataset. As
pointed out by Gelman \& Hill (2006) (Chapter 25), conducting only a
single imputation essentially assumes that the filled-in values
correctly estimate the true values of the missing observations. Yet,
there is uncertainty associated with the missingness, and multiple
imputation can incorporate the missing data uncertainty and provide
estimates for the different sources of variability.
\hypertarget{sec:meth}{%
\section{Methods}\label{sec:meth}}
\hypertarget{sec:data}{%
\subsection{Data Collection}\label{sec:data}}
In the forthcoming analyses, we rely on one primary source of publicly
available baseball data: the Lahman baseball database (Lahman, 1996 --
2021). Created and maintained by Sean Lahman, this database contains
pitching, hitting, and fielding information for Major League Baseball
players and teams dating back to 1871. The data are available in many
different formats, and the \texttt{Lahman} package in \texttt{R}
(Friendly et al., 2021; R Core Team, 2022) is utilized for our
investigation.
Due to our specific purpose of examining the aging curves for baseball
offensive players, the following datasets from the \texttt{Lahman}
library are considered: \texttt{Batting}, which provides
season-by-season batting statistics for baseball players; and
\texttt{People}, to obtain the date of birth of each player and
calculate their age for each season played. In each table, an athlete is
identified with their own \texttt{playerID}, hence this attribute is
used as a joining key to merge the two tables together. A player's age
for a season is determined as their age on June 30, and the formula
suggested by Marchi et al. (2018) for age adjustment based on one's
birth month is applied.
Throughout this paper, we consider on-base plus slugging (OPS), which
combines a hitter's ability to reach base on a swing and power-hitting,
as the baseball offensive performance measure. We scale the OPS for all
players and then apply an arcsine transformation to ensure a reasonable
range for the OPS values when conducting simulation and imputation. We
also assume a fixed length for a player's career, ranging from age 21 to
39. In terms of sample restriction, we observe all player-seasons with
at least 100 plate appearances, which means a season is determined as
missing if one's plate appearances are below that threshold.
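A sketch of this data preparation in \texttt{R} is given below. The column names follow the \texttt{Lahman} package, and the birth-month adjustment shown is the commonly used one; the exact scaling applied before the arcsine transformation (here, rescaling OPS to the unit interval) is an assumption of this sketch rather than a detail taken from the text.
\begin{verbatim}
library(Lahman)
library(dplyr)

# Season-level batting lines with plate appearances, OBP, SLG and OPS.
batting <- Batting %>%
  group_by(playerID, yearID) %>%
  summarise(across(c(AB, H, X2B, X3B, HR, BB, HBP, SF, SH),
                   ~ sum(.x, na.rm = TRUE)), .groups = "drop") %>%
  mutate(PA  = AB + BB + HBP + SF + SH,
         OBP = (H + BB + HBP) / (AB + BB + HBP + SF),
         SLG = (H + X2B + 2 * X3B + 3 * HR) / AB,
         OPS = OBP + SLG) %>%
  filter(PA >= 100)

# Age for a season based on a mid-season cutoff (birth-month adjustment).
people <- People %>%
  mutate(birthyear = ifelse(birthMonth >= 7, birthYear + 1, birthYear)) %>%
  select(playerID, birthyear)

batting_age <- batting %>%
  inner_join(people, by = "playerID") %>%
  mutate(age = yearID - birthyear) %>%
  filter(age >= 21, age <= 39) %>%
  mutate(OPS_t = asin((OPS - min(OPS)) / (max(OPS) - min(OPS))))
\end{verbatim}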
\hypertarget{multiple-imputation}{%
\subsection{Multiple Imputation}\label{multiple-imputation}}
Multiple imputation (Rubin, 1987) is a popular statistical procedure for
addressing the presence of incomplete data. The goal of this approach is
to replace the missing data with plausible values to create multiple
completed datasets. These datasets can each be analyzed and results are
combined across the imputed versions. Multiple imputation consists of
three steps. First, based on an appropriate imputation model, \(m\)
copies of the dataset are created by filling the missing values. Next,
\(m\) analyses are performed on each of the \(m\) completed datasets.
Finally, the results from each of the \(m\) datasets are pooled together
to create a combined estimate and standard errors are estimated that
account for the between and within imputation variability. This last
step can be accomplished using asymptotic theory with Rubin's combining
rules (Little \& Rubin, 1987), which are as follows.
Let \(Q\) be a parameter of interest and \(\widehat Q_i\) where
\(i=1,2,\dots,m\) are estimates of \(Q\) obtained from \(m\) imputed
datasets, with sampling variance \(U\) estimated by \(\widehat U_i\).
Then the point estimate for \(Q\) is the average of the \(m\) estimates
\[
\overline Q = \frac{1}{m} \sum_{i=1}^m \widehat Q_i
\,.
\] The variance for \(\overline Q\) is defined as \[
T=\overline U + \left(1 + \frac{1}{m}\right)B
\,,
\] where \[
\overline U = \frac{1}{m} \sum_{i=1}^m \widehat U_i
\] and \[
B=\frac{1}{m-1} \sum_{i=1}^m (\widehat Q_i - \overline Q)^2
\] are the estimated within and between variances, respectively.
Inferences for \(Q\) are based on the approximation \[
\frac{Q - \overline Q}{\sqrt{T}} \sim t_\nu
\,,
\] where \(t_\nu\) is the Student's \(t\)-distribution with
\(\displaystyle \nu = (m-1)\left(1+\frac{1}{r}\right)^2\) degrees of
freedom, with
\(\displaystyle r=\left(1+\frac{1}{m}\right)\frac{B}{\overline U}\)
representing the relative increase in variance due to missing data.
Accordingly, a \(100(1-\alpha)\)\% Wald confidence interval for \(Q\) is
computed as \[
\overline Q \ \pm \ t_{\nu,1-\alpha/2}\sqrt{T}
\,,
\] where \(t_{\nu,1-\alpha/2}\) is the \(1-\alpha/2\) quantile of
\(t_\nu\).
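For reference, these combining rules are straightforward to code directly; the following R sketch pools a scalar estimate, where \texttt{Q\_hat} and \texttt{U\_hat} are the vectors of the $m$ point estimates and their squared standard errors (both names are ours).
\begin{verbatim}
# Pool m completed-data estimates of a scalar parameter via Rubin's rules.
pool_rubin <- function(Q_hat, U_hat, alpha = 0.05) {
  m     <- length(Q_hat)
  Q_bar <- mean(Q_hat)                 # combined point estimate
  U_bar <- mean(U_hat)                 # within-imputation variance
  B     <- var(Q_hat)                  # between-imputation variance
  T_var <- U_bar + (1 + 1 / m) * B     # total variance
  r     <- (1 + 1 / m) * B / U_bar     # relative increase in variance
  nu    <- (m - 1) * (1 + 1 / r)^2     # degrees of freedom
  half  <- qt(1 - alpha / 2, df = nu) * sqrt(T_var)
  c(estimate = Q_bar, lower = Q_bar - half, upper = Q_bar + half)
}
\end{verbatim}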
It is important to understand the reasons behind the missingness when
applying multiple imputation to handle incomplete data. Based on the
degree of bias attributable to the missingness, three types of missing
data are defined: missing completely at random (MCAR), missing at random
(MAR), and missing not at random (MNAR) (Rubin, 1976). MCAR occurs when
a missing observation is statistically independent of both the observed
and unobserved data. In the case of MAR, the missingness is associated
with the observed but not with the unobserved data. When data are MNAR,
there is a link between the missingness and the unobserved values in the
dataset.
Among the tools for performing multiple imputation, multivariate
imputations by chained equation (MICE) (van Buuren \&
Groothuis-Oudshoorn, 1999) is a flexible, robust, and widely used
method. This algorithm imputes missing data via an iterative series of
conditional models. In each iteration, each incomplete variable is
filled in by a separate model of all the remaining variables. The
iterations continue until apparent convergence is reached.
In this paper, we implement the MICE framework in \texttt{R} via the
popular \texttt{mice} package (van Buuren \& Groothuis-Oudshoorn, 2011).
Moreover, we focus on multilevel multiple imputation, due to the
hierarchical structure of our data. Specifically, we consider multiple
imputation by a two-level normal linear mixed model with heterogeneous
within-group variance (Kasim \& Raudenbush, 1998). In context, our data
consist of baseball seasons (ages) which are nested within the class
variable, player; and the season-by-season performance is considered to
be correlated for each athlete. The described imputation model can be
specified as the \texttt{2l.norm} method available in the \texttt{mice}
library.
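A minimal sketch of this imputation model in the \texttt{mice} syntax is shown below. It assumes a data frame \texttt{dat} with an integer player identifier \texttt{player}, the \texttt{age} covariate, and \texttt{OPS} containing \texttt{NA} for unobserved player-seasons; these names, and the seed, are illustrative.
\begin{verbatim}
library(mice)

# Two-level normal imputation of OPS nested within player.  In the
# predictor matrix, -2 marks the class (player) variable and 2 marks a
# predictor with a random effect.
meth <- make.method(dat)
meth["OPS"] <- "2l.norm"

pred <- make.predictorMatrix(dat)
pred["OPS", ]         <- 0
pred["OPS", "player"] <- -2
pred["OPS", "age"]    <- 2

imp <- mice(dat, method = meth, predictorMatrix = pred,
            m = 5, maxit = 30, seed = 123)
completed <- complete(imp, action = "all")   # list of m completed datasets
\end{verbatim}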
\hypertarget{sec:sim}{%
\section{Simulation}\label{sec:sim}}
In this simulation, we demonstrate our aging curve estimation approach
with multiple imputation, and evaluate how different types of player
dropouts affect the curve. There are three steps to our simulation.
First, we fit a model for the performance-age relationship and utilize
its output to generate fictional careers for baseball players. Next, we
generate missing data by dropping players from the full dataset based on
different criteria, and examine how the missingness affects the original
aging curve obtained from fully observed data. Finally, we apply
multiple imputation to obtain completed datasets and assess how close
the imputed aging curves are to the true curve.
\hypertarget{generating-player-careers}{%
\subsection{Generating Player Careers}\label{generating-player-careers}}
We fit a mixed-effects model using the player data described in Section
\ref{sec:data}. Our goal is to obtain the variance components of the
fitted model to simulate baseball player careers. The model of
consideration is of the form \[
\displaylines{
Y_{pq} = (\beta_0 + b_{0p}) + \beta_1X_q + \beta_2X_q^2 + \beta_3X_q^3 + \epsilon_{pq}
\cr
b_{0p} \sim N(0, \tau^2)
\cr
\epsilon_{pq} \sim N(0, \sigma^2).
}
\] In detail, this model relates the performance metric \(Y_{pq}\) (in
our case, transformed OPS) for player \(p\) at age (season) \(q\) to a
baseline level via the fixed effect \(\beta_0\). The only covariate
\(X\) in the model is age, which is assumed to have a cubic relationship
with the response variable, transformed OPS. Another component is the
observational-level error \(\epsilon_{pq}\) with variance \(\sigma^2\)
for player \(p\) at age \(q\). We also introduce the random effects
\(b_{0p}\), which represents the deviation from the grand mean
\(\beta_0\) for player \(p\), allowing a fixed amount of shift to the
performance prediction for each player. In addition, to incorporate the
variability in production across the season \(q\), a random effect
parameter \(\tau^2\) is included. Our modeling approach is implemented
using the \texttt{lme4} package in \texttt{R} (Bates et al., 2015). We
utilize the sources of variance from the fitted model to simulate 1000
careers for baseball players of age 21 to 39.
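The following R sketch fits this model with \texttt{lme4} and then draws simulated careers from the estimated variance components; it assumes the transformed response \texttt{OPS\_t} and the data frame \texttt{batting\_age} from the earlier data-preparation sketch.
\begin{verbatim}
library(lme4)

# Cubic age trend with a player-specific random intercept.
fit <- lmer(OPS_t ~ poly(age, 3, raw = TRUE) + (1 | playerID),
            data = batting_age)

sigma2 <- sigma(fit)^2                         # within-player variance
tau2   <- as.data.frame(VarCorr(fit))$vcov[1]  # random-intercept variance
beta   <- fixef(fit)

# Simulate 1000 careers over ages 21-39 from the fitted components.
ages    <- 21:39
n_sim   <- 1000
b0      <- rnorm(n_sim, 0, sqrt(tau2))
X       <- cbind(1, ages, ages^2, ages^3)
sim_ops <- t(sapply(b0, function(b)
  drop(X %*% beta) + b + rnorm(length(ages), 0, sqrt(sigma2))))
\end{verbatim}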
\hypertarget{sec:drop}{%
\subsection{Generating Missing Data}\label{sec:drop}}
After obtaining reasonable simulated careers for baseball players, we
create different types of dropouts and examine how they cause deviations
from the fully observed aging curve. We consider the following cases of
players' retirement from the league:
\begin{enumerate}
\def(\arabic{enumi}){(\arabic{enumi})}
\tightlist
\item
Dropout players with 4-year OPS average below a certain threshold, say
0.55.
\item
Dropout players with OPS average from age 21 (start of career) to 25
of less than 0.55.
\item
25\% of the players randomly retire at age 30.
\end{enumerate}
For the first two scenarios, the missingness mechanism is MAR, since
players get removed due to low previously observed performance. Dropout
case (3) falls under MCAR, since athletes are selected at random to
retire without considering past or future offensive production.
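For concreteness, dropout mechanisms (2) and (3) can be imposed on the simulated careers as follows, using \texttt{sim\_ops} and \texttt{ages} from the previous sketch; the 0.55 cutoff is applied directly to the simulated values here purely for illustration, and retirement at age 30 is treated as seasons from age 30 onward being unobserved.
\begin{verbatim}
ops_mat <- sim_ops         # rows = simulated players, columns = ages 21-39

# (2) MAR dropout: players whose average from age 21 to 25 is below the
#     threshold are unobserved from age 26 onward.
early_avg <- rowMeans(ops_mat[, ages <= 25])
ops_mar   <- ops_mat
ops_mar[early_avg < 0.55, ages > 25] <- NA

# (3) MCAR dropout: 25% of players, chosen at random, retire at age 30.
retire   <- sample(nrow(ops_mat), size = round(0.25 * nrow(ops_mat)))
ops_mcar <- ops_mat
ops_mcar[retire, ages >= 30] <- NA
\end{verbatim}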
Figure \ref{fig:drop-compare} displays the average OPS aging curves for
all baseball players obtained from the original data with no missingness
and data with only the surviving players associated with the dropout
mechanisms mentioned above. These are smoothed curves obtained from
loess model fits, and we use mean absolute error (MAE) to evaluate the
discrepancy between the dropout and true aging curves. It is clear that
randomly removing players have minimal effect on the aging curve, as the
curve obtained from (3) and the original curve essentially overlap (MAE
\(= 7.42 \times 10^{-4}\)). On the other hand, a positive shift from the
fully observed curve occurs for the remaining two cases of dropout based
on OPS average (MAE \(= 0.031\) for (1) and MAE \(=0.019\) for (2)).
This means the aging curves with only the surviving players are
overestimated in the presence of missing data due to past performance.
More specifically, the player performance drops off faster as they age
than when it is estimated with only complete case analysis.
\begin{figure}
{\centering \includegraphics{drop-compare-1}
}
\caption{Comparison of the average OPS aging curve constructed with the fully observed data and different cases of dropouts (obtained only for the surviving players and without imputation).}\label{fig:drop-compare}
\end{figure}
\hypertarget{sec:imp}{%
\subsection{Imputation}\label{sec:imp}}
Next, we implement the multiple imputation with a hierarchical structure
procedure described in Section \ref{sec:meth} to the cases of dropout
that shifts the aging effect on performance. We perform \(m=5\)
imputations with each made up of 30 iterations, and apply Rubin's rules
for combining the imputation estimates. The following results are
illustrated for dropout mechanism (2), where players with a low OPS
average at the start of their careers (ages 21--25) are forced out of
the league.
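
Rubin's rules for pooling a scalar quantity (for example, the mean OPS
at a given age) across the \(m\) completed datasets can be written
compactly as follows; the normal 1.96 quantile is a simplification of
the usual degrees-of-freedom adjustment, and the numbers in the example
are made up.

\begin{verbatim}
import numpy as np

def rubin_pool(estimates, within_variances):
    """Pool m point estimates and their within-imputation variances."""
    q = np.asarray(estimates, dtype=float)
    u = np.asarray(within_variances, dtype=float)
    m = len(q)
    qbar = q.mean()                   # pooled point estimate
    ubar = u.mean()                   # average within-imputation variance
    b = q.var(ddof=1)                 # between-imputation variance
    t = ubar + (1.0 + 1.0 / m) * b    # total variance
    return qbar, np.sqrt(t)

# Example: mean OPS at age 30 from m = 5 imputed datasets (made-up numbers).
est = [0.712, 0.705, 0.718, 0.709, 0.714]
var = [0.0004, 0.0005, 0.0004, 0.0006, 0.0005]
qbar, se = rubin_pool(est, var)
print(f"pooled = {qbar:.3f}, approx. 95% CI = "
      f"({qbar - 1.96 * se:.3f}, {qbar + 1.96 * se:.3f})")
\end{verbatim}
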
Figure \ref{fig:drop-imp} (left) shows smoothed loess-fitted aging
curves for all 5 imputations and a combined version of them, in addition
to the curves constructed with fully observed and only surviving players
data. The 95\% confidence interval for the mean OPS at each age point in
the combined curve obtained from Rubin's rules is further illustrated in
Figure \ref{fig:drop-imp} (right). It appears that the combined imputed
curve follows the same shape as the true, known curve. Moreover,
imputation seems to capture the rate of change for the beginning and end
career periods quite well, whereas the middle of career looks to be
slightly underestimated. The resulting MAE of \(0.0039\) confirms that
there is little deviation of the combined curve from the true one.
Additionally, we perform diagnostics to assess the plausibility of the
imputations, and also examine whether the algorithm converges. We first
check for distributional discrepancy by comparing the distributions of
the fully observed and imputed data. Figure \ref{fig:diag} (left)
presents the density curves of the OPS values for each imputed dataset
and the fully simulated data. It is obvious that the imputation
distributions are well-matched with the observed data. To confirm
convergence of the MICE algorithm, we inspect trace plots for the mean
and standard deviation of the imputed OPS values. As shown in Figure
\ref{fig:diag} (right), convergence is apparent, since no definite trend
is revealed and the imputation chains are intermingled with one another.
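
A minimal version of these checks could look as follows; it assumes
that the imputed OPS values and the per-iteration chain means have
already been collected into plain arrays, which is not how the
\texttt{mice} objects are organized in \texttt{R}.

\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

def imputation_diagnostics(observed, imputed_sets, chain_means):
    """observed: 1-D array of observed OPS; imputed_sets: list of 1-D arrays,
    one per imputation; chain_means: (n_iterations, m) array of imputed-value means."""
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3.5))

    grid = np.linspace(observed.min(), observed.max(), 200)
    ax1.plot(grid, gaussian_kde(observed)(grid), "k-", lw=2, label="observed")
    for i, imp in enumerate(imputed_sets):
        ax1.plot(grid, gaussian_kde(imp)(grid), alpha=0.6, label=f"imputation {i + 1}")
    ax1.set_xlabel("OPS"); ax1.set_ylabel("density"); ax1.legend(fontsize=7)

    ax2.plot(chain_means)             # one line per imputation chain
    ax2.set_xlabel("iteration"); ax2.set_ylabel("mean of imputed OPS")
    fig.tight_layout()
    plt.show()
\end{verbatim}
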
\begin{figure}
{\centering \includegraphics{drop-imp-1}
}
\caption{At left, comparison of the average OPS aging curve constructed with the fully observed data, only surviving players, and imputation. At right, combined imputed curve with 95\% confidence intervals obtained from Rubin's rules. Results shown here are for dropout case of players having OPS average from age 21 to 25 below 0.55.}\label{fig:drop-imp}
\end{figure}
\begin{figure}
{\centering \includegraphics{diag-1} \includegraphics{diag-2}
}
\caption{At left, kernel density estimates for the fully observed and imputed OPS values. At right, trace plots for the mean and standard deviation of the imputed OPS values against the iteration number for the imputed data. Results shown here are for dropout case of players having OPS average from age 21 to 25 below 0.55.}\label{fig:diag}
\end{figure}
\hypertarget{sec:app}{%
\section{Application: MLB Data}\label{sec:app}}
Lastly, we apply the multilevel multiple imputation model to estimate
the average OPS aging curve for MLB players. For this investigation,
besides the data pre-processing tasks mentioned in Section
\ref{sec:data}, our sample is limited to all players who made their
major league debut no earlier than 1985, resulting in a total of 2323
players. To perform imputation, we use parameters similar to those in
our simulation study (\(m=5\) with 30 iterations for each imputation).
Figure \ref{fig:mlb-imp} shows the OPS aging curves for MLB players
estimated with and without imputation. The plot illustrates a result
similar to the simulation, as the combined imputed curve is lower than
the curve obtained when ignoring the missing data. It is clear that the
aging effect is overestimated when the unobserved player-seasons are
disregarded. In other words, the actual performance declines with age
more rapidly than estimates based only on the observed data suggest.
\begin{figure}
{\centering \includegraphics{mlb-imp-1}
}
\caption{Comparison of the average OPS aging curve constructed with only observed players and imputation for MLB data.}\label{fig:mlb-imp}
\end{figure}
\hypertarget{sec:discuss}{%
\section{Discussion}\label{sec:discuss}}
The concept of survivorship bias is frequently seen in professional
sports, and our paper approaches the topic of aging curves and player
dropout in baseball as a missing data problem. We utilize multiple
imputation with a multilevel structure to improve estimates for the
baseball aging curves. Through simulation, we highlight that ignoring
the missing seasons leads to an overestimation of the age effect on
baseball offensive performance. With imputation, we achieve a better
aging curve which shows that players actually decline faster as they get
older than previously estimated.
There are many limitations in our study which leave room for improvement
in future work. In our current imputation model, age is the only
predictor for estimating performance. It is possible to include more
covariates in the algorithm and determine whether a better aging curve
estimate is achieved. In particular, we could factor in other baseball
offensive statistics (e.g.~home run rate, strikeout rate, WOBA, walk
rate,\ldots) in building an imputation model for OPS.
Furthermore, the aging curve estimation problem can be investigated in a
completely different statistical setting. As noted in Section
\ref{sec:lit}, rather than considering discrete observations, another
way of studying aging curves is through a continuous approach, assuming
a smooth curve for career performance. As pointed out by Wakim \& Jin
(2014), methods such as functional data analysis (FDA) and principal
components analysis through conditional expectation (PACE) possess many
modeling advantages, in regard to flexibility and robustness. There
exists a number of proposed multiple imputation algorithms for
functional data (Ciarleglio et al., 2021; He et al., 2011; Rao \&
Reimherr, 2021), which all can be applied in future studies on aging
curves in sports.
\hypertarget{references}{%
\section*{References}\label{references}}
\addcontentsline{toc}{section}{References}
\hypertarget{refs}{}
\begin{CSLReferences}{1}{0}
\leavevmode\vadjust pre{\hypertarget{ref-Albert2002smoothing}{}}%
Albert, J. (2002). \emph{Smoothing career trajectories of baseball
hitters}. Unpublished manuscript, Bowling Green State University.
\url{https://bayesball.github.io/papers/career_trajectory.pdf}
\leavevmode\vadjust pre{\hypertarget{ref-Albert1999comment}{}}%
Albert, J. (1999). Bridging different eras in sports: comment.
\emph{Journal of the American Statistical Association}, \emph{94}(447),
677. \url{https://doi.org/10.2307/2669974}
\leavevmode\vadjust pre{\hypertarget{ref-Albert1992bayesian}{}}%
Albert, J. (1992). A Bayesian analysis of a Poisson random effects model
for home run hitters. \emph{The American Statistician}, \emph{46}(4),
246. \url{https://doi.org/10.2307/2685306}
\leavevmode\vadjust pre{\hypertarget{ref-Bates2015lme4}{}}%
Bates, D., Mächler, M., Bolker, B., \& Walker, S. (2015). Fitting linear
mixed-effects models using {lme4}. \emph{Journal of Statistical
Software}, \emph{67}(1), 1--48.
\url{https://doi.org/10.18637/jss.v067.i01}
\leavevmode\vadjust pre{\hypertarget{ref-Berry1999bridging}{}}%
Berry, S. M., Reese, C. S., \& Larkey, P. D. (1999). Bridging different
eras in sports. \emph{Journal of the American Statistical Association},
\emph{94}(447), 661--676.
\url{https://doi.org/10.1080/01621459.1999.10474163}
\leavevmode\vadjust pre{\hypertarget{ref-Bradbury2009peak}{}}%
Bradbury, J. C. (2009). Peak athletic performance and ageing: Evidence
from baseball. \emph{Journal of Sports Sciences}, \emph{27}(6),
599--610. \url{https://doi.org/10.1080/02640410802691348}
\leavevmode\vadjust pre{\hypertarget{ref-Brander2014estimating}{}}%
Brander, J. A., Egan, E. J., \& Yeung, L. (2014). Estimating the effects
of age on {NHL} player performance. \emph{Journal of Quantitative
Analysis in Sports}, \emph{10}(2).
\url{https://doi.org/10.1515/jqas-2013-0085}
\leavevmode\vadjust pre{\hypertarget{ref-Ciarleglio2021elucidating}{}}%
Ciarleglio, A., Petkova, E., \& Harel, O. (2021). Elucidating age and
sex-dependent association between frontal {EEG} asymmetry and
depression: An application of multiple imputation in functional
regression. \emph{Journal of the American Statistical Association},
\emph{117}(537), 12--26.
\url{https://doi.org/10.1080/01621459.2021.1942011}
\leavevmode\vadjust pre{\hypertarget{ref-Dendir2016soccer}{}}%
Dendir, S. (2016). When do soccer players peak? A note. \emph{Journal of
Sports Analytics}, \emph{2}(2), 89--105.
\url{https://doi.org/10.3233/jsa-160021}
\leavevmode\vadjust pre{\hypertarget{ref-Fair2007estimated}{}}%
Fair, R. C. (2007). Estimated age effects in athletic events and chess.
\emph{Experimental Aging Research}, \emph{33}(1), 37--57.
\url{https://doi.org/10.1080/03610730601006305}
\leavevmode\vadjust pre{\hypertarget{ref-Fair2008estimated}{}}%
Fair, R. C. (2008). Estimated age effects in baseball. \emph{Journal of
Quantitative Analysis in Sports}, \emph{4}(1).
\url{https://doi.org/10.2202/1559-0410.1074}
\leavevmode\vadjust pre{\hypertarget{ref-Fair1994fast}{}}%
Fair, R. C. (1994). {How Fast Do Old Men Slow Down?} \emph{The Review of
Economics and Statistics}, \emph{76}(1), 103--118.
\url{https://ideas.repec.org/a/tpr/restat/v76y1994i1p103-18.html}
\leavevmode\vadjust pre{\hypertarget{ref-Friendly2021Lahman}{}}%
Friendly, M., Dalzell, C., Monkman, M., \& Murphy, D. (2021).
\emph{{Lahman: Sean 'Lahman' Baseball Database}}.
\url{https://CRAN.R-project.org/package=Lahman}
\leavevmode\vadjust pre{\hypertarget{ref-Gelman2006data}{}}%
Gelman, A., \& Hill, J. (2006). \emph{Data analysis using regression and
multilevel/hierarchical models}. Cambridge University Press.
\url{https://doi.org/10.1017/cbo9780511790942}
\leavevmode\vadjust pre{\hypertarget{ref-He2011functional}{}}%
He, Y., Yucel, R., \& Raghunathan, T. E. (2011). A functional multiple
imputation approach to incomplete longitudinal data. \emph{Statistics in
Medicine}, \emph{30}(10), 1137--1156.
\url{https://doi.org/10.1002/sim.4201}
\leavevmode\vadjust pre{\hypertarget{ref-Judge2020approach}{}}%
Judge, J. (2020a). \emph{An approach to survivor bias in baseball}.
BaseballProspectus.com.
\url{https://www.baseballprospectus.com/news/article/59491/an-approach-to-survivor-bias-in-baseball}
\leavevmode\vadjust pre{\hypertarget{ref-Judge2020delta}{}}%
Judge, J. (2020b). \emph{{The Delta Method, Revisited: Rethinking Aging
Curves}}. BaseballProspectus.com.
\url{https://www.baseballprospectus.com/news/article/59972/the-delta-method-revisited}
\leavevmode\vadjust pre{\hypertarget{ref-Kasim1998application}{}}%
Kasim, R. M., \& Raudenbush, S. W. (1998). Application of Gibbs sampling
to nested variance components models with heterogeneous within-group
variance. \emph{Journal of Educational and Behavioral Statistics},
\emph{23}(2), 93--116. \url{https://doi.org/10.3102/10769986023002093}
\leavevmode\vadjust pre{\hypertarget{ref-Kovalchik2014older}{}}%
Kovalchik, S. A. (2014). The older they rise the younger they fall: Age
and performance trends in men's professional tennis from 1991 to 2012.
\emph{Journal of Quantitative Analysis in Sports}, \emph{10}(2).
\url{https://doi.org/10.1515/jqas-2013-0091}
\leavevmode\vadjust pre{\hypertarget{ref-Lahman2021baseball}{}}%
Lahman, S. (1996 -- 2021). \emph{Lahman{'}s baseball database}.
SeanLahman.com.
\url{https://www.seanlahman.com/baseball-archive/statistics/}
\leavevmode\vadjust pre{\hypertarget{ref-Lailvaux2014trait}{}}%
Lailvaux, S. P., Wilson, R., \& Kasumovic, M. M. (2014). Trait
compensation and sex-specific aging of performance in male and female
professional basketball players. \emph{Evolution}, \emph{68}(5),
1523--1532. \url{https://doi.org/10.1111/evo.12375}
\leavevmode\vadjust pre{\hypertarget{ref-Lichtman2009baseball}{}}%
Lichtman, M. (2009). \emph{How do baseball players age? (Part 2)}. The
Hardball Times.
\url{https://tht.fangraphs.com/how-do-baseball-players-age-part-2}
\leavevmode\vadjust pre{\hypertarget{ref-Little1987statistical}{}}%
Little, R. J. A., \& Rubin, D. B. (1987). \emph{Statistical analysis
with missing data}. Wiley.
\leavevmode\vadjust pre{\hypertarget{ref-marchi2018analyzing}{}}%
Marchi, M., Albert, J., \& Baumer, B. S. (2018). \emph{Analyzing
baseball data with {R}} (2nd ed., p. 83). Boca Raton, FL: Chapman;
Hall/CRC Press. \url{https://doi.org/10.1201/9781351107099}
\leavevmode\vadjust pre{\hypertarget{ref-Moore1975study}{}}%
Moore, D. H. (1975). A study of age group track and field records to
relate age and running speed. \emph{Nature}, \emph{253}(5489), 264--265.
\url{https://doi.org/10.1038/253264a0}
\leavevmode\vadjust pre{\hypertarget{ref-Morris1983parametric}{}}%
Morris, C. N. (1983). Parametric empirical bayes inference: Theory and
applications. \emph{Journal of the American Statistical Association},
\emph{78}(381), 47--55.
\url{https://doi.org/10.1080/01621459.1983.10477920}
\leavevmode\vadjust pre{\hypertarget{ref-Page2013effect}{}}%
Page, G. L., Barney, B. J., \& McGuire, A. T. (2013). Effect of
position, usage rate, and per game minutes played on {NBA} player
production curves. \emph{Journal of Quantitative Analysis in Sports},
\emph{0}(0), 1--9. \url{https://doi.org/10.1515/jqas-2012-0023}
\leavevmode\vadjust pre{\hypertarget{ref-R2022language}{}}%
R Core Team. (2022). \emph{R: A language and environment for statistical
computing}. R Foundation for Statistical Computing.
\url{https://www.R-project.org/}
\leavevmode\vadjust pre{\hypertarget{ref-Rao2021modern}{}}%
Rao, A. R., \& Reimherr, M. (2021). Modern multiple imputation with
functional data. \emph{Stat}, \emph{10}(1).
\url{https://doi.org/10.1002/sta4.331}
\leavevmode\vadjust pre{\hypertarget{ref-Rubin1987multiple}{}}%
Rubin, D. B. (Ed.). (1987). \emph{Multiple imputation for nonresponse in
surveys}. John Wiley {\&} Sons, Inc.
\url{https://doi.org/10.1002/9780470316696}
\leavevmode\vadjust pre{\hypertarget{ref-Rubin1976inference}{}}%
Rubin, D. B. (1976). Inference and missing data. \emph{Biometrika},
\emph{63}(3), 581--592. \url{https://doi.org/10.1093/biomet/63.3.581}
\leavevmode\vadjust pre{\hypertarget{ref-Schall2000career}{}}%
Schall, T., \& Smith, G. (2000). Career trajectories in baseball.
\emph{{CHANCE}}, \emph{13}(4), 35--38.
\url{https://doi.org/10.1080/09332480.2000.10542233}
\leavevmode\vadjust pre{\hypertarget{ref-Schell2005baseball}{}}%
Schell, M. J. (2005). Calling it a career: Examining player aging. In
\emph{Baseball's all-time best sluggers: Adjusted batting performance
from strikeouts to home runs} (pp. 45--57). Princeton University Press.
\url{https://www.jstor.org/stable/j.ctt19705ks}
\leavevmode\vadjust pre{\hypertarget{ref-Schuckers2021observed}{}}%
Schuckers, M., Lopez, M., \& Macdonald, B. (2021). \emph{What does not
get observed can be used to make age curves stronger: Estimating player
age curves using regression and imputation}. arXiv.
\url{https://doi.org/10.48550/arXiv.2110.14017}
\leavevmode\vadjust pre{\hypertarget{ref-Schulz1988peak}{}}%
Schulz, R., \& Curnow, C. (1988). Peak performance and age among
superathletes: Track and field, swimming, baseball, tennis, and golf.
\emph{Journal of Gerontology}, \emph{43}(5), 113--120.
\url{https://doi.org/10.1093/geronj/43.5.p113}
\leavevmode\vadjust pre{\hypertarget{ref-Schulz1994relationship}{}}%
Schulz, R., Musa, D., Staszewski, J., \& Siegler, R. S. (1994). The
relationship between age and major league baseball performance:
Implications for development. \emph{Psychology and Aging}, \emph{9}(2),
274--286. \url{https://doi.org/10.1037/0882-7974.9.2.274}
\leavevmode\vadjust pre{\hypertarget{ref-Turtoro2019flexible}{}}%
Turtoro, C. (2019). \emph{Flexible aging in the NHL using GAM}. RPubs.
\url{https://rpubs.com/cjtdevil/nhl_aging}
\leavevmode\vadjust pre{\hypertarget{ref-Vaci2019large}{}}%
Vaci, N., Cocić, D., Gula, B., \& Bilalić, M. (2019). Large data and
bayesian modeling{\textemdash}aging curves of {NBA} players.
\emph{Behavior Research Methods}, \emph{51}(4), 1544--1564.
\url{https://doi.org/10.3758/s13428-018-1183-8}
\leavevmode\vadjust pre{\hypertarget{ref-Vanbuuren1999flexible}{}}%
van Buuren, S., \& Groothuis-Oudshoorn, C. G. M. (1999). \emph{Flexible
multivariate imputation by MICE} (Vol. PG/VGZ/99.054). TNO Prevention;
Health.
\leavevmode\vadjust pre{\hypertarget{ref-vanBuuren2011mice}{}}%
van Buuren, S., \& Groothuis-Oudshoorn, K. (2011). {mice}: Multivariate
imputation by chained equations in {R}. \emph{Journal of Statistical
Software}, \emph{45}(3), 1--67.
\url{https://doi.org/10.18637/jss.v045.i03}
\leavevmode\vadjust pre{\hypertarget{ref-Villaroel2011elite}{}}%
Villaroel, C., Mora, R., \& Parra, G. C. G. (2011). Elite triathlete
performance related to age. \emph{Journal of Human Sport and Exercise},
\emph{6}(2 (Suppl.)), 363--373.
\url{https://doi.org/10.4100/jhse.2011.62.16}
\leavevmode\vadjust pre{\hypertarget{ref-Wakim2014functional}{}}%
Wakim, A., \& Jin, J. (2014). \emph{Functional data analysis of aging
curves in sports}. arXiv. \url{https://doi.org/10.48550/arXiv.1403.7548}
\end{CSLReferences}
\end{document}
\section{Introduction}
Data-driven sport video analytics attracts considerable attention from academia and industry. This interest stems from the massive commercial appeal of sports programs, along with the increasing role played by data-driven decisions in soccer and many other sports \cite{shih2017survey}. We focus here on the challenging problem of temporal event recognition and localization in soccer, which requires considering the positions and actions of several players at once.
\lia{Sports analytics systems rely on a variety of data sources for event detection, including broadcast videos \cite{rematas2018soccer,khan2018soccer,shih2017survey}, multi-view camera setups \cite{shih2017survey,Pettersen2014} and wearable trackers and sensors \cite{richly2016recognizing,cannavo2019}. Large outdoor soccer stadiums are usually equipped with multiple wide-angle, fixed-position, synchronized cameras. This setup is particularly apt for event recognition, as the spatio-temporal location of all players can be inferred in an unobtrusive and accurate fashion, without resorting to ad-hoc sensors, as will be detailed in Section \ref{sec:archi}. }
Previous attempts at sports event recognition \lia{fall into two main categories}: machine learning techniques applied to spatio-temporal positional data \cite{richly2016recognizing,networks,cannavo2019} or knowledge-based systems based, e.g., on finite state machines, fuzzy logic or first-order logic \cite{shih2017survey,khan2018soccer}. The latter approach has several advantages in this context: it does not require large training sets, takes full advantage of readily available domain knowledge, and can be easily extended with reasoning engines.
We propose here a comprehensive event detection system based on Interval Temporal Logics (ITL). \lia{Khan et al. applied a similar approach to identify events of interest in broadcast videos \cite{khan2018soccer}: the distance-based event detection system takes as input bounding boxes associated with a confidence score for each object category, and applies first-order logic to identify simple and complex events. Complex events combine two or more simple events using logical (AND, OR) or temporal (THEN) operators. }
\lia{Our work extends previous attempts in the literature \cite{khan2018soccer} in several ways. First, we work on spatio-temporal data instead of broadcast videos: we are thus able to detect events that require the position of multiple players at once (e.g., filtering pass), or their location within the field (e.g., cross). We thus cover a much wider range of events, determining which can be accurately detected from positional data, and which would need integration with other visual inputs (e.g., pose estimation). Lastly, we extend existing rule-based systems by using more expressive ITLs, which associate a time interval to each event and are capable of both qualitative and quantitative ordering.}
\lia{A severe limitation for developing sports analytics systems is} the paucity of available datasets, which are usually small and lack fine-grained event annotations. This is especially true for multi-view, fixed setups comparable to those available in modern outdoor soccer stadiums \cite{Pettersen2014}. A large scale dataset was recently published based on broadcast videos \cite{giancola2018soccernet}, but annotations include only a limited set of events (Goal, Yellow/Red Card, and Substitution).
With the aim of fostering research in this field, we have generated and released the synthetic Soccer Event Recognition (SoccER) dataset, based on the open source Gameplay Football engine. The Gameplay Football engine was recently proposed as a training gym for reinforcement learning algorithms \cite{kurach2019google}. We believe that event recognition can similarly benefit from this approach, especially to explore aspects such as the role of reasoning and the efficient modeling of spatial and temporal relationships.
We used the dataset to demonstrate the feasibility of
\lia{our approach}, achieving precision and recall higher than 80\% on most events.
The rest of the paper is organized as follows. Section \ref{sec:dataset} introduces the SoccER dataset. In Section \ref{sec:Eve}, the event detector is described. Experimental results are presented in Section \ref{sec:results} and discussed in Section \ref{sec:discussion}.
\section{The SoccER Dataset}
\label{sec:dataset}
\subsection{Modified Gameplay Football engine}
We designed a solution to generate synthetic datasets starting from the open source Gameplay Football game \cite{Gameplay_Football}, which simulates a complete soccer game, including all the most common events such as goals, fouls, corners, penalty kicks, etc. \cite{kurach2019google}. While the graphics are not as photorealistic as those of commercial products, the game physics is reasonably accurate and, since the engine is open source, it can be inspected, improved and modified as needed for research purposes. The opponent team is controlled by means of a rule-based bot, provided in the original Gameplay Football simulator \cite{kurach2019google}.
For each time frame, we extract the positions and bounding boxes of all 22 distinct players and the ball, the ground truth event annotation and the corresponding video screenshots. We adopt the same field coordinate system used in the Alfheim dataset, which includes the position of players obtained from wearable trackers \cite{Pettersen2014}. All the generated videos have a resolution of 1920$\times$1080 pixels (Full HD) and a frame rate of 30 fps. An example of a generated frame is reported in Fig. \ref{Figure:game_frame}. We envision that event detectors can be trained and tested directly on the generated positional data, focusing on the high-level relational reasoning aspects of the soccer game, independently of the performance of the player detection and tracking stage \cite{rematas2018soccer,khan2018soccer}.
\begin{figure}[tb]
\centering
\includegraphics[width=7cm]{Figures/game_frame.png}
\caption{Example of scene generated by the Gameplay Football engine, with superimposed ground truth bounding boxes and IDs of each player and the ball. The ground truth and detected events are also overlaid on the bottom of the scene: in this frame, a tackle attempt is correctly detected.}
\label{Figure:game_frame}
\end{figure}
\subsection{Events and generated datasets}
Events are automatically logged by the game engine in order to generate the ground truth annotation. We define the notion of event based on previous work by Tovinkere et al. \cite{SoccerDetectionEvent} and Khan et al. \cite{khan2018soccer}.
Similarly to \cite{khan2018soccer}, we distinguish between \textit{atomic} and \textit{complex} events, with a slightly different approach (as discussed in the next sub-section). Atomic events are those that are spatio-temporally localized, whereas complex (compound) events are those that occur across an extended portion of the field, involve several players or can be constructed by a combination of other events. Stemming from this difference, an atomic event is associated to a given time frame, whereas a complex event is associated to a time interval, i.e., to a starting and ending frame. Atomic events include ball possession, kicking the ball, ball deflection, tackle, ball out, goal, foul and penalty. Complex events include ball possession, tackle, pass and its special cases (filtering pass, cross), shot and saved shot. A complex ball possession, or tackle, event corresponds to a sequence of consecutive atomic events that involve the same players. The ground truth also includes examples of chains of events, such as a pass, filtering pass or cross that led to a goal.
The annotations are generated leveraging information from the game engine bot, independently of the detection system: different finite state machines detect the occurrence of several types of events based on the decisions of the bot or the player, their outcomes and the positions of all the players. The definition of each event was double-checked against the official rules of the Union of European Football Associations (UEFA), and the annotations were visually verified.
For the present work, eight matches were synthesized through various modalities (player vs. player, player vs. AI, AI vs. AI), for a total of 500 minutes of play with 1,678,304 atomic events and 9,130 complex events, divided in a training and testing set as reported in Table~\ref{table:list_of_events_dataset}.
\lia{The game engine and dataset are available at https://gitlab.com/grains2/slicing-and-dicing-soccer.}
\begin{table*}[t]
\begin{center}
\begin{tabular}{|l|c|c|}
\hline
\textbf{Atomic event} & \textbf{Train Set} & \textbf{Test set} \\
\hline
\emph{KickingTheBall} & 3,786 & 3,295 \\
\hline
\emph{BallPossession} & 812,086 & 797,224 \\
\hline
\emph{Tackle} & 34,929& 26,286 \\
\hline
\emph{BallDeflection} & 172 & 78 \\
\hline
\emph{BallOut} & 182 & 168 \\
\hline
\emph{Goal} & 45 & 36 \\
\hline
\emph{Foul} & 3 & 10 \\
\hline
\emph{Penalty} & 3 & 1 \\
\hline
\end{tabular}
\quad
\begin{tabular}{|l|c|c|}
\hline
\textbf{Complex event} & \textbf{Train Set} & \textbf{Test set} \\
\hline
\emph{Pass} & 2,670 & 2,389\\
\hline
\emph{PassThenGoal} & 33 & 31 \\
\hline
\emph{FilteringPass} & 37 & 27 \\
\hline
\emph{FilterPassThenGoal} & 4 & 4 \\
\hline
\emph{Cross} & 197 & 165 \\
\hline
\emph{CrossThenGoal} & 9 & 9 \\
\hline
\emph{Tackle} & 1,413 & 1,130 \\
\hline
\emph{Shot} & 282 & 224 \\
\hline
\emph{ShotThenGoal} & 41&36 \\
\hline
\emph{SavedShot} & 104 & 64 \\
\hline
\end{tabular}
\end{center}
\caption{Distribution of atomic and complex events (training and test set). \label{table:list_of_events_dataset}}
\end{table*}
\section{Soccer event detection: a temporal logic approach}
\label{sec:Eve}
The designed event detection system comprises two modules: an \textit{atomic event detector} and a \textit{complex event detector}. The first module takes as input the \textit{x} and \textit{y} coordinates of the players and the ball, and recognizes atomic (low-level) events through feature extraction and the application of predefined rules. The atomic events are stored in memory, and a temporal logic is then used to model and recognize low- and high-level complex events \cite{anicic2009event,etalis}.
The proposed system is capable of detecting five atomic events and ten complex events overall, including all events defined in the ground truth except for fouls, penalties and goals, which would require additional information (such as the referee position and the $z$ coordinate of the ball).
We adopt a methodology and notation similar to that used in \cite{khan2018soccer}, grounded on declarative logic, for the rule-based system. Briefly, an atomic event is defined as follows:
\[
\begin{aligned}
&SE=\langle ID, seType, t, \langle role_1, p_1 \rangle, ..., \langle role_i, p_i \rangle \rangle \\
\end{aligned}
\]
where ID is an event identifier, \textit{seType} is the type of the event, and \textit{t} is the time at which the event occurred; the event is associated with one or more objects, each identified as $p_i$ and associated with a specific $role_i$, \lia{which identifies the function played by the player in the event and is assigned automatically when the rule is verified}. The event can also be associated with conditions to be satisfied, e.g., based on the distance between the player and the ball.
Complex events are built by aggregating other simple or complex events using temporal (temporal complex events) or logical operators (logical complex events):
\[
\begin{aligned}
&LCE=\langle ID, ceType, (t_s, t_e) ,L = \langle e_1 op e_2 op...op e_n \rangle\rangle \\
&TCE= \langle ID, ceType, (t_s, t_e) ,L = \langle e_1 THEN e_2 THEN...THEN e_n \rangle\rangle
\end{aligned}
\]
In all cases, \textit{ID} corresponds to the event identifier, \textit{ceType} to the event type, $(t_s, t_e)$ is the time interval in which the event occurred, and $e_i$ is used to identify the sub-events. In the following, we do not differentiate between logical or temporal complex events. The main difference between our approach and that proposed in \cite{khan2018soccer} is that we model time using intervals, rather than
instants. \lia{Rule parameters were optimized using a genetic algorithm (see Section~\ref{section:Evolutionary Algoritm}). }
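
In code, these event records can be represented by small containers such as the ones below; the field names simply mirror the notation above and are not taken from an existing library.

\begin{verbatim}
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class AtomicEvent:
    """SE = <ID, seType, t, <role_1, p_1>, ..., <role_i, p_i>>"""
    event_id: int
    se_type: str                                  # e.g. "KickingTheBall"
    t: int                                        # frame index
    roles: List[Tuple[str, int]] = field(default_factory=list)  # (role, player id)

@dataclass
class ComplexEvent:
    """LCE / TCE = <ID, ceType, (t_s, t_e), e_1 op e_2 op ... op e_n>"""
    event_id: int
    ce_type: str                                  # e.g. "Pass"
    interval: Tuple[int, int]                     # (t_s, t_e)
    sub_events: List[object] = field(default_factory=list)      # atomic or complex
    operator: str = "THEN"                        # "THEN", "AND" or "OR"
\end{verbatim}
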
\subsection{Atomic event detector}
\label{sec:atomic}
\subsubsection{Feature extraction}
Starting from the player and ball \textit{x} and \textit{y} positions, the following features were calculated: \textit{velocity}, \textit{acceleration}, \textit{direction} with respect to the field, \textit{distance from the ball}, whether each player is moving, \textit{distance from the target line} of both teams, \textit{expected cross position on target line} and angle covered by the \textit{change of direction}. For a more detailed definition of the individual features, the reader is referred to the paper by Richly et al. \cite{richly2016recognizing}.
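
For illustration, the per-frame kinematic features can be derived from the raw trajectories roughly as follows; the frame rate matches the 30 fps of the dataset, while the finite-difference scheme and the returned set of features are a simplification of the full list above.

\begin{verbatim}
import numpy as np

FPS = 30.0  # frame rate of the generated videos

def kinematic_features(player_xy, ball_xy):
    """player_xy, ball_xy: arrays of shape (n_frames, 2) in field coordinates."""
    vel = np.diff(player_xy, axis=0) * FPS                    # velocity vector (m/s)
    speed = np.linalg.norm(vel, axis=1)
    acc = np.diff(speed) * FPS                                # scalar acceleration
    direction = np.degrees(np.arctan2(vel[:, 1], vel[:, 0]))  # w.r.t. the field x-axis
    dist_ball = np.linalg.norm(player_xy - ball_xy, axis=1)   # distance from the ball
    return {"speed": speed, "acceleration": acc,
            "direction": direction, "dist_ball": dist_ball}
\end{verbatim}
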
\subsubsection{Rules}
Atomic events are detected by applying a set of rules. Even though they are associated with a single time instant $t_{i}$, in order to reduce the computational time and calculate stable values for the features, a sliding window approach was implemented: given a time instant $t_{i}$, the event $E_{i}$ is recognized if the corresponding rule is satisfied by the values in the interval $(t_{i},t_{i+k})$, where $k$ is equal to the window size. Feature extraction and rule checking were implemented in Python. Specifically, atomic events are defined as follows (a Python sketch of the first rule is given after the list):
\begin{enumerate}
\item \textbf{KickingTheBall} consists in a simple kick aimed at executing a cross, pass or shot. Starting from a position close to the player, the ball should move away from the player over the course of the window $k$, with a sudden acceleration and a final increased speed.
\[
\centering
\begin{aligned}
&\langle ID, KickingTheBall, t,L=\langle\langle KickingPlayer, p_i\rangle ,\langle KickedObject,b\rangle \rangle \rangle \\
&player(p_i), ball(b), Distance(p_i,b,t) < T_{id_1} \\&
\forall k = 1 \ldots n, D(p_i,b,t+k) < D(p_i,b,t+k+1),\\ &speed(b,t+n) > T_{s_1},
\exists k | acceleration(b, t+k) > T_{a_1} \\
\end{aligned}
\]
\item \textbf{BallPossession} is defined taking into account not only the player who has the control of the ball (i.e., the closest player), but also the player status (i.e., whether it is moving or not). Secondly, since the $z$ coordinate of the ball is not available, we used the ball speed to avoid accidentally triggering ball possession during cross events.
\[
\centering
\begin{aligned}
&\langle ID, BallPossession,t, L=\langle \langle PossessingPlayer, p_i\rangle ,\langle PossessedObject,b\rangle \rangle \rangle \\
&player(p_i), ball(b), Distance(p_i,b,t) < T_{id_2} \\ & \forall j \neq i, player (p_j), D(p_j,b,t) > D (pi,b, t)\\
& \forall k = 1 \ldots n, D(p_i,b,t+k) < T_{id_2} \\
& \forall k = 0 \ldots n, \forall j \neq i, team(p_j) \neq team(p_i), D(p_i, p_j,t+k) < T_{od_2}, \\
& speed(b, t+k) < T_{s_2} \\
\end{aligned}
\]
\item \textbf{Tackle} occurs when a player (TacklingPlayer) tries to gain control of the ball against a player of the opposite team (PossessingPlayer). As a direct consequence, the presence of a member of the opposite team nearby is a condition to trigger the event.
\[
\centering
\begin{aligned}
&\langle ID, Tackle,t, L= \langle \langle PossessingPlayer, p_i\rangle ,
\langle TacklingPlayer, p_j\rangle \rangle ,\\
&\langle PossessedObject,b\rangle \rangle, \\
& player(p_i), player(p_j), ball(b),\\
&Distance(p_i,b,t) < T_{id_3} \\
& \forall u \neq i, player(p_u), D(p_u,b,t) > D(p_i,b,t) \\
& \forall k = 1 \ldots n, D(p_i,b,t+k) < T_{id_3} \\
&\forall k = 0 \ldots n,
D(p_i, p_j,t+k) < T_{od_3}, team(p_i) \neq team(p_j), \\
&speed(b, t+k) < T_{s_3} \\
\end{aligned}
\]
\item \textbf{BallDeflection} occurs when the ball has a sudden change in direction, usually due to a player or the goalkeeper deflecting it. The ball in this event undergoes an intense deceleration reaching an area far from the deflecting player.
\[
\begin{aligned}
&\langle ID, BallDeflection,t,L= \langle
\langle DeflectingPlayer, p_i\rangle
\rangle DeflectedObject,b\rangle \rangle \rangle \\
&player(p_i), ball(b), Distance(p_i,b,t) < T_{id_4} \\ & \forall k = 1 \ldots n, D(p_i,b,t+k) < D(p_i,b,t+k+1), \\&speed(b,t+n) > T_{s_4} \\
& \exists k | acceleration(b, t+k) < -T_{a_4} \\
\end{aligned}
\]
\item \textbf{BallOut} is triggered when the ball goes off the pitch.
\item \textbf{Goal} occurs when a player scores a goal.
\end{enumerate}
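
As an example of how such a rule translates into code, the sketch below checks the KickingTheBall conditions over a window of $n$ frames. It follows the textual description (ball initially close to the player, then moving away with increased speed and a sudden acceleration); the threshold values are placeholders, not the optimized ones.

\begin{verbatim}
import numpy as np

def kicking_the_ball(dist_ball, ball_speed, ball_acc, t, n=10,
                     t_id=1.0, t_s=8.0, t_a=10.0):
    """dist_ball: per-frame player-ball distance; ball_speed, ball_acc: ball kinematics.
    Returns True if the rule fires at frame t over the window [t, t+n]."""
    if t + n + 1 >= min(len(dist_ball), len(ball_speed)) or t + n > len(ball_acc):
        return False
    if dist_ball[t] >= t_id:                      # ball must start close to the player
        return False
    window = dist_ball[t:t + n + 1]
    if not np.all(np.diff(window) > 0):           # ball keeps moving away from the player
        return False
    if ball_speed[t + n] <= t_s:                  # final ball speed above threshold
        return False
    return bool(np.any(ball_acc[t:t + n] > t_a))  # sudden acceleration inside the window
\end{verbatim}
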
\subsection{Complex event detector}
This module was implemented based on a temporal logic; specifically the Temporal Interval Logic with Compositional Operators (TILCO) \cite{TILCO} was used. TILCO belongs to the class of ITLs, where each event is associated to a time interval. TILCO was selected among several available options because it implements both qualitative and quantitative ordering, and defines a metric over time: thus, we were able to impose constraints on the duration of the events, as well as to gather statistics on their duration. The ETALIS (Event TrAnsaction Logic Inference System) open source library, based on Prolog, was used for implementation \cite{etalis}. The complex event detector is characterized by few parameters, which were manually optimized on the training set.
For the complex events, the rules were formalized as reported in the following (a Python sketch of the Pass rule is given after the list).
\begin{enumerate}
\item \textbf{Pass} and \textbf{Cross} events occur when the ball is passed between two players of the same team, and hence can be expressed as a sequence of two atomic events, KickingTheBall and BallPossession, where the passing and receiving players belong to the same team. A cross is a special case in which the ball is passed from the sideline area of the field to the goal area. An additional clause is added to the pass detection (not reported for brevity) to evaluate the position of the players, which is straightforward in our case as the coordinate system coincides with the field.
\[
\begin{aligned}
&\langle ID, Pass,(t,t+k), L= \langle
ID, KickingTheBall,\\& \langle KickingPlayer, p_i,t\rangle ,\langle KickedObject,b,t\rangle \rangle \\
&THEN \langle ID, BallPossession, \langle PossessingPlayer, p_j,t+k\rangle , \\ & \langle PossessedObject,b,t\rangle \rangle \rangle \\
&player(p_i), player(p_j), ball(b), team(p_i) = team(p_j), k < Th3 \\
\end{aligned}
\]
\item \textbf{FilteringPass} allows to create goal opportunities when the opposite team have an organized defence. According to the UEFA definition, it consists of a pass over the defence line of the opposite team. In our definition, the player that receives the ball has to be, at the time the pass starts, nearer to the goal post than all the players from the opposite team.
\[
\begin{aligned}
&\langle ID, FilteringPass,(t,t+k), L =
\langle ID, Pass,\langle PossessingPlayer,p_i,t\rangle ,\\&\langle ReceivingPlayer,p_j,t+k\rangle ,
\langle PossessedObject, b, t\rangle \rangle \rangle \\
&player(p_i), player(p_j), ball(b), team(p_i) = team(p_j), \\
&\forall u, player(p_u), team(p_u) \neq team(p_j), goal(g, p_u),\\& D(p_j, g, t + k) < D(p_u, g, t + k) \\
\end{aligned}
\]
\item \textbf{PassThenGoal}, \textbf{CrossThenGoal} and \textbf{FilteringPassThenGoal} are defined by the concatenation of two temporal sub-sequences: an alternation of Pass/FilteringPass/Cross followed by a Goal, where the receiver of the pass is the same player who scores.
\item \textbf{Tackle}: as a complex event, it is a sequence of one or more atomic tackles, followed by a ball possession (which indicates the end of the action). A \textbf{WonTackle} terminates with the successful attempt to gain the ball by the opponent team. A \textbf{LostTackle} is obtained by the complementary rule.
\item \textbf{ShotOut}, \textbf{ShotThenGoal} and \textbf{SavedShot} represent possible outcomes of an attempt to score. The SavedShot event, where the goalkeeper successfully intercepts the ball, is formalized as KickingTheBall followed by a BallDeflection or BallPossession, where the deflecting player is the goal keeper.
\begin{comment}
\[
\begin{aligned}
&\langle ID, SavedShot, \langle KickingPlayer, p_i, t\rangle ,
\langle GoalKeeper, p_j, t+k\rangle , \langle KickedObject, b, t\rangle \rangle \\
&player(p_i), ball(b), GoalKeeper(p_j),
\\&
\langle ID, SavedShot, T = \langle ID, KickingTheBall,
\langle KickingPlayer, p_i,t\rangle ,\langle KickedObject,b,t\rangle \rangle \\
&THEN \\&
\langle ID, BallDeflectionByPlayer, p_j, t + k\rangle
t_k - t_i < t_h \\
\end{aligned}
\]
\end{comment}
\end{enumerate}
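
To give a flavour of how the interval-based rules are evaluated, the sketch below detects Pass events from a stream of atomic events, reusing the \texttt{AtomicEvent} container introduced earlier. The sequencing mirrors the KickingTheBall-then-BallPossession pattern, but it is a plain Python re-implementation rather than the ETALIS/Prolog rules actually used.

\begin{verbatim}
def detect_passes(events, team_of, max_gap=60):
    """events: list of AtomicEvent sorted by frame t; team_of: dict player id -> team.
    Returns (t_start, t_end, passer, receiver) tuples."""
    passes = []
    for i, e in enumerate(events):
        if e.se_type != "KickingTheBall":
            continue
        kicker = dict(e.roles)["KickingPlayer"]
        for f in events[i + 1:]:
            if f.t - e.t > max_gap:               # temporal constraint k < threshold
                break
            if f.se_type == "BallPossession":
                receiver = dict(f.roles)["PossessingPlayer"]
                if receiver != kicker and team_of[receiver] == team_of[kicker]:
                    passes.append((e.t, f.t, kicker, receiver))
                break                             # the first possession ends the action
    return passes
\end{verbatim}
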
\subsection{Event recognition from a multi-view camera setup}
\label{sec:archi}
\begin{figure}[b!]
\centering
\includegraphics[width=\linewidth]{Figures/flow_analysis.pdf}
\caption{Deployment of the proposed system in a real-life scenario.}
\label{Figure:flow_analysis}
\end{figure}
\lia{
In a real setting, spatio-temporal data would need to be extracted from a multi-view video stream using a multi-object detection and tracking system (see Figure \ref{Figure:flow_analysis}). A multi-camera setup is required in order to solve occlusions and cover the entire playing field. For instance, Pettersen et al. used three wide-angle cameras to cover the Alfheim stadium \cite{Pettersen2014}; modern acquisition setups like Intel True View\textsuperscript{\textcopyright} include up to 38 5K cameras. The players and the ball can be detected using, e.g., a Single Shot Detector or another real-time object detector \cite{khan2018soccer,rematas2018soccer}. Pixel coordinates are then mapped to the field coordinate system using a properly calibrated setup; alternatively, field lines can be used to estimate the calibration parameters \cite{rematas2018soccer}. For accurate event detection the system should be able to distinguish and track different players, assign them to the correct team, and minimize identity switches during tracking. For instance, certain events can only occur between players of the same team, others between players of competing teams. Developing the detection and tracking system is beyond the scope of this paper. Instead, we exploit the game engine to log the position of the players and the ball at each frame, and focus on the final event detection step, which is further divided into atomic and complex event detection. }
\section{Experimental results}
\label{sec:results}
In this section, the evaluation protocol and the experimental results of the proposed detector on the SoccER dataset are reported. We focus first on the detection of atomic events, for which optimal parameters were found by means of a multi-objective genetic algorithm. Starting from the optimal solution of the atomic event detector, the performance of the complex event detector is analyzed and compared with the state of the art.
\subsection{Evaluation protocol}
A ground truth atomic event is detected if an event of the same type is found within a temporal window of three frames. For complex events, we use the common OV20 criterion for temporal action recognition: a temporal window matches a ground truth action if they overlap, according to the Intersection over Union, by 20\% or more \cite{gaidon2011actom}. For each event, we calculate the recall, precision and F-score.
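
The OV20 criterion reduces to a simple intersection-over-union test on frame intervals, as sketched below.

\begin{verbatim}
def temporal_iou(a, b):
    """a, b: (start, end) frame intervals."""
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def ov20_match(detected, ground_truth):
    return temporal_iou(detected, ground_truth) >= 0.20
\end{verbatim}
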
\subsection{Parameter optimization: an evolutionary strategy}
\label{section:Evolutionary Algoritm}
Genetic or evolutionary algorithms are effective techniques for parameter optimization, as they only require the ability to evaluate the fitness function and are applicable when an analytic formulation of the loss is not available \cite{morra2018optimization}. In our case, the fitness value is based on the weighted average of the recall and precision metrics over all the event types. Since precision and recall are competing requirements, we opted for a multi-objective implementation, the Strength Pareto Evolutionary Algorithm or SPEA2 \cite{ZLTh_01_SPE}. SPEA2 is a Pareto-based optimization technique which seeks to approximate the Pareto-optimal set, i.e., the set of individuals that are not dominated by any others, while maximizing the diversity of the generated solutions.
Each individual's genome encodes the set of 16 parameters associated with all rules. \francesco{The parameters of each rule are defined in Section \ref{sec:atomic}, i.e., inner distance ($T_{id_N}$), outer distance ($T_{od_N}$), speed ($T_{s_N}$) and acceleration ($T_{a_N}$), where $N$ ranges from 1 to 4. In addition, the window for each rule is separately optimized.} Finally, since the rules are not mutually exclusive, the order in which they are evaluated is also encoded using the Lehmer notation. A range and discretization step is defined for each real-valued parameter to limit the search space. All window sizes are limited in the range 3--30 frames (with unitary step), all thresholds on speed are limited in the range 1--15 with step 1.0, and all thresholds on distance are limited in the range 0.1--2.0 \lia{meters} with step 0.1.
The genetic algorithm was run for 50 generations starting from a population of 200 individuals; genetic operators were the BLX-0.5 crossover \cite{alcala2007multi}, with probability 90\%, and random mutation with probability 20\%. An archive of 100 individuals was used to store the Pareto front. The optimal parameters were determined on the training set and evaluated on the testing set. The experiment was repeated twice to ensure, qualitatively, the reproducibility of the results. \lia{
Genetic algorithms are sensitive to random initialization and more runs would be needed to estimate the variability in the results. }
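
A minimal sketch of the genome handling is given below: each parameter is drawn on its discretized range and the rule evaluation order is recovered from a Lehmer code. The exact chromosome layout, the acceleration bounds and the fact that every rule carries every threshold are our own assumptions, and the SPEA2 selection itself is not shown.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
RULES = ["KickingTheBall", "BallPossession", "Tackle", "BallDeflection"]

# (low, high, step) per gene type; the acceleration range is an assumption
BOUNDS = {"window": (3, 30, 1), "speed": (1.0, 15.0, 1.0),
          "distance": (0.1, 2.0, 0.1), "acceleration": (1.0, 30.0, 1.0)}

def random_value(kind):
    low, high, step = BOUNDS[kind]
    n_steps = int(round((high - low) / step))
    return low + step * rng.integers(0, n_steps + 1)

def random_genome():
    genes = {r: {"window": random_value("window"),
                 "inner_dist": random_value("distance"),
                 "outer_dist": random_value("distance"),
                 "speed": random_value("speed"),
                 "acceleration": random_value("acceleration")} for r in RULES}
    # Lehmer code: the i-th digit is drawn from [0, len(RULES) - 1 - i]
    lehmer = [int(rng.integers(0, len(RULES) - i)) for i in range(len(RULES))]
    return genes, lehmer

def lehmer_to_order(lehmer, items):
    remaining, order = list(items), []
    for d in lehmer:
        order.append(remaining.pop(d))
    return order

genes, lehmer = random_genome()
print(lehmer_to_order(lehmer, RULES))   # rule evaluation order for this individual
\end{verbatim}
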
The final set of solutions, which approximate the Pareto front, is shown in Fig.~\ref{Figure:Results of Archive generation number 50}. The four solutions which maximize F-score for each event are compared in Fig. \ref{Figure:map_atomic_event}. The BallOut event (not reported) reaches perfect scores for all parameter choices. The easiest events to detect are KickingTheBall, with an average F-score of 0.94, and BallPossession, with an average F-score of 0.93. For Tackle, the average precision is high (0.94), but the recall is much lower (0.61). The worst result is obtained for BallDeflection, with values of F-score consistently lower than 0.4. Some events are more difficult to detect based on positional data alone, i.e., without considering the position of the joints or the action performed by the players \cite{pattern}. The best performing solution for the Tackle event (0.65 vs. 0.42 recall) corresponds to a lower recall for BallPossession (0.91 vs. 0.87), largely due to the similarity between the two classes; the difference in absolute values is easily explained by the higher frequency of BallPossession events.
\begin{comment}
\begin{table}[tbh]
\begin{adjustwidth}{-1.75cm}{-.5cm}
\begin{center}
\begin{tabular}{|l|l|c|c|c|c|c|}
\hline
\textbf{Evento} & \textbf{Metrica} & \textbf{KickingTheBall} & \textbf{BallPossession} & \textbf{Tackle} & \textbf{BallDeflection} & \textbf{Media} \\
\hline
\textbf{KickingTheBall} & \textbf{Precision} & \textbf{0.96} & \textbf{0.97} &\textbf{ 0.96 } & \textbf{0.97} &\textbf{ 0.96} \\
& \textbf{Recall }&\textbf{ 0.92} & \textbf{0.87} & \textbf{0.91} & \textbf{0.86} &\textbf{ 0.93} \\
& \textbf{F-Score} &\textbf{ 0.94} &\textbf{ 0.91} &\textbf{ 0.9}3 &\textbf{ 0.91} &\textbf{ 0.94 }\\
\hline
BallPossession & Precision & 0.99 & 0.99 & 0.99 & 0.99 & 0.99 \\
& Recall & 0.88 & 0.91 & 0.87 & 0.86 & 0.88 \\
& F-Score & 0.93 & 0.95 & 0.93 & 0.92 & 0.93 \\
\hline
Tackle& Precision & 0.95 & 0.97& 0.87 & 0.96 & 0.94 \\
& Recall & 0.6 & 0.42 & 0.65 & 0.47 & 0.61 \\
& F-Score & 0.73 & 0.59 & 0.74 & 0.63 & 0.74 \\
\hline
BallDeflection & Precision & 0.28 & 0.28 & 0.26 & 0.26 & 0.27 \\
& Recall & 0.37 & 0.36 & 0.39 & 0.35 & 0.35 \\
& F-Score & 0.32 & 0.31 & 0.31 & 0.30 & 0.31 \\
\hline
\end{tabular}
\end{center}
\end{adjustwidth}
\caption{Values that maximize all events \label{table:Parameters_obtained_after_iteration_on_all_matches}}
\end{table}
\end{comment}
\begin{comment}
\begin{figure}
\centering
\begin{minipage}{0.9\textwidth}
\includegraphics[width=0.95\textwidth]{Figures/ArchiveGen_50.png}
\caption{Population at the final generation. Points highlighted in red corresponds to solutions in the archive, which approximates the Pareto front. }
\label{Figure:Results of Archive generation}
\end{minipage}\hfill
\begin{minipage}{0.6\textwidth}
\includegraphics[width=0.99\textwidth]{Figures/cropped_map.pdf}
\caption{Comparison of the precision, recall and F-score for the best parameter configuration (columns) and each main atomic event (rows). Each parameter configuration corresponds to a different}
\label{Figure:map_atomic_event}
\end{minipage}
\end{figure}
\end{comment}
\begin{figure}[t]
\centering
\begin{tabular}{cc}
\subfloat[\label{Figure:Results of Archive generation number 50}]{\includegraphics[width=.45\textwidth]{Figures/ArchiveGen_50.pdf}} &
\subfloat[\label{Figure:Complex event comparision}]{\includegraphics[width=.55\textwidth]{Figures/cropped_map_3.pdf}}
\end{tabular}
\caption{Visualization of the Pareto front after 50 generations (a) and performance of the four best solutions generated (b). In (a) each dot represents a possible solution, and those belonging to the Pareto front are highlighted in red. In (b), each column represents the solution which maximize the F-score with respect to a specific event: KickingTheBall (KtB), BallPossession (BallP), Tackle and BallDeflection (BallD). For each event (row), the average performance is reported in the last column.}
\label{Figure:map_atomic_event}
\end{figure}
\subsection{Parameters Evolution}
The distribution of the parameter values at different iterations provides additional insight on the role of each parameter and the effectiveness of each rule. Two competing factors are responsible for the convergence towards specific parameter values: lack of diversity in the population, leading to premature convergence, and the existence of a narrow range of optimal values for a given parameter. We ruled out the first factor by repeating the experiment: we assume that parameters that converge to a stable value across multiple runs are more critical to the overall performance, especially if they are associated to high detection performance.
Let us consider for instance the parameters for the KickingTheBall rule, represented in Fig. \ref{fig:param_evoluion}. The window size and distance threshold both converge to a very narrow range, suggesting that a strong local minimum was found. On the other hand, the threshold on the ball speed appears less critical.
Other parameters tend to behave in a similar way, although there are exceptions. Generally speaking, the system is very sensitive to the distance thresholds, and in fact they converge to very narrow ranges for all events except BallDeflection (results are not reported for brevity). For most events, the window size has a larger variance than for KickingTheBall and, in general, the rules seem quite robust with respect to the choice of this parameter.
The existence of an optimal parameter value is not necessarily associated to a high detection performance: for instance, the distribution of the acceleration threshold for the BallDeflection has a very low standard deviation and very high mean (not shown), as the change of direction usually causes an abrupt acceleration. At the same time, acceleration alone is probably not sufficient to recognize the event. Finally, the order in which the rules are processed does not seem to play a fundamental role.
\begin{figure}[t]
\centering
\begin{tabular}{cccc}
\subfloat[Window size \label{Figure:WindowSize1}]{\includegraphics[width=.3\textwidth]{Figures/windowSize1.png}} &
\subfloat[Ball-player distance \label{Figure:InnerDistance1}]{\includegraphics[width=.3\textwidth]{Figures/innerDistance1.png}} &
\subfloat[Ball Speed \label{Figure:speed1}]{\includegraphics[width=.3\textwidth]{Figures/speed1.png}} &
\end{tabular}
\caption{Distribution (mean and standard deviation) of each parameter of the rule KickingTheBall calculated over the entire population at each iteration.}
\label{fig:param_evoluion}
\end{figure}
\subsection{Overall performance}
The performance for complex events (precision and recall) is reported in Fig. \ref{Figure:Comparison1}. In eight out of 11 cases, the system was able to reach an F-score between 0.8 and 1. Sequences of events, such as passes that result in a goal, can be detected effectively. However, performance suffers when the detection of the atomic events is not accurate, e.g., for Tackle and SavedShot, which depend on the atomic events Tackle and BallDeflection, respectively.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{Figures/precision_and_recall_for_complex_event.pdf}
\caption{Precision and Recall for each complex event}
\label{Figure:Comparison1}
\end{figure}
Comparison with previous literature is difficult due to differences in the datasets, experimental settings, and types of events. Few previous works were based on positional data, extracted either from wearable trackers or using cameras covering the entire field \cite{richly2016recognizing,networks,pattern}. In the latter case, the accuracy of the positional data may further vary, depending on whether the ball and players are manually identified \cite{richly2016recognizing} or detected by a multi-object detector and tracker \cite{pattern}.
Despite these limitations, in Table \ref{Table:Final_Comparison} we attempt a comparison for two events: pass (complex event) and kicking the ball (atomic event). For both events, our results are comparable to or better than previous literature, confirming that the proposed events can be successfully detected using (i) positional data (as in \cite{richly2016recognizing,networks}) and (ii) temporal logic (as in \cite{khan2018soccer}). It should be noticed that the SoccER dataset is much larger than those used in competing approaches, including 1,203 passes and 1,728 kicking the ball events (datasets included in Table \ref{Table:Final_Comparison} range between 14 and 134 events).
\begin{table}[tbh]
\begin{center}
\begin{tabular}{p{2.2cm}|p{2.2cm}|p{3cm}|c|c|c}
\hline
\textbf{Solution} & \textbf{Input} & \textbf{Method} & \textbf{Precision} & \textbf{Recall} & \textbf{F-score} \\
\hline
\multicolumn{6}{c}{\textit{kicking the ball} } \\
\hline
Richly (2017) \cite{richly2016recognizing} & positional data & feature extraction + neural networks & 95\% & 92\% & 93\%\\
\hline
Khan (2018) \cite{khan2018soccer} & broadcast video & object detection + temporal logic & - & 92\% & 89\% \\
\hline
Ours & positional data & temporal logic & 96\% & 93\% & 94\%\\
\hline
\multicolumn{6}{c}{\textit{pass} } \\
\hline
Khan (2018) \cite{khan2018soccer} & broadcast video & object detection + temporal logic & 94\% & 84\% & 89\% \\
\hline
Richly (2016) \cite{richly2016recognizing} & positional data & feature extraction + SVM & 42.6\% & 64.7\% & 51\% \\
\hline
Lee (2017) \cite{pattern} & Fixed camera, entire pitch & Action recognition + finite state machine & - & 60\% & - \\
\hline
Ours & positional data & temporal logic &96\% & 93\% & 94\% \\
\hline
\end{tabular}
\caption { Comparison between state of the art and proposed approach. \label{Table:Final_Comparison}}
\end{center}
\end{table}
\section{Discussion and conclusions}
\label{sec:discussion}
Event recognition in soccer is a challenging task due to the complexity of the game, the number of players and the subtle differences among different actions. In this work, we introduce the SoccER dataset, which is generated by an automatic system built upon the open source Gameplay Football engine. With this contribution, we strive to alleviate the lack of large scale datasets for training and validating event recognition systems. We modified the Gameplay Football engine to log positional data, as could be generated by a fixed multi-camera setup covering the whole field. Compared to the use of broadcast footage, we are thus able to consider the position of all players at once and model sequences of complex and related events that occur across the entire field. In the future, the game engine could be further extended to generate data on-the-fly, e.g., for the training of deep neural networks.
A second contribution is the design and validation of ITLs for soccer event recognition. ITLs provide a compact and flexible representation for events, exploiting readily available domain knowledge, given that sports are governed by a well-defined set of rules. The capability of reasoning about events is key to detecting with high accuracy complex chains of events, such as ``passes that resulted in a scored goal'', bypassing the need for extensive training and data collection. Relationships between events are also easily encoded.
\lia{Spatio-temporal positional data in the SoccER dataset may be more accurate than those extracted from real video streams, as explained in Section \ref{sec:archi}. Previous works reported a tracking accuracy of about 90\% for the players and 70\% for the ball in a multi-camera setup \cite{pattern}. It is possible to accurately and fairly compare different event detection techniques using synthetic data. Nonetheless, the performance on real video streams, in the presence of noise, will require further investigation. }
In conclusion, we have shown that ITLs are capable of accurately detecting most events from positional data extracted from untrimmed soccer video streams. Future work will exploit the SoccER dataset for comparing other event detection techniques, for instance based on machine learning \cite{giancola2018soccernet}.
\bibliographystyle{splncs04}
\section{Introduction}
\input{texfiles/intro}
\input{texfiles/background}
\input{texfiles/relatedwork}
\input{texfiles/designgoals}
\input{texfiles/visualdesign}
\input{texfiles/usecaseI}
\input{texfiles/usecaseII}
\input{texfiles/userstudy}
\input{texfiles/discussion}
\input{texfiles/conclusion}
\input{texfiles/acknowledgments}
\bibliographystyle{abbrv}
{\footnotesize
\section{Acknowledgments}
The authors wish to thank the anonymous reviewers for their valuable comments. This research was supported in part by HK RGC GRF 16208514 and 16241916.
\section{Background}
\label{sec:background}
\changed{
Assume a tourist wants to find a city that has both a clean environment and a low living cost.
Fig.~\ref{fig:skyline_example} shows all possible candidate cities as a scatter plot, where each point represents a city.
Some comparisons are obvious.
For example, city $b$ dominates city $a$, as $b$ is cleaner and has a lower living cost.
However, the comparison is not obvious for cities $b$, $j$, and $i$, since they are not dominated by any other city.
Thus, these three cities form the skyline of the dataset.
Once the skyline is extracted, the tourist can safely neglect the remaining cities, since the final choice is always from the skyline, regardless of his/her personal preference over these two attributes.
}
Formally, given an $m$-dimensional space $D = (d_1, d_2, \ldots, d_m)$, we denote $P = \{p_1, p_2, \ldots, p_n\}$ as a set of $n$ data points on space $D$.
For a point $p \in P$, it can be represented as $p = (p^1, p^2, ..., p^m)$ where $p^i \in \mathbb{Q} (1 \leq i \leq m) $ denotes the value on dimension $d_i$.
For each dimension $d_i$, assume that there exists a total order relationship on the domain values, either `$>$' or `$<$'.
Without loss of generality, we consider `$>$' (i.e., higher values are more preferred) in the following definitions.
\textbf{Dominance:} For any two points $p, q \in P$, $p$ is said to dominate $q$, denoted by $p \succ q$, if and only if $(i)$ $p$ is as good as or better than $q$ in all dimensions and $(ii)$ at least better than $q$ in one dimension,
i.e., $(i)~\forall~d_i \in D, p^i \ge q^i$ and $ (ii)~\exists~d_j \in D,~p^j > q^j$ where $1 \leq i, j \leq m$.
\textbf{Skyline point:}
A point $p \in P$ is a skyline point if and only if $p$ is not dominated by any $q \in P - \{p\}$,
i.e., $ \nexists q \in P - \{p\}$, $q \succ p$.
\textbf{Skyline:}
The skyline $A$ of $P$ is the set of skyline points in dataset $P$ on space $D$.
\textbf{Dominated point:}
A point $p$ is a dominated point if and only if there exists a point $q \in P - \{p\}$ that dominates $p$,
i.e., $\exists q \in P - \{p\}$, $q \succ p$.
\textbf{Dominating score:}
Suppose $A$ is the skyline of $P$. For a skyline point $p \in A$, the dominating score of $p$, denoted by $\phi(p)$, is the number of points dominated by $p$,
i.e., $\phi(p) = |\{q \in P - A \mid p \succ q\}|$.
\textbf{Subspace:}
Each non-empty subset $D'$ of $D$ is referred to as a subspace,
i.e., $D' \subseteq D$ and $D' \neq \emptyset$.
\textbf{Subspace skyline:}
For a point $p$ in space $D$, the projection of $p$ in subspace $D'\subseteq D$, denoted by $p^{D'}$, is in the subspace skyline if and only if $p^{D'}$ is not dominated by the projection $q^{D'}$ of any other point $q \in P - \{p\}$.
\textbf{Decisive subspace:}
For a point $p \in P$ that is a skyline point in space $D$, a subspace $B$ is decisive if and only if, for any subspace $B'$ such that $B \subseteq B' \subseteq D$, $p^{B'}$ is in the corresponding subspace skyline.
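To make these definitions concrete, the following Python sketch (our own illustration, not taken from any existing system) implements the dominance test, a naive nested-loop skyline computation, and the dominating score, assuming that higher values are preferred in every dimension; the toy data are only in the spirit of the travel example in Fig.~\ref{fig:skyline_example}.
\begin{verbatim}
# Illustrative sketch of the definitions above; higher values are
# assumed to be preferred in every dimension.

def dominates(p, q):
    """True if p dominates q: p >= q in all dimensions, > in at least one."""
    return all(pi >= qi for pi, qi in zip(p, q)) and \
           any(pi > qi for pi, qi in zip(p, q))

def skyline(points):
    """Naive nested-loop skyline: keep points dominated by no other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

def dominating_score(p, points):
    """phi(p): the number of points dominated by p."""
    return sum(dominates(p, q) for q in points if q is not p)

# Toy example with two attributes (higher is better):
cities = [(2, 1), (3, 4), (5, 3), (1, 5)]
print(skyline(cities))                    # [(3, 4), (5, 3), (1, 5)]
print(dominating_score((3, 4), cities))   # 1
\end{verbatim}
The nested-loop formulation runs in $O(n^2m)$ time; more efficient algorithms exist, but the sketch is only meant to mirror the definitions.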
\begin{figure}[!tb]
\centering
\includegraphics[width=0.8\linewidth]{figs/skyline_example}
\vspace{-3mm}
\caption{Example of a travel destination dataset with two attributes: living cost and environment. The solid black points $b$, $j$, and $i$ form the skyline of this dataset.}
\label{fig:skyline_example}
\vspace{-6mm}
\end{figure}
\subsection{Comparison View}
While the Projection View provides the whole picture of the skyline and the Tabular View helps with reasoning about individual skyline points, the most important step is to thoroughly compare and examine the differences among a few final candidates.
When users find desirable skyline points in the other views, they can click on the glyphs or rows to add them to this Comparison View for detailed comparison.
Apart from attribute values (\textbf{T3}), the number of dominated points and the value distribution of these dominated points are also important aspects to compare (\textbf{T5}).
Therefore, we design this view to allow users to closely investigate the differences among a few skyline points from these aspects.
Specifically, two types of visual elements are designed for this view: the radar charts for perceiving the attribute values of different skyline points and the domination glyphs to summarize and compare each skyline point's dominated points.
As shown in Fig. \ref{fig:comparison_view}, the added points are represented by the radar charts, which are arranged on a circle at uniformly distributed angles.
We adopt this circular layout to emphasize the comparison between skyline points by putting the domination glyphs in the center part of the view, thus letting users focus on the comparison quickly and directly.
Each domination glyph is connected to a number of radar charts and visually summarizes the differences between the connected skyline points.
If $n$ skyline points are selected, we enumerate all possible combinations (i.e., $\sum_{i=2}^n{{n}\choose{i}}$) and add a domination glyph for each of them.
Although the combination number grows exponentially, the scalability is not a big issue in our scenario, since we mainly focus on comparing a small number ($\leq 4$) of skyline points in this view.
A force-directed based layout is used to position the domination glyphs so that they can be arranged close to their linked radar charts.
\textbf{Radar charts}.
We use radar charts, a widely used multi-dimensional data visualization technique, to show the attribute values of the selected skyline points (differentiated by categorical colors).
However, we enhance the traditional radar charts in several ways for our specific scenario as shown in Fig.~\ref{fig:comparison_view}a.
First, we draw circles on axes to encode the relative rankings of the skyline points in the corresponding dimensions.
Inside each polygon, we also draw a blue circle, whose radius represents the dominating score of the corresponding skyline point.
\changed{The design is consistent with the Projection View and is more space-efficient compared with affiliating additional indicators outside the radar chart.}
When hovering over a radar chart, a pop-up window will display to show more details (Fig.~\ref{fig:comparison_view}b).
On each axis, we draw the value distribution of the corresponding dimension, in which the values increase from the center along the axis and the width of the flow indicates the number of points.
\begin{figure}[!tb]
\centering
\includegraphics[width=1.0\linewidth]{figs/comparison_view}
\vspace{-5mm}
\caption{Visual elements of the Comparison View: (a) the radar chart that shows point A's attribute values and statistic information; (b) the visual encodings in the radar chart; (c) the domination glyph that summarizes the domination differences; and (d) a pop-up radar chart that illustrates the exclusive dominated points of point B.}
\label{fig:comparison_view}
\vspace{-6mm}
\end{figure}
\textbf{Domination glyph}.
The domination glyph is designed to summarize the differences between a small number of skyline points from the domination perspective.
\changed{We use a circular design based on the same consideration discussed in Sec.~\ref{sec:projectionview}.}
Similar to skyline glyphs, a domination glyph incorporates two parts (Fig.~\ref{fig:comparison_view}c).
An inner pie chart shows the dominating scores of the linked skyline points.
In addition, we also use the radius of the chart to encode the number of points that are dominated by at least one linked skyline point.
Surrounding the pie chart, arcs are displayed to represent the proportion of points that are exclusively dominated by the corresponding skyline points.
When hovering over an inner sector or an outer sector, a radar chart will also pop up to show the comparison among the linked skyline points in detail.
The skyline points are represented by the thick colored lines and the dominated points are represented by the thin gray lines.
For example, the gray lines in Fig.~\ref{fig:comparison_view}d represent the points that are exclusively dominated by the orange skyline point.
This helps users identify the reasons these exclusively dominated points are not dominated by the other skyline points (i.e., in which attributes they have higher values than these skyline points).
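As a rough illustration of the quantities summarized by a domination glyph (this is our own sketch, not the SkyLens implementation), the following Python code derives, for a small set of selected skyline points, each point's dominated set (the inner pie chart), the size of their union (the glyph radius), the exclusively dominated points (the outer arcs), and the subsets of selected points for which glyphs are created; the dominance test from Section~\ref{sec:background} is repeated here for self-containment.
\begin{verbatim}
# Hypothetical sketch of the data behind the domination glyphs.
from itertools import combinations

def dominates(p, q):   # same dominance test as in the Background sketch
    return all(a >= b for a, b in zip(p, q)) and \
           any(a > b for a, b in zip(p, q))

def domination_summary(selected, points):
    """Quantities shown by one domination glyph for the selected points."""
    dominated = {s: {q for q in points if dominates(s, q)} for s in selected}
    scores = {s: len(d) for s, d in dominated.items()}       # inner pie chart
    union_size = len(set().union(*dominated.values()))       # glyph radius
    exclusive = {s: d.difference(*(dominated[t]              # outer arcs
                                   for t in selected if t != s))
                 for s, d in dominated.items()}
    return scores, union_size, exclusive

def glyph_subsets(selected):
    """One glyph is created for every subset of size >= 2 of the selection."""
    return [c for k in range(2, len(selected) + 1)
            for c in combinations(selected, k)]
\end{verbatim}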
\textbf{Alternative designs}.
Before adopting the current design, we also considered two alternatives (Fig.~\ref{fig:comparison_view_alter}), both of which have two parts: a central radar chart that compares the attribute values of skyline points and the outer rings that show the dominating scores.
In the first design (Fig.~\ref{fig:comparison_view_alter}a), the number of outer rings is equal to the number of selected skyline points, and each colored circle positioned on a ring represents a dominated point.
Thus, if several skyline points share a dominated point, the corresponding outer rings will each have a duplicate circle to represent this dominated point.
In the second design (Fig.~\ref{fig:comparison_view_alter}b), all the dominated points are illustrated on a single outer ring.
If a point is dominated by several skyline points, it will appear as a pie chart indicating the exact skyline points that dominate it.
\changed{A categorical color scheme is used to distinguish skyline points.}
\changed{However, both designs suffer from severe visual clutter due to the overlap of outer rings when the number of dominated points is large.}
For the above reasons, we abandon these two designs.
\section{Conclusion}
In this work, we propose SkyLens, a visual analytic system that assists users in exploring and comparing skyline from different perspectives and at different scales.
It comprises three major views: 1) the Projection View that presents the whole picture of the skyline for identifying clusters and outliers; 2) the Tabular View that provides the detailed attribute information and the factors that make a point part of the skyline; and 3) the Comparison View that aims at comparing a small number of skyline points in detail from both the attribute value perspective and the domination perspective.
We also provide a rich set of interactions to help users interactively explore skyline.
In the future, we first plan to include nominal attribute analysis in SkyLens.
Many objects in multi-criteria decision making scenarios have nominal attributes; enabling users to dynamically edit their preferences on nominal attributes would extensively expand the application scope of SkyLens.
Furthermore, we want to investigate skyline visualization techniques that can support data with uncertain values, which is also a common scenario in many domains that requires skyline analysis.
Finally, we aim to further explore how to track the temporal changes of the skyline to assist temporal data analysis.
We hope this work will shed light on the future research on skyline visualization.
\section{Design Goals}
\label{sec:designgoals}
We have distilled the following design goals based on a thorough literature review of $50$ papers we collected from the database field and our interviews with two domain experts who work on skyline algorithms.
Further details are provided in our supplementary materials.
\textbf{G1: Explore the entire skyline from different perspectives and at different scales.}
Although skyline techniques can automatically exclude points that are dominated by superior ones, users still need to select their favorites themselves.
To make a quick and confident selection, users need to explore and understand the entire skyline from different perspectives and at different scales.
On the basis of our review, the goal is the first and most important, with $35$ papers focusing on this objective from different angles.
Our first design goal is critical for skyline analysis for two reasons.
First, the number of skyline points is often large, which hinders users from gaining insights into skyline~\cite{yiu2007efficient}.
Although a number of previous studies (Sec.~\ref{sec:rel-skyline-query}) aim at providing different criteria to rank the skyline points or identifying a representative subset of skyline points, these criteria cannot fully represent the requirements and preferences of users.
Thus, it is necessary to follow Ben Shneiderman's Visualization Mantra~\cite{shneiderman1996eyes} and enable the dynamic exploration of skyline at different scales.
Second, the comparison between skyline points can also be complex in a high-dimensional space~\cite{lee2007approaching}.
For example, when comparing skyline points, users may not only want to consider the values of each attribute but also to explore the value distribution of other points~\cite{lee2007approaching}.
Furthermore, when deciding if a skyline point is unique for some specific requirements, users need to ascertain the number of points dominated by that particular skyline point and determine whether these points are dominated by other skyline points~\cite{gao2010finding}.
Therefore, the visualization system needs to support skyline analysis from different perspectives, including the attribute-related information and the domination relations.
\textbf{G2: Understand the superiority of skyline points.}
Aside from generating the superior skyline points from the entire dataset, users also need to know on what combinations of factors a skyline point dominates other points~\cite{pei2006towards,magnani2013skyview}.
Users can easily focus on the points of interest rather than on the entire set of skyline points by gaining this insightful information about skyline.
The reasons that make a point in skyline can be observed from the relative ranking of the point in each attribute, its differences with other skyline points, and its decisive subspaces~\cite{pei2005catching}.
From the relative ranking in each attribute, users can infer in which attributes a specific skyline point is superior to others.
With finer granularity, users can examine the reasons a skyline point is not dominated by other points from the pair-wise difference between attribute values.
When the relative rankings cannot provide enough information, the decisive subspaces can be exploited to understand on what combinations of attributes the skyline point is superior.
These insights help users better understand how the skyline points differ from one another and facilitate decision making.
\textbf{G3: Compare skyline points and highlight their differences.}
Users always need to compare multiple skyline points before a successful selection in multi-criteria decision making scenarios.
This task includes not only an overall browsing of the entire skyline~\cite{balke2005approaching}, but also a detailed comparison of a few skyline candidates~\cite{valkanas2013skydiver}.
The attribute statistical information, such as the relative rankings of skyline points and the value distribution in each attribute, is helpful when raw attribute values cannot provide users with sufficient knowledge to make decisions~\cite{lee2007approaching}.
For example, the attribute value distribution is useful when users want to examine whether a designated candidate is strong enough in certain attributes and when users do not have prior knowledge about the data.
Apart from examining the attribute values and attribute statistics, the domination relation, i.e., the relation between the point sets that are dominated by different skyline points, is also important when examining multiple skyline points~\cite{gao2010finding}.
From the dominating score and domination relation, users can inspect the specific data distribution behind skyline~\cite{gao2010finding}, and select the appropriate skyline points that best match their domain requirements.
\textbf{G4: Support an interactive exploration and refinement of skyline.}
User preferences are dynamic during their data-exploration process~\cite{dhar1995new}; thus, users should be provided with a convenient mode in which they can refine the skyline algorithms by removing certain points, constraining the range of attribute values, or excluding non-essential attributes~\cite{mahmoud2015strong, balke2007user}.
Furthermore, as users' understanding of the data deepens with data exploration, they may become more interested in certain attributes or data ranges~\cite{yuan2005efficient,lee2012interactive}.
Thus, allowing users to select attributes of interests and highlighting those points that act as the subspace skyline of these attributes is essential.
A rich set of interactions, such as linking and brushing, filtering, and searching, should also be supported to facilitate the aforementioned requirements.
\section{Analytical Tasks}
\label{sec:analyticaltasks}
To fulfill the aforementioned design goals, we have extracted the following analytical tasks.
\textbf{T1: Encode multi-dimensional attributes and statistics.}
Showing the attribute values is insufficient for multi-dimensional skyline analysis.
The relative ranking of skyline points in each attribute should also be shown because raw attribute values could be misleading.
Furthermore, when users have no prior knowledge about the data, they may need to examine the value distribution in this attribute for decision making.
Thus, our system should encode not only multi-dimensional attributes but also the attribute statistics of skyline (\textbf{G1, G3}).
\textbf{T2: Encode decisive subspaces of each skyline point.}
The decisive subspaces of a skyline point can provide users with a different perspective to examine the reasons a point is in the skyline (\textbf{G1, G2}).
According to the decisive subspace definition, the attributes in decisive subspaces guarantee that the corresponding point is in the full-space skyline.
Therefore, the decisive subspaces help reveal the outstanding merits of a skyline point, especially when the relative rankings of the points in each attribute are too close to illustrate attribute differences.
\textbf{T3: Highlight the differences between multiple skyline points.}
Highlighting the differences between skyline points is useful not only when comparing different skyline points (\textbf{G3}), but also when inspecting the reasons for the superiority of a point in the skyline (\textbf{G2}).
To compare the relative strengths of different skyline points across all the attributes, the system should first summarize the skyline point differences in all dimensions as a whole.
Moreover, the system should highlight the differences between skyline points in each attribute on demand so that users can quickly identify how other points differ from a selected point.
The intersections of dominated points and the value distribution at these intersections also suggest the relationships and differences between skyline points.
The system should provide a clear and effective mode to represent these domination relations among the skyline candidates for a detailed comparison.
\textbf{T4: Identify the clusters and outliers of skyline points.}
To provide the whole picture of all the skyline points (\textbf{G1}), the visualization system should enable users to identify clusters and outliers as the initial step of data exploration.
For example, when looking at the skyline of NBA statistical data, the players can be categorized into several groups, such as good attackers, adept defenders, or astute passers.
Users may have interests in one of the clusters and conduct further data exploration and analysis on this cluster.
\textbf{T5: Analyze the domination relations between skyline points.}
To provide a different perspective in addition to attribute-related information for comparing different skyline points (\textbf{G1, G3}), the system should allow users to analyze the domination relations among multiple skyline points.
This task includes illustrating both the dominating score and the differences between the dominated points of the selected skyline points.
Users may also have some prior knowledge on the data and want to know whether points that can dominate a specific data item exist.
For example, a tourist may want to know whether there are travel destinations superior to a visited place that satisfied him/her.
Finding those skyline points that dominate a designated candidate is useful for users in multi-criteria decision making scenarios.
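A minimal sketch of this reverse lookup (hypothetical; it simply reuses the dominance test introduced in Section~\ref{sec:background}) is:
\begin{verbatim}
def dominators_of(target, skyline_points):
    """Skyline points that dominate a designated candidate; the list is
    empty if the candidate is itself a skyline point."""
    return [p for p in skyline_points if dominates(p, target)]
\end{verbatim}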
\textbf{T6: Support refining skyline queries.}
During data exploration, users may want to exclude certain attributes or data items so that the skyline queries better match their requirements.
Fresh candidates can also appear in the refined skyline after excluding the undesired points from the skyline query.
Supporting a convenient skyline query refinement, such as setting the value range or removing certain attributes, can provide users an efficient and effective skyline exploration experience (\textbf{G4}).
This feature also helps control the skyline size within a manageable range and avoid the visual clutter problem.
\textbf{T7: Support filtering skyline results.}
From the skyline, users might opt to focus on a highly interesting subset or on a few candidates.
For example, when selecting a travel destination, users may only be interested in the places that have a moderate climate.
Furthermore, users may select their own attributes of interest and only keep the subspace skyline of these attributes for consideration.
Thus, the system should support skyline filtering by brushing certain value ranges and generating subspace skylines (\textbf{G4}).
\section{Discussion}
One key issue in SkyLens is its scalability.
We adopted B\"orzs\"onyi et al.'s algorithm~\cite{borzsony2001skyline} in the system.
Although the algorithm has a high time complexity of $O(n^2m)$, where $n$ is the number of points and $m$ is the number of dimensions, it is sufficiently fast for our experimental datasets.
However, efficient implementations~\cite{TanE01} may be adopted for larger datasets.
From the visualization perspective,
too many skyline points may cause severe overlapping in the Projection View.
\changed{In our experiments, the Projection View can support more than a hundred skyline points with acceptable glyph overlaps.}
This issue can be further addressed by leveraging focus+context techniques.
In the Tabular View, the number of skyline points and attributes that can be displayed in the same window is also limited.
To address this issue, we design several interactions to help users exclude undesired skyline points and reorder attributes, so that they can place the most relevant information in the same view for exploration.
\changed{
The Projection View and the Comparison View can support the visualization of multi-dimensional data with about a dozen attributes.
For datasets with higher dimensions, though we enable users to visualize all the attribute values, it might be difficult to perceive the values due to the small sector angles.
Users can use the Control Panel of SkyLens to select attributes of interest for further exploration.}
\changed{Besides, though the vanilla skyline algorithm only supports numerical attributes, some skyline variants also consider categorical attributes with a partial order.
In the future, we will enable users to define the order of categorical attributes, thus making SkyLens support categorical attributes.}
\changed{
Although SkyLens is designed to facilitate decision making, it can be easily extended to solve more general problems for multi-dimensional data exploration and analysis.
One example is multi-objective optimization, in which users need to identify a point $x$ in a multi-dimensional database to optimize $k$ objective functions.
SkyLens can facilitate this task by using the values of these $k$ functions as data attributes and then calculating and visualizing the skyline of the transformed data.
In the future, we aim to embed an attribute editor to help users flexibly define and modify the data attributes for skyline analysis.
Given a multi-dimensional dataset, we can also build a network model where each node represents a multi-dimensional point and each edge represents a domination relation between two points.
By adopting network analysis methods, SkyLens can further support identifying clusters and outliers.
}
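As a minimal sketch of the multi-objective mapping described above (our own illustration; the objective functions and data are hypothetical, and \texttt{skyline()} denotes the brute-force routine sketched in Section~\ref{sec:background}):
\begin{verbatim}
# Hypothetical sketch: mapping k objective functions to attributes so the
# existing skyline machinery can be reused for multi-objective optimization.
def objectives_to_attributes(points, objectives):
    """Evaluate every objective on every point (higher is assumed better)."""
    return [tuple(f(p) for f in objectives) for p in points]

raw_points = [{"score": 7, "cost": 3},
              {"score": 5, "cost": 1},
              {"score": 6, "cost": 4}]
# Maximise score, minimise cost (negated so that "higher is better" holds):
objectives = [lambda p: p["score"], lambda p: -p["cost"]]
pareto_front = skyline(objectives_to_attributes(raw_points, objectives))
# pareto_front == [(7, -3), (5, -1)]
\end{verbatim}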
\subsection{Projection View}
\label{sec:projectionview}
The Projection View aims at providing an overview of skyline to allow users to discover clusters and outliers (\textbf{T4}).
In addition, we design skyline glyphs to encode detailed attribute values of each point and help users compare different skyline points (\textbf{T1}).
\textbf{Projection layout.} The skyline points are projected onto a 2D space, and their relative similarities are reflected through their placements to help users discover clusters and outliers.
Many dimension reduction techniques, such as MDS~\cite{kruskal1978multidimensional} and PCA~\cite{peason1901lines}, may be used for this purpose.
In our system, we adopt the t-distributed stochastic neighbor embedding (t-SNE) algorithm because t-SNE repels dissimilar points strongly to form more obvious clusters~\cite{maaten2008visualizing}.
Subsequently, we construct a distance matrix based on the Euclidean distance between skyline points and then use t-SNE to project all skyline points onto a 2D space.
Thus, the skyline is visualized so that similar points are placed nearby while dissimilar points are placed far away.
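The projection step can be approximated with off-the-shelf tooling; the sketch below (not the actual SkyLens code) standardizes the skyline attributes and embeds them in 2D with scikit-learn's t-SNE, which computes pairwise Euclidean distances internally.
\begin{verbatim}
# Approximate sketch of the Projection View layout (not the SkyLens code).
import numpy as np
from sklearn.manifold import TSNE

def project_skyline(skyline_points):
    X = np.asarray(skyline_points, dtype=float)   # n points x m attributes
    # z-score each attribute so no single dimension dominates the distances
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)
    # perplexity is a tuning choice and must stay below the number of points
    tsne = TSNE(n_components=2, perplexity=min(30, len(X) - 1),
                random_state=0)
    return tsne.fit_transform(X)                  # (n, 2) glyph positions
\end{verbatim}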
\begin{figure}[!tb]
\centering
\includegraphics[width=1.0\linewidth]{figs/cluster_glyph}
\vspace{-6mm}
\caption{Skyline point glyphs in (a) the \textit{normal mode} and (b) the \textit{focus mode}. The inner circle color encodes the dominating score;
outer sector radiuses encode numerical values of attributes.}
\label{fig:glyph}
\vspace{-2mm}
\vspace{-.5mm}
\end{figure}
\begin{figure}[!tb]
\centering
\includegraphics[width=1.0\linewidth]{figs/cluster_glyph_alter}
\vspace{-5mm}
\caption{Three design alternatives for skyline point glyphs. All inner circles encode the dominating score. Attribute values are encoded differently: (a) using categorical colors to encode different attributes and using outer sector radiuses to encode numerical values; (b) using a sequential color scheme to encode numerical values; c) using a star glyph.}
\label{fig:glyph_alternatives}
\vspace{-5mm}
\end{figure}
\textbf{Skyline glyph.}
\changed{To better identify the differences between clusters and find representative skyline points, we further enhance the Projection View with glyphs in view of the effectiveness of glyphs in facilitating visual comparison and pattern recognition.}
Two fundamental metrics, namely, attribute values and dominating score, are used to differentiate skyline points and characterize clusters.
Accordingly, our glyph design is composed of two parts (Fig. \ref{fig:glyph}a): the inner circle and the outer sectors.
The inner circle color depicts the dominating score, where darker orange indicates a higher score.
The outer sectors represent the attribute values so that users can quickly identify skyline point clusters and outliers from the glyph shape.
To further assist in the comparison task of \textbf{T3}, we develop a \textit{focus mode} to enable users to obtain an intuitive overview of how a specific point differs from other skyline points.
When users select a glyph of interest, all the sectors of the other glyphs will be colored to highlight their differences from the selected one (Fig. \ref{fig:glyph}b).
For example, if the attribute value is higher than that of the selected glyph, the corresponding sector's color changes to blue.
This allows users to examine the differences between skyline points without changing the sector radius.
A potential drawback of the design is visual clutter, which is a common issue for many dimension reduction-based visualizations.
\changed{
To mitigate this problem, we first decrease glyph opacities so that individual glyphs can be observed.
When hovering over a glyph, the glyph will be enlarged and brought to the foreground.
In addition, we support panning and zooming to focus on a specific region of glyphs.}
\textbf{Glyph alternatives.}
During the glyph design process, we considered several design alternatives.
Our first design choice is between the circular design (e.g., radar charts) and the linear design (e.g., bar charts).
\changed{For the Projection View, we mainly focus on the overview of many glyphs.
Compared with circular designs, linear designs are more helpful when examining and comparing different glyphs at a specific attribute.}
In addition, linear designs often require more space to achieve the same level of legibility as that of circular designs~\cite{mcguffin2010quantifying}.
Thus, a circular-based design is adopted in our system.
We also experimented with three circular design alternatives.
In these designs, the visual encoding of the inner circle is the same as that of our final design, which uses a sequential color scheme to show the dominating score.
However, these designs were all abandoned for various reasons.
For example, our first alternative (Fig.~\ref{fig:glyph_alternatives}a) uses double encoding (i.e., categorical color and angle) to identify attributes.
\changed{However, categorical colors might be too distractive when there are many attributes.}
Our second alternative (Fig.~\ref{fig:glyph_alternatives}b) fixes the radius of the outer sectors and uses a divergent red-blue color to encode the numerical attribute value of each sector.
However, this design has two main drawbacks.
First, the color saturation is a less accurate visual channel compared to the length channel for encoding numerical values.
Second, for overlapping glyphs, the color blending may lead to a misinterpretation of values.
We also attempted to use classic star glyphs to encode the attribute values (Fig.~\ref{fig:glyph_alternatives}c).
However, compared with our final design, the lines in the star glyphs are difficult to perceive when the color saturation is low and when the glyphs are small.
\section{Related Work}
\label{sec:rel}
\subsection{Skyline Query}
\label{sec:rel-skyline-query}
Skyline queries can automatically extract superior points from a multi-dimensional dataset, which is very useful in multi-criteria decision making applications.
Aside from developing algorithms that process and accelerate skyline queries more effectively~\cite{kossmann2002shooting,papadias2005progressive,morse2007efficient}, a large number of studies aim to address two main drawbacks of skyline queries.
First, as the dimensionality increases, the skyline often becomes too large to provide interesting insights to users.
Second, skyline queries do not incorporate users' preferences for different attributes.
Considerable effort has been devoted to generating a representative skyline from the entire skyline to reduce the skyline size in a high-dimensional space and to increase the discriminating power of skyline queries~\cite{papadias2003optimal,lin2007selecting,yiu2007efficient}.
These approaches choose the $k$ most \textit{interesting} points from the full skyline according to a metric of interestingness.
One category of work aims at identifying a small subset of skyline points that best summarizes the entire skyline.
For example, Tao et al.~\cite{tao2009distance} proposed the concept of distance-based representative skyline, in which the skyline points are clustered and the center point of each cluster is used as the representative subset of skyline.
Another group of studies quantifies interestingness numerically and ranks the skyline objects according to the numerical metric~\cite{gao2010finding,chan2006finding,nanongkai2010regret}.
Chan et al.~\cite{chan2006high} proposed \textit{skyline frequency}, which is defined as the number of subspaces that a point is in the skyline, to rank the skyline points and then return the top-$k$ frequent skyline points.
Although these metrics can reflect some aspects of the skyline points, they cannot represent the specific needs of every end user.
Moreover, users may not even be aware of these underlying metrics.
The actually interesting items could be missed when only the top $k$ items identified by user-oblivious metrics are provided to users.
Another drawback of skyline queries is that they treat all attributes as equally important.
In reality, however, users may not be interested in the skyline of full space (all attributes are considered) but rather in a subset of attributes~\cite{tao2006subsky,pei2007computing,pei2005catching}.
Several studies have attempted to integrate user preferences for attributes into the skyline queries and then reduce the skyline points of real interest.
Lee et al.~\cite{lee2009personalized} proposed an algorithm named \textit{Telescope}, which identifies personalized skyline points by considering both user-specific preferences over attributes and retrieval size.
Mindolin and Chomicki~\cite{mindolin2009discovering} proposed the \textit{p-skylines} framework, which augments skyline with the concept of attribute importance.
They developed a method to mine the relative importance of attributes from user-selected tuples of superior and inferior examples, which they have incorporated into the skyline queries.
These studies show that incorporating users' preferences for attributes can assist in filtering interesting points, but few of them involved real users.
By contrast, our system allows users to directly select attributes of interest and helps them select the most desirable point.
\subsection{Visualization for Multi-Criteria Decision Making}
\label{sec:rel-mcdmvis}
Ranking is one of the most popular methods for decision making, and ranking-based techniques can be applied to various applications such as billboard location selection~\cite{liu2017smartadp}, path finding~\cite{partl2016pathfinder}, and lighting design~\cite{sorger2016litevis}.
When weight is set to each attribute and the weighted attribute values are aggregated, multi-dimensional data points can be converted into scalar values and ranked according to these values.
Many visualization techniques have been proposed to help users dynamically adjust attribute weights and explore the relationships between weights and rankings.
For example, ValueCharts~\cite{carenini2004valuecharts} uses stacked bar charts to represent attribute weights and provides an immediate ranking feedback based on the aggregated weight values.
Lineup~\cite{gratzl2013lineup} further highlights the ranking changes after weight adjustment and allows users to compare multiple rankings and the corresponding weight settings simultaneously.
To analyze the relationships between ranking changes and weight modification, Weightlifter~\cite{pajer2017weightlifter} proposes the concept of weight space, which represents the ranges of the potential weights that guarantee a certain data point being ranked at the top positions.
However, though these methods allow users to set different weights to attributes iteratively, the process of finding a set of accurate weights that represent a specific user preference remains tedious and ineffective.
In fact, user preferences are often fuzzy and difficult to capture by a single weight.
Moreover, the preference of a user for an attribute may even be influenced by other attribute values.
For example, a tourist may not select a travel destination when the safety index of this place is excessively low regardless of how beautiful its environment is.
This reality complicates the weight-adjustment process and imposes a heavy mental overhead on users.
Another popular approach to assist decision making is skyline queries.
Without requiring additional input from users, skyline queries can significantly reduce the size of candidates that users need to consider.
To facilitate skyline understanding, several multi-dimensional data visualization techniques have been leveraged.
For example, Lotov et al.~\cite{lotov2013interactive} visualized the bivariate relationships of skyline using scatter-plot matrices, in which the points in each scatter plot are colored according to their values in the third attribute.
All the other attributes are fixed to certain values, and users can use a slider to adjust these values and explore skylines.
Andrienko et al.~\cite{andrienko2003building} improved this approach by adding a bar chart to show the distribution of differences between a specific skyline point and other points.
To support analyzing skyline in all dimensions simultaneously, some studies utilize Parallel Coordinates to visualize skyline~\cite{bagajewicz2003pareto}.
However, these approaches suffer from the visual clutter problem when the number of skyline points is large, a problem that prevents users from gaining insights into the skyline.
Projection-based methods have also been considered to help users explore skyline points.
For example, Shahar et al.~\cite{chen2013self} combined glyphs and the Self-Organizing Map (SOM)~\cite{kohonen1998self} to present skyline points and their affiliation to different attributes.
Although this solution provides an overview of skyline, projection and orientation errors could occur when more than three dimensions are considered in SOM~\cite{kohonen1998self}.
These errors may also mislead users in skyline interpretation without providing detailed skyline information.
In summary, these visualization techniques mainly focus on representing an overview of the whole skyline, which is not sufficient to support the decision making process that includes
exploring the whole skyline, narrowing down to a small subset, examining a few points in detail, and finally making a decision.
\subsection{Tabular View}
A major issue with skyline queries is that they only identify the skyline in the dataset without additional information.
Thus, we design the Tabular View to provide users with in-depth details about individual skyline points.
For example, users may want to know the difference between a specific skyline point and other skyline points to infer how good it is in the entire skyline (\textbf{T3}).
In addition, the decisive subspaces can help users understand how balanced a skyline point is (\textbf{T2}).
All these details are encoded in this view to provide users with insights into why and how a skyline point is superior, thereby facilitating the decision-making process (\textbf{T2, T3}).
To address the scalability issue, three interactions are also tightly integrated into this view to allow users to eliminate unsuitable skyline points rapidly (\textbf{T7}) and focus on the interesting subset of skyline points.
\textbf{Visual encoding.}
The Tabular View encodes the detailed information about each skyline point in an interactive tabular form (Fig. \ref{fig:tabular_view}). Attributes are encoded as columns in this view \changed{(e.g. \textit{Attr. I}, \textit{Attr. II}, and \textit{Attr. III} in Fig.~\ref{fig:tabular_view})}.
At the head of each column, an area plot shows the value distribution of all the data (Fig. \ref{fig:tabular_view}a), including both the skyline and the dominated points.
The $x$-axis represents the attribute value in an ascending order from left to right, while the $y$-axis represents the data density.
The skyline points are drawn as vertical gray lines on top of the area charts.
The combination of context area plots and foreground gray lines provides users with the distribution of the skyline lines and their places in the entire dataset to help them compare and evaluate the qualities of skyline points in terms of individual attributes.
Skyline points are represented as rows in the table \changed{(e.g.~\textit{ID A} and \textit{ID B} in Fig.~\ref{fig:tabular_view})}.
By default, all rows are displayed in the \textit{summary mode}, which summarizes the overall differences between skyline points.
Specifically, each table cell shows a diverging bar chart (Fig. \ref{fig:tabular_view}b).
\changed{We choose a linear bar chart design here because each cell focuses on the values of a single attribute.}
Without loss of generality, we assume the table cell refers to skyline point $p_i \in \{p_1, p_2, \ldots, p_n\}$ and dimension $d_j\in\{d_1, d_2, \ldots, d_m\}$.
Accordingly, the cell has a total of $n$ bars, each representing a skyline point.
All bars are sorted (ascending) in accordance with their values at dimension $d_j$.
Among these bars, a special purple bar is placed to indicate the position of the skyline point $p_i$ in the sorted bars.
The height of each blue bar $s_{k}$, where $k\neq i$, represents the summarization of its differences from $p_i$ in all the other dimensions
(i.e., $\{d_l\}_{1\leq l\leq m}-\{d_j\}$).
Specifically,
$${\delta}_l(p_i,p_k)= \frac{p_i^l-p_k^l}{\sqrt{\sum\nolimits_{t=1}^{n}{(p_t^l-\overline{p^l})^2}/n}},$$
where $p_i^l$ and $p_k^l$ are the values of $p_i$ and $p_k$ at attribute $d_l$, and $\overline{p^l}$ is the mean value of attribute $d_l$.
Thus, the summary difference is $\Delta(p_i,p_k) = \sum_{l=1, l\neq j}^{m}{\delta}_l(p_i,p_k)$.
$\Delta(p_i,p_k)$ can be either positive or negative; thus, a horizontal dashed line is drawn in the middle of the table cell as a baseline.
The blue bars positioned above the baseline exhibit positive differences, whereas those below the baseline exhibit negative differences.
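For concreteness, the cell values can be computed as in the following NumPy sketch (our own illustration of the formula above; the rows of the matrix are the points used for normalization, and a small constant guards against constant attributes):
\begin{verbatim}
import numpy as np

def summary_difference(P, i, k, j):
    """Delta(p_i, p_k): z-score-normalized differences between rows i and k
    of P, summed over every attribute except column j."""
    P = np.asarray(P, dtype=float)        # rows: points, columns: attributes
    std = P.std(axis=0)                   # population std of each attribute
    delta = (P[i] - P[k]) / (std + 1e-9)  # delta_l(p_i, p_k) for every l
    return delta.sum() - delta[j]         # exclude the cell's own attribute
\end{verbatim}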
The summary mode is designed to help users compare skyline points from two aspects (\textbf{T3}).
First, users may select a table cell of interest to examine, and the position of the purple bar can give users a precise idea of the performance of the skyline point in terms of the attribute.
Then, the blue bars can further provide an overall idea of the performance of the skyline point in the other attributes.
Users can further click a row to expand it and examine its detailed comparison with other skyline points.
In this \textit{expansion mode}, we append a matrix below the diverging bar chart (Fig. \ref{fig:tabular_view}c).
In each small matrix, the columns represent the skyline points and are aligned with the bars in the diverging bar chart above the matrix.
The rows in the matrix represent the attributes and have the same order as the columns in the large table.
Each matrix cell is filled with a color to represent the difference between a specific skyline point (the matrix column $p_k$, where $k\neq i$) and the expanded one ($p_i$) in a specific attribute (the matrix row $d_l$), which is ${\delta}_l(p_k,p_i)$.
The decisive subspaces are also shown in the \textit{expansion mode}, specifically on the left side of the first detailed matrix (Fig. \ref{fig:tabular_view}d).
Each decisive subspace takes a vertical line, and each row represents a dimension.
Thus, if a dimension is involved in the decisive subspace, then a purple mark is placed in the corresponding space in the vertical line.
This visualization allows users to observe the number of decisive subspaces by counting the vertical lines.
The skyline points with numerous decisive subspaces are usually preferred because these skyline points are also strong in terms of different subspaces.
In addition, users can compare horizontally to identify which attributes are more involved in the decisive subspaces.
Thus, users may pay more attention to the skyline points that have attributes that are highly valued and involved in decisive subspaces because these attributes are often the merits of the corresponding skyline points.
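The decisive subspaces shown here can be reproduced by brute force; the following sketch (our own, exponential in the number of attributes and intended only to mirror the definition in Section~\ref{sec:background}) enumerates them for a given skyline point.
\begin{verbatim}
# Brute-force sketch (illustration only): decisive subspaces of a
# skyline point, following the definition in the Background section.
from itertools import chain, combinations

def subspace_dominates(p, q, dims):
    return all(p[d] >= q[d] for d in dims) and any(p[d] > q[d] for d in dims)

def in_subspace_skyline(p, points, dims):
    return not any(subspace_dominates(q, p, dims)
                   for q in points if q is not p)

def nonempty_subsets(items):
    items = list(items)
    return chain.from_iterable(combinations(items, r)
                               for r in range(1, len(items) + 1))

def decisive_subspaces(p, points, m):
    """Subspaces B (tuples of attribute indices) that are decisive for p."""
    decisive = []
    for B in nonempty_subsets(range(m)):
        rest = [d for d in range(m) if d not in B]
        supersets = [tuple(sorted(set(B) | set(extra)))
                     for extra in nonempty_subsets(rest)]
        if all(in_subspace_skyline(p, points, Bp)
               for Bp in [B] + supersets):
            decisive.append(B)
    return decisive
\end{verbatim}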
\begin{figure}[!tb]
\centering
\includegraphics[width=0.95\linewidth]{figs/tabular_view_final}
\vspace{-2mm}
\caption{Visual encodings in the Tabular View: (a) the column header showing a specific attribute's value distribution; (b) the diverging bar chart depicting the point's relative ranking at this attribute and its overall differences with the other skyline points; (c) the \textit{expansion mode} showing the detailed comparisons between this point and other points at all attributes; (d) the bars representing the decisive subspaces of the point; and (e) the linking curve connecting the relative ranking and absolute value of the point at this attribute.}
\label{fig:tabular_view}
\vspace{-6mm}
\end{figure}
\textbf{Interactions.}
The Tabular View also supports the following user interactions to help users highlight attribute information (\textbf{T1}) or filter certain skyline points (\textbf{T7}):
\begin{compactitem}
\item{\textbf{Filtering}.}
\changed{SkyLens allows users to filter the skyline points by two modes: filtering a subspace of interest and filtering a subset of skyline points.
By clicking table headers, users can select certain attributes and highlight the skyline in the subspace of the selected attributes.}
SkyLens also supports users to brush on the area plot in each column header to indicate an acceptable region of attribute values.
If a skyline point does not fall within the brushed ranges, the corresponding table row turns gray, helping users narrow down the set of interesting skyline points.
\item{\textbf{Linking.}}
When the cursor hovers over a row, several red lines appear to connect the purple bars to the corresponding gray lines in the table header (Fig. \ref{fig:tabular_view}e).
The diverging bars in a table cell only show the relative rankings of the skyline points in terms of the corresponding attribute, whereas the red linking lines help users examine the raw values of all skyline points.
\item{\textbf{Searching.}}
\changed{Users with prior knowledge can search a specific point in the dataset using the search box at the top of the Tabular View.
If the point happens to be a skyline point, the corresponding row is highlighted.}
However, if the point is not in the skyline, SkyLens will highlight the skyline points that dominate the point.
\end{compactitem}
\section{Evaluation}
\label{sec:evaluation}
\begin{figure}[!tb]
\centering
\includegraphics[width=1.0\linewidth]{figs/comparison_view_alter}
\vspace{-5mm}
\caption{Two design alternatives for the Comparison View: (a) using small circles to represent the dominated points and outer rings to distinguish skyline points; (b) using pie charts to represent the dominated points and sector colors to distinguish skyline points.}
\label{fig:comparison_view_alter}
\vspace{-4mm}
\end{figure}
\begin{figure*}[ht]
\centering
\includegraphics[width=1.0\textwidth]{figs/city.pdf}
\vspace{-5mm}
\caption{The Tabular View of Victoria: (a) the column header of \textit{Climate}, (b) the column headers of \textit{Environment} and \textit{Traffic}, (c) the decisive subspace, and (d) Wellington and Reykjavik, which have higher values than Victoria in \textit{Environment}. (e) The Projection View that highlights the skyline of the subspace of \textit{Living Cost}, \textit{Traffic}, and \textit{Environment}.}
\label{fig:casestudy2}
\vspace{-.5mm}
\vspace{-4mm}
\end{figure*}
\subsection{Usage Scenario I}
\label{sec:usagescenarioI}
The first usage scenario describes Alan, a journalist who wants to write an article about the most outstanding players of an NBA season.
He chooses not to rank the players because any ranking criteria can easily be criticized by NBA fans as different readers have different preferences.
Thus, he decides to use SkyLens to explore the specific merits of the most outstanding skyline players and to investigate the differences between them (\textbf{G3}).
He then loads the NBA 2010--11 regular season statistics, which include 452 players and 12 numerical attributes, such as \textit{Points Scored} (\textit{PTS}), \textit{Field Goals} (\textit{FG}), and so on \changed{(Fig.~\ref{fig:teaser})}.
Alan first looks at the Projection View and identifies several outliers that have rather small glyph sizes (\textbf{T4}).
After examination, he discovers that the outliers are players who only have high shooting percentages (\textit{FG\%} and \textit{3P\%}) and play only a few games.
Alan is not interested in these players, so he excludes the players who attended fewer than 70 games using the Control Panel (\textbf{T6}).
Then, he explores each skyline player's dominating score to find the player who outperforms the largest number of players in all attributes for this season.
By examining the inner circle colors, he finds Lamar Odom, who dominates 183 players in total.
Alan wants to further explore how other players compare with Lamar Odom, so he double-clicks Lamar's glyph to switch the Projection View into \textit{focus mode}.
From the skyline glyph positions and the outer sector colors (Fig.~\ref{fig:teaser}a), he observes three major clusters of players (\textbf{T4}).
The players in the upper cluster (Fig.~\ref{fig:teaser}a) mostly have higher values in \textit{PTS} and \textit{FG} than Lamar, which indicates they are good scorers.
In this cluster, Alan identifies a skyline glyph with many large blue outer sectors (Fig.~\ref{fig:tabular_view}a), which represents LeBron James.
This means LeBron outperforms Lamar in almost half the attributes.
Next, he uses the Tabular View to examine the detailed information about LeBron.
From the positions of the purple bars in the row of LeBron (Fig.~\ref{fig:teaser}b), he observes that LeBron has high rankings in most of the attributes, which indicates that he is also a versatile player (\textbf{T1}).
In that row, Alan further observes that all the blue bars, which measure the overall differences between other skyline players and LeBron, are positioned beneath the baseline with one exception, Dwight Howard.
This suggests that Dwight Howard, who belongs to another cluster (Fig.~\ref{fig:teaser}a) in the Projection View, has an overall comparable performance with LeBron (\textbf{T3}).
To further compare these two players in detail, Alan opens the expansion mode of LeBron (Fig.~\ref{fig:teaser}b) to locate Dwight in the expanded matrix and observes that Dwight outperforms LeBron in the defense-related attributes, such as \textit{Total Rebounds} (\textit{TRB}) and \textit{Blocks} (\textit{BLK}).
To verify whether these defense-related attributes make Dwight in the skyline, Alan switches to the row of Dwight.
By examining the expanded matrix of Dwight, Alan identifies four defense-related attribute rows that are colored in red.
This indicates that no other player in the skyline has a better performance than Dwight in these defense-related attributes, which verifies his hypothesis.
In addition, he finds that many skyline players outperform him in the attribute \textit{Assists} (\textit{AST}).
He then checks if any of these players are located in the last cluster in the Projection View.
By highlighting the corresponding matrix bars, he identifies Chris Paul (Fig.~\ref{fig:teaser}a), a player who has the best performance in both \textit{Assists} (\textit{AST}) and \textit{Steals} (\textit{STL}).
Since each of the three players (LeBron, Dwight, and Chris) represents an individual cluster in the Projection View, Alan decides to write a paragraph about how they dominate other players.
Thus, he adds these three players into the Comparison View.
From the central domination glyph (Fig.~\ref{fig:teaser}c) that summarizes their differences in dominating scores, Alan finds that LeBron and Dwight dominate almost the same number of players, while Chris only dominates half that number and exclusively dominates very few players (\textbf{T5}).
When examining the other three pairwise domination glyphs, Alan observes that LeBron dominates almost all of the players that are dominated by Chris.
Considering that Chris ranks much higher than LeBron in both \textit{AST} and \textit{STL}, it is strange for Chris to exclusively dominate such a limited number of players.
Alan investigates this phenomenon by hovering the cursor over the corresponding outer sector of the domination glyph.
From the pop-up radar chart (Fig.~\ref{fig:teaser}e), he realizes that Chris also performs slightly better than LeBron in \textit{3P\%}, in addition to \textit{AST} and \textit{STL}.
In addition, the nine players that are exclusively dominated by Chris all have lower values in \textit{AST} and \textit{STL}, but higher values in \textit{3P\%} than LeBron.
To discover the underlying reason, Alan switches to the Tabular View and observes that few players have higher rankings than LeBron in either \textit{AST} or \textit{STL} from the distribution flow (Fig.~\ref{fig:teaser}b).
Thus, Alan understands why Chris does not dominate more players exclusively, although he performs extremely well at both \textit{AST} and \textit{STL}.
\subsection{Usage Scenario II}
\label{sec:usagescenarioII}
In the second usage scenario, we demonstrate how Lorraine, who is planning a one-month holiday, utilizes SkyLens to find a desirable city to visit.
Lorraine chooses to explore the Numbeo quality-of-life dataset~\cite{numbeodataset}, which includes 176 cities worldwide and 8 numerical attributes describing the overall living conditions of those cities.
Since Lorraine has little knowledge in how these attribute values are calculated, she decides to first use SkyLens to obtain outstanding cities and see their dominant attributes (\textbf{G2}).
Lorraine first excludes two traveler-irrelevant attributes by adding two filters in the Control Panel: the \textit{Purchasing Power} and the \textit{Housing Affordability}.
Since she has decided to spend her holiday outside Asia to experience a different culture, Lorraine also adds another filter to the \textit{Continent} attribute (\textbf{T6}) to exclude Asian cities.
Then, she regenerates skyline to obtain 62 candidate cities.
From the skyline cities, Lorraine identifies Victoria, a city in Canada, where she enjoyed the pleasant climate last summer.
She wants to further investigate what attributes make the city outstanding so that she can use it as a benchmark city.
Thus, she locates Victoria in the Tabular View and observes the purple lines that indicate its relative rankings in individual attributes.
Surprisingly, Lorraine finds that Victoria is just average in the attribute \textit{Climate} (Fig. \ref{fig:casestudy2}a).
However, by tracking the red curve that connects Victoria's relative ranking to the absolute value in the column header of \textit{Climate}, she realizes that most skyline cities perform well in \textit{Climate}.
In other words, most skyline cities have a moderate climate (\textbf{T1}).
Thus, the \textit{Climate} attribute is probably not a deciding factor for an ideal vacation destination.
On the other hand, Lorraine observes that Victoria has rather high relative rankings on \textit{Traffic} and \textit{Environment} (Fig.~\ref{fig:casestudy2}b).
Hence, she guesses these two attributes are what makes Victoria excel.
\changed{To verify this hypothesis, Lorraine switches to \textit{expansion mode} for the city and surprisingly discovers that the only decisive subspace of Victoria (\textbf{T2}) is (\textit{Living Cost}, \textit{Environment}) (Fig.~\ref{fig:casestudy2}c).}
Thus, at least one city must be better than Victoria in both \textit{Traffic} and \textit{Environment}.
To reveal these cities, Lorraine further examines the matrix in the \textit{expansion mode}.
She identifies only two cities, Wellington and Reykjavik, that rank higher than Victoria in the \textit{Environment} attribute (Fig.~\ref{fig:casestudy2}d).
Nevertheless, these two cities also rank higher than Victoria in the \textit{Traffic}.
By further checking the matrix, Lorraine observes that Victoria only has higher values than Wellington and Reykjavik in the attribute of \textit{Living Cost}, which is consistent with Victoria's decisive subspace.
From her exploration of Victoria, Lorraine realizes that she wants to stay in a city that is good in \textit{Traffic} and \textit{Environment}, as well as having a reasonable value in \textit{Living Cost}.
In other words, she wants to find a city that is better than Victoria in the attribute of \textit{Living Cost}, while being close to Victoria, not necessarily better, in the attributes of \textit{Traffic} and \textit{Environment}.
After brushing the corresponding column headers, Lorraine sadly finds that no city satisfies all these requirements.
She decides to make a compromise and selects these three attributes (\textit{Living Cost}, \textit{Traffic}, and \textit{Environment}) as a subspace and highlights the cities in this subspace skyline (\textbf{T7}).
She switches to the Projection View (Fig.~\ref{fig:casestudy2}e) to examine the highlighted cities.
She observes that all the highlighted cities excel in at least two of the selected three attributes, while many cities have low values in a third attribute.
Since she does not want a city that has unacceptably low values in any attributes, only two cities, Gdansk and Cluj-napoca, are shortlisted.
She then switches to the Comparison View to compare these two points in detail, and finds that Gdansk has higher values in the attributes of \textit{Climate}, \textit{Traffic}, and \textit{Environment} than Cluj-napoca, which indeed satisfies her preferences.
Thus, she selects Gdansk as her travel destination.
\subsection{Qualitative User Study}
\label{sec:userstudy}
A formal comparative study with an existing skyline visualization system is not applicable because previous skyline visualization work mainly focuses on the overview of skyline, which only covers a part of the tasks we list in Sec.~\ref{sec:analyticaltasks}.
Questions that involve interpreting and comparing skyline points require a complex examination from various aspects and cannot be simplified as yes/no questions.
Therefore, we choose to perform a qualitative study rather than a controlled quantitative experiment.
\changed{In addition to the qualitative study, we also conducted an informal comparison between our system and LineUp~\cite{gratzl2013lineup}, a ranking-based visual analytic tool to facilitate the decision making process.}
\textbf{Study design}.
We recruited 12 participants (3 females, aged 21 to 28 years (mean = 26.5, SD = 2.1)) with normal or corrected-to-normal vision.
All the participants were students in the computer science department of our local university.
Among them, 5 students had experience in information visualization and 3 students knew skyline queries.
We designed 10 tasks that covered all the important aspects in skyline analysis (Sec.~\ref{sec:analyticaltasks}) for the participants to perform.
The participants also needed to utilize all the views in SkyLens together to perform all the tasks successfully.
We also conducted several pilot studies to ensure that the study was appropriately designed.
The study began with a brief introduction of our system using the NBA dataset to help the participants get familiar with our system.
We also encouraged the users to freely explore our system after the introduction.
During this stage, we asked the participants to think aloud and ask questions if they encountered any problems.
To avoid memorization of data, we used the Numbeo quality-of-life dataset for the formal study.
For each task, we recorded the completion time and took notes of the feedback or problems raised by the participants for later analysis.
After the participants finished all the 10 tasks, we asked them to finish a questionnaire containing 19 questions about the usefulness and aesthetics of SkyLens.
Those questions were designed to evaluate our system on a 7-point Likert scale from strongly disagree (1) to strongly agree (7).
In addition, we conducted an informal post-session interview with each participant to learn their opinion about our system in general.
During the interview, we also introduced LineUp to them and discussed with them about the differences between our system and LineUp in multi-criteria decision making scenarios.
\changed{We asked the participants to review all the tasks and suggest which of them can be performed using LineUp.}
On average, the entire study took approximately 40 minutes to finish.
The detailed task description, questionnaires, and study results can be found in our supplement materials.
\textbf{Results and discussion}.
All participants managed to complete the tasks in a short period of time (33.6s on average for each task).
However, task 8 took a relatively long time as it required the participants to manually search for and add three cities into the Comparison View.
For the questionnaire results, most participants thought that it is easy to perform skyline analysis tasks using SkyLens (6.5).
They also reported that the system is visually pleasing in general (6.6), the interactions are easy in general (6.3), and the tool would be useful for many multi-criteria decision making scenarios (6.6).
In the post-session interviews, most participants appreciated the effectiveness and power of SkyLens, as it facilitates skyline understanding and illuminates the trade-offs between attributes.
Specifically, they highlighted the usefulness of the Comparison View and the Tabular View.
Some participants commented that \textit{``The pie chart plus the outer sectors is indeed a smart design to help identify the domination differences between skyline points quickly.''}
Another participant added that \textit{``The Comparison View provides the flexibility to choose different combinations of players for comparison.''}
Some participants also appreciated the insights provided by the Tabular View.
Some participants reported that \textit{``The \textit{expansion mode} in the Tabular View is of great help to identify what combinations of attributes make a point outstanding, compared to the raw attribute values or rankings.''}
\changed{Apart from the positive feedback, the participants also suggested several improvements to our system.
For the Projection View, three participants, who took a relatively long time to finish the Projection View's tasks, suggested enlarging the default outer sector radius for the domination glyph to better compare the attribute values of different points.
Some participants also wanted to further examine a group of similar glyphs that are located close together.
We adjusted the outer sector radius and enabled users to pan and zoom the Projection View after the interview.
For the Tabular View, a few participants who had no prior knowledge of skylines reported that they needed some time to fully understand the visual encoding and the meaning of decisive subspaces.
This implies that the learning curve of SkyLens may be steep for people who have no experience in skyline analysis.
For the Comparison View, one participant reported that the sizes of some pie chart sectors were too small to select, so we enlarged the minimum sector size accordingly.
}
As a comparison with LineUp, the participants reported that LineUp is a powerful tool and really easy to understand.
However, they all felt that LineUp can only support a small part of the tasks we focus on (2.4), and that SkyLens would take less time when performing these tasks (6.6).
\changed{We believe this is because LineUp and SkyLens follow different approaches to decision making: ranking vs.\ skyline analysis.}
\changed{For example, the participants identified that it was difficult to use LineUp to exclude the points that are dominated (i.e., worse in every aspect) by at least one point.
When comparing a few points in detail, the participants reported that they had to repeatedly perform multiple alignment interactions to determine the strong and weak attributes of different points using LineUp.}
In addition, the participants found that the weight adjustment process in LineUp is dubious and they usually did not know whether they had achieved the right weights to reflect their preferences.
When exploring the Numbeo dataset, one participant asked \textit{``Why Canberra always stays on the top? How can I change the weights to make other cities on the top of the list?''}
In summary, the participants reported that they would choose LineUp when they already have a good understanding about the dataset and a few trade-offs to consider.
Nevertheless, when they do not have enough prior knowledge of the dataset and want to carefully examine the data points from different perspectives, they preferred using SkyLens.
\changed{Thus, LineUp is desired when users know their goals exactly, whereas SkyLens is preferred when users' requirements are vague and call for detailed data comparison.
The two systems are complementary and are appropriate for different tasks in decision making scenarios.}
\section{SkyLens Design}
\label{sec:visualdesign}
\changed{Motivated by the above analytical tasks, we design SkyLens to allow users to explore and compare skyline points at different scales and from different perspectives.
Our prototype\footnote{\textit{{http://vis.cse.ust.hk/skylens}}} is implemented \changed{using} Flask~\cite{flask}, VueJS~\cite{vuejs}, and D3~\cite{d3}.}
\changed{The system consists of a data analysis module and a visual analysis module.
In the data analysis module, we unify the raw data to ensure that higher values are better (Sec.~\ref{sec:background}) and then compute the skyline.}
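As an illustration of the skyline computation carried out in the data analysis module, the following Python sketch applies the standard pairwise dominance test to points whose attribute values have already been unified so that higher is better; the function names and the quadratic pairwise scan are our own illustrative choices rather than the implementation used in SkyLens.
\begin{verbatim}
def dominates(p, q):
    # p dominates q if p is at least as good in every attribute
    # and strictly better in at least one (higher values are better).
    return all(a >= b for a, b in zip(p, q)) and \
           any(a > b for a, b in zip(p, q))

def skyline(points):
    # Keep exactly the points that no other point dominates.
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Toy example with three attributes (e.g. Living Cost, Traffic, Environment).
cities = [(0.8, 0.6, 0.9), (0.7, 0.7, 0.7), (0.6, 0.5, 0.8)]
print(skyline(cities))  # the third point is dominated by the first
\end{verbatim}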
The visual analysis module incorporates three major views: 1) the \textit{Projection View} (Fig.~\ref{fig:teaser}a) that provides an overview of the entire skyline to identify clusters and outliers; 2) the \textit{Tabular View} (Fig.~\ref{fig:teaser}b) that summarizes the attribute-wise rankings and differences between skyline points, thereby allowing users to understand what combination of factors makes a point part of the skyline; and 3) the \textit{Comparison View} (Fig.~\ref{fig:teaser}c) that aims to compare a small set of skyline points in detail from both the attribute and domination perspectives.
We also provide a \textit{Control Panel} (Fig.~\ref{fig:teaser}d) to help users load data and refine skyline queries such as removing some specific attributes or excluding certain points.
A set of interactions is also provided to help users explore and refine skyline freely by filtering, linking, and brushing.
\input{texfiles/projectionview}
\input{texfiles/tabularview}
\input{texfiles/comparisonview}
\subsection{Interactions}
\label{sec:interactions}
\changed{We developed a set of interactions to help users switch between the coordinated views.}
\changed{First, users can change the order of attributes in all the views by dragging the attribute rows in the Attribute Table (Fig.~\ref{fig:teaser}d).
In addition, when clicking a skyline glyph in the Projection View, not only will the skyline point be appended to the Comparison View, but the Tabular View will also automatically scroll to the row that represents this skyline point.}
Similarly, when hovering over a row in the Tabular View or hovering over a radar chart in the Comparison View, the corresponding skyline glyph in the Projection View will be enlarged and moved to the foreground.
Besides, when brushing certain attribute ranges or calculating a subspace skyline, the results will also be highlighted in the Projection View.
\section{Introduction} \label{Sec1}
The FIFA World Cup, the most prestigious soccer tournament around the world, is followed by millions of fans. According to \citet{Palacios-Huerta2014}, 5\% of all the people who \emph{ever} lived on the Earth watched the final of the 2010 FIFA World Cup played by the Netherlands and Spain. Qualification to the FIFA World Cup creates widespread media coverage in the competing countries \citep{FrawleyVandenHoven2015} and brings significant economic benefits \citep{StoneRod2016}: each participating teams has received at least 9.5 million USD in the 2018 FIFA World Cup \citep{FIFA2017b}.
Success in soccer can even help build nations \citep{Depetris-ChauvinDuranteCampante2020}.
Some research has addressed the World Cup qualifiers.
\citet{Flegl2014} applies Data Envelopment Analysis (DEA) to evaluate the performance of national soccer teams during the 2014 FIFA World Cup qualification.
\citet{StoneRod2016} aim to assess the degree to which the allocation of qualification berths among the six FIFA confederations reflects the quality of the teams from their specific region, mainly by descriptive statistics.
\citet{DuranGuajardoSaure2017} recommend integer programming to construct alternative schedules for the South American qualifiers that overcome the main drawbacks of the previous approach. Their proposal has been unanimously approved by the South American associations to use in the 2018 FIFA World Cup qualification.
\citet{PollardArmatas2017} investigate home advantage in the FIFA World Cup qualification games.
\citet{Csato2020f} identifies an incentive incompatibility problem in the European section of the recent FIFA World Cup qualifications.
However, the FIFA World Cup qualification process has never been analysed before via Monte-Carlo simulations in the scientific literature. A possible reason is the complexity of the qualifying system as will be seen in Section~\ref{Sec2}.
Our paper aims to fill this research gap.
In particular, the probability of qualification to the 2018 FIFA World Cup is quantified for the 102 nations of the AFC (Asian Football Confederation), CONCACAF (Confederation of North, Central American and Caribbean Association Football), CONMEBOL (South American Football Confederation), and OFC (Oceania Football Confederation) to answer three questions:
(a) Is the qualification process designed \emph{fairly} in the sense that it provides a higher chance for a better team both within and between the confederations?
(b) Is it possible to improve fairness without reallocating the qualifying berths?
(c) How did the move of Australia from the OFC to the AFC in 2006 affect the teams?
The main contributions can be summarised as follows:
\begin{enumerate}
\item
First in the academic literature, the paper calculates the qualifying probabilities for the FIFA World Cup based on the Elo ratings of the national teams.
\item
A method is proposed to measure the degree of unfairness. It shows that essentially all the four qualifiers are constructed in a fair way. On the contrary, substantial differences are found between the confederations.
\item
Using a well-devised fixed matchup in the inter-continental play-offs---a policy applied in the 2010 FIFA World Cup qualification---instead of the current random draw can reduce unfairness by about 10\%.
\item
Australia has increased its probability of playing in the 2018 FIFA World Cup by 75\% as a result of leaving the OFC and joining the AFC in 2006. The move has been detrimental to all AFC nations, while it has favoured any other countries, especially New Zealand.
\end{enumerate}
Our approach can be applied in any sports where teams contest in parallel tournaments for the same prize, thus the natural issue of equal treatment of equals emerges. While these designs have recently been analysed with respect to incentive incompatibility \citep{Vong2017, DagaevSonin2018, Csato2021a}, the numerical investigation of fairness is currently limited to the qualification for the UEFA Euro 2020 \citep{Csato2020b}.
Regarding the structure of the article, Section~\ref{Sec2} gives a concise overview of connected works. The designs of the four FIFA World Cup qualifiers and the inter-confederation play-offs are described in Section~\ref{Sec3}. The simulation methodology is detailed in Section~\ref{Sec4}. Section~\ref{Sec5} presents the suggested measure of unfairness and numerical results, while Section~\ref{Sec6} concludes.
\section{Related literature} \label{Sec2}
Our paper contributes to at least three fields: fairness in sports, analysis of FIFA competitions and rules, and simulation of tournament designs.
A usual interpretation of fairness is that
(1) stronger players should be preferred to weaker players; and
(2) equal players should be treated equally.
Otherwise, an incentive might exist to manipulate the tournament.
\citet{GrohMoldovanuSelaSunde2012} check which seedings in elimination tournaments satisfy the two properties.
\citet{ArlegiDimitrov2020} apply these requirements to different kinds of knockout contests and characterise the appropriate structures.
\citet{Csato2020b} shows the unfairness of the qualification for the 2020 UEFA European Championship with important lessons for sports management \citep{HaugenKrumer2021}.
Both theoretical \citep{KrumerMegidishSela2017a, KrumerMegidishSela2020a, Sahm2019} and empirical \citep{KrumerLechner2017} investigations reveal that the ex-ante winning probabilities in round-robin tournaments with three and four symmetric players may depend on the schedule, which can lead to severe problems in the 2026 FIFA World Cup \citep{Guyon2020a}.
Soccer penalty shootouts seem to be advantageous for the first shooter \citep{ApesteguiaPalacios-Huerta2010, Palacios-Huerta2014, VandebroekMcCannVroom2018} but this bias can be mitigated by a carefully devised mechanism \citep{AnbarciSunUnver2019, BramsIsmail2018, CsatoPetroczy2021b, Palacios-Huerta2012}.
The knockout bracket of the 2016 UEFA European Championship has created imbalances among the six round-robin groups of four teams each \citep{Guyon2018a}.
\citet{VaziriDabadghaoYihMorin2018} state some fairness properties of sports ranking methods.
In contrast to the World Cup qualifiers, several attempts have been made to forecast the FIFA World Cup final tournament games.
\citet{DyteClarke2000} treat the goals scored by the teams as independent Poisson variables to simulate the 1998 FIFA World Cup.
\citet{Deutsch2011} aims to judge the impact of the draw in the 2010 World Cup, as well as to look back and identify surprises, disappointments, and upsets.
\citet{GrollSchaubergerTutz2015} fit and examine two models to forecast the 2014 FIFA World Cup.
\citet{OLeary2017} finds that the Yahoo crowd was statistically significantly better at predicting the outcomes of matches in the 2014 World Cup compared to the experts and was similar in performance to established betting odds.
Further aspects of the FIFA World Cup have also been researched extensively.
\citet{Jones1990} and \citet{RathgeberRathgeber2007} discuss the consequences of the unevenly distributed draw procedures for the 1990 and 2006 FIFA World Cups, respectively.
\citet{ScarfYusof2011} reveal the effect of seeding policy and other design changes on the progression of competitors in the World Cup final tournament.
\citet{Guyon2015a} collects some flaws and criticisms of the World Cup seeding system. \citet{LalienaLopez2019} and \citet{CeaDuranGuajardoSureSiebertZamorano2020} provide a detailed analysis of group assignment in the FIFA World Cup.
Finally, since historical data usually do not make it possible to calculate the majority of tournament metrics such as qualifying probabilities, it is necessary to use computer simulations for this purpose, especially for evaluating new designs \citep{ScarfYusofBilbao2009}.
Any simulation model should be based on a prediction model for individual ties.
According to \citet{LasekSzlavikBhulai2013}, the best performing algorithm of ranking systems in soccer with respect to accuracy is a version of the famous Elo rating.
\citet{BakerMcHale2018} provide time-varying ratings for international soccer teams.
\citet{VanEetveldeLey2019} overview the most common ranking methods in soccer.
\citet{LeyVandeWieleVanEeetvelde2019} build a ranking reflecting the teams' current strengths and illustrate its usefulness by examples where the existing rankings fail to provide enough information or lead to peculiar results.
\citet{CoronaForrestTenaWiper2019} propose a Bayesian approach to take into
account the uncertainty of parameter estimates in the underlying match-level
forecasting model.
\section{The 2018 FIFA World Cup qualification} \label{Sec3}
The \href{https://en.wikipedia.org/wiki/FIFA_World_Cup_qualification}{FIFA World Cup qualification} is a series of tournaments to determine the participants of the \href{https://en.wikipedia.org/wiki/FIFA_World_Cup}{FIFA World Cup}. Since 1998, the final competition contains 32 teams such that the host nation(s) receive(s) a guaranteed slot.
The number of qualifying berths for the continents is fixed from 2006 to 2022 as follows:
\begin{itemize}
\item
AFC (Asian Football Confederation): 4.5;
\item
CAF (Confederation of African Football): 5;
\item
CONCACAF (Confederation of North, Central American and Caribbean Association Football): 3.5;
\item
CONMEBOL (South American Football Confederation): 4.5;
\item
OFC (Oceania Football Confederation): 0.5;
\item
UEFA (Union of European Football Associations): 13.
\end{itemize}
The six confederations organise their own contests.
The 0.5 slots represent a place in the inter-continental play-offs, which is the only interaction between the qualifying tournaments of different geographical zones.
The qualifications of all confederations are played in rounds. Each round is designed either in a \emph{knockout} format (where two teams play two-legged home-away matches) or in a \emph{round-robin} format (where more than two teams play in a single or home-away group against every other team of the group). The rounds are often \emph{seeded}, that is, the participating countries are divided into the same number of pots as the number of teams per group (meaning two pots in the knockout format) and one team from each pot goes to a given group. The \emph{traditional seeding} is based on an exogenously given ranking---usually the FIFA World Ranking at a specific date---such that, if a pot contains $k$ teams, the best $k$ teams are in the first pot, the next $k$ are in the second pot, and so on.
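For concreteness, the traditional seeding rule described above can be expressed as a short Python helper; the list-of-pots representation is our own illustrative choice.
\begin{verbatim}
def traditional_seeding(ranked_teams, pot_size):
    # ranked_teams: teams sorted from strongest to weakest.
    # The best pot_size teams form pot 1, the next pot_size form pot 2, etc.
    return [ranked_teams[i:i + pot_size]
            for i in range(0, len(ranked_teams), pot_size)]

# e.g. traditional_seeding(["A", "B", "C", "D", "E", "F"], 2)
#      -> [["A", "B"], ["C", "D"], ["E", "F"]]
\end{verbatim}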
Our paper focuses on four qualifications, the AFC, the CONCACAF, the CONMEBOL, and the OFC because
(1) contrary to the CAF and UEFA competitions, they are connected to each other;
(2) the largest and most successful nation of the OFC, Australia, switched to the AFC in 2006.
The \href{https://en.wikipedia.org/wiki/2018_FIFA_World_Cup_qualification_(AFC)}{2018 FIFA World Cup qualification (AFC)} contained $46$ nations and four rounds.
The starting access list was determined by the FIFA World Ranking of January 2015.
\begin{itemize}
\item
\href{https://en.wikipedia.org/wiki/2018_FIFA_World_Cup_qualification_\%E2\%80\%93_AFC_First_Round}{\textbf{First round}} \\
Format: knockout \\
Competitors: the $12$ lowest-ranked teams ($35$--$46$) \\
Seeding: traditional; based on the FIFA World Ranking of January 2015
\item
\href{https://en.wikipedia.org/wiki/2018_FIFA_World_Cup_qualification_\%E2\%80\%93_AFC_Second_Round}{\textbf{Second round}} \\
Format: home-away round-robin, 8 groups of five teams each \\
Competitors: the $34$ highest-ranked teams ($1$--$34$) + the six winners from the first round \\
Seeding: traditional; based on the FIFA World Ranking of April 2015\footnote{~Since the seeding order differed from the ranking in the AFC entrant list, three winners in the first round (India, Timor-Leste, Bhutan) were not seeded in the weakest pot 5.}
\item
\href{https://en.wikipedia.org/wiki/2018_FIFA_World_Cup_qualification_\%E2\%80\%93_AFC_Third_Round}{\textbf{Third round}} \\
Format: home-away round-robin, 2 groups of six teams each \\
Competitors: the eight group winners and the four best runners-up in the second round\footnote{~Group F in the second round consisted of only four teams because Indonesia was disqualified by the FIFA. Therefore, the matches played against the fifth-placed team were disregarded in the comparison of the runners-up.} \\
Seeding: traditional; based on the FIFA World Ranking of April 2016 \\
The two group winners and the two runners-up qualified to the 2018 FIFA World Cup.
\item
\href{https://en.wikipedia.org/wiki/2018_FIFA_World_Cup_qualification_\%E2\%80\%93_AFC_Fourth_Round}{\textbf{Fourth round}} \\
Format: knockout \\
Competitors: the third-placed teams from the groups in the third round \\
Seeding: redundant \\
The winner advanced to the \href{https://en.wikipedia.org/wiki/2018_FIFA_World_Cup_qualification_(inter-confederation_play-offs)}{inter-confederation play-offs}.
\end{itemize}
The \href{https://en.wikipedia.org/wiki/2018_FIFA_World_Cup_qualification_(CONCACAF)}{2018 FIFA World Cup qualification (CONCACAF)} contained $35$ nations and five rounds.
The access list was determined by the FIFA World Ranking of August 2014.
\begin{itemize}
\item
\href{https://en.wikipedia.org/wiki/2018_FIFA_World_Cup_qualification_\%E2\%80\%93_CONCACAF_First_Round}{\textbf{First round}} \\
Format: knockout \\
Competitors: the $14$ lowest-ranked teams ($22$--$35$) \\
Seeding: traditional; based on the FIFA World Ranking of August 2014
\item
\href{https://en.wikipedia.org/wiki/2018_FIFA_World_Cup_qualification_\%E2\%80\%93_CONCACAF_Second_Round}{\textbf{Second round}} \\
Format: knockout \\
Competitors: the teams ranked $9$--$21$ in the access list + the seven winners from the first round \\
Seeding: the seven teams of pot 5 (ranked $9$--$15$) were drawn against the teams of pot 6 (the winners from the first round) and the three teams of pot 3 (ranked $16$--$18$) were drawn against the three teams of pot 4 (ranked $19$--$21$); based on the FIFA World Ranking of August 2014
\item
\href{https://en.wikipedia.org/wiki/2018_FIFA_World_Cup_qualification_\%E2\%80\%93_CONCACAF_Third_Round}{\textbf{Third round}} \\
Format: knockout \\
Competitors: the teams ranked $7$--$8$ in the access list + the $10$ winners from the second round \\
Seeding: traditional; based on the FIFA World Ranking of August 2014
\item
\href{https://en.wikipedia.org/wiki/2018_FIFA_World_Cup_qualification_\%E2\%80\%93_CONCACAF_Fourth_Round}{\textbf{Fourth round}} \\
Format: home-away round-robin, 3 groups of four teams each \\
Competitors: the teams ranked $1$--$6$ in the access list + the six winners from the third round \\
Seeding: pot 1 (teams ranked $1$--$3$), pot 2 (teams ranked $4$--$6$), pot 3 (the winners from the third round) such that each group contained a team from pot 1, a team from pot 2, and two teams from pot 3; based on the FIFA World Ranking of August 2014
\item
\href{https://en.wikipedia.org/wiki/2018_FIFA_World_Cup_qualification_\%E2\%80\%93_CONCACAF_Fifth_Round}{\textbf{Fifth round}} \\
Format: home-away round-robin, one group of six teams \\
Competitors: the group winners and the runners-up in the fourth round \\
Seeding: redundant \\
The top three teams qualified to the 2018 FIFA World Cup and the fourth-placed team advanced to the \href{https://en.wikipedia.org/wiki/2018_FIFA_World_Cup_qualification_(inter-confederation_play-offs)}{inter-confederation play-offs}.
\end{itemize}
The \href{https://en.wikipedia.org/wiki/2018_FIFA_World_Cup_qualification_(CONMEBOL)}{2018 FIFA World Cup qualification (CONMEBOL)} contained $10$ nations, which contested in a home-away round-robin tournament \citep{DuranGuajardoSaure2017}. The top four teams qualified to the 2018 FIFA World Cup, and the fifth-placed team advanced to the \href{https://en.wikipedia.org/wiki/2018_FIFA_World_Cup_qualification_(inter-confederation_play-offs)}{inter-confederation play-offs}.
The \href{https://en.wikipedia.org/wiki/2018_FIFA_World_Cup_qualification_(OFC)}{2018 FIFA World Cup qualification (OFC)} contained $11$ nations and four rounds.
\begin{itemize}
\item
\href{https://en.wikipedia.org/wiki/2018_FIFA_World_Cup_qualification_\%E2\%80\%93_OFC_First_Round}{\textbf{First round}} \\
Format: single round-robin, one group organised in a country (Tonga was chosen later) \\
Competitors: the four lowest-ranked teams ($8$--$11$), based on FIFA World Ranking and sporting reasons \\
Seeding: redundant
\item
\href{https://en.wikipedia.org/wiki/2018_FIFA_World_Cup_qualification_\%E2\%80\%93_OFC_Second_Round}{\textbf{Second round}} \\
Format: single round-robin, 2 groups of four teams each, all matches played in one country \\
Competitors: the seven strongest teams ($1$--$7$) + the group winner in the first round \\
Seeding: traditional; based on the FIFA World Ranking of July 2015
\item
\href{https://en.wikipedia.org/wiki/2018_FIFA_World_Cup_qualification_\%E2\%80\%93_OFC_Third_Round}{\textbf{Third round}} \\
Format: home-away round-robin, 2 groups of three teams each \\
Competitors: the top three teams from each group in the second round \\
Seeding: pot 1 (\href{https://en.wikipedia.org/wiki/2016_OFC_Nations_Cup}{2016 OFC Nations Cup} finalists), pot 2 (\href{https://en.wikipedia.org/wiki/2016_OFC_Nations_Cup}{2016 OFC Nations Cup} semifinalists), pot 3 (third-placed teams in the second round) such that each group contained one team from pots $1$--$3$ each\footnote{~The group stage of the \href{https://en.wikipedia.org/wiki/2016_OFC_Nations_Cup}{2016 OFC Nations Cup} served as the second round of the 2018 FIFA World Cup qualification (OFC). Any group winner was matched with the runner-up of the other group in the semifinals of the 2016 OFC Nations Cup.}
\item
\href{https://en.wikipedia.org/wiki/2018_FIFA_World_Cup_qualification_\%E2\%80\%93_OFC_Third_Round#Final}{\textbf{Fourth round}} \\
Format: knockout \\
Competitors: the group winners in the third round \\
Seeding: redundant \\
The winner advanced to the \href{https://en.wikipedia.org/wiki/2018_FIFA_World_Cup_qualification_(inter-confederation_play-offs)}{inter-confederation play-offs}.
\end{itemize}
Consequently, the \href{https://en.wikipedia.org/wiki/2018_FIFA_World_Cup_qualification_(inter-confederation_play-offs)}{inter-confederation play-offs} were contested by four teams from the four confederations (AFC, CONCACAF, CONMEBOL, OFC), and were played in a knockout format. The four nations were drawn randomly into two pairs without seeding. The two winners qualified to the 2018 FIFA World Cup.
The inter-confederation play-offs of the \href{https://en.wikipedia.org/wiki/2006_FIFA_World_Cup_qualification_(inter-confederation_play-offs)}{2006 FIFA World Cup} and the \href{https://en.wikipedia.org/wiki/2014_FIFA_World_Cup_qualification_(inter-confederation_play-offs)}{2014 FIFA World Cup qualification} were also drawn randomly. This policy will be followed in the \href{https://en.wikipedia.org/wiki/2022_FIFA_World_Cup_qualification_(inter-confederation_play-offs)}{2022 FIFA World Cup}, too.
However, FIFA fixed the ties in the \href{https://en.wikipedia.org/wiki/2010_FIFA_World_Cup_qualification_(inter-confederation_play-offs)}{inter-continental play-offs of the 2010 FIFA World Cup qualification} as AFC vs.\ OFC and CONCACAF vs.\ CONMEBOL to pair teams being in closer time zones.\footnote{~Similarly, FIFA matched a randomly drawn UEFA runner-up with the AFC team, and two nations from CONMEBOL and OFC in the two \href{https://en.wikipedia.org/wiki/2002_FIFA_World_Cup_qualification_(inter-confederation_play-offs)}{inter-continental play-offs of the 2002 FIFA World Cup qualification}.}
\section{Methodology and implementation} \label{Sec4}
In order to quantify a particular tournament metric of the FIFA World Cup qualification, it is necessary to follow a simulation technique because historical data are limited: the national teams do not play many matches and the outcome of a qualification is only a single realisation of several random variables. Such a model should be based on predicting the result of individual games. For this purpose, the strengths of the teams are measured by the \href{https://en.wikipedia.org/wiki/World_Football_Elo_Ratings}{World Football Elo Ratings}, available at the website \href{http://eloratings.net/}{eloratings.net}.
Elo-inspired methods are usually good in forecasting \citep{LasekSzlavikBhulai2013}, and have been widely used in academic research \citep{HvattumArntzen2010, LasekSzlavikGagolewskiBhulai2016, CeaDuranGuajardoSureSiebertZamorano2020}.
Elo ratings depend on the results of previous matches but the same result is worth more when the opponent is stronger. Furthermore, playing new games decreases the weight of earlier matches. Since there is no official Elo rating for national teams, this approach can be implemented in various ways. For instance, while the official \href{https://en.wikipedia.org/wiki/FIFA_World_Rankings}{FIFA World Ranking} adopted the Elo method of calculation after the 2018 FIFA World Cup \citep{FIFA2018a, FIFA2018c}, it does not contain any adjustment for home or away games. However, home advantage has been presented to be a crucial factor in international soccer, even though its influence appears to be narrowing \citep{BakerMcHale2018}.
The World Football Elo Ratings takes into account some soccer-specific parameters such as the margin of victory, home advantage, and the tournament where the match was played.
In the 2018 FIFA World Cup qualification, three types of matches were played: group matches in a home-away format, single group matches in a randomly chosen country that is assumed to be a neutral field (only in the first and the second rounds in the OFC zone), and home-away knockout matches.
For group matches, the win expectancy can be directly obtained from the formula of Elo rating according to the system of World Football Elo Ratings (see \href{http://eloratings.net/about}{http://eloratings.net/about}):
\begin{equation} \label{eq1}
W_{RR}^e = \frac{1}{1 + 10^{-d/400}},
\end{equation}
where $d$ equals the difference in the Elo ratings of the two teams, and the home advantage is fixed at $100$.
On the other hand, in knockout clashes, the teams focus primarily on advancing to the next round rather than winning one match. Therefore, we have followed the solution of the ClubElo rating (see \href{http://clubelo.com/System}{http://clubelo.com/System}), namely, such two-legged matches are considered as one long match with a corresponding increase in the difference between the strengths of the teams:
\begin{equation} \label{eq2}
W_{KO}^e = \frac{1}{1 + 10^{- \sqrt{2} d/400}}.
\end{equation}
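For reference, formulas \eqref{eq1} and \eqref{eq2} translate into the following Python helpers. In this sketch the home-advantage bonus of $100$ Elo points is added to the home team's rating before taking the difference, which is the usual Elo convention and our reading of the rating system; no home term appears in formula \eqref{eq2}, so none is added there.
\begin{verbatim}
def win_expectancy_group(elo_home, elo_away, home_advantage=100):
    # Formula (1): win expectancy of the home team in a group match.
    d = (elo_home + home_advantage) - elo_away
    return 1.0 / (1.0 + 10.0 ** (-d / 400.0))

def win_expectancy_knockout(elo_a, elo_b):
    # Formula (2): a two-legged knockout tie treated as one long match,
    # so the rating difference is scaled by sqrt(2).
    d = elo_a - elo_b
    return 1.0 / (1.0 + 10.0 ** (-(2.0 ** 0.5) * d / 400.0))
\end{verbatim}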
The Elo ratings are dynamic but we have fixed them for the sake of simplicity. In each of the four confederations, the ratings of all teams on the day before the first match of the relevant qualification tournament and on the last day of the inter-confederation play-offs (15 November 2017) have been averaged. Four tables in the Appendix show the corresponding measures of strength:
Table~\ref{Table_A1} for the $35$ CONCACAF teams;
Table~\ref{Table_A2} for the $46$ AFC teams;
Table~\ref{Table_A3} for the $10$ CONMEBOL teams; and
Table~\ref{Table_A4} for the $11$ OFC teams.
On the basis of formulas \eqref{eq1} and \eqref{eq2}, each individual game can be simulated repeatedly. In particular, the win probability $w_i$ of team $i$ is determined for a match played by teams $i$ and $j$. A random number $r$ is drawn uniformly between $0$ and $1$, team $i$ wins if $r < w_i$, and team $j$ wins otherwise. Thus draws are not allowed, and group rankings are calculated by simply counting the number of wins. Ties in the group rankings are broken randomly.
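A minimal sketch of this simulation step is given below; it reuses the win expectancy of formula \eqref{eq1}, resolves every game with a single uniform draw (so drawn matches are impossible, as in our model), and ranks a double round-robin group by counting wins. The interface and the example team names are hypothetical.
\begin{verbatim}
import random

def win_prob(elo_home, elo_away, home_advantage=100):
    d = (elo_home + home_advantage) - elo_away   # formula (1)
    return 1.0 / (1.0 + 10.0 ** (-d / 400.0))

def simulate_double_round_robin(elos, rng=random):
    # elos: dict mapping team name to its (fixed) Elo rating.
    # Every pair meets home and away; group rankings count wins only,
    # and remaining ties would be broken randomly.
    wins = {team: 0 for team in elos}
    for home in elos:
        for away in elos:
            if home == away:
                continue
            if rng.random() < win_prob(elos[home], elos[away]):
                wins[home] += 1
            else:
                wins[away] += 1
    return wins

# e.g. simulate_double_round_robin({"Peru": 1844.5, "TeamX": 1700.0})
# ("TeamX" and its rating are made up for illustration.)
\end{verbatim}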
Our computer code closely follows the rules of the qualification process described in Section~\ref{Sec3}. Pot assignment and seeding are based on the Elo ratings of the teams in each case since the strengths of the teams are given by this measure instead of the FIFA ranking, which was a rather bad predictor before the 2018 reform \citep{LasekSzlavikBhulai2013, CeaDuranGuajardoSureSiebertZamorano2020}.
Hence, although the official AFC qualification updated the ranking of the teams before the seeding in each round (see Section~\ref{Sec3})---thus the results of matches already played during the qualification may have affected the subsequent rounds---that complication is disregarded in our simulations.
Using the Elo rating for seeding is also necessary to guarantee the consistency of the simulations since the proposed fairness measure will be based on the Elo ratings. Therefore, every instance of unfairness will be intrinsic to the tournament design.
Finally, the move of Australia from the OFC to the AFC will also be evaluated. Accordingly, an alternative design of the FIFA World Cup qualification should be chosen with Australia being in the OFC instead of the AFC. Since there are then only $45$ countries in Asia, a straightforward format would be to organise the first knockout round with the 10 lowest-ranked teams ($36$--$45$), while the second round is contested by the $35$ highest-ranked teams ($1$--$35$) plus the five winners from the first round.
Together with Australia, the OFC qualification contains $12$ teams. Fortunately, the design of the \href{https://en.wikipedia.org/wiki/2006_FIFA_World_Cup_qualification_(OFC)}{2006 FIFA World Cup qualification (OFC)} can be adopted without any changes:
\begin{itemize}
\item
\href{https://en.wikipedia.org/wiki/2006_FIFA_World_Cup_qualification_\%E2\%80\%93_OFC_First_Round}{\textbf{First round}} \\
Format: single round-robin, 2 groups of five teams each, held in one country \\
Competitors: the $10$ lowest-ranked teams, that is, all nations except for Australia and New Zealand \\
Seeding: traditional\footnote{~This is only a (reasonable) conjecture as we have not found the official regulation.}
\item
\href{https://en.wikipedia.org/wiki/2004_OFC_Nations_Cup}{\textbf{Second round}} \\
Format: single round-robin, one group of six teams, held in one country \\
Competitors: the two highest-ranked teams (Australia, New Zealand) + the group winners and the runners-up in the first round \\
Seeding: redundant
\item
\href{https://en.wikipedia.org/wiki/2006_FIFA_World_Cup_qualification_(OFC)\#Final_round}{\textbf{Third round}} \\
Format: knockout \\
Competitors: the group winner and the runner-up in the second round \\
Seeding: redundant \\
The winner advanced to the \href{https://en.wikipedia.org/wiki/2006_FIFA_World_Cup_qualification_(inter-confederation_play-offs)}{inter-confederation play-offs}.
\end{itemize}
Any theoretical model is only as good as its assumptions. It is worth summarising the main limitations here:
\begin{itemize}
\item
The strength of the teams is exogenously given and fixed during the whole qualification process.
\item
Goal difference is not accounted for in any stage of the qualification.
\item
Draws are not allowed, which is not in line with the rules of soccer.
\item
Home advantage does not differ between the confederations despite the findings of \citet{PollardArmatas2017}. However, the influence of the corresponding parameter is minimal since all matches are played both home and away except for Oceania, where some games are hosted by a randomly drawn country.
\item
The efforts of the teams do not change even if they have already qualified as a group winner.
\end{itemize}
These mean that our numerical results are primarily for comparative purposes.
Consequently, the direction of changes in the tournament metrics after modifying the tournament design is more reliable than, for example, the computed probability of qualification for the FIFA World Cup.
Each simulation has been carried out with 10 million independent runs. A further increase does not reduce statistical errors considerably, and would be a futile exercise anyway in the view of the above model limitations.
\section{Results} \label{Sec5}
The three main research questions, presented in the Introduction, will be discussed in separate subsections.
\subsection{Quantifying unfairness}
\input{Figure1_qualifying_probability}
Figure~\ref{Fig1} shows the probability of qualification for the 2018 FIFA World Cup as the function of the Elo rating.
Unsurprisingly, the simple round-robin format of the CONMEBOL qualification guarantees that this tournament metric depends monotonically on the strength of the teams.
The structure of the OFC qualification does not necessarily satisfy the fairness condition but it still holds because only the four weakest teams should play in the first round and the seeding is based on the strengths of the teams. Similarly, the AFC and CONCACAF qualifications are also essentially conforming to the principle of giving higher chances for better teams.
The degree of unfairness can be quantified by ranking the teams according to their Elo rating and summing the differences of qualifying probabilities that do not fall into line with this ranking. Formally, the measure of unfairness $UF$ is defined as:
\begin{equation} \label{eq_unfairness}
UF = \sum_{\text{Elo}(i) \geq \text{Elo}(j)} \max \{ 0; p(j) - p(i) \},
\end{equation}
where $\text{Elo}(i)$ and $p(i)$ are the Elo rating and the probability of qualification for team $i$, respectively.
Formula~\eqref{eq_unfairness} only considers the ordinal strength of the teams because prescribing how the differences in Elo rating should be converted into differences in qualifying probabilities would require further assumptions, which seem to be challenging to justify. While the above metric depends on the number of teams as well as the number of slots available, this does not pose a problem when both of these variables are fixed.
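Formula~\eqref{eq_unfairness} can be computed directly from the simulation output; the following sketch assumes that the Elo ratings and the qualifying probabilities are supplied as two parallel lists, which is our own input convention.
\begin{verbatim}
def unfairness(elo, prob):
    # elo[i], prob[i]: Elo rating and qualifying probability of team i.
    # Sum every probability gap that runs against the Elo ordering, i.e.
    # the amount by which a weaker team out-qualifies a stronger one.
    n = len(elo)
    return sum(max(0.0, prob[j] - prob[i])
               for i in range(n) for j in range(n)
               if elo[i] >= elo[j])
\end{verbatim}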
\begin{table}[t]
\centering
\caption{The level of unfairness within the confederations}
\label{Table1}
\rowcolors{1}{}{gray!20}
\begin{tabularx}{0.5\textwidth}{l C} \toprule
Confederation & Value of $UF$ \\ \bottomrule
AFC & 0.0000005 \\
CONCACAF & 0.0000002 \\
CONMEBOL & 0.0000000 \\
OFC & 0.0000000 \\ \bottomrule
\end{tabularx}
\end{table}
As Table~\ref{Table1} reveals, the qualification tournaments of all confederations are constructed fairly. The negligible numbers for AFC and CONCACAF are only due to the stochastic nature of the simulation, which leads to volatile qualifying probabilities for weak teams.
Unfairness has another dimension, that is, between the confederations. In order to investigate this issue, Peru (the $6$th strongest team in CONMEBOL, Elo: 1844.5) has been exchanged sequentially with the strongest teams in the other three confederations: Iran (AFC, Elo: 1762), Mexico (CONCACAF, Elo: 1871), and New Zealand (OFC, Elo: 1520.5). These countries are highlighted in Figure~\ref{Fig1}.
\begin{table}[t]
\centering
\caption{Qualifying probabilities when Peru is moved to another confederation}
\label{Table2}
\rowcolors{1}{gray!20}{}
\begin{tabularx}{\textwidth}{ll CccC} \toprule \hiderowcolors
Team & Original & \multicolumn{4}{c}{Peru plays in} \\
& confederation & AFC & CONCACAF & CONMEBOL & OFC \\ \bottomrule \showrowcolors
Iran & AFC & 0.156 & 0.825 & 0.852 & 0.818 \\
Mexico & CONCACAF & 0.958 & 0.539 & 0.960 & 0.955 \\
Peru & CONMEBOL & 0.937 & 0.938 & 0.416 & 0.495 \\
New Zealand & OFC & 0.199 & 0.196 & 0.155 & 0.000 \\ \bottomrule
\end{tabularx}
\end{table}
Table~\ref{Table2} reports the probabilities of qualification for the four nations if Peru would contest in various confederations. According to the numbers in the diagonal, any team is the worst off when playing in the CONMEBOL qualifiers. On the other hand, the chance of Peru to participate in the 2018 FIFA World Cup would more than double by playing in the AFC or CONCACAF zone. Compared to these options, being a member of the OFC would be less beneficial for Peru due to the lack of a direct qualification slot. Its effect can be seen to some extent in the qualifying probabilities of Iran and Mexico: since Peru would qualify with more than 96\% probability from the OFC qualifiers to the inter-confederation play-offs, the two teams would have a larger probability to face Peru there, which would reduce their chances to advance to the World Cup finals.
\subsection{A potential improvement of fairness}
The straightforward solution to handle unfairness between the confederations would be to reallocate the slots available for them, especially because the current allocation system lacks any statistical validation, does not ensure the qualification of the best teams in the world, and does not reflect the number of teams per federation \citep{StoneRod2016}. The whole process is far from being transparent and is mainly determined by political, cultural, and historical factors. Consequently, operations research has a limited role to influence the allocation of World Cup slots between the FIFA confederations.
However, the matching in the two inter-confederation play-offs is probably a variable to be chosen freely by the FIFA executives who are responsible for the tournament design, as illustrated by the two policies used recently (see the last paragraph of Section~\ref{Sec3}). We have considered three possibilities:
\begin{itemize}
\item
\emph{Random draw} for the play-offs: the four participants from the confederations AFC, CONCACAF, CONMEBOL, and OFC are drawn randomly into two pairs;
\item
\emph{Close draw} for the play-offs: the four participants are paired such that AFC vs.\ OFC and CONCACAF vs.\ CONMEBOL;
\item
\emph{Fair draw} for the play-offs: the four participants are paired such that AFC vs.\ CONCACAF and CONMEBOL vs.\ OFC.
\end{itemize}
The random draw has been used in the \href{https://en.wikipedia.org/wiki/2006_FIFA_World_Cup_qualification}{2006 FIFA World Cup qualification}, as well as since the \href{https://en.wikipedia.org/wiki/2014_FIFA_World_Cup_qualification}{2014 FIFA World Cup qualifiers}. The close draw has been used in the \href{https://en.wikipedia.org/wiki/2010_FIFA_World_Cup_qualification}{2010 FIFA World Cup qualification competition}: it matches nations from closer time zones, which allows for better kick-off times, can be optimal for the players, and may maximise gate revenue and the value of television rights. Finally, the fair draw is inspired by Figure~\ref{Fig1} and Table~\ref{Table2} since the CONMEBOL team is usually the strongest and the OFC team is usually the weakest in the play-offs.
\begin{table}[t]
\centering
\caption{Unfairness and the draw for the play-offs}
\label{Table3}
\begin{subtable}{\textwidth}
\centering
\caption{The overall level of unfairness}
\label{Table3a}
\begin{tabularx}{0.5\textwidth}{CCC} \toprule
\multicolumn{3}{c}{Draw policy for the play-offs} \\
Random & Close & Fair \\ \midrule
14.39 & 16.18 & 13.12 \\ \bottomrule
\end{tabularx}
\end{subtable}
\vspace{0.5cm}
\begin{subtable}{\textwidth}
\centering
\caption{The qualifying probabilities of certain teams}
\label{Table3b}
\rowcolors{1}{gray!20}{}
\begin{tabularx}{0.8\textwidth}{ll CCC} \toprule \hiderowcolors
Team & Confederation & \multicolumn{3}{c}{Draw policy for the play-offs} \\
& & Random & Close & Fair \\ \bottomrule \showrowcolors
Australia & AFC & 0.765 & 0.798 & 0.768 \\
Iran & AFC & 0.852 & 0.876 & 0.856 \\
Mexico & CONCACAF & 0.960 & 0.948 & 0.964 \\
Peru & CONMEBOL & 0.416 & 0.402 & 0.436 \\
New Zealand & OFC & 0.155 & 0.216 & 0.046 \\ \bottomrule
\end{tabularx}
\end{subtable}
\end{table}
The draws for the play-offs are compared in Table~\ref{Table3}: the measure of unfairness $UF$ (formula~\eqref{eq_unfairness}) is presented in Table~\ref{Table3a}, while Table~\ref{Table3b} provides the probability of qualification for some countries. Intuitively, the fair draw is the closest to fairness. The close draw mostly favours the AFC and OFC; however, it is detrimental to the CONCACAF and CONMEBOL members, implying the most severe unfairness.
\input{Figure2_draw_policy_probability_change}
The effect of a fair draw is detailed in Figure~\ref{Fig2} for the teams with at least 0.1 percentage points change in the probability of qualification to the 2018 FIFA World Cup. Compared to the current random design, all South American countries would be better off and the strongest AFC and CONCACAF countries are also preferred. On the other hand, all nations of the OFC, in particular, the dominating New Zealand, would lose substantially from this reform. The gains are distributed more equally because there is no such prominent team in the other zones. Some weak AFC and CONCACAF members are worse off due to the impossibility of playing against New Zealand in the inter-confederation play-offs.
\subsection{Counterfactual: was it favourable for Australia to join the AFC?}
FIFA president \emph{Sepp Blatter} had promised a full slot to the OFC as part of his re-election campaign in November 2002 but the suggestion was reconsidered in June 2003 \citep{ABC2003}.
Subsequently, the largest and most successful nation of the OFC, Australia, left to join the AFC in 2006. It raises the interesting issue of how this move has affected the 2018 FIFA World Cup qualification.
First, the unfairness measure $UF$ would be $15.87$ with Australia playing in the OFC, which corresponds to an increase of more than 9\% as opposed to the current situation. The action of Australia has contributed to the fairness of the 2018 FIFA World Cup qualification. The magnitude of the improvement is similar to the proposed novel draw for the inter-confederation play-offs.
\input{Figure3_Australia_AFC_probability_change}
Second, the probabilities of qualification are computed if Australia would have remained in the OFC. Figure~\ref{Fig3} plots the effects for the national teams facing a change of at least 0.1 percentage points. Notably, Australia has increased the probability of participating in the 2018 FIFA World Cup from 44\% to 77\% by leaving the OFC for the AFC. The move has also been strongly favourable for New Zealand, which is now the strongest OFC team and has more than 70\% chance to grab the slot guaranteed in the play-offs for Oceania. Every CONCACAF and CONMEBOL member has been better off due to the reduction in the expected strength of the countries contesting in the play-offs. However, all original AFC nations have lost with the entrance of Australia, especially those teams that are only marginally weaker than Australia.
\section{Conclusions} \label{Sec6}
We have analysed four series of qualification tournaments for the 2018 FIFA World Cup via Monte-Carlo simulations. Their design does not suffer from serious problems but the CONCACAF competition can be criticised for the great role attributed to the FIFA World Ranking. Perhaps it is not only a coincidence that this confederation has fundamentally restructured its \href{https://en.wikipedia.org/wiki/2022_FIFA_World_Cup_qualification_(CONCACAF)}{qualifying tournament for the 2022 FIFA World Cup} \citep{CONCACAF2019}.
On the other hand, there are substantial differences between the chances of nations playing in different continents: Peru would have doubled its probability of qualification by competing in the AFC or CONCACAF zone, while New Zealand would have lost any prospect of participation by being a member of CONMEBOL. Australia is found to have greatly benefited from leaving the OFC for the AFC in 2006.
Hopefully, this paper will become only a pioneering attempt of academic researchers to simulate the qualification to the soccer World Cup. There remains a huge scope for improving the model, especially concerning the prediction of individual matches. The results might be useful for sports governing bodies: we believe that FIFA could further increase the economic success of World Cups by using a more transparent and statistically validated method in the allocation of qualifying berths and the design of confederation-level qualification tournaments.
In addition, probably first in the literature, a measure of unfairness has been proposed to quantify to which extent weaker teams are preferred over stronger teams by a tournament design. A simple modification in the format of the inter-confederation play-offs can reduce this metric by about 10\%, hence the novel policy shall be seriously considered by FIFA.
\section*{Acknowledgements}
\addcontentsline{toc}{section}{Acknowledgements}
\noindent
This paper could not have been written without \emph{my father} (also called \emph{L\'aszl\'o Csat\'o}), who has coded the simulations in Python. \\
Four anonymous reviewers provided valuable comments and suggestions on earlier drafts. \\
We are indebted to the \href{https://en.wikipedia.org/wiki/Wikipedia_community}{Wikipedia community} for collecting and structuring valuable information on the sports tournaments discussed. \\
The research was supported by the MTA Premium Postdoctoral Research Program grant PPD2019-9/2019.
\bibliographystyle{apalike}
\section{Reduction from Multislope Ski Rental to Online Weighted Bipartite Vertex Cover}
\label{sec:multislope}
There is a total of $n$ states $[n]$ in the multislope ski rental problem. Each state $i$ associated with buying cost $b_i$ and rental cost $r_i$. As argued in [.], we may assume that we start in state 1 and have $0=b_1\leq b_2\leq \ldots\leq b_n, r_1\geq r_2\geq \ldots\geq r_n\geq 0$. The game starts at time 0 and ends at some unknown time $t_{end}$ determined by the adversary.
At each time $t\in [0,t_{end}]$, we can transition from the current state $i$ to some state $j>i$. Let state $f$ be the final state at time $t_{end}$. The total cost incurred is given by $$b_f+\sum_{i=1}^f x_i r_i,$$where $x_i$ is the amount of time spent in state $i$. The classical ski rental problem corresponds to $n=2$ and $b_1=0,b_2=B,r_1=1,r_2=0$.
Consider now the discrete version of this problem. We discretize time into consecutive intervals of length $\epsilon$ for some small $\epsilon >0$. At the beginning of each interval, we can stay in the current state $i$ or transition from $i$ to some state $j>i$. Each of the two choices correspond to a cost of $r_i\epsilon$ or $b_j-b_i+r_j\epsilon$.
We are ready to describe the reduction to online vertex-weighted bipartite vertex cover. Let $L=\{1,2,\ldots,n\}$ with weights $w_i=b_{i+1}-b_i$ for $i<n$ and $w_n=\infty$. The $(qn+k)$-th online vertex $v_{qn+k}\in R$, where $q$ is a nonnegative integer and $1\leq k\leq n$, has weight $(r_{k}-r_{k+1})\epsilon$ (with $r_{n+1}=0$) and is adjacent to the left vertices $1,\ldots,k$.
Intuitively, the $(q+1)$-th time interval is represented by the online vertices $qn+1,\ldots,qn+n$. If we are in state $i$, (1) the left vertices $1,\ldots,i-1$ should be covered and have total weight $b_i-b_1=b_i$ and, (2) the online vertices $qn+i,\ldots,qn+n$ should be covered and have total weight $r_i\epsilon$. Thus when we transition from state $i$ to state $j>i$, the vertices $i,\ldots,j-1$ should be added to the cover. Moreover, the left vertex $n$, which has infinite weight, is used to ensure that the algorithm is forced to put the online vertex $qn+n$, which has weight $r_n\epsilon$, into the cover.
Finally, we show that a $c$-competitive algorithm for online vertex-weighted bipartite vertex cover gives a $c$-competitive algorithm for multislope ski rental under the above reduction. Consider the vertex cover maintained by the algorithm after processing the online vertices $qn+1,\ldots,qn+n$. Suppose that $1,\ldots,i-1$ are in the cover but $i$ is not. Then the online vertices $qn+i,\ldots,qn+n$ must also be in the cover. Thus we can simply stay in state $i$ (or transition to it if the previous state is smaller). It is clear that this strategy is valid by the preceding discussion. Furthermore, the cost incurred by the algorithm for multislope ski rental is no greater than the counterpart for vertex cover.
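To make the reduction concrete, the sketch below constructs the offline weights and the online vertex sequence for a given multislope instance and number of discretised intervals; the 0-indexed list layout is our own illustrative choice.
\begin{verbatim}
def build_reduction(b, r, eps, num_intervals):
    # b[i], r[i]: buying and rental costs of state i+1 (so b[0] = 0).
    n = len(b)
    # Offline vertex i has weight b_{i+1} - b_i; the last one is infinite.
    left_weights = [b[i + 1] - b[i] for i in range(n - 1)] + [float("inf")]
    # Online vertex qn + k has weight (r_k - r_{k+1}) * eps (with r_{n+1} = 0)
    # and is adjacent to the offline vertices 1, ..., k (indices 0 .. k-1).
    r_ext = list(r) + [0.0]
    online = []
    for q in range(num_intervals):
        for k in range(1, n + 1):
            online.append(((r_ext[k - 1] - r_ext[k]) * eps, list(range(k))))
    return left_weights, online
\end{verbatim}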
\section{Proof of Theorem 2}
We modify GreedyAllocation as follows. The only difference is that $\sum_{u\in N(v)\cap T} \max\{y-y_u,0\} \leq f(y)$ is replaced by $\sum_{u\in N(v)\cap T} w_u\max\{y-y_u,0\} \leq w_vf(y)$.
\begin{algorithm}[h!]
\SetAlgoLined
\caption{$GreedyAllocation$ with allocation function $y+\alpha$}
\label{alg:general greedy}
\KwIn{Online graph $G=(V,E)$ with offline vertices $U\subset V$}
\KwOut{A fractional vertex cover of $G$}
Initialize for each $u\in U$, $y_u = 0$\;
Let $T$ be the set of known vertices. Initialize $T=U$\;
\For{each online vertex $v$}
{
Maximize $y\le 1$, s.t., $\sum_{u\in N(v)\cap T} w_u\max\{y-y_u,0\} \leq w_v(y+\alpha)$\;
For each $u\in N(v)\cap T$, $y_u \leftarrow \max\{ y_u, y\}$\;
$y_v \leftarrow 1-y$\;
$T\leftarrow T\cup \{v\}$\;
}
Output $\{y_v\}$ for all $v\in V$\;
\end{algorithm}
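The following Python sketch is one way to implement the modified GreedyAllocation. The water level is found by scanning the piecewise-linear constraint between consecutive breakpoints (any exact method for the one-dimensional maximisation would do), and the graph interface, with offline vertices given up front and online vertices as (name, weight, neighbour list) triples, is our own choice.
\begin{verbatim}
def water_level(neighbors, y, w, wv, alpha):
    # Largest level <= 1 with
    #   sum_u w[u] * max(level - y[u], 0) <= wv * (level + alpha).
    def excess(level):
        return sum(w[u] * max(level - y[u], 0.0) for u in neighbors) \
               - wv * (level + alpha)
    if excess(1.0) <= 0.0:
        return 1.0
    # excess is piecewise linear and convex with excess(0) <= 0, so it
    # crosses zero on exactly one segment between consecutive breakpoints.
    points = sorted({0.0, 1.0, *(y[u] for u in neighbors if y[u] < 1.0)})
    for lo, hi in zip(points, points[1:]):
        e_lo, e_hi = excess(lo), excess(hi)
        if e_hi > 0.0:
            return lo + (-e_lo) * (hi - lo) / (e_hi - e_lo)
    return 1.0

def greedy_allocation(offline, online, alpha):
    # offline: dict u -> weight w_u of an offline vertex.
    # online: list of (v, wv, neighbors) in arrival order; neighbors may
    #   include offline vertices and online vertices that arrived earlier.
    w, y = dict(offline), {u: 0.0 for u in offline}
    for v, wv, neighbors in online:
        known = [u for u in neighbors if u in y]
        level = water_level(known, y, w, wv, alpha)
        for u in known:
            y[u] = max(y[u], level)
        w[v], y[v] = wv, 1.0 - level
    return y   # a fractional vertex cover
\end{verbatim}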
To analyze the algorithm, we need the following lemma which is an easy extension of Lemma~\ref{lem:charging}.
\begin{lemma}
\label{lem:weightedcharging}
Let $f:[0,1]\longrightarrow\mathbb{R}_+$ be continuous such that $\frac{1-t}{f(t)}$ is decreasing, and $F(x)=\int_0^x \frac{1-t}{f(t)}\mathrm{d}t$. If $\sum_{u\in X} w_u(y-y_u)= w_vf(y)$ for some set $X$ and $y\geq y_u$ for $u\in X$, then
$$w_v(1-y)\leq \sum_{u\in X} w_u\left(F(y)-F(y_u)\right).$$
\end{lemma}
We only give a sketch of the charging scheme as it is very similar to the unweighted case.
We charge the potentials used to the vertices of the minimum cover $C^*$. Let $v$ be an online vertex. The case $v\in C^*$ is trivial.
Now consider the case $v\notin C^*$. We charge the potential spent on $u\in N(v)\subseteq C^*$ to $u$ itself. The potential spent on $v$ is $w_vy_v = w_v(1-y)$, where $y$ is the final water level.
Let $X\subset N(v)$ be the set of vertices whose potentials increase when processing $v$.
If $y =1$, we are done. If $y <1$, we have $\sum_{u\in X} w_u(y-y_u) = w_v(\alpha +y)$, where $y_u$ is the potential of $u$ before processing $v$. By Lemma~\ref{lem:weightedcharging}, $w_v(1-y) \leq \sum_{u\in X} w_u(F(y)-F(y_u))$.
Now the charges to each $u\in C^*\cap R$ are at most $w_u\left(1+F(1)-F(0)\right)=w_u(1+\alpha)$.
\section{Proof of Theorem 6}
The proof is an extension of the one for Theorem~\ref{thm:pdgeneral}. We analyze the following algorithm using the primal-dual method. It is also possible to give a charging-based analysis of the online vertex-weighted vertex cover part of the algorithm.
The function $f$ below is the same as that for Theorem~\ref{thm:pdgeneral}. Recall that $\beta\geq 1+f(1-z)+\int_z^1 \frac{1-t}{f(t)}\mathrm{d}t$ for $z\in [0,1]$.
\begin{algorithm}[h!]
\SetAlgoLined
\caption{$PrimalDual-Weighted$}
\label{alg:general greedy weighted}
\KwIn{Online graph $G=(V,E)$, weights/capacities $w_v,v\in V$}
\KwOut{A fractional vertex cover $\{y_v\}$ of $G$ and a fractional capacitated matching $\{x_{uv}\}$.}
Let $T$ be the set of known vertices. Initialize $T=\emptyset$\;
\For{each online vertex $v$}
{
Maximize $y\le 1$, s.t., $\sum_{u\in N(v)\cap T} w_u\max\{y-y_u,0\} \leq w_vf(y)$\;
Let $X = \{u \in N(v)\cap T \,\mid\, y_u < y\}$\;
\For{each $u\in X$}
{
$x_{uv}\longleftarrow \frac{w_u(y-y_u)}{\beta}\left(1+\frac{1-y}{f(y)}\right)$\;
$y_u\leftarrow y$\;
}
For each $u \in (N(v)\cap T)\setminus X$, $x_{uv}\longleftarrow 0$\;
$y_v \leftarrow 1-y$\;
$T\leftarrow T\cup \{v\}$\;
}
Output $\{y_v\}$ for all $v\in V$\;
\end{algorithm}
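Analogously, the primal-dual variant can be sketched in Python as follows. Here $f$ is passed as a callable, the water level is found by bisection (which is adequate provided the feasible set of the maximisation is an interval containing $0$; we do not verify this for a general $f$), and each $x_{uv}$ is computed from the potential of $u$ before it is raised, matching the analysis below. The interface is our own choice.
\begin{verbatim}
def water_level_f(neighbors, y, w, wv, f, iters=60):
    # Largest level in [0, 1] with
    #   sum_u w[u] * max(level - y[u], 0) <= wv * f(level).
    def feasible(level):
        return sum(w[u] * max(level - y[u], 0.0)
                   for u in neighbors) <= wv * f(level)
    if feasible(1.0):
        return 1.0
    lo, hi = 0.0, 1.0      # bisection; assumes feasibility is an interval
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if feasible(mid):
            lo = mid
        else:
            hi = mid
    return lo

def primal_dual_weighted(online, f, beta):
    # online: list of (v, wv, neighbors) in arrival order (general graph,
    #   so neighbors are vertices that arrived earlier).
    w, y, x = {}, {}, {}   # weights, cover potentials, matching values
    for v, wv, neighbors in online:
        known = [u for u in neighbors if u in y]
        level = water_level_f(known, y, w, wv, f)
        bonus = (1.0 - level) / f(level) if level < 1.0 else 0.0
        for u in known:
            if y[u] < level:
                # x_{uv} uses the old potential of u; then the potential rises.
                x[(u, v)] = w[u] * (level - y[u]) / beta * (1.0 + bonus)
                y[u] = level
        w[v], y[v] = wv, 1.0 - level
    return y, x            # fractional vertex cover and capacitated matching
\end{verbatim}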
We claim that the following two invariants hold:
\textbf{Invariant 1:} $$w_u\cdot\frac{y_{u}+f(1-z_u)+\int_{z_u}^{y_{u}}\frac{1-t}{f(t)}\mathrm{d}t}{\beta}\geq x_{u},$$ where $z_u$ is the potential of $u$ set upon its arrival, $y_u$ is the current potential of $u$ and $x_u = \sum_{v \in N(u)} x_{uv}$ is the sum of the potentials on the edges incident to $u$. Note that the LHS is at most 1 by the definition of $\beta$, which guarantees that the primal is feasible as long as the invariant holds.
\textbf{Invariant 2:} $$\sum_{u\in T} w_uy_{u}=\beta\sum_{(u,v)\in E\cap (T\times T)} x_{uv}$$
We sketch why these two invariants are preserved after processing each vertex $v$. The proof is almost identical to the unweighted case.
Invariant 2: The dual increment is $$w_v(1-y)+\sum_{u\in X}w_u(y-y_u)$$ and the primal increment is $$\sum_{u\in X} \frac{w_u(y-y_u)}{\beta}\left( 1 + \frac{1-y}{f(y)}\right).$$
Thus it suffices to show that $w_v(1-y)=\sum_{u\in X}w_u(y-y_u)\frac{1-y}{f(y)}$.
When $y=1$, the statement trivially holds. On the other hand, when $y<1$, by construction, we have
$\sum_{u\in X}w_u(y-y_u)=w_vf(y)$.
Invariant 1: We first show that Invariant 1 still holds for $x_v$. Note that $x_v=\sum_{u\in X}x_{uv}$ is just the increase in the primal objective value. By Invariant 2, we have
$$x_v=\frac{w_v(1-y)+\sum_{u\in X}w_u(y-y_u)}{\beta}\le \frac{w_v}{\beta}\left(z_v + f(1-z_v)\right),$$ where $z_v=1-y$ is the potential of $v$ set upon its arrival.
We now show that Invariant 1 is preserved for each $u\in X$ (the potentials and edge variables of all other vertices are unchanged). By Invariant 1, the previous $x_u$ satisfies $$x_u-x_{uv}\leq w_u\frac{y_{u}+f(1-z_u)+\int_{z_u}^{y_{u}}\frac{1-t}{f(t)}\mathrm{d}t}{\beta}.$$
The proof is finished by noticing that $$x_{uv}=\frac{w_u(y-y_u)}{\beta}\left( 1 + \frac{1-y}{f(y)}\right)\leq \frac{w_u}{\beta} \left( y-y_u+ \int_{y_u}^y \frac{1-t}{f(t)}
\mathrm{d}t\right),$$ as $\frac{1-t}{f(t)}$ is a decreasing function.
\section{Introduction}
In this paper, we study the online vertex cover problem in bipartite and general graphs. Given a graph $G=(V,E)$, $C\subseteq V$ is a vertex cover of $G$ if all edges in $E$ are incident to $C$. In the online setting, the vertices of $V$ arrive one at a time. When a vertex arrives, its edges incident to the previously arrived neighbors are revealed.
We are required to maintain a {\em monotone} vertex cover for the revealed subgraph at all time. In particular, no vertices can be removed from the cover once added. The objective is to minimize the size of the final vertex cover.
Our study of the online vertex cover problem is motivated by two apparently unrelated lines of research in the literature, namely ski rental and online bipartite matching.
\paragraph{Online bipartite matching.}
The online bipartite matching problem has been intensively studied over the past decade. An instance of this problem specifies a bipartite graph $G=(L,R,E)$ in which the set of left vertices $L$ is known in advance, while the set of right vertices $R$ and edges $E$ are revealed over time. An algorithm maintains a monotone matching that is empty initially. At each step, an online vertex $v\in R$ arrives and all of its incident edges are revealed. An algorithm must immediately and irrevocably decide if $v$ should be matched to a previously unmatched vertex in $N(v)$. The objective is to maximize the size of the matching found at the end.
This problem and almost all of its variants studied in the literature share the common feature that vertices of only one side of the bipartite graph arrive online. While this property indeed holds in many applications, it does not necessarily reflect the reality in general. We exemplify this by the following application:
\begin{itemize}
\item {\bf Online market clearing.} In a commodity market, buyers and sellers are represented by the left and right vertices. An edge between a buyer and a seller indicates that the price that the buyer is willing to offer is higher than the price that the seller is willing to accept. The objective is to maximize the number of trades, or the size of the matching. In this problem, both the buyers and sellers arrive and leave online continuously.
\end{itemize}
Thus a more general model of online bipartite matching is to allow all vertices to be online. In this paper, we obtain the first non-trivial algorithm for the fractional version of this generalization. Our algorithm is 0.526-competitive and, in fact, also works in general graphs.
\paragraph{Ski rental and online bipartite vertex cover.}
The ski rental problem is perhaps one of the most studied online problems. Recall that in this problem, a skier goes on a ski trip for $N$ days but has no information about $N$. On each day he has the choice of renting the ski equipment for 1 dollar or buying it for $B>1$ dollars. His goal is to minimize the amount of money spent.
We consider the online bipartite vertex cover problem, which is a generalization of ski rental. The setting of this problem is exactly identical to that of online bipartite matching except that the task is to maintain a monotone vertex cover instead. Ski rental can be reduced to online bipartite vertex cover via a complete bipartite graph with $B$ left vertices and $N$ right vertices. One may view this problem as ski rental with a combinatorial structure imposed.
We show that the optimal competitive ratio of online bipartite vertex cover is $\frac{1}{1-1/e}$. In other words, we still have the same performance guarantee even though the online bipartite vertex cover problem is considerably more general than the ski rental problem.
\paragraph{The connection.}
Recall that bipartite matching and vertex cover are dual of each other in the offline setting. It turns out that the analysis of an algorithm for online bipartite fractional matching in~\cite{Buchbinder2007} implies an optimal algorithm for online bipartite vertex cover. On the other hand, online bipartite vertex cover generalizes ski rental. This connection is especially interesting because online bipartite matching does not generalize ski rental but is the dual of its generalization\footnote{Coincidentally, the first papers on online bipartite matching and ski rental were both published in 1990 but to our knowledge, their connection was not realized, or at least explicitly stated.}.
\paragraph{The greedy algorithm.}
There is a simple well-known greedy algorithm for online matching and vertex cover in general graphs. As each vertex arrives, we match it to an arbitrary unmatched neighbor (if any) and put both of them into the vertex cover. It is easy to show that this algorithm is $1/2$-competitive for online matching and $2$-competitive for online vertex cover.
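For concreteness, the following minimal Python sketch (our own illustration, with a hypothetical input convention in which \texttt{edges\_of[v]} lists the previously arrived neighbors of $v$) implements this folklore greedy.
\begin{verbatim}
# A sketch of the folklore greedy for online matching and vertex cover.
def online_greedy(arrival_order, edges_of):
    match, cover = {}, set()
    for v in arrival_order:
        for u in edges_of[v]:
            if u not in match:                 # u is still unmatched
                match[u], match[v] = v, u      # add the edge (u, v)
                cover.update([u, v])           # both endpoints join the cover
                break
    return match, cover                        # 1/2- and 2-competitive resp.
\end{verbatim}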
The greedy algorithm for the vertex cover problem is optimal assuming the Unique Games Conjecture, even in the offline setting~\cite{Khot2008}. Thus there is no hope of doing better than 2 if we restrict ourselves to integral vertex covers in general graphs. For the other problems studied in this paper, e.g. matching and vertex cover in bipartite graphs and matching in general graphs, no known algorithm beats the greedy algorithm in the online setting.
We present the first successful attempt at breaking the barrier of 2 (or 1/2) achieved by the greedy algorithm. In the fractional setting, our algorithm is $1.901$-competitive (against the minimum fractional cover) for online vertex cover and $\frac{1}{1.901}\approx 0.526$-competitive (against the maximum fractional matching) for online matching in general graphs. It is possible to convert the fractional algorithm to a randomized integral algorithm for online vertex cover in bipartite graphs. On the other hand, it is not clear whether it is possible to round our algorithm or its variants for online matching in either bipartite graphs or general graphs.
We stress that the fractional setting is still of interest for two reasons:
\begin{itemize}
\item As well-articulated in~\cite{Buchbinder2007}, some commodities are divisible and hence should be modeled as fractional matchings. In fact, for divisible commodities one would even prefer a fractional matching assignment since the maximum fractional matching may be larger than the maximum integral matching in general graphs. Thus a $c$-competitive algorithm against fractional matching would be preferable to a $c$-competitive algorithm against integral matching.
\item Our 0.526-competitive algorithm for fractional matching suggests that it may be possible to beat the greedy algorithm for online integral matching in the oblivious adversarial model.
\end{itemize}
\subsection{Our results and techniques}
Our algorithms rely on a charging-based algorithmic framework for online vertex cover-related problems. The following results on vertex cover were obtained using this method:
\begin{itemize}
\item A new optimal $\frac{1}{1-1/e}$-competitive algorithm for online bipartite vertex cover\footnote{A similar algorithm is implied by the analysis of the algorithm for online bipartite fractional matching in~\cite{Buchbinder2007}.}.
\item A 1.901-competitive algorithm for online fractional vertex cover in general graphs.
\end{itemize}
We stress that it is reasonable that our result holds only for the \emph{fractional} version of online vertex cover in general graphs. In fact, even in the offline setting, the best known approximation algorithm for minimum vertex cover is just the simple 2-approximate greedy algorithm. Getting anything better than 2 would disprove the Unique Games Conjecture even in the offline setting~\cite{Khot2008} and have profound implications for the theory of approximability.
Our algorithms can also be analyzed in the primal-dual framework~\cite{Buchbinder2007}. As by-products, we obtain dual results on the maximum matching as follows:
\begin{itemize}
\item A 0.526-competitive algorithm for online fractional matching in general graphs. This improves the result on the online edge-selection problem studied in~\cite{Blum2006}.
\end{itemize}
All of these results also hold in the vertex-weighted setting (for vertex cover) and the b-matching setting.
Section~\ref{sec:rounding} explains how to convert essentially any algorithm for online {\em fractional} vertex cover to an algorithm for online {\em integral} vertex cover in the case of bipartite graphs with the same (expected) performance.
On the hardness side, we establish the following lower bound (for vertex cover) and upper bound (for matching) on the competitive ratios. Notice that these bounds also apply to the integral version of the problems.
\begin{itemize}
\item A lower bound of $1+\sqrt{\frac{1}{2}\left(1+\frac{1}{e^2}\right)}\approx 1.753$ for the online {\em fractional} vertex cover problem in bipartite graphs.
\item An upper bound of 0.625 for the online {\em fractional} matching problem in bipartite graphs.
\end{itemize}
\paragraph{Main ingredients.}
Our result is based on a novel charging-based analysis of a new {\em water-filling} algorithm for the online bipartite vertex cover problem. In the {\em water-filling} algorithm, for each online vertex, we are allowed to use water of amount at most $\frac{1}{1-1/e}$ to cover the new edges. (Recall that in the original {\em water-filling} algorithm for matching, the amount of water is at most $1$.) In our charging scheme, for an online vertex in the optimal cover, we charge all the water used in processing this vertex to itself. For an online vertex not in the optimal cover, we charge the water spent on the online vertex to its neighbors, which must be in the optimal cover. In particular, in the bipartite graph case with one-sided online vertices, an online vertex in the optimal cover will take care of the cost of processing itself whereas an offline vertex in the optimal cover is responsible for the charges from its online neighbors.
In generalizing the charging scheme to the two-sided online bipartite and the general graph cases, a vertex must take care of both the cost in processing itself and the charges received from future neighbors. In such generalizations, we cannot use a fixed amount of water in processing each vertex. A key insight behind our algorithm is that the amount of water used should be related to the actual final water level. In other words, for a final water level $y$, the amount of water used should be $f(y)$ for some allocation function $f(\cdot)$. By extending our previous charging scheme, the competitive ratio of our new water-filling algorithm for the online fractional vertex cover problem in general graphs will be a function of $f(\cdot)$. We also derive the constraints which $f(\cdot)$ must satisfy in order to make the analysis work.
As a result, we are left with a non-conventional minimax optimization problem. (See Eqn.(\ref{eqn:opt}).) The most exciting part, however, is that we can actually solve this optimization problem {\em optimally}.\footnote{Our solution is optimal in {\em our framework}. It may not be optimal for the online fractional vertex cover problem. } The optimal allocation function in Theorem~\ref{thm:vcgeneral} implies a competitive ratio of $1.901$ for the online fractional vertex cover problem in general graphs.
Our primal-dual analysis for the online fractional matching problem in general graphs is obtained by reverse-engineering the charging-based analysis.
\paragraph{Remark:} In retrospect, it may be much harder to directly develop a water-filling algorithm for online matching in general graphs.
Firstly, it may take some work to realize that the amount of water used should be variable rather than 1 as in online bipartite matching. In contrast, in vertex cover, the amount of water is already variable even for the basic one-sided online bipartite vertex cover. Secondly, to analyze a water-filling algorithm on the matching, one has to optimize over the allocation function, which specifies the total amount of water used as a function of the water level, and another function which updates the potentials of the dual variables. As a consequence, the competitive ratio would be an optimization problem involving {\em two variable functions}! In fact, if we reverse-engineer a water-filling algorithm on the matching from our solution, the corresponding allocation function does not have a known closed form.
Thus our charging-based analysis for online vertex cover is a critical step in developing the algorithms. Our starting point, the vertex cover, turns out to be a surprising blessing.
\subsection{Previous work}
There are three lines of research related to our work. The first two categories discussed below are particularly relevant.
\paragraph{Online matching.}
The online bipartite matching problem was first studied in the seminal paper by Karp et al.~\cite{Karp1990}. They gave an optimal $1-1/e$-competitive algorithm. Subsequent works studied its variants such as $b$-matching~\cite{kalyanasundaram2000optimal}, vertex weighted version~\cite{Aggarwal2011,devanurrandomized}, adwords~\cite{Buchbinder2007,DevenurH09,Mehta2007,devanurrandomized, devanur2012online,goel2008online,Aggarwal2011} and online market clearing~\cite{Blum2006}. Water-filling algorithms have been used for a few variants of the online bipartite matching problem (e.g. ~\cite{kalyanasundaram2000optimal,Buchbinder2007}).
Another line of research studies the problem under more relaxed adversarial models by assuming certain inherent randomness in the inputs~\cite{Feldman2009,Manshadi2011,Mahdian2011, Karande2011}. Online matching in general graphs has been studied under similar stochastic models~\cite{bansal2010lp}. To our knowledge, there is no result on this problem in the more restricted adversarial models other than the well-known $1/2$-competitive greedy algorithm, even for just bipartite graphs with vertices from both sides arriving online~\cite{Blum2006}.
Analyzing greedy algorithms for maximum matching in the offline setting is another related research area. Aronson et al.~\cite{Aronson1995} showed that a randomized greedy algorithm is a $\frac{1}{2}+\frac{1}{400,000}$-approximation. The factor was recently improved to $\frac{1}{2}+\frac{1}{256}$~\cite{poloczek12}. A new greedy algorithm with better ratio was presented in~\cite{goel12}. Our 0.526-competitive algorithm for online fractional matching complements these results.
\paragraph{Ski rental.} The ski rental problem was first studied in~\cite{karlin1988competitive}. Karlin et al. gave an optimal $\frac{1}{1-1/e}$-competitive algorithm in the oblivious adversarial model~\cite{Karlin1994}. There are many generalizations of ski rental. Of particular relevance are multislope ski rental~\cite{Lotker2008} and TCP acknowledgment~\cite{Karlin2001}, where the competitive ratio $\frac{1}{1-1/e}$ is still achievable.
The online vertex-weighted bipartite vertex cover problem presented in this paper is also of this nature and, in fact, further generalizes multislope ski rental, as shown in Appendix~\ref{sec:multislope}.
\paragraph{Online covering.} Another line of related research deals with online integral and fractional covering programs of the form $\min\{ cx\mid Ax\geq 1,0\leq x\leq u\}$, where $A\geq 0,u\geq 0$, and the constraints $Ax\geq 1$ arrive one after another~\cite{Buchbinder2009}. Our online vertex cover problem also falls under this category. The key difference is that the online covering problems are so general that the optimal competitive ratios are usually not constant but logarithmic in some parameters of the input.
Finally, online vertex cover for general graphs was studied by Demange et al.~\cite{Demange2005} in a model substantially different from ours. Their competitive ratios are characterized by the maximum degree of the graph.
\section{Preliminaries}
Given $G=(V,E)$, a vertex cover of $G$ is a subset of vertices $C\subseteq V$ such that for each edge $(u,v) \in E$, $C\cap \{u,v\} \neq \emptyset$. A matching of $G$ is a subset of edges $M\subseteq E$ such that each vertex $v \in V$ is incident to at most one edge in $M$.
$\by\in [0,1]^V$ is a fractional vertex cover if for any edge $(u,v)\in E$, $y_u+y_v \geq 1$. We call $y_v$ the {\em potential} of $v$. $\bx\in [0,1]^E$ is a fractional matching if for each vertex $u\in V$, $\sum_{v\in N(u)} x_{uv} \leq 1$. It is well-known that vertex cover and matching are dual of each other.
{\bf LPs for fractional vertex cover and matching.}
\begin{center}
\begin{tabular}{ | r l | r l | }
\hline
Primal (Matching): & & Dual (Vertex Cover): &\\
& $\max\sum_{e\in E} x_e$ & & $\min\sum_{v\in V}y_v$ \\
s.t. & $ x_v:=\sum_{u\in N(v)} x_{uv}\leq 1,\,\forall v\in V$ & s.t. & $y_u+y_v\geq 1,\,\forall (u,v)\in E$ \\
& $x\geq 0$ & & $y\geq 0$ \\
\hline
\end{tabular}
\end{center}
In this paper, the matching and vertex cover LPs are called the primal and dual LPs, respectively. By weak duality, we have $$\sum_{e\in E} x_e\leq\sum_{v\in V}y_v $$ for any feasible fractional matching $\bx$ and vertex cover $\by$.
{\bf Competitive analysis.}
We adopt the competitive analysis framework to measure the performance of online algorithms. The size of the vertex cover (or matching) found by an algorithm is compared against the (offline) optimal solution, in the worst case.
An algorithm, possibly randomized, is said to be \textit{$c$-competitive} if for any instance, the size of the solution $ALG$ found by the algorithm and the size of the optimal solution $OPT$ satisfy
\[
\mathbb{E}[ALG] \leq c\cdot OPT \quad\text{or}\quad \mathbb{E}[ALG]\geq c\cdot OPT
\]
depending on whether the optimization is a minimization or maximization problem. The constant $c$ is called the \textit{competitive ratio}.
A few different adversarial models have been considered in the literature. In this paper, we focus on the {\em oblivious adversarial} model, in which the adversary must specify the input once-and-for-all at the beginning and is not given access to the randomness used by the algorithm.
\subsection{Algorithms for online vertex cover and matching}
In the online setting, the vertices of $G$ arrive one at a time in an order determined by the adversary. When an online vertex $v$ arrives, all of its edges incident to the {\em previously arrived} vertices are revealed. We denote the set of arrived vertices by $T\subset V$ and $G(T)$ is the subgraph of $G$ induced by $T$.
An algorithm for online integral matching maintains a monotone matching $M$. As each vertex $v$ arrives, it must decide if $(u,v)$ should be added to $M$ for some previously unmatched $u\in N(v)\cap T$, where $N(v)$ denotes the set of neighbors of $v$ in $G$. No edge can be removed from $M$. The objective is to maximize the size of the final matching $M$. For online fractional matching, a fractional matching $\bx$ for $G(T)$ is maintained and at each step, $x_{uv}$ must be initialized for $u\in N(v)\cap T$ so that $\bx$ remains a fractional matching. The objective is to maximize the final $\sum_{e\in E} x_e$.
An algorithm for online integral vertex cover maintains a monotone vertex cover $C$. As each vertex $v$ arrives, it must insert a subset of $\{v\}\cup N(v)\backslash C$ into $C$ so that it remains a vertex cover. No vertex can be removed from $C$. The objective is to minimize the size of the final cover $C$. For online fractional vertex cover, a fractional vertex cover $\by$ for $G(T)$ is maintained and at each step, we must initialize $y_v$ and possibly increase some $y_u$ for $u\in T$ so that $\by$ remains a fractional vertex cover. The objective is to minimize the final $\sum_{v\in V}y_v$.
To simplify the terminology, we refer to the online vertex cover (matching) problem as the instances where all vertices in the graph arrive online. On the other hand, to be consistent with the existing terminology in the literature, we refer to the online {\em bipartite} vertex cover (matching) problem as the instances where the graph is bipartite and only the vertices on one side arrive online. This is the traditional case studied in the literature.
{\bf Weighted vertex cover and b-matching.} Our results can be generalized to cases of weighted vertex cover and b-matching.
For vertex cover, the objective function becomes $\sum_{v\in C}w_v$ (integral) or $\sum_{v\in V}w_vy_v$ (fractional), where $w_v\geq 0$ are weights on the vertices that are revealed to the algorithm when $v$ arrives.
For b-matching, the only difference is that each vertex can be matched up to $w_v\in\mathbb{N}$ times instead of just 1 (integral) or the constraint $x_v:=\sum_{u\in N(v)}x_{uv}\leq w_v$, where $w_v\geq 0$, replaces $x_v\leq 1$ (fractional). The LP formulations of the fractional versions of the two problems are given below.
{\bf LPs for fractional weighted vertex cover and b-matching.}
\begin{center}
\begin{tabular}{ | r l | r l | }
\hline
Primal: & & Dual: &\\
& $\max\sum_{e\in E} x_e$ & & $\min\sum_{v\in V}w_vy_v$ \\
s.t. & $ x_v:=\sum_{u\in N(v)} x_{uv}\leq w_v,\, \forall v\in V$ & s.t. & $y_u+y_v\geq 1,\, \forall (u,v)\in E$ \\
& $\bx\geq 0$ & & $\by\geq 0$ \\
\hline
\end{tabular}
\end{center}
\subsection{Rounding fractional vertex cover in bipartite graphs}
\label{sec:rounding}
We present a rounding scheme that converts any given algorithm for online {\em fractional} vertex cover to an algorithm for online {\em integral} vertex cover in bipartite graphs~\cite{NivPersonal}.\footnote{We previously had a more complex rounding scheme. We thank Niv Buchbinder for letting us present his simple scheme.} This allows us to obtain the integral version of our results on fractional vertex cover for bipartite graphs.
Let ${\bf y}$ be the fractional vertex cover maintained by the algorithm.
Sample $t\in [0,1]$ uniformly at random before the first online vertex arrives. Throughout the execution of the algorithm, assign $u\in L$ to the cover if $y_u\geq t$ and $v\in R$ to the cover if $y_v\geq 1-t$, where $L$ and $R$ are the left and right vertices of the graph $G$ respectively.
As $y_u$ and $y_v$ never decrease in the online algorithm, our rounding procedure guarantees that once a vertex enters the cover, it will always stay there.
We next claim that this scheme gives a valid cover. Since $\by$ is always feasible, we have $y_u+y_v\geq 1$ for all $(u,v)\in E$, and hence at least one of $y_u\geq t$ and $y_v\geq 1-t$ must hold. In other words, one of $u$ and $v$ must be in the cover.
Therefore the cover obtained by applying this scheme is indeed valid and monotone, as required.
Finally, for each vertex $v$ with final potential $y_v$, the probability that $v$ is in the cover after the rounding is exactly $y_v$. Therefore, by linearity of expectation, the expected size of the integral vertex cover after the rounding is exactly $\sum_{v\in L\cup R} y_v$. Hence, this rounding scheme does not incur a loss.
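The scheme can be sketched in a few lines of Python (the code and its naming are ours); it assumes the online algorithm exposes its current fractional cover after each arrival.
\begin{verbatim}
import random

# Sketch of the threshold rounding described above.
class OnlineCoverRounding:
    def __init__(self):
        self.t = random.random()        # sampled once, before any arrival
        self.cover = set()

    def update(self, y, side_of):
        """y: current fractional cover {v: y_v}; side_of[v] in {'L','R'}."""
        for v, yv in y.items():
            threshold = self.t if side_of[v] == 'L' else 1.0 - self.t
            if yv >= threshold:
                self.cover.add(v)       # vertices are never removed
        return self.cover
\end{verbatim}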
\section{Online bipartite vertex cover problem}
In this section, we study the online bipartite vertex cover problem, which is the dual of the traditional online bipartite matching problem. In this problem, the left vertices of the graph $G=(L,R,E)$ are offline and the right vertices in $R$ arrive online one at a time.
As mentioned in the introduction, online bipartite vertex cover generalizes the well known ski rental problem.
\begin{lemma}
Online bipartite vertex cover generalizes ski rental. In particular, no algorithm for online bipartite vertex cover achieves a competitive ratio better than $1+\alpha:=\frac{1}{1-1/e}$, which is the optimal ratio for ski rental~\cite{Karlin1994}.
\end{lemma}
\subsection{An optimal algorithm: $GreedyAllocation$}
\label{subsec:greedyallocation}
We present an optimal algorithm for the online vertex cover problem in bipartite graphs. Notice that the primal-dual analysis of the previously studied water-level algorithms on the online bipartite matching problem implies an optimal algorithm for the online bipartite vertex cover problem. Our algorithm applies the {\em water level} paradigm on {\em vertex cover} instead of {\em matching}.
This difference may appear trivial but it actually has profound consequences. In the water-filling algorithms for matching, the amount of water used is typically at most 1, i.e. the online vertex can be matched at most once. This is independent of the final water level. However, in vertex cover, we use at most $y+\alpha$ amount of water on the neighbors of the online vertex when the final water level is $y$. Our use of a general allocation function $f(\cdot)$ in the general graph case is partly inspired by this. Secondly, our new algorithm permits a novel charging-based analysis,
which encompasses several key observations that are helpful in developing our algorithm for online vertex cover in general graphs.
To avoid repetition, we present our algorithm in the general case as Algorithm~\ref{alg:general greedy} with allocation function $f(\cdot)$. For each vertex $v$, we maintain a non-decreasing cover potential $y_v$ which is initialized to $0$.
When an online vertex $v$ arrives, the edges between $v$ and $N(v)\cap T$ are revealed. In order to cover these new edges, we must increase the potential of $v$ and its neighbors. Suppose that we set $y_v=1-y$ after processing $v$. To maintain a feasible vertex cover, we must increase any $y_u<y$ for $u\in N(v)$ to $y$. We call $y$ the {\em water level}.
The trick here lies in how $y$ is determined. We consider a simple scheme in which $y$ is related to the total potential increment of $N(v)$. More precisely, we require that the total potential increment $\sum_{u\in N(v):y_u<y}(y-y_u)$ be at most $f(y)$, where $f$ is a positive continuous function on $[0,1]$.
For the online bipartite vertex cover problem considered in this section, the {\em allocation function} $f(y)=\alpha + y$ turns out to be an optimal choice. Another interpretation of this allocation function is that we spend at most $(1-y)+(\alpha + y)=1+\alpha$ amount of water on each online vertex. This observation will be crucial in the analysis.
\begin{algorithm}[h!]
\SetAlgoLined
\caption{$GreedyAllocation$ with allocation function $f(\cdot)$}
\label{alg:general greedy}
\KwIn{Online graph $G=(V,E)$ with offline vertices $U\subset V$}
\KwOut{A fractional vertex cover of $G$}
Initialize for each $u\in U$, $y_u = 0$\;
Let $T$ be the set of known vertices. Initialize $T=U$\;
\For{each online vertex $v$}
{
Maximize $y\le 1$, s.t., $\sum_{u\in N(v)\cap T} \max\{y-y_u,0\} \leq f(y)$\;
For each $u\in N(v)\cap T$, $y_u \leftarrow \max\{ y_u, y\}$\;
$y_v \leftarrow 1-y$\;
$T\leftarrow T\cup \{v\}$\;
}
Output $\{y_v\}$ for all $v\in V$\;
\end{algorithm}
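The following Python sketch (ours, not the authors' implementation) mirrors $GreedyAllocation$. The exact maximization of $y$ in the pseudocode is replaced by a downward grid scan, which is only an approximation; the default allocation function is the bipartite choice $f(y)=\alpha+y$.
\begin{verbatim}
import math

ALPHA = 1.0 / (math.e - 1.0)                     # alpha = 1/(e-1)

def water_level(nbrs, y, f, grid=10**4):
    """Largest t in [0,1] (up to 1/grid) with sum_u max(t-y[u],0) <= f(t)."""
    for i in range(grid, -1, -1):
        t = i / grid
        if sum(max(t - y[u], 0.0) for u in nbrs) <= f(t):
            return t
    return 0.0

def greedy_allocation(arrivals, neighbors, offline=(), f=lambda t: ALPHA + t):
    """neighbors[v] must contain only vertices known when v arrives."""
    y = {u: 0.0 for u in offline}                # offline vertices start at 0
    for v in arrivals:
        t = water_level(neighbors[v], y, f)
        for u in neighbors[v]:
            y[u] = max(y[u], t)                  # raise low potentials to t
        y[v] = 1.0 - t                           # the online vertex's potential
    return y                                     # sum(y.values()) = cover cost
\end{verbatim}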
\subsection{Analyzing $GreedyAllocation$}
We now analyze the performance of $GreedyAllocation$ with $f(y) = y +\alpha$ for the online bipartite vertex cover problem.
Let $C^*$ be a minimum vertex cover of $G$. Our strategy is to charge the potential increment to vertices of $C^*$ in such a way that each vertex of $C^*$ is charged at most $1+\alpha$.
Let $v$ be the current online vertex. Suppose that our algorithm sets $y_v=1-y$ for some $y$.
Let $y_u$ be the potential of $u\in N(v)$.
We consider two cases.
\underline{Case 1:} $v\in C^*$.
It is natural to charge the potential increment in $N(v)$ and $v$ to $v$.
By our construction, $v$ will be charged at most $1+\alpha$.
\underline{Case 2:} $v\notin C^*$. Notice that we must have $N(v)\subseteq C^*$. In this case, vertices of $N(v)$ should be responsible for the potential $y_v=1-y$ used by $v$.
We describe how to charge $1-y$ to $N(v)$ as follows.
Intuitively, if $\sum_{u\in N(v)} (y-y_u)=f(y)=\alpha + y$, the most fair scheme should charge $\frac{1-y}{f(y)}(y-y_u)$ to $u \in N(v)$ whose potentials increase
since the fair ``unit charge" is $\frac{1-y}{f(y)}$.
If $\frac{1-t}{f(t)}$ is decreasing, $\frac{1-y}{f(y)}(y-y_u)$ can be upper bounded by $\int_{y_u}^y \frac{1-t}{f(t)}\mathrm{d}t$. This observation motivates the next lemma which forms the basis of all the major results in this paper.
\begin{lemma}
\label{lem:charging}
Let $f:[0,1]\longrightarrow\mathbb{R}_+$ be continuous such that $\frac{1-t}{f(t)}$ is decreasing, and $F(x)=\int_0^x \frac{1-t}{f(t)}\mathrm{d}t$. If $\sum_{u\in X} (y-y_u)= f(y)$ for some set $X$ and $y\geq y_u$ for $u\in X$, then
$$1-y\leq \sum_{u\in X} \left(F(y)-F(y_u)\right).$$
\end{lemma}
\begin{proof}
We have the following
\begin{eqnarray*}
\sum_{u\in X}\left(F(y)-F(y_u)\right) & = & \sum_{u\in X}\int_{y_u}^y \frac{1-t}{f(t)}\mathrm{d}t\\
& \geq & \sum_{u\in X}(y-y_u)\frac{1-y}{f(y)}=1-y,
\end{eqnarray*}
where the inequality above holds as $\frac{1-t}{f(t)}$ is decreasing.
\end{proof}
We are ready to evaluate the performance of $GreedyAllocation$.
\begin{theorem}
\label{thm:no alternation}
$GreedyAllocation$ is $1+\alpha$-competitive and hence optimal for the online bipartite vertex cover problem.
\end{theorem}
\begin{proof}
We charge the potentials used to the vertices of the minimum cover $C^*$. Let $v$ be an online vertex. The case $v\in C^*$ is trivial as explained before.
Now consider the case $v\notin C^*$. We charge the potential spent on $u\in N(v)\subseteq C^*$ to $u$ itself. The potential spent on $v$ is $y_v = 1-y$ where $y$ is the final water level after processing $v$.
Let $X\subset N(v)$ be the set of vertices whose potentials increase when processing $v$.
If $y =1$, we are done as no charging is necessary. If $y <1$, then we have $\sum_{u\in X} (y-y_u) = \alpha +y$, where $y_u$ is the potential of $u$ before processing $v$. We charge each vertex $u\in X$ by $F(y)-F(y_u)$. By Lemma~\ref{lem:charging}, $1-y \leq \sum_{u\in X} (F(y)-F(y_u))$, i.e., our charging is sufficient.
In summary, each online vertex of $C^*$ is responsible for $1+\alpha$ potential. On the other hand, each left vertex of $C^*$ is responsible for itself (which contributes at most 1 to the cost) as well as the incoming charges from its neighbors. For $u\in L\cap C^*$, the sum of these charges can be at most $F(1)-F(0)$ as the sum $F(y)-F(y_u)$, taken over the iterations in which $y_u$ increases, telescopes. Therefore the amount of potential charged to a left vertex is also bounded by $1+F(1)-F(0) = (\alpha+1)\ln (1+\frac{1}{\alpha}) = 1+\alpha$ since $\alpha = \frac{1}{e-1}$.
This gives our desired result.
\end{proof}
In fact, $GreedyAllocation$ can be extended to the vertex-weighted setting. To avoid diversion from the main results, we defer the proof of the following theorem to the appendix.
\begin{theorem}
$GreedyAllocation$ (modified) is $1+\alpha$-competitive and hence optimal for online vertex-weighted bipartite vertex cover.
\end{theorem}
\section{Online fractional vertex cover in general graphs}
The lessons learned in the last section are actually much more general. As suggested in the description of $GreedyAllocation$, we can generalize the algorithm to general graphs. However, we have to carefully design the allocation function $f(\cdot)$ to get a non-trivial competitive ratio.
Before getting into the details, we revisit the analysis in the last section to gain some insights which will be helpful to tackle the general graph version of the problem. In our charging argument, each vertex in $L\cap C^*$ is responsible for the charges from its neighbors. On the other hand, a vertex in $R\cap C^*$ is only responsible for the potential increment when processing itself. However, if both vertices in $L$ and $R$ are online, an online vertex $v\in C^*$ should be responsible for the potential used to process it when it arrives as well as the charges from future neighbors.
Let $f(x)$ be a general allocation function such that $\frac{1-t}{f(t)}$ is decreasing. Informally, if the water level when processing $v$ is $y<1$, i.e. the initial potential of $v$ is $1-y$, we use potential $f(y)$ on $v$'s neighbors and $1-y$ on $v$ itself. Afterwards, $v$ will take charges from its future neighbors. Notice that $v$'s potential will grow from $1-y$ to at most $1$. By Lemma~\ref{lem:charging}, $v$ will take charges of at most $\int_{1-y}^1 \frac{1-t}{f(t)} \mathrm{d}t$. Putting the two pieces together, the total charges to each $v\in C^*$ and hence the competitive ratio are at most
\[
\beta(f) = \max_{z\in [0,1]} 1 + f(1-z) + \int_{z}^1 \frac{1-t}{f(t)}\mathrm{d}t.
\]
We will show how to compute the optimal allocation function $f(\cdot)$ in Sec.~\ref{sec:optimization}. We now formally show that the competitive ratio of $GreedyAllocation$ in general graphs with allocation function $f(\cdot)$ is at most $\beta(f)$.
\begin{lemma}
\label{lem:waterlevel}
Let $f(\cdot)$ be the allocation function.
In processing vertex $v$ in $GreedyAllocation$, we must have either $y=1$ or $\sum_{u\in N(v)} \max\{y-y_u,0\}= f(y)$.
\end{lemma}
\begin{proof}
Let $H(t)=\sum_{u\in N(v)}\max\{t-y_{u},0\}- f(t)$. Note that $H$ is continuous and $H(0)=-f(0)<0$.
Assume $y< 1$. Notice that $H(1)>0$; otherwise we could set $y=1$. If $H(y) <0$, then by the intermediate value theorem there is some $t\in (y,1)$ for which $H(t)=0$. This contradicts the maximality of $y$. Hence $H(y)=0$, as desired.
\end{proof}
Our previous discussion implies that $GreedyAllocation$ is competitive against the minimum {\em integral} vertex cover. In fact, our algorithm is also competitive against the minimum {\em fractional} vertex cover in general graphs.
\begin{theorem}
Let $f: [0,1]\longrightarrow \mathbb{R}_+$ be the continuous allocation function such that $\frac{1-t}{f(t)}$ is decreasing.
Let $\beta=\max_{z\in [0,1]} 1+f(1-z)+\int_z^1 \frac{1-t}{f(t)}\mathrm{d}t$ and $F(x)=\int_0^x \frac{1-t}{f(t)}\mathrm{d}t$.
$GreedyAllocation(f)$ is $\beta$-competitive against the optimal fractional vertex cover in general graphs.
\end{theorem}
\begin{proof}
Let $ y^*$ be the minimum fractional vertex cover. Denote by $v$ the current online vertex. Consider the following charging scheme.
\begin{itemize}
\item Charge $\left(f(y)+1-y\right)y_v^*$ to $v$.
\item Charge $\left(y-y_u+F(y)-F(y_u)\right)y_u^*$ to $u\in X$, where $X=\{ u\in N(v)\mid y_u<y\}$.
\end{itemize}
We claim that the total charges are sufficient to cover the potential increment $1-y+\sum_{u\in X}(y-y_u)$.
Observe that $y_v^*+y_u^*\geq 1$ for all $u\in N(v)$. Since $f(y)\geq \sum_{u\in X}(y-y_u)$, we have
\begin{eqnarray*}
f(y)y_v^*+\sum_{u\in X}(y-y_u)y_u^* & \geq & \sum_{u\in X}(y-y_u)(y_v^*+y_u^*)\\
& \geq & \sum_{u\in X}(y-y_u).
\end{eqnarray*}
Furthermore,
\begin{eqnarray*}
&&(1-y)y_v^*+\sum_{u\in X}\left(F(y)-F(y_u)\right)y_u^* \\
&\geq& (1-y)y_v^*+\sum_{u\in X}\left(F(y)-F(y_u)\right)(1-y_v^*)\\
&\geq& 1-y,
\end{eqnarray*}
where the last inequality follows from Lemma~\ref{lem:charging}.
The above shows that the proposed charging scheme indeed accounts for the total potential increment. Now we bound the total charges to a vertex $v$ over the execution of the algorithm.
When $v$ arrives, $y_v$ is initialized as $1-y$ and $v$ is charged $\left(f(y)+1-y\right)y_v^*$. After that, when $y_v$ increases from $a$ to $b$, $v$ is charged $\left(b-a+F(b)-F(a)\right)y_v^*$. Note that the sum of these terms telescopes and is at most $$\left(1-(1-y)+F(1)-F(1-y)\right)y_v^*=\left(y+F(1)-F(1-y)\right)y_v^*.$$
Therefore the total charges to $v$ are at most
\begin{eqnarray*}
&&\left(f(y)+1-y\right)y_v^*+\left(y+F(1)-F(1-y)\right)y_v^*\\
&=& \left(1+f(y)+\int_{1-y}^1 \frac{1-t}{f(t)}\mathrm{d}t\right)y_v^*\\
&\leq& \beta y_v^*.
\end{eqnarray*}
This implies that the total potential is bounded by $\beta\sum_{v\in V} y_v^*$, which shows that our algorithm is $\beta$-competitive.
\end{proof}
\subsection{Computing the optimal allocation function}
\label{sec:optimization}
The next question is then to find a good $f(y)$ to get a small $\beta$. In essence, the goal is to solve the following optimization problem
\begin{equation}
\label{eqn:opt}
\inf_{f\in \mathcal{F}}\max_{z\in [0,1]} 1+f(1-z)+\int_z^1 \frac{1-t}{f(t)}\mathrm{d}t,
\end{equation}
where $\mathcal{F}$ is the class of positive continuous functions on $[0,1]$ such that $\frac{1-t}{f(t)}$ is decreasing for each $f\in \mathcal{F}$.
To the best of our knowledge, there is no systematic approach to tackle a minimax optimization problem of this form. A natural way is to first express the optimal $z$ in terms of $f$, and then use techniques from the calculus of variations to compute the best $f$. However, a major difficulty is that there is no closed form expression for the optimal $z$.
To overcome this hurdle, we first disregard the requirement that $\frac{1-t}{f(t)}$ be decreasing. (Though, our final optimal solution turns out to satisfy this condition.) We show that such a relaxation of the optimization problem admits a very nice optimality condition, namely that there exists some optimal $f$ such that $1+f(1-z)+\int_z^1 \frac{1-t}{f(t)}\mathrm{d}t$ is constant for all $z$. We characterize this property in the following lemma.
\begin{lemma}
Let $r:[0,1]\longrightarrow\mathbb{R}_{+}$ be a
continuous function such that for all $p\in [0,1]$, $r(p)+\int_{1-p}^{1}\frac{1-x}{r(x)}\mathrm{d}x\leq\gamma$
for some $\gamma>0$. Then there exists a continuous function $f:[0,1]\longrightarrow\mathbb{R}_{+}$
such that for all $p\in [0,1]$, $f(p)+\int_{1-p}^{1}\frac{1-x}{f(x)}\mathrm{d}x =\gamma$.
\end{lemma}
\begin{proof}
Let $r_{1}=r$ and $R_{1}(p)=r_{1}(p)+\int_{1-p}^{1}\frac{1-x}{r_{1}(x)}\mathrm{d}x$.
Define two sequences of functions $\{r_{i}\},\{R_{i}\}$ recursively
as follows:
\[
r_{i+1}=r_{i}+\gamma-R_{i},R_{i+1}(p)=r_{i+1}(p)+\int_{1-p}^{1}\frac{1-x}{r_{i+1}(x)}\mathrm{d}x.
\]
Note that $r_{i},R_{i}$ are positive and continuous for every $i$. We first show $R_{i}\leq\gamma$ by induction. The base case $i=1$ is trivial. Now assume $R_i\leq \gamma$ for some $i$; this implies that $r_i\leq r_{i+1}$. Then
\begin{align*}
R_{i+1}(p)&=r_{i+1}(p)+\int_{1-p}^{1}\frac{1-x}{r_{i+1}(x)}\mathrm{d}x
\leq r_{i+1}(p) +\int_{1-p}^{1}\frac{1-x}{r_{i}(x)}\mathrm{d}x \\
&=r_{i+1}(p) + R_i(p) - r_i(p) = \gamma.
\end{align*}
Therefore $R_i\le \gamma$ for all $i$ and consequently $r_i\leq r_{i+1}$.
Observe that $r_{i}$ converges pointwise as
$r_{i}$ is bounded by $\gamma$ and monotonically increases. Let $r_{\infty}=\lim_{i\rightarrow\infty}r_{i}$.
Moreover, since $r_{i+1}=r_{i}+\gamma-R_{i}$, $R_{\infty}=\lim_{i\rightarrow\infty}R_{i}\equiv\gamma$.
On the other hand, we have
\begin{align*}
\gamma=R_{\infty}(p) &=\lim_{i\rightarrow\infty}\left(r_{i}(p)+\int_{1-p}^{1}\frac{1-x}{r_{i}(x)}\mathrm{d}x\right)\\
&=r_{\infty}(p)+\lim_{i\rightarrow\infty}\int_{1-p}^{1}\frac{1-x}{r_{i}(x)}\mathrm{d}x.
\end{align*}
By the dominated convergence theorem, $\lim_{i\rightarrow\infty}\int_{1-p}^{1}\frac{1-x}{r_{i}(x)}\mathrm{d}x=\int_{1-p}^{1}\frac{1-x}{r_{\infty}(x)}\mathrm{d}x$
since $\frac{1-x}{r_{i}(x)}$ is bounded by $\frac{1-x}{r_{1}(x)}$.
By taking the limit in the second recurrence, we get
\[
r_{\infty}(p)=\gamma-\int_{1-p}^{1}\frac{1-x}{r_{\infty}(x)}\mathrm{d}x
\]
which implies $r_{\infty}$ is continuous and hence satisfies our requirement.
\end{proof}
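The iteration in the proof can be carried out numerically. The following sketch (our illustration; the grid, the value $\gamma=1.5$ and the starting function $r\equiv 0.5$ are our choices, with $R_1(p)=0.5+p^2\leq\gamma$ guaranteeing feasibility) shows the monotone convergence of $R_i$ towards the constant $\gamma$.
\begin{verbatim}
import numpy as np

gamma = 1.5                                # our choice of gamma
x = np.linspace(0.0, 1.0, 2001)            # uniform grid on [0,1]
r = np.full_like(x, 0.5)                   # feasible start: R_1(p) = 0.5 + p^2

def R_of(r):
    g = (1.0 - x) / r                      # integrand (1-t)/r(t)
    cum = np.concatenate(([0.0], np.cumsum((g[:-1] + g[1:]) / 2 * np.diff(x))))
    tail = cum[-1] - cum                   # tail[j] = integral from x[j] to 1
    return r + tail[::-1]                  # R(p) = r(p) + int_{1-p}^1 (1-t)/r(t) dt

for _ in range(500):
    r = r + (gamma - R_of(r))              # the lemma's update; r grows pointwise
print("max gap between R and gamma:", np.max(np.abs(gamma - R_of(r))))
\end{verbatim}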
Therefore, it is sufficient to consider functions $f$ that satisfy this optimality condition. A consequence is that $f(1-z)=\beta-1-\int_z^1 \frac{1-t}{f(t)}\mathrm{d}t$ is actually differentiable. Differentiating $1+f(1-z)+\int_z^1 \frac{1-t}{f(t)}\mathrm{d}t$ yields $-f'(1-z)-\frac{1-z}{f(z)}=0$, or equivalently, $$f(z)f'(1-z)=z-1.$$
Although this differential equation is atypical as $f(z)$ and $f'(1-z)$ are not taken at the same point, surprisingly it has closed form solutions, as given below.
\begin{lemma}
Let $r$ be a non-negative differentiable function on $[0,1]$ such that $r(z)r'(1-z)=z-1$. Then $$r(z)=\left(\frac{1+k}{2}-z\right)^{\frac{1+k}{2k}}\left(z+\frac{k-1}{2}\right)^{\frac{k-1}{2k}},$$where $k\geq1$. Moreover, $\frac{1-t}{r(t)}$ is decreasing for $t\in [0,1]$.
\end{lemma}
\begin{proof}
We have
\[
r(p)r'(1-p)=p-1.
\]
Replacing $p$ by $1-p$, we get
\begin{equation}
\label{eqn:p}
r(1-p)r'(p)=-p.
\end{equation}
Hence,
\begin{equation}
\label{eqn:pp2c}
(r(p)r(1-p))'=1-2p\implies r(p)r(1-p)=p-p^{2}+c
\end{equation}
for some $c$. Note that $r(0)r(1)=c \ge 0$.
From Eqn~(\ref{eqn:p}) and (\ref{eqn:pp2c}), we get $r'(p)/r(p)=p/(p^{2}-p-c)$.
Let $k=\sqrt{1+4c}\geq 1$. By taking partial fraction and using $(\ln r(p))'=r'(p)/r(p)$,
\[
\frac{r'(p)}{r(p)}=\frac{1}{2k}\left(\frac{1+k}{p-\frac{1+k}{2}}-\frac{1-k}{p-\frac{1-k}{2}}\right)\implies r(p)=D\frac{\left|p-\frac{1+k}{2}\right|^{\frac{1+k}{2k}}}{\left|p-\frac{1-k}{2}\right|^{\frac{1-k}{2k}}}
\]
for some constant $D$. It is easy to check that $r(p)r(1-p)=D^{2}(p-p^{2}+c)\implies D=1$. Since $k\geq 1$, we get the required $r(p)$.
Now we show that $\frac{1-t}{r(t)}$ is decreasing for $t\in [0,1]$. The derivative of $\frac{1-t}{r(t)}$ has the same sign as $-1-(1-t)r'(t)/r(t)=-1-(1-t)t/(t^{2}-t-c)=c/(t^{2}-t-c)\leq 0$, where the last inequality uses $c\geq 0$ and $t^{2}-t-c\leq 0$ on $[0,1]$, as desired.
\end{proof}
The final step is just to select the best $f$ from the family of solutions. Since $1+f(1-z)+\int_z^1 \frac{1-t}{f(t)}\mathrm{d}t$ is constant, it suffices to find the smallest $1+f(0)$, which corresponds to the case $k\approx1.1997$, the real fixed point of the hyperbolic cotangent function.\footnote{The optimal $k$ is closely related to the Laplace limit in the solution of Kepler's equation~\cite{weisstein}.}
\begin{theorem}
\label{thm:vcgeneral}
Let $f(z)=\left(\frac{1+k}{2}-z\right)^{\frac{1+k}{2k}}\left(z+\frac{k-1}{2}\right)^{\frac{k-1}{2k}}$, where $k\approx1.1997$. $GreedyAllocation(f)$ is 1.901-competitive for the fractional online vertex cover problem in general graphs.
\end{theorem}
Finally, we remark that our algorithm can be viewed as a generalization of the well-known greedy algorithm because the solution $f(z)=1-z$ (with $k=1$) is equivalent to a variant of the greedy algorithm.
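The constants in Theorem~\ref{thm:vcgeneral} can be checked numerically. The following sketch (ours, using SciPy) solves $k=\coth k$, evaluates $1+f(1-z)+\int_z^1\frac{1-t}{f(t)}\mathrm{d}t$ on a grid of $z$ values, and confirms that it is numerically constant and approximately $1.901$.
\begin{verbatim}
import math
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

k = brentq(lambda t: t - 1.0 / math.tanh(t), 1.0001, 2.0)   # k = coth(k) ~ 1.1997

def f(z):
    return ((1 + k) / 2 - z) ** ((1 + k) / (2 * k)) \
           * (z + (k - 1) / 2) ** ((k - 1) / (2 * k))

def ratio_at(z):                       # 1 + f(1-z) + int_z^1 (1-t)/f(t) dt
    return 1 + f(1 - z) + quad(lambda t: (1 - t) / f(t), z, 1)[0]

vals = [ratio_at(z) for z in np.linspace(0.0, 1.0, 101)]
print(min(vals), max(vals))            # both ~ 1.9007, i.e. the 1.901 bound
\end{verbatim}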
\section{Online fractional matching in general graphs}
We give a primal-dual analysis of the algorithm given in the last section.
A by-product of this primal-dual analysis is a $\frac{1}{1.901}\approx 0.526$-competitive algorithm for online fractional matching in general graphs.
Let $\beta\approx 1.901$ be the competitive ratio established in the last section and $f(z)$ be the same as that of Theorem~\ref{thm:vcgeneral}. Our primal-dual analysis shares some similarities with the one for online bipartite fractional matching by Buchbinder et al.~\cite{Buchbinder2007}.
Our algorithm $PrimalDual$ applies to both online fractional vertex cover and matching. When restricted to the dual, it is identical to $GreedyAllocation$.
\begin{algorithm}[h!]
\SetAlgoLined
\caption{$PrimalDual$}
\label{alg:primaldual}
\KwIn{Online graph $G=(V,E)$}
\KwOut{A fractional vertex cover $\{y_v\}$ of $G$ and a fractional matching $\{x_{uv}\}$.}
Let $T$ be the set of known vertices. Initialize $T=\emptyset$\;
\For{each online vertex $v$}
{
Maximize $y\le 1$, s.t., $\sum_{u\in N(v)\cap T} \max\{y-y_u,0\} \leq f(y)$\;
Let $X = \{u \in N(v)\cap T \,\mid\, y_u < y\}$\;
\For{each $u\in X$}
{
$x_{uv}\longleftarrow \frac{y-y_u}{\beta}\left(1+\frac{1-y}{f(y)}\right)$\;
$y_u\leftarrow y$\;
}
For each $u \in (N(v)\cap T)\setminus X$, $x_{uv}\longleftarrow 0$\;
$y_v \leftarrow 1-y$\;
$T\leftarrow T\cup \{v\}$\;
}
Output $\{y_v\}$ for all $v\in V$\;
\end{algorithm}
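The next Python sketch (our own simulation, not the authors' code) runs $PrimalDual$ on a small random graph and checks feasibility of both solutions as well as the invariants stated below, up to the error introduced by approximating the water level on a grid; the constants $k\approx 1.1997$ and $\beta\approx 1.901$ are taken from Theorem~\ref{thm:vcgeneral}.
\begin{verbatim}
import random

K, BETA = 1.1997, 1.901                       # approximate k and beta from above

def f(z):
    return ((1 + K) / 2 - z) ** ((1 + K) / (2 * K)) \
           * (z + (K - 1) / 2) ** ((K - 1) / (2 * K))

def water_level(nbrs, y, grid=2000):
    for i in range(grid, -1, -1):
        t = i / grid
        if sum(max(t - y[u], 0.0) for u in nbrs) <= f(t):
            return t
    return 0.0

def primal_dual(n=40, p=0.3, seed=0):
    rng, y, x, edges = random.Random(seed), {}, {}, []
    for v in range(n):                                     # arrival order 0..n-1
        nbrs = [u for u in range(v) if rng.random() < p]   # edges to earlier vertices
        edges += [(u, v) for u in nbrs]
        t = water_level(nbrs, y)
        for u in nbrs:
            x[(u, v)] = max(t - y[u], 0.0) / BETA * (1 + (1 - t) / f(t))
            y[u] = max(y[u], t)
        y[v] = 1.0 - t
    return y, x, edges

y, x, edges = primal_dual()
assert all(y[u] + y[v] >= 1 - 1e-9 for u, v in edges)      # y is a fractional cover
load = {v: 0.0 for v in y}
for (u, v), val in x.items():
    load[u] += val
    load[v] += val
assert all(l <= 1 + 1e-9 for l in load.values())           # x is a fractional matching
print(sum(y.values()) / sum(x.values()))                   # ~ BETA, up to grid error
\end{verbatim}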
To analyze the performance, we claim that the following two invariants hold throughout the execution of the algorithm.
\textbf{Invariant 1:} $$\frac{y_{u}+f(1-z_u)+\int_{z_u}^{y_{u}}\frac{1-t}{f(t)}\mathrm{d}t}{\beta}\geq x_{u},$$ where $z_u$ is the potential of $u$ set upon its arrival, $y_u$ is the current potential of $u$ and $x_u = \sum_{v \in N(u)} x_{uv}$ is the sum of the potentials on the edges incident to $u$. Note that the LHS is at most 1 (see last section for details), which guarantees that the primal is feasible as long as the invariant holds.
\textbf{Invariant 2:} $$\sum_{u\in T} y_{u}=\beta\sum_{(u,v)\in E\cap T^2} x_{uv}$$
Invariant 2 guarantees that the primal and dual objective values are within a factor of $\beta$ from each other. By weak duality, this implies that the algorithm is $\beta$-competitive for online fractional vertex cover and $\frac{1}{\beta}$-competitive for online fractional matching in general graphs.
Note that both invariants trivially hold at the beginning.
The idea behind Invariant 1 is to enforce some kind of correlation between $y_u$ and $x_u$. For instance, when $y_u$ is small, $x_u$ should not be excessively large because $x_u$ must be increased to (partially) offset any future increase in $y_u$ in order to maintain Invariant 2.
We claim that both invariants are preserved.
\begin{lemma}[Invariant 2]
\label{lem:inv2}
In each iteration of the algorithm, the increase in the dual objective value is exactly $\beta$ times that of the primal.
\end{lemma}
\begin{proof}
The dual increment is $$1-y+\sum_{u\in X}(y-y_u)$$ and the primal increment is $$\sum_{u\in X} \frac{y-y_u}{\beta}\left( 1 + \frac{1-y}{f(y)}\right).$$
Thus it suffices to show that $1-y=\sum_{u\in X}(y-y_u)\frac{1-y}{f(y)}$. This follows from Lemma~\ref{lem:waterlevel}, which states that we have either $y=1$ or $\sum_{u\in X}(y-y_u)=f(y)$.
\end{proof}
\begin{lemma}[Invariant 1]
\label{lem:inv1}
After processing online vertex $v$, we have $x_v\leq \frac{y_v+f(1-y_v)}{\beta}$ and $x_u\leq \frac{y+f(1-z_u)+\int_{z_u}^{y}\frac{1-t}{f(t)}\mathrm{d}t}{\beta}$ for $u\in X$.
\end{lemma}
\begin{proof}
Note that $x_v=\sum_{u\in X}x_{uv}$ is just the increase in the primal objective value. By Invariant 2, $x_v=\frac{1-y+\sum_{u\in X}(y-y_u)}{\beta}$. Our claim for $x_v$ follows since $y_v=1-y$ and $\sum_{u\in X}(y-y_u)\leq f(y)$.
By Invariant 1, the previous $x_u$ satisfies $$x_u-x_{uv}\leq\frac{y_{u}+f(1-z_u)+\int_{z_u}^{y_{u}}\frac{1-t}{f(t)}\mathrm{d}t}{\beta}.$$
The proof is finished by noticing that $$x_{uv}=\frac{y-y_u}{\beta}\left( 1 + \frac{1-y}{f(y)}\right)\leq \frac{1}{\beta} \left( y-y_u+ \int_{y_u}^y \frac{1-t}{f(t)}\mathrm{d}t\right),$$ as $\frac{1-t}{f(t)}$ is a decreasing function.
\end{proof}
Finally, it is clear that the dual is always feasible. The primal is feasible because $x\geq 0$ and Invariant 1 guarantees that $x_v\leq 1$, as discussed earlier. Combining this and the two lemmas, we have our main result.
\begin{theorem}
\label{thm:pdgeneral}
Our algorithm is $\beta\approx 1.901$-competitive for online fractional vertex cover and $\frac{1}{\beta}\approx 0.526$-competitive for online fractional matching in general graphs.
\end{theorem}
It is possible to extend our algorithm to the vertex-weighted fractional vertex cover problem and the fractional b-matching problem, as shown in the appendix.
\begin{theorem}
\label{thm:vertexweighted}
There exists an algorithm that is $\beta\approx 1.901$-competitive for online vertex-weighted fractional vertex cover and $\frac{1}{\beta}\approx 0.526$-competitive for online capacitated fractional matching in general graphs.
\end{theorem}
\section{Hardness Results}
In this section, we obtain new hardness results in our model. All of our hardness results are obtained by considering appropriate bipartite graphs.
Let $G=(L,R,E)$ be a bipartite graph with left vertices $L$ and right vertices $R$. We study different variants of the online vertex cover and matching problems by imposing certain constraints on the vertex arrival order.
\begin{itemize}
\item {\bf 1-alternation.} The left vertices $L$ are offline and the right vertices in $R$ arrive online. When a vertex $v\in R$ arrives, all its incident edges are revealed. This is the case studied in the literature.
\item {\bf $k$-alternation:} There are $k$ phases and $L_0\subseteq L$ is the set of offline vertices. In each phase $1\leq i\leq k$, if $i$ is odd (resp. even), vertices from a subset of $R$ (resp. $L$) arrive one by one.
The {\em 1-alternation} case above corresponds to $k =1$.
Note that the case $k=\infty$ effectively removes any constraint on the vertex arrival order, and is called the unbounded alternation case below.
\item {\bf Unbounded alternation:} The vertices in $L\cup R$ arrive in an arbitrary order.
\end{itemize}
\subsection{Lower bounds for the online vertex cover problem}
We give lower bounds on the competitive ratios for online bipartite vertex cover with 2- and 3-alternation, and an upper bound for online bipartite matching with 2-alternation. These hardness results also apply to the more general problems of online vertex cover and matching in general graphs.
\begin{proposition}
There is a lower bound of $1+\frac{1}{\sqrt{2}}\approx 1.707$
for online bipartite vertex cover with 2-alternation.
\end{proposition}
\begin{proof}
It suffices to establish the result for the fractional version of the problem. Suppose that an algorithm $A$ is $(1+\beta)$-competitive.
Without loss of generality, we may assume that $A$ is deterministic.
Our approach is to bound $\beta$ by considering a family of complete bipartite graphs. Thus a new online vertex is always adjacent to all the vertices on the other side.
Let $|L_0|=d$ and $y$ be the fractional vertex cover maintained by $A$.
We claim that after processing the $i$-th vertex in $R_1$, we have
$$\sum_{u\in L_0} y_u\leq i\beta.$$
The reason is that the adversary can generate infinitely many left online vertices in phase 2 and hence $y_v$, for any $v\in R_1$, converges to 1 (otherwise, if $y_v$, which monotonically increases, converges to some $l<1$, then $y_u\geq 1-l$ for $u\in L_2$ and the cost of the vertex cover found is unbounded while the optimal solution is at most $i$).
Let $v_i^{(1)}$ be the $i$th vertex in $R_1$. Next we claim that $$y_{v_{i}^{(1)}}\geq 1-\frac{i\beta}{d}.$$ Since $\sum_{u\in L_0} y_u\leq i\beta$ after processing $v_{i}^{(1)}$, by the pigeonhole principle there must be some $y_u\leq \frac{i\beta}{d}$. To maintain a valid vertex cover, we need $y_{v_{i}^{(1)}}\geq 1-\frac{i\beta}{d}$.
Finally, we have
\begin{equation}
\label{eqn:yv}
\sum_{v\in R_1} y_v\leq d\beta.
\end{equation}
Otherwise, the adversary can generate infinitely many online vertices to append to $R_1$, in which case
$y_u$ will be increased to $1$ eventually for all $u\in L_0$, i.e., $\sum_{u\in L_0}y_u = d$.
This contradicts the fact that $A$ is $(1+\beta)$-competitive.
Now by taking $|R_1|=\sqrt{2}d$, we get$$\sum_{i=1}^{\sqrt{2}d}\left( 1-\frac{i\beta}{d}\right) \leq\sum_{v\in R_1} y_v\leq d\beta ,$$
from which our desired result follows by taking $d\longrightarrow\infty$.
\end{proof}
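The limit computation left implicit in the last step can be checked numerically: summing the arithmetic series gives $\beta\geq m/(d+m(m+1)/(2d))$ with $m=\lfloor\sqrt{2}d\rfloor$, which tends to $1/\sqrt{2}$ as $d$ grows. The short sketch below (ours) evaluates this bound for increasing $d$.
\begin{verbatim}
import math

for d in (10**3, 10**5, 10**7):
    m = math.isqrt(2 * d * d)                  # floor(sqrt(2) * d)
    beta_min = m / (d + m * (m + 1) / (2 * d))
    print(d, 1 + beta_min)                     # tends to 1 + 1/sqrt(2)
print(1 + 1 / math.sqrt(2))                    # ~ 1.7071
\end{verbatim}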
\begin{proposition}
There is a lower bound of $1+\sqrt{\frac{1}{2}\left(1+\frac{1}{e^2}\right)}\approx1.753$
for online bipartite vertex cover with 3-alternation.
\end{proposition}
\begin{proof}
Again, let $d=|L_{0}|$. We extend the idea used in the proof of the
bound $1+\frac{1}{\sqrt{2}}$ for 2-alternation. Let $x_{i}$ be the
amount of resources spent on $L_{0}$ by the $i$-th vertex of $R_{1}$, i.e. the increment in the potential of $L_0$.
Let $y_{i}$ be its own potential. Then $y_{i}\geq 1-(x_{1}+\cdots+x_{i})/d$,
$x_{1}+\cdots+x_{i}\leq i\beta$ and $y_{1}+\cdots+y_{i}\leq d\beta$ by the argument used in the proof of the preceding proposition.
The new idea is that in phase 2, assuming $|R_1| =i$, at most $i\beta$ resources can be spent
on $L_{0}$ and $L_{2}$. (This is because the adversary can append infinitely many vertices to the current $L_2$.) Now consider the $j$-th vertex $u_j$ in $L_2$. Similar to Eqn.~(\ref{eqn:yv}),
we have $\sum_{v\in R_1} y_v \leq (d+j)\cdot \beta$ after processing $u_j$, since the adversary can append infinitely many online vertices to $R_3$. Consequently, $y_{u_j} \geq 1- \min\{y_v \mid v\in R_1 \} \geq 1- (d+j)\cdot \beta/i$ by the pigeonhole principle. Therefore,
\[
x_{1}+\cdots+x_{i}\leq i\beta-\sum_{j=1}^{\ell}\left(1-\frac{(d+j)\beta}{i}\right),
\]
where $\ell = |L_2|$.
Let $X(i)=x_1+x_2+\cdots+x_i$.
If $i \le d\beta$, we have $X(i) \leq i\beta$. When $i>d\beta$, by setting $\ell = \frac{i}{\beta} - d$, we have
\begin{align}
X(i) &= i\beta - \left(1-\frac{d\beta}{i}\right)\ell+\frac{\beta}{2i}\ell^2+O(1) \nonumber\\
&= d+i\beta-\frac{i}{2\beta}-\frac{\beta d^2}{2i}+O(1).\nonumber
\label{eqn:Xi}
\end{align}
Notice that our bound on $X(i)$ holds for arbitrary $i$, since the adversary can arbitrarily manipulate the future input graph to fool the deterministic algorithm.
Since $y_i\geq 1-X(i)/d$ and $\sum_{i=1}^k y_i \leq d\beta$ for any $k$, we have
\[
\sum_{i=1}^k \left(1-\frac{X(i)}{d}\right) \leq d\beta.
\]
Let $\alpha = \frac{1}{\sqrt{2\beta^2-1}}$. By setting $k = \beta \alpha d$ and considering $i\leq d\beta$, $i>d\beta$ separately,
we get
\begin{align}
d\beta &\geq \sum_{i=1}^{d\beta} \left(1-\frac{i\beta}{d}\right) +\sum_{i=d\beta +1}^k \left(\frac{i}{2\beta d}+\frac{\beta d}{2i}-\frac{i\beta}{d}+O(1/d)\right)\nonumber
\end{align}
By taking $d\longrightarrow\infty$ and using $\sum_{i=1}^n 1/i\approx \ln n$, we have the desired result.
\end{proof}
\subsection{Upper bounds for the online matching problem}
Before establishing our last result on the upper bound for online bipartite matching with 2-alternation, we review how the bound $1-1/e$ is proved for the original problem (i.e. 1-alternation) as the same technique is used in a more complicated way. The next proof is a variant of that in \cite{Karp1990}.
\begin{proposition}
There is an upper bound of $1-1/e\approx 0.632$ for online bipartite matching (with 1-alternation).
\end{proposition}
\begin{proof}
Again, we can consider only the fractional version of the problem and deterministic algorithms. Suppose that an algorithm maintains a fractional matching $x$. Let $L=\{ u_1,...,u_n\}$ and $R=\{ v_1,...,v_n\}$, with $v_i$ adjacent to $u_1,...,u_{n+1-i}$. The size of the maximum matching is clearly $n$. Let $v_1,...,v_n$ be the order in which the online vertices arrive.
Observe that when $v_i$ arrives, $u_1,...,u_{n+1-i}$ are indistinguishable from each other. Thus $x_{v_i}:=\sum_{u\in N(v_i)} x_{uv_i}$ should be evenly distributed to $u_1,...,u_{n+1-i}$, i.e. $x_{uv_i}=\frac{x_{v_i}}{n+1-i}$. This argument can be made formal by considering graphs isomorphic to $G$ with the labels of vertices in $L$ being randomly permuted.
Thus, after processing $v_k$ we have $$x_{u_i}=\frac{x_{v_1}}{n}+...+\frac{x_{v_k}}{n+1-k}.$$ Moreover, the size of the matching found is $x_{v_1}+\cdots+x_{v_n}$ and $x_{v_1},\cdots,x_{v_n}$ satisfy $\frac{x_{v_1}}{n}+\cdots+\frac{x_{v_n}}{1}\leq 1$.
Viewing the above as an LP, it is easy to see that $x_{v_1}+...+x_{v_n}$ is maximized when $x_{v_1},...,x_{v_k}=1$, $x_{v_{k+1}},...,x_{v_n}=0$ and $\frac{1}{n}+...+\frac{1}{n-k+1}\approx 1$ for some $k$. Now when $n$ is large, $\frac{1}{n}+...+\frac{1}{n-k+1}\approx \ln\frac{n}{n-k}$.
Finally, $$x_{v_1}+\cdots+x_{v_n}=k=n(1-1/e)=(1-1/e) \cdot OPT.$$
\end{proof}
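The harmonic-sum step can be illustrated numerically: the largest $k$ with $\frac{1}{n}+\cdots+\frac{1}{n-k+1}\leq 1$ satisfies $k/n\rightarrow 1-1/e$. Below is a short check (ours).
\begin{verbatim}
import math

for n in (10**2, 10**4, 10**6):
    total, k = 0.0, 0
    while total + 1.0 / (n - k) <= 1.0:        # add 1/n, 1/(n-1), ...
        total += 1.0 / (n - k)
        k += 1
    print(n, k / n, 1 - 1 / math.e)            # k/n approaches 1 - 1/e
\end{verbatim}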
\begin{proposition}
There is an upper bound of 0.6252 for the online matching problem in bipartite graphs with 2-alternation.
\end{proposition}
\begin{proof}
Again, we can consider only the fractional version of the problem and deterministic algorithms. Suppose that an algorithm is $\gamma$-competitive and maintains a fractional matching $x$.
Let $|L_0|=|L_2|=n,|R_1|=2n$. The first $n$ vertices of $R_1$ are adjacent to all vertices in $L_0$. The two subgraphs induced by $L_0$ \& the last $n$ vertices of $R_1$ and $L_2$ \& the first $n$ vertices of $R_1$ are isomorphic to the graph used in the proof of the last proposition. Note that the size of the maximum matching is $2n$.
The most important observation here is that after processing the first $n$ vertices of $R_1$, the fractional matching found must have size at least $n\gamma$ as the current optimal solution has size $n$. In other words, we have $$x_{u_{1}^{(1)}}+...+x_{u_{n}^{(1)}}=x_{v_{1}^{(1)}}+...+x_{v_{n}^{(1)}}\geq n\gamma$$after the first $n$ vertices of $R_1$ arrive.
Now the next $n$ vertices of $R_1$, by the same reasoning as in the last proposition, are matched to the extent of $k$ such that $\gamma + \frac{1}{n}+...+\frac{1}{n+1-k}\approx 1$, from which we obtain $k=n(1-1/e^{1-\gamma})$. Similarly, $L_2$ is also matched to an extent of $n(1-1/e^{1-\gamma})$.
Putting all the pieces together, we have the inequality $$\frac{n\gamma + 2n(1-1/e^{1-\gamma})}{2n}\geq \gamma\Rightarrow 1-\frac{1}{e^{1-\gamma}}-\frac{\gamma}{2}\geq0.$$
The function $1-\frac{1}{e^{1-\gamma}}-\frac{\gamma}{2}$ is decreasing in $\gamma$ and has a root at approximately $0.6252$, so any achievable competitive ratio $\gamma$ is at most $0.6252$.
\end{proof}
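The root can be located by a standard bisection; the short sketch below (ours) confirms the value $0.6252$.
\begin{verbatim}
import math

def g(gamma):
    return 1 - math.exp(gamma - 1) - gamma / 2   # decreasing on [0,1]

lo, hi = 0.0, 1.0                                # g(0) > 0 > g(1)
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
print((lo + hi) / 2)                             # ~ 0.6252
\end{verbatim}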
\section{Discussion and open problems}
We presented the first nontrivial algorithm for the online fractional matching and vertex cover problems in graphs where all vertices arrive online. A natural question is whether our competitive ratios, 1.901 and 0.526, are optimal for these two problems. For the special case of the bipartite graphs, can we extend our charging-based framework to get improved algorithms?
Another interesting problem is to beat the greedy algorithm for the online {\em integral} matching problem in bipartite graphs or even general graphs. Very recently, the connection between the optimal algorithms for online bipartite integral and fractional matching was established via the randomized primal-dual method~\cite{devanurrandomized}. This is promising as the techniques developed may also be applicable to our problem. However, it seems quite difficult to reverse-engineer an algorithm for online integral matching based on the analysis of our algorithm.
For online integral vertex cover, as mentioned earlier, there is essentially no hope to do better than 2 assuming the Unique Game Conjecture. Nevertheless, it will still be interesting to obtain an unconditional online hardness result which could be easier than the offline counterpart.
Finally, our discussion has been focused on the {\em oblivious adversary} model. It would be interesting to study our problems in weaker adversary models, i.e., stochastic~\cite{Feldman2009,Manshadi2011} and random arrival models~\cite{Mahdian2011,Karande2011}.
{\noindent \bf Acknowledgments:} We thank Michel Goemans for helpful discussions and Wang Chi Cheung for comments on a previous draft of this paper.
\section{Introduction}
Beginning in the mid-1980s, horse racing has witnessed the rise of betting syndicates akin to hedge funds, profiting from statistical techniques similar to those of high-frequency traders on stock exchanges~\cite{kaplan2002}. This is possible as parimutuel wagering is employed at racetracks, where money is pooled for each bet type, the racetrack takes a percentage, and the remainder is disbursed to the winners in proportion to the amount wagered.\\
Optimization in the horse racing literature can be traced back to Isaacs deriving a closed form solution for the optimal win bets when maximizing expected profit in 1953 \cite{isaacs1953}. Hausch et al. \cite{hau81} utilized an optimization framework to show inefficiencies in the place and show betting pools using win bet odds to estimate race outcomes. In particular, they used the Kelly criterion \cite{kelly1956}, maximizing the expected log utility of wealth and found profitability when limiting the betting to when the expected return was greater than a fixed percentage. More recently, Smoczynski and Tomkins derived a simple procedure for the optimal win bets under the Kelly criterion using the KKT conditions \cite{smoc2010}. Although the Kelly criterion maximizes the asymptotic rate of asset growth, the volatility of wealth through time is too large for most, resulting in many professional investors employing a fractional Kelly criterion \cite{thorp2008}, which has been shown to possess favourable risk-return properties by MacLean et al. \cite{maclean1992}. We investigate a further manner of risk management in the form of a chance constraint, taking into account the time horizon of the bettor, which can be employed in conjunction with the Kelly criterion.\\
There are several different types of wagers one can place on horses, but in order to best display the effect of the chance constraint, we concentrate on the riskiest of bets on a single race, the superfecta, which requires the bettor to pick, in order, the first 4 finishers.
\section{Optimization Model}
\label{sec:OM}
\subsection{Time Horizon}
To motivate the discussion, we examine the 4 horse outcome probabilities of race 5 on March 20, 2014 at Flamboro Downs, Hamilton, Ontario, Canada. Information about the race dataset and how these probabilities are estimated can be found in Section \ref{sec:CS}. Let $S$ represent the set of top 4 horse finishes with each $s\in S$ corresponding to a sequence of 4 horses. If we bet on this race an infinite number of times, then the average number of races before a superfecta bet on outcome $s$ pays off would be $\frac{1}{\pi_s}$, where $\pi_s$ is the outcome's probability. Summary statistics for the average wait time are given in the following table.
\begin{table}[htb]
\centerline{
\resizebox{0.25\textwidth}{!}{
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{lr}
\hline
\bf{Statistic} &\bf{Races}\\
\hline
min&$141$\\
\hline
max&$566,225$\\
\hline
median&$13,600$\\
\hline
mean&$38,192$\\
\hline
\end{tabular}}}
\caption{Average wait time statistics} \label{T4}
\end{table}
The median wait time for a superfecta bet to pay off is then over 11 seasons with roughly 1,200 races per season. Assuming the horseplayer requires some form of regular income or desires to at least turn a profit every season, consideration of the likelihood of receiving a payoff is warranted. In particular, we can limit betting strategies to those which pay out with high probability over a number of races equal to the desired time horizon, $\tau$. Let $x=\{x_s\}$ be our decision variables dictating how much to wager on each outcome $s$. For a betting decision $\hat{x}$, let $B_{\hat{x}}\sim \text{binomial}(\tau,\pi_{\hat{x}})$, where $\pi_{\hat{x}}$ is the probability of a payout. In order to enforce the gambler's time horizon, we require that $\PP(B_{\hat{x}}\geq 1)\geq 1-\alpha$, where $\alpha$ is our error tolerance, which is chosen arbitrarily small. Since $\PP(B_{\hat{x}}\geq 1)=1-(1-\pi_{\hat{x}})^{\tau}$, rearranging gives the requirement $\pi_{\hat{x}}\geq 1-\alpha^{\frac{1}{\tau}}$. Assuming independence between races, limiting betting decisions to having a payout probability of at least $1-\alpha^{\frac{1}{\tau}}$ ensures that a payout will occur with probability at least $1-\alpha$ over $\tau$ races.
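For concreteness, the required per-race payout probability is easy to compute; the Python sketch below is ours, with $\tau=350$ and $\alpha=0.01$ taken from the empirical study in Section~\ref{sec:R}.
\begin{verbatim}
# P(B >= 1) = 1 - (1 - pi)**tau >= 1 - alpha  is equivalent to
# pi >= 1 - alpha**(1/tau).
def required_payout_probability(tau, alpha):
    return 1.0 - alpha ** (1.0 / tau)

print(required_payout_probability(tau=350, alpha=0.01))   # about 0.013
\end{verbatim}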
\subsection{Optimization Program}
The objective is to maximize the exponential rate of return. Let $P_{\rho}(x)$ be the random payout given our decision vector $x$. The payout uncertainty stems from the result of the race, $\rho$, with $S$ as its sample space. Let $w$ be the current wealth of the gambler. Incorporating the gambler's time horizon through the use of a chance constraint, the optimization problem is below.
\begin{alignat}{6}
&\max&&\text{ }\EE\log(P_{\rho}(x)+w-\sum_{s\in S}x_s)\nonumber\\
&\mbox{s.t. }&&\sum_{s\in S}x_s\leq w \nonumber\\
&&&\PP(P_{\rho}(x) > 0 | \sum_{s\in S}x_s>0)\geq 1-\alpha^{\frac{1}{\tau}}\nonumber\\
&&&x_{s}\geq 0\hspace{5 pt}\forall s\in S \nonumber
\end{alignat}
The chance constraint is conditional on there being favourable bets to be placed, as we do not want to decrease our expected utility below $\log(w)$ to satisfy it. We assume that the frequency with which we are forced to abstain from gambling is sufficiently small so as not to significantly alter our effective time horizon.
\section{Computational Substantiation}
\label{sec:CS}
The optimization model was tested using historical race data from the 2013-2014 season at Flamboro Downs. This amounted to a total of 1,168 races. Race results, including the payouts, pool sizes, and final win bet odds were collected from TrackIT~\cite{TI}. Handicapping data, generated by CompuBet~\cite{CB}, was collected from HorsePlayer Interactive~\cite{HPI}. The first $70\%$ of the race dataset was used to calibrate the race outcome probabilities and payout models, with the remaining $30\%$ of races used for out of sample testing.
\subsection{Estimating Outcome Probabilities and Payouts}
\label{sec:RO}
The multinomial logistic model, first proposed by Bolton and Chapman~\cite{Bolt86}, was used to estimate win probabilities. Given a vector of handicapping data on each horse $h$, $v_{h}$, the horses are given a value $V_{h}=\beta^Tv_{h}$, and assigned winning probabilities $\pi_{h}=\frac{e^{V_{h}}}{\sum_{i=1}^{n}e^{V_i}}$. A three factor model was used, including the log of the public's implied win probabilities from the win bet odds, $\log{\pi^p_h}$, and the log of two CompuBet factors, which were all found to be statistically significant. The analysis was performed using the {\it mlogit} package~\cite{croi12} in {\it R}.
The discount model, derived by Lo and Bacon-Shone~\cite{Lo08}, was used to estimate the order probabilities,
$\pi_{ijkl}=\pi_i\frac{\pi^{\lambda_1}_j}{\sum_{s\neq i}\pi^{\lambda_1}_s}\frac{\pi^{\lambda_2}_k}{\sum_{s\neq i,j}\pi^{\lambda_2}_s}\frac{\pi^{\lambda_3}_l}{\sum_{s\neq i,j,k}\pi^{\lambda_3}_s}$, where optimal $\lambda_i$'s were determined using multinomial logistic regression.\\
Let $Q$ and $Q_{s}$ be the superfecta pool size, and the total amount wagered on sequence $s$. The only information available to bettors is the value of $Q$. The approach taken to estimate $Q_s$ is motivated by the work of Kanto and Renqvist~\cite{kant08} who fit the win probabilities of the Harville model~\cite{Harv73} to the money wagered on Quinella bets using multinomial maximum likelihood estimation. The amount wagered on sequence $s$ is $Q_{s}=\frac{Q(1-t)}{P_s}$, where $t=24.7\%$ is the track take at Flamboro Downs and $P_{s}$ is the \$1 payout.
The minimum superfecta bet allowed in practice is $\$0.2$ with $\$0.2$ increments, so let $n=5Q_s$ be the number of bets placed on $s$ out of $N=5Q$, which we assume follows a binomial distribution.
We model the public's estimate of outcome probabilities using the discount model with their implied win probabilities, so for $s=\{i,j,k,l\}$,
$\pi^p_s=\frac{(\pi^p_i)^{\theta_1}}{\sum_{h}(\pi^p_h)^{\theta_1}}\frac{(\pi^p_j)^{\theta_2}}{\sum_{h\neq i}(\pi^p_h)^{\theta_2}}\frac{(\pi^p_k)^{\theta_3}}{\sum_{h\neq i,j}(\pi^p_h)^{\theta_3}}
\frac{(\pi^p_l)^{\theta_4}}{\sum_{h\neq i,j,k}(\pi^p_h)^{\theta_4}}=\frac{(\pi^p_s)^u}{(\pi^p_s)^l}$, where $(\pi^p_s)^u$ and $(\pi^p_s)^l$ denote the numerator and the denominator of this expression, respectively.
The likelihood function, using data from $R$ historical races assumed to be independent, with $w_r$ being the winning sequence in race $r$, is
$\mathcal{L}(\theta)\propto\prod_{r=1}^R(\pi^p_{w_r})^{n_r}(1-\pi^p_{w_r})^{N_r-n_r}$. The negative log-likelihood is a difference of convex functions,
$-\log\mathcal{L}(\theta)\propto\sum_{r=1}^R N_r\log((\pi_{w_r}^p)^l)-(n_r\log((\pi_{w_r}^p)^u)+(N_r-n_r)\log((\pi_{w_r}^p)^l-(\pi_{w_r}^p)^u))$. This function was minimized twice using {\it fminunc} in {\it Matlab}, the first with an initial guess that the public uses the Harville model, $\theta_i=1$, the second assuming that the public believes superfecta outcomes are purely random, $\theta_i=0$, with both resulting in the same solution.
The payout function is $P_{s}(x)=x_{s}\frac{(Q+\sum_{u\in S}x_u)(1-t)}{Q_{s}+x_{s}}$, where we take $Q_s=\pi_s^pQ$, the expected amount wagered on $s$.
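The following Python sketch of the payout model is our own illustration (the outcome label and pool numbers are invented); only the functional form and the track take $t=24.7\%$ come from the text above.
\begin{verbatim}
# P_s(x) = x_s * (Q + sum_u x_u) * (1 - t) / (Q_s + x_s),  with Q_s = pi_p[s] * Q.
def payouts(x, Q, pi_p, t=0.247):
    total_bet = sum(x.values())
    return {s: x[s] * (Q + total_bet) * (1.0 - t) / (pi_p[s] * Q + x[s])
            for s in x if x[s] > 0}

# purely illustrative numbers
print(payouts(x={"1-2-3-4": 5.0}, Q=2000.0, pi_p={"1-2-3-4": 0.002}))
\end{verbatim}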
\subsection{Optimization Program Formulation}
Our optimization program now has the following form. When testing the model we round down the optimal solution to the nearest $0.2$ to avoid overbetting. The $z_s$ variables are used to indicate when $x_s\geq0.2$, implying $P_{s}(x) > 0$, and $\bar{z}$ nullifies the chance constraint when $\sum_{s\in S}x_s=0$.
\begin{alignat}{6}
&\max&&\text{ }\sum_{s\in S}\pi_{s}\log(x_{s}\frac{(Q+\sum_{u\in S}x_u)(1-t)}{Q_s+x_{s}}+w-\sum_{u\in S}x_u)\nonumber\\
&\mbox{s.t. }&&\sum_{s\in S}x_s\leq w\bar{z} \nonumber\\
&&&\sum_{s\in S}\pi_sz_s\geq (1-\alpha^{\frac{1}{\tau}})\bar{z} \nonumber\\
&&&\left(\frac{Q_s+0.2}{Q_s}\right)^{z_s}\leq\frac{Q_s+x_s}{Q_s}\hspace{5 pt}\forall s\in S \nonumber\\
&&&\bar{z},z_{s}\in \{0,1\}\hspace{71 pt}\forall s\in S \nonumber\\
&&&x_{s}\geq 0\hspace{103 pt}\forall s\in S \nonumber
\end{alignat}
We use the 1 to 1 mapping proposed by Kallberg and Ziemba~\cite{Kall08}, $y_s=\log(x_s+Q_s)$, which results in the following program whose linear relaxation is convex.
\begin{alignat}{6}
&\max&&\text{ }\sum_{s\in S}\pi_{s}\log(Q+w-(t+(1-t)Q_se^{-y_s})\sum_u e^{y_u})\nonumber\\
&\mbox{s.t. }&&\sum_{s\in S}e^{y_s}\leq w\bar{z}+Q \nonumber\\
&&&\sum_{s\in S}\pi_sz_s\geq (1-\alpha^{\frac{1}{\tau}})\bar{z} \nonumber\\
&&&z_s\ln\left(\frac{Q_s+0.2}{Q_s}\right)\leq y_s-\log{Q_s}\hspace{5 pt}\forall s\in S \nonumber\\
&&&\bar{z},z_{s}\in \{0,1\}\hspace{99 pt}\forall s\in S \nonumber\\
&&&y_s\geq \log(Q_s)\hspace{99 pt}\forall s\in S \nonumber
\end{alignat}
\subsection{Results}
\label{sec:R}
The model was tested on a total of 350 races. Given our optimal betting solution, the realized payout was calculated by adjusting the published payout to account for our wagers and breakage. The gambler's wealth over the course of the races was calculated using the optimization program with and without the chance constraint, $\text{Opt}^+$ and Opt respectively. Initial wealth was set to \$1000, with the time horizon set to $\tau=350$ and $\alpha=0.01$. All testing was conducted on a Windows 7 Home Premium 64-bit, Intel Core i5-2320 3GHz processor with 8 GB of RAM. The implementation was done in Matlab R2012a with the OPTI toolbox, using the IPOPT\cite{IPOPT06} and Bonmin\cite{Bonmin2008} solvers.
\begin{figure}[htb]
\centerline{
\resizebox{0.7\textwidth}{!}{
\begin{tikzpicture}
\begin{axis}[xlabel=Race,ylabel=Wealth,ylabel style={yshift=5pt},ylabel style={rotate=0},
legend style={legend pos=outer north east,font=\tiny}]
\addplot[mark=none,mark size=0.75,draw=blue]
table[x=x,y=y1]
{results4.dat};
\addplot[mark=none,mark size=0.75,draw=red]
table[x=x,y=y2]
{results4.dat};
\legend{$\text{Opt}^+$,Opt}
\end{axis}
\end{tikzpicture}}}
\caption{Wealth over the course of 350 races at Flamboro Downs.} \label{T9}
\end{figure}
The result in Figure~\ref{T9} is intuitive, as $\text{Opt}^+$ attempts to mimic Opt, while generally having to take on extra bets to satisfy the chance constraint. This extra cost results in a lower wealth until one of these extra wagers does in fact pay out, which occurred at approximately race 280, resulting in a superior return of 28.8\% compared to 17.8\% for Opt.
\section{Conclusion}
\label{sec:C}
We presented a chance constrained optimization model for parimutuel horse race betting, as well as a method for estimating superfecta bet payouts.
Profitability was achieved when employing the Kelly criterion, with a superior return when taking into consideration the gambler's time horizon.
\bibliographystyle{plain}
\section{Introduction}
The ubiquity of professional sports and specifically the NFL has led to an increase in the popularity of Fantasy Football. Every week, millions of sports fans participate in their Fantasy Leagues. The main tasks for users are the draft, the round-based selection of players before each season, and setting the weekly line-up for their team. For the latter, users have many tools at their disposal: statistics, predictions, rankings of experts and even recommendations of peers. There are issues with all of these, though. Most users do not want to spend time reading statistics. The prediction of Fantasy Football scores has barely been studied, and existing predictions are fairly inaccurate. The experts judge mainly based on personal preferences instead of unbiased measurables. Finally, there are only a few peers voting on line-up decisions such that the results are not representative of the general opinion. Especially since many people pay money to play, the prediction tools should be enhanced as they provide unbiased and easy-to-use assistance for users.
This paper provides and discusses approaches to predict Fantasy Football scores of Quarterbacks with relatively limited data. In addition to that, it includes several suggestions on how the data could be enhanced to achieve better results. The dataset consists only of game data from the last six NFL seasons. I used two different methods to predict the Fantasy Football scores of NFL players: Support Vector Regression (SVR) and Neural Networks. The results of both are promising given the limited data that was used.
After an overview of related work in Section~\ref{sec:rel}, I present my solution. Afterwards, I describe the data set in greater detail before Section~\ref{sec:exp} explains the experiments and show the results. Finally, Section~\ref{sec:disc} discusses the findings and possible future work.
\section{Related Work}
\label{sec:rel}
Most research in sports prediction focuses on predicting the winner of a match instead of Fantasy Football scores or specific game stats that are important for Fantasy Football. Min et al.~[5] used Bayesian inference and rule-based reasoning to predict the result of American Football matches. Their work is based on the idea that sports is both highly probabilistic and at the same time based on rules because of team strategies. Sierra, Fosco and Fierro~[1] used classification methods to predict the outcome of an American Football match based on a limited number of game statistics excluding most scoring categories. In their experiments, linear Support Vector Machines had by far the most accurate predictions and ranked above all expert predictions they compared with. Similarly, Harville~[8] used a linear model for the same task. A Neural Network approach to predicting the winner of College Football games was proposed by Pardee~[7]. His accuracy of $76\%$ improved on expert predictions. Stefani~[9] used an improved least squares rating system to predict the winner for nearly 9000 games from College and Pro Football and even other sports.
Fokoue and Foehrenbach~[2] have analyzed important factors for NFL teams with Data Mining. Their results are especially useful for this work because the first step for predicting Fantasy Football scores involves identifying important features. Even though Spann and Skiera's work on sports predictions~[3] is unrelated to American Football, their prediction markets could be used for Fantasy Football predictions. The prices on the prediction market determine the probability of a specific outcome. Such prediction markets can be used for various purposes, for example, in companies such as Google~[4]. In his blog on sports analysis, Rudy~[6] used regression models to predict Fantasy Football scores. His findings suggest that modelling positions separately improves the accuracy of a model.
\section{Proposed Solution}
A large part of my work is to create a proper dataset from the real game data. The raw game data has to be filtered and manipulated such that only relevant data cases are used later. The predictions will be made with two different methods: Support Vector Regression (SVR) and Neural Networks. Linear models like SVR have been very successful for predicting the winner of a match~[1,8]. Neural Networks are able to adjust to the data and therefore especially useful when the structure of the data is not known. In this specific case, I have no prior knowledge of whether linear models perform well or not, so the Neural Networks will at least provide another prediction to compare the results of SVR with. As the two approaches do not have much in common, I will describe them separately.
\subsection{Support Vector Regression}
Support Vector Regression (SVR) is a linear regression model. Compared to other models, SVR is $\epsilon$-insensitive, i.e. when fitting the model to data the error is not determined on a continuous function. Instead, deviation from the desired target value $y_i$ is not counted within $\epsilon$ of the target value $y_i$. More specifically, SVR uses a regression function $f_{SVR}$ with feature vectors $x_i$ and their labels $y_i$ as follows:
\[
f_{SVR}(x) = \left(\sum_{d=1}^D w_dx_d\right) + b = xw + b
\]
where $w$ and $b$ are chosen such that
\[
w^*,b^* = \argmin_{w,b} \dfrac{C}{N} \sum_{i=1}^N V_\epsilon(y_i - (x_iw+b)) + \|w\|_2^2
\]
$V_\epsilon$ is a function that returns $0$ if the absolute value of its argument is less than $\epsilon$ and otherwise calculates the difference of the absolute value of its argument and $\epsilon$. Therefore, the loss increases linearly outside the $\epsilon$-insensitive corridor. $C$ is a regularization parameter and chosen from $(0,1]$. The smaller $C$ is chosen, the less influence is given to the features. The use of a kernel enables SVR to work well even when the structure of the data is not suitable for a linear model. There are several options for kernels which will be examined in Section~\ref{sec:exp}. The hyperparameter $\gamma$ for some kernels describes how close different data cases have to be in order to be identified as similar by the model.
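Since the experiments in Section~\ref{sec:exp} use the scikit-learn implementation of SVR, a minimal sketch of fitting such a model is given below; the data is a random placeholder, and only the hyperparameter values mirror the configuration selected later.
\begin{verbatim}
# Epsilon-insensitive SVR as described above, via scikit-learn (placeholder data).
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.random((200, 12))                    # placeholder feature matrix
y = 20 * X[:, 0] + rng.normal(0, 3, 200)     # placeholder target scores

model = SVR(kernel="linear", C=0.25, epsilon=0.25)
model.fit(X, y)
print(model.predict(X[:5]))
\end{verbatim}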
\subsection{Neural Networks}
Neural Networks have the advantage that they adapt to the problem by learning the characteristics of the data set. After an initial input of the features, there are potentially multiple hidden layers and finally the output layer. In each layer there are so-called neurons or hidden units which perform computations. From layer to layer, there are connections such that the outputs of the previous layer's neurons are the inputs of the next layer's neurons. In order to work for regression the Neural Net is configured to have a linear output layer. The hidden layer units should not be linear. Common choices for the activation function are hyperbolic tangent ($tanh$) or sigmoid.
In my experiment, I only used Neural Networks with one hidden layer. The activation function in the hidden layer for the $k$th unit is
\[h_k = \dfrac{1}{1+ \exp\left(-\left(\sum\limits_{d=1}^D w_{dk}x_d + b_{k}\right)\right)}\]
While the hidden layers can be non-linear, the output layer has to consist of a linear function:
\[
\hat{y} = \sum_k w^o_kh_k + b^o
\]
The parameters $w^o_k$, $b^o$ for the output layer and $w_k$, $b_k$ for the hidden layer are learnt over multiple epochs with Backpropagation. The data cases are evaluated on the Neural Network and the error is determined. Then, updates are performed based on the contribution of each hidden unit to the error, which is calculated by using derivatives going backwards through the Network.
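The following bare-bones NumPy sketch is only meant to make the formulas above concrete; it is not the PyBrain code used in the experiments, and the learning rate, initialization and placeholder data are arbitrary choices of ours.
\begin{verbatim}
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, n_hidden=50, epochs=50, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    D = X.shape[1]
    W1 = rng.normal(0.0, 0.1, (D, n_hidden)); b1 = np.zeros(n_hidden)
    w2 = rng.normal(0.0, 0.1, n_hidden);      b2 = 0.0
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)                # hidden activations h_k
        pred = H @ w2 + b2                      # linear output layer
        err = pred - y                          # gradient of 0.5*MSE w.r.t. pred
        dH = np.outer(err, w2) * H * (1.0 - H)  # error backpropagated to the hidden layer
        W1 -= lr * (X.T @ dH) / len(y);  b1 -= lr * dH.mean(axis=0)
        w2 -= lr * (H.T @ err) / len(y); b2 -= lr * err.mean()
    return W1, b1, w2, b2

X = np.random.default_rng(1).random((100, 12))
y = X.sum(axis=1)
W1, b1, w2, b2 = train(X, y)
print(sigmoid(X[:3] @ W1 + b1) @ w2 + b2)       # predictions for three rows
\end{verbatim}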
\subsection{Pipeline}
The Neural Networks are simply given the data set in the original form. Feature selection and normalization are not performed. The Neural Network can implicitly do this or not, depending on whether it improves its predictions.
Before applying SVR, I scaled the features down to the interval $[0,1]$ in order to improve the performance specifically for linear and polynomial kernel SVR. After the normalization comes the feature selection. There are three options: no feature selection, manual feature selection and Recursive Feature Elimination with Cross Validation (RFECV). All of them have certain advantages and problems. Not using feature selection can result in inaccurate predictions because of correlated features. Manual feature selection requires domain knowledge. Lastly, RFECV takes a lot of time depending on the number of features, but the results should be reasonably well since the elimination is cross validated. After the feature selection the hyperparameters for SVR have to be determined. A reasonable number of configurations is tested several times on a held-out validation set after being trained on the rest of the training data. The hyperparameter configuration with the best average score is finally selected and used for all further operations. This includes fitting the model to the whole training data set and finally predicting the Fantasy Football scores of the test cases.
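A rough scikit-learn sketch of this pipeline is shown below; the arrays are random placeholders, the validation split size is our own choice, and only the hyperparameter grid values are taken from Section~\ref{sec:exp}.
\begin{verbatim}
# Scale to [0,1], run RFECV with a linear-kernel SVR, then pick hyperparameters
# on a held-out validation set (placeholder data; illustrative only).
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import RFECV
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X, y = rng.random((500, 12)), 30 * rng.random(500)

X_scaled = MinMaxScaler().fit_transform(X)
selector = RFECV(SVR(kernel="linear"), cv=5).fit(X_scaled, y)
X_sel = selector.transform(X_scaled)

X_fit, X_val, y_fit, y_val = train_test_split(X_sel, y, test_size=0.2, random_state=0)
best = None
for C in (0.25, 0.5, 0.75, 1.0):
    for eps in (0.05, 0.1, 0.15, 0.2, 0.25):
        pred = SVR(kernel="linear", C=C, epsilon=eps).fit(X_fit, y_fit).predict(X_val)
        mae = mean_absolute_error(y_val, pred)
        if best is None or mae < best[0]:
            best = (mae, C, eps)
print(best)
\end{verbatim}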
\section{Data Set}
The data set consists of NFL game data from 2009 to 2014. I accessed it with the API from \href{https://github.com/BurntSushi/nflgame}{github.com/BurntSushi/nflgame} which gets the data from \href{NFL.com}{NFL.com}. Before using it to make predictions I performed several operations. First of all, I filtered the data such that only Quarterbacks (QB) with at least $5$ passes are selected. This restriction is necessary such that non-QB players or backup QBs are not taken in to account. Then, for every game I included as features the current age of the QB, his experience in years as a professional, the stats of the previous game, the average stats of the last $10$ games as well as the stats of the opposing defense in their last game and their average over the last $10$ games. The stats for a QB include $12$ features that show the performance in passing and rushing as well as turnovers. The values are all treated as continuous real values. For defenses, there are $4$ categories, namely the number of points allowed, passing and rushing yards allowed as well as turnovers forced. The target value in each case is the actual Fantasy Football score the QB received for the given game. I used the NFL's standard scoring system which is described on \href{http://www.nfl.com/fantasyfootball/help/nfl-scoringsettings}{nfl.com/fantasyfootball/help/nfl-scoringsettings}.
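To illustrate how such lagged features can be derived from per-game rows, here is a small pandas sketch; the column names and numbers are invented for illustration and do not correspond to the actual API fields.
\begin{verbatim}
import pandas as pd

games = pd.DataFrame({
    "player":   ["QB1"] * 6,
    "week":     [1, 2, 3, 4, 5, 6],
    "pass_yds": [250, 310, 180, 220, 275, 305],
})
games = games.sort_values(["player", "week"])
grouped = games.groupby("player")["pass_yds"]

games["pass_yds_prev"]  = grouped.shift(1)                       # previous game
games["pass_yds_avg10"] = grouped.transform(                     # mean of last 10 games
    lambda s: s.shift(1).rolling(10, min_periods=1).mean())
print(games)
\end{verbatim}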
In order to have significant past data even for the first data cases, I did not use the first year, 2009. I split the data into training and test data such that the seasons 2010 to 2013 belong to the training set and the 2014 season is the test data. As a result, there are 2167 training cases and 553 test cases.
First-year players become a separate problem because the predictions cannot be based on their past production. To overcome this, they are assigned the average over all first-year QB per-game average stats for the first game. From the second game on, their own statistics are used.
Even though the data access through the API is limited to years 2009-2014, this is not necessarily a limitation. As various independent statistics and reports~[11,12,13,14] show, Football has evolved especially over the last few years. Such changes also influence Fantasy Football. As a consequence, the data from ten years ago might not properly represent today's games any more thus affecting the predictions negatively.
The test data includes lots of cases with QBs that would never be used in Fantasy Football because of a lack of experience, production or inconsistency. Therefore it makes sense to restrict the evaluation to the best QBs that actually have a chance to be used in Fantasy Football. In standard leagues with $12$ teams one QB starts for every team, so the evaluation considers the predictions of the $24$ best QBs (see Appendix List~\ref{listof24qbs}).
\section{Experiments and Results}
\label{sec:exp}
\begin{table}
\centering
\begin{tabular}{lllllll}\hline
Feature Sel. & RMSE (all) & RMSE (24) & MAE (all) & MAE (24) & MRE (all) & MRE (24) \\ \hline
None & $7.815$ & $7.925$ & $6.238$ & $6.265$ & $0.453$ & $0.419$ \\
RFECV & $7.759$ & $7.833$ & $6.221$ & $6.248$ & $0.448$ & $0.418$ \\
manual & $7.796$ & $7.914$ & $6.224$ & $6.256$ & $0.450$ & $0.418$ \\
\hline
\end{tabular}
\caption{The results of applying SVR after no feature selection, RFECV and manual feature selection. The hyperparameters were set to $C=0.25$, $\epsilon=0.25$, $\text{kernel}=\text{linear}$. For reasons of comparability with other sources three different errors are shown: Root Mean Squared Error (RMSE), Mean Absolute Error (MAE) and Mean Relative Error (MRE). MRE is defined as $\frac{|y-\text{prediction}|}{\text{prediction}}$. All errors are shown for the whole test set (all) and for data cases which involved the best $24$ players only.}
\label{tab:SVR}
\end{table}
In this section, I will talk about the different experiments and the obtained results. First of all, I tried Support Vector Regression (SVR) using the scikit-learn implementation~[15]. There are several features that probably do not influence the result very much, e.g. the number of two-point conversions in the last game, so feature selection could actually improve the accuracy. The method I chose for feature selection is \textit{Recursive Feature Elimination with Cross Validation} (RFECV). It recursively eliminates features and checks if the regression method's results improve by cross validating. In order to reduce the running time I applied the feature selection before the hyperparameter selection. The assumption was that no matter which regression method and hyperparameters are chosen, the important features will always be more or less the same. The selected features were the age, the number of years as a professional player, the number of passing attempts and the number of successful passing two-point conversions (both from last 10 games). This is a very small subset of the features since important features like the touchdowns were not taken into consideration and rarely occurring, low-weighted features like two-point conversion stats are included. Therefore, I also tried two other ways: no feature selection and manual feature selection. For the manual feature selection, I simply removed the two-point conversion stats. The results are compared in Table~\ref{tab:SVR}. Interestingly, the SVR with RFECV performed best, followed by SVR with manual feature selection and SVR without feature selection. The reason for this is most likely that the selected hyperparameters were $C=0.25$, $\epsilon=0.25$ and a linear kernel. Having a small regularization value $C$ means that the influence of the feature values on the prediction is reduced.
Even though the feature normalization did not influence the error, it sped up the running time immensely. This allowed for more configurations in the hyperparameter selection for SVR. All possible combinations of the following values were used:
\begin{align*}&\text{kernel} \in \{\text{radial basis function},\text{sigmoid}, \text{linear}, \text{polynomial}\}, \\&C \in \{0.25, 0.5, 0.75, 1.0\}, \quad\epsilon \in \{0.05, 0.1, 0.15, 0.2, 0.25\}, \\ &\gamma \in \{0, 0.05, 0.1, 0.15\}, \quad\text{degree}\in\{2,3\}\end{align*}
As already mentioned above, the best configuration was $C=0.25$, $\epsilon=0.25$ and $\text{kernel}=\text{linear}$, although it is worth mentioning that the difference is very small. $\gamma$ and the degree are only used for some of the kernels.
Figure~\ref{fig:abserrordist} shows both the distribution of the absolute error in the whole test set and when considering only the cases with the best $24$ players. The overall distribution is quite similar, which is also represented in the numbers in Table~\ref{tab:SVR}. Therefore, predictions of all QBs seem to be roughly as difficult as predictions of the top $24$ QBs only.
\begin{figure}
\includegraphics[scale=0.35]{absolute_error_distribution_none.pdf}
\includegraphics[scale=0.35]{absolute_error_distribution_none_all.pdf}
\caption{The figures show the distribution of the absolute errors of the predictions of SVR with no feature selection and hyperparameters set to $C=0.25$, $\epsilon=0.25$, $\text{kernel}=\text{linear}$. \textbf{Left:} Absolute error over all test cases. \textbf{Right:} Absolute error over test cases involving $24$ best players.}
\label{fig:abserrordist}
\end{figure}
Secondly, I used Neural Networks based on the PyBrain library~[16]. In order to determine the right number of epochs $n_{epochs}$, number of hidden units $n_{hidden}$ and the type of neurons in the hidden layer $t_{hidden}$, every combination of the following values was used:
\[n_{epochs} \in \{10,50,100,1000\}, \quad n_{hidden} \in \{10,25,50,100\}, \quad t_{hidden} \in \{\text{Sigmoid}, \text{Tanh}\}\]
The results are shown in the Appendix in Table~\ref{tab:neuralnetresult}. In comparison with the best SVR results, most Neural Nets performed worse both on the whole test set and on the $24$ best players in all error categories. The only configuration that achieved significantly better results than all others was $n_{epochs} = 50$, $n_{hidden} = 50$ and $t_{hidden}=\text{Sigmoid}$. Its RMSE, MAE and MRE were $7.868$, $6.235$ and $0.413$ on the cases involving only the $24$ best players. Interestingly, the errors for the best $24$ players were much lower than for the whole test set. The MAE and MRE are actually better than the best results of the SVRs.
\section{Discussion and Conclusions}
\label{sec:disc}
All in all, the errors were still very high. For example, the MAE of the best prediction was more than $6$ points, which can make a difference in most Fantasy Football games. Considering that I only predicted the scores for one position, Quarterback, there are still several more positions on a team thus potentially increasing the overall error. The comparison with other sources of predictions is hard because most websites just use experts. The ones that actually project with a model, e.g. ESPN, do not openly write about their accuracy. Reda and Stringer~[10] analyzed ESPN's accuracy, but they used multiple positions at once such that I could not compare it properly. It is encouraging, though, that their MAE histograms show the same Bell Curve shape and have approximately the same variance.
Apart from that, there are a few interesting observations from this experiment. The feature selection with RFECV selected only $4$ features. I assume that many of the features are correlated and therefore do not add much extra value to the predictions. The only way the accuracy can be significantly improved is by adding new features or by enhancing existing ones. As the feature selection indicated, having both last week's statistics and the average over the last ten games does not provide much extra information. In order to take the trend better into account, the exponentially weighted moving average (EWMA) could be used to substitute the current game statistics. The EWMA is calculated as
\[
S_t = \alpha G_t + (1-\alpha) S_{t-1}
\]
where $S_t$ is the EWMA for all games up to $t$ and $G_t$ are the game statistics for game $t$. This could be done for the last few games or even over the whole career.\\
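A direct implementation of this update takes only a few lines; the smoothing factor $\alpha=0.3$ and the yardage numbers below are arbitrary illustrative choices of ours.
\begin{verbatim}
# S_t = alpha * G_t + (1 - alpha) * S_{t-1}, seeded with the first observation.
def ewma(game_stats, alpha=0.3):
    s = game_stats[0]
    series = [s]
    for g in game_stats[1:]:
        s = alpha * g + (1.0 - alpha) * s
        series.append(s)
    return series

print(ewma([250, 310, 180, 220, 275, 305]))
\end{verbatim}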
There are also several other interesting features that could be taken into account, such as the injury report status, suspensions, draft position and college statistics for first-year players, postseason and preseason performance, overall team performance and changes on the team such as trades or injuries. One could even go further and analyze Twitter posts on a specific player or factor in expert rankings and predictions from other sources. Most of these are not accessible with the used API and exceed the scope of this project, but seem promising for future work.
So far, only Neural Networks and Support Vector Regression were used. Other models are conceivable for this task, too. In particular, Neural Networks with multiple hidden layers could be useful since the networks with only one layer already performed quite well.
Overall, as the total number of users of Fantasy Football Leagues and the amounts of invested money increase the demand for accurate predictions will grow and probably lead to more research on the topic.
\subsubsection*{References}
\small{
[1] Sierra, A., Fosco, J., Fierro, C., \& Tiger, V. T. S. Football Futures. Retrieved April 30, 2015, from http://cs229.stanford.edu/proj2011/SierraFoscoFierro-FootballFutures.pdf.
[2] Fokoue, E., \& Foehrenbach, D. (2013). A Statistical Data Mining Approach to Determining the Factors that Distinguish Championship Caliber Teams in the National Football League.
[3] Spann, M., \& Skiera, B. (2009). Sports forecasting: a comparison of the forecast accuracy of prediction markets, betting odds and tipsters. Journal of Forecasting, 28(1), 55-72.
[4] Cowgill, B. (2015). Putting crowd wisdom to work. Retrieved April 30, 2015, from http://googleblog.blogspot.com/2005/09/putting-crowd-wisdom-to-work.html
[5] Min, B., Kim, J., Choe, C., Eom, H., \& McKay, R. B. (2008). A compound framework for sports results prediction: A football case study. Knowledge-Based Systems, 21(7), 551-562.
[6] Rudy, K. (n.d.). The Minitab Blog. Retrieved April 27, 2015, from http://blog.minitab.com/blog/the-statistics-game
[7] Pardee, M. (1999). An artificial neural network approach to college football prediction and ranking. University of Wisconsin–Electrical and Computer Engineering Department.
[8] Harville, D. (1980). Predictions for National Football League games via linear-model methodology. Journal of the American Statistical Association, 75(371), 516-524.
[9] Stefani, R. T. (1980). Improved least squares football, basketball, and soccer predictions. IEEE transactions on systems, man, and cybernetics, 10(2), 116-123.
[10] Reda, G., \& Stringer, M. (2014). Are ESPN's fantasy football projections accurate? Retrieved April 28, 2015, from http://datascopeanalytics.com/what-we-think/2014/12/09/are-espns-fantasy-football-projections-accurate
[11] Powell-Morse, A. (2013). Evolution of the NFL Offense: An Analysis of the Last 80 Years. Retrieved April 27, 2015, from http://www.besttickets.com/blog/evolution-of-nfl-offense/
[12] Wyche, S. (2012) Passing league: Explaining the NFL's aerial evolution. Retrieved April 26, 2015, from http://www.nfl.com/news/story/09000d5d82a44e69/article/passing-league-explaining-the-nfls-aerial-evolution
[13] Mize, M. (2012) The NFL is a passing league. The statistics prove it and the rules mandate it. Retrieved April 26, 2015, from http://www.thevictoryformation.com/2012/09/17/the-nfl-is-a-passing-league-the-statistics-prove-it-and-the-rules-mandate-it/
[14] Rudnitsky, M. (2013) Today's NFL Really Is A 'Passing League,' As Confirmed By These Fancy Graphs That Chart The Last 80 Years Of Offense. Retrieved April 26, 2015, from http://www.sportsgrid.com/nfl/evolution-nfl-offense-charts-graphs-passing-rushing-peyton-manning/
[15] scikit-learn: Machine Learning in Python. Retrieved April 26, 2015, from http://scikit-learn.org/stable/index.html
[16] PyBrain: The Python Machine Learning Library. Retrieved April 26, 2015, from http://www.pybrain.org/
[17] Gallant, A. (2012-15) An API to retrieve and read NFL Game Center JSON data. Retrieved April 26, 2015, from https://github.com/BurntSushi/nflgame
[18] NFL.com Fantasy Football scoring settings: Retrieved April 26, 2015, from http://www.nfl.com/fantasyfootball/help/nfl-scoringsettings
}
\section*{Appendix}
\FloatBarrier
In the following, all $24$ Quarterbacks that were considered for the evaluation are listed. The selection was based on their 2014 Fantasy Football scores. For players that did not play all regular season games or were first-year players (Rookie) with no previous Fantasy Football statistics there is a note:\\
\label{listof24qbs}
Drew Brees, Ben Roethlisberger, Andrew Luck, Peyton Manning, Matt Ryan, Eli Manning, Aaron Rodgers, Philip Rivers, Matthew Stafford, Tom Brady, Ryan Tannehill, Joe Flacco, Jay Cutler (15 games), Tony Romo (15 games), Russell Wilson, Andy Dalton, Colin Kaepernick, Brian Hoyer (14 games, 13 starts), Derek Carr (Rookie), Alex Smith (15 games), Cam Newton (14 games), Kyle Orton, Teddy Bridgewater (13 games, 12 starts, Rookie), Blake Bortles (14 games, 13 starts, Rookie)
\begin{table}
\centering
\caption{The results from the Neural Networks on the Fantasy Football predictions. Every run is defined by the number of epochs $n_{epochs}$, the number of hidden units $n_{hidden}$ and the type of neurons in the hidden unit $t_{hidden}$. The results include the Root Mean Squared Error (RMSE), Mean Absolute Error (MAE) and Mean Relative Error (MRE) on the whole test data as well as the RMSE, MAE and MRE for the best $24$ players.}
\label{tab:neuralnetresult}
\vspace*{0.5cm}
\hspace*{-0.7cm}
\begin{tabular}{lllllllll}\hline
$n_{epochs}$ & $n_{hidden}$& $t_{hidden}$ & RMSE(all)& MAE(all) & MRE(all) & RMSE(24) & MAE(24) & MRE(24)\\\hline
10 &10 &Sigm & 8.060258 & 6.391866 & 0.461691 & 8.142453 & 6.381999 & 0.460978 \\
10 &10 &Tanh &8.058488 & 6.391081 & 0.460524 & 8.132701 & 6.375831 & 0.459425 \\
10 &25 &Sigm & 8.049170 & 6.387444 & 0.457333 & 8.105430 & 6.358798 & 0.455058 \\
10 &25 &Tanh &8.069334 & 6.396201 & 0.466794 & 8.185348 & 6.409744 & 0.467782 \\
10 &50 &Sigm & 8.054337 & 6.387369 & 0.472183 & 8.222789 & 6.433377 & 0.473597 \\
10 &50 &Tanh &8.057394 & 6.390635 & 0.459761 & 8.126329 & 6.371757 & 0.458403 \\
10 &100 &Sigm & 8.127506 & 6.424212 & 0.488398 & 8.371598 & 6.526674 & 0.496181 \\
10 &100 &Tanh &8.182758 & 6.450737 & 0.502695 & 8.499078 & 6.602691 & 0.514691 \\
50 &10 &Sigm & 8.059963 & 6.391739 & 0.461502 & 8.140871 & 6.381004 & 0.460727 \\
50 &10 &Tanh &8.103019 & 6.412570 & 0.480483 & 8.302668 & 6.484871 & 0.485900 \\
50 &25 &Sigm & 8.049401 & 6.389763 & 0.451924 & 8.060769 & 6.331248 & 0.447786 \\
50 &25 &Tanh &8.059079 & 6.391350 & 0.460923 & 8.136031 & 6.377946 & 0.459957 \\
50 &50 &Sigm & 8.061196 & 6.441132 & 0.429238 & 7.867709 & 6.235217 & 0.413366 \\
50 &50 &Tanh &8.049278 & 6.390949 & 0.450475 & 8.048565 & 6.323527 & 0.445747 \\
50 &100 &Sigm & 8.054849 & 6.385918 & 0.463524 & 8.157339 & 6.390617 & 0.463520 \\
50 &100 &Tanh &8.071710 & 6.392687 & 0.448577 & 8.060837 & 6.323155 & 0.444378 \\
100 &10 &Sigm & 8.087679 & 6.404901 & 0.474837 & 8.254096 & 6.453887 & 0.478469 \\
100 &10 &Tanh &8.048337 & 6.393241 & 0.445171 & 8.005743 & 6.296778 & 0.438454 \\
100 &25 &Sigm & 8.079818 & 6.411072 & 0.470735 & 8.199887 & 6.418849 & 0.470043 \\
100 &25 &Tanh &8.061363 & 6.383954 & 0.463902 & 8.167160 & 6.398136 & 0.464922 \\
100 &50 &Sigm & 8.060657 & 6.393333 & 0.460352 & 8.133239 & 6.377483 & 0.459263 \\
100 &50 &Tanh &8.038246 & 6.381052 & 0.446976 & 8.014688 & 6.297888 & 0.441000 \\
100 &100 &Sigm & 8.069325 & 6.404321 & 0.464389 & 8.149586 & 6.386675 & 0.462122 \\
100 &100 &Tanh &8.069874 & 6.408302 & 0.459283 & 8.118948 & 6.383348 & 0.457499 \\
1000 &10 &Sigm & 8.051943 & 6.389734 & 0.455206 & 8.087935 & 6.348112 & 0.452241 \\
1000 &10 &Tanh &8.056874 & 6.388611 & 0.450233 & 8.036326 & 6.302215 & 0.444779 \\
1000 &25 &Sigm & 8.066095 & 6.396302 & 0.467444 & 8.188647 & 6.411823 & 0.468297 \\
1000 &25 &Tanh &8.053281 & 6.389806 & 0.456502 & 8.098831 & 6.354800 & 0.454001 \\
1000 &50 &Sigm & 8.055530 & 6.381625 & 0.460907 & 8.134901 & 6.366414 & 0.459836 \\
1000 &50 &Tanh &8.073334 & 6.392310 & 0.470385 & 8.218570 & 6.435802 & 0.473622 \\
1000 &100 &Sigm & 8.091592 & 6.406337 & 0.479083 & 8.289962 & 6.476950 & 0.483967 \\
1000 &100 &Tanh &8.049693 & 6.385986 & 0.456491 & 8.089114 & 6.348840 & 0.452432 \\
\end{tabular}
\end{table}
\end{document}
\section{Introduction} \label{Sec1}
There are several views on what fairness in sports means \citep{Csato2021a, Pawlenka2005, Wright2014}. However, it is hard to dispute that the prize allocation scheme should reward performance \citep{DietzenbacherKondratev2021}. Otherwise, a contestant might be strictly better off by losing, which can inspire tanking, the act of deliberately losing a game in order to gain other advantages. Because such behaviour threatens the integrity of sports, it is the responsibility of academic research to highlight every issue of incentive incompatibility and make proposals to reduce or eliminate perverse incentives.
The business model of the sports industry relies on the players exerting their best efforts to win. Therefore, incentive incompatibility may seem to be merely a theoretical curiosity without any relevance in practice. However, many misaligned rules exist around the world. \citet{KendallLenten2017} offer probably the first comprehensive survey of them. \citet{LentenKendall2021} overview the problem of reverse-order player drafts. \citet{Fornwagner2019} presents field evidence that teams exploit the weakness of this draft system with a concrete losing strategy. The standard format of multi-stage contests with carried over results is also vulnerable to manipulation, and some historical examples are known when a handball team has not been interested in winning by a high margin \citep{Csato2022d}.
The Union of European Football Association (UEFA) has often used incentive incompatible rules. A team could have been strictly better off by losing in the qualification of the UEFA Europa League until the 2015/16 season \citep{DagaevSonin2018}, in the qualification of the UEFA Champions League between 2015/16 and 2018/19 \citep{Csato2019c}, and in the European qualifiers for the 1996 \citep{Csato2018c} and 2018 FIFA World Cups \citep{Csato2020c}. There has been a football match where losing was dominated by playing a draw for one team \citep[Section~2.1]{Csato2021a}, and another with both teams being interested in playing 2-2 \citep{Csato2020d}. The current seeding regime of the UEFA Champions League is not strategy-proof \citep{Csato2020a}. Finally, \citet{Csato2022a} demonstrates how the incentive incompatibility of the qualification systems for the 2020 UEFA European Championship \citep{HaugenKrumer2021} and the 2022 FIFA World Cup can be substantially improved by adding specific draw constraints to the set of restricted team clashes \citep{Kobierecki2022}.
These examples might suggest that European football has fundamental problems in contest design but they emerge mainly due to complex tournament structures and detailed regulations, which increase transparency compared to other sports.
This note will reveal a similar issue of incentive incompatibility in the current revenue distribution system of the UEFA club competitions.
The main novelties of the research can be summarised as follows:
\begin{itemize}
\item
Even though the effects of the revenue distribution system of the UEFA Champions League have already been investigated \citep{Bullough2018}, the theoretical properties of the allocation rule are first analysed here;
\item
The incentive incompatibility of the coefficient-based pillar is verified;
\item
Straightforward solutions are provided to eliminate tanking opportunities;
\item
In contrast to previous works, now the unique setting allows us to exactly quantify the financial consequences of misaligned sporting rules.
\end{itemize}
In particular, Section~\ref{Sec2} uncovers why the English club Arsenal has lost about 132 thousand Euros because it won against West Ham United on 1 May 2022. Section~\ref{Sec3} explores the root of the problem and suggests two alternatives to reform the revenue distribution system used by UEFA.
\section{A real-world example of unfair revenue allocation} \label{Sec2}
The commercial revenue from UEFA club competitions (UEFA Champions League, UEFA Europa League, UEFA Europa Conference League) is distributed to the clubs according to a complex scheme \citep{UEFA2021b}. First, some money is allocated to the teams eliminated in the qualifying phases as solidarity payment. The net amount available to the clubs that play in the group stage is divided into four pillars:
\begin{itemize}
\item
Starting fees: a guaranteed payment shared equally among the 32 participants;
\item
Performance-related fixed amounts: bonuses provided for wins and draws played in the group matches, as well for qualifying to a given stage of the tournament;
\item
Coefficient-based amounts: paid on the basis of performances over the last ten years, including extra points for winning UEFA competitions in the past;
\item
Market pool: distributed in accordance with the proportional value of TV markets represented by the clubs.
\end{itemize}
All details can be found in a transparent format at \url{https://www.football-coefficient.eu/money/}.
We focus on the third pillar, the coefficient-based amounts, which account for 30/15/10\% of the revenue distribution of the Champions League/Europa League/Europa Conference League, respectively. The 32 teams entering the group stage are ranked on the basis of the ten-year UEFA club coefficients. The lowest-ranked team receives one share and one additional share is added for each rank, so the highest-ranked club receives 32 shares \citep{UEFA2021b}.
The ten-year club coefficients used for the 2022/23 season of UEFA competitions are listed in \citet{Kassies2022b}. Among the 14 highest-ranked clubs, only Manchester United and Arsenal (both from England) have failed to qualify for the 2022/23 Champions League.
\begin{table}[t!]
\begin{threeparttable}
\centering
\caption{Final ranking of the top teams in the 2021/22 Premier League}
\label{Table1}
\rowcolors{1}{}{gray!20}
\begin{tabularx}{\linewidth}{Cl CCC CCC >{\bfseries}C} \toprule \hiderowcolors
Pos & Team & W & D & L & GF & GA & GD & Pts \\ \bottomrule \showrowcolors
1 & Manchester City & 29 & 6 & 3 & 99 & 26 & $+73$ & 93 \\
2 & Liverpool & 28 & 8 & 2 & 94 & 25 & $+68$ & 92 \\
3 & Chelsea & 21 & 11 & 6 & 76 & 33 & $+43$ & 74 \\
4 & Tottenham Hotspur & 22 & 5 & 11 & 69 & 40 & $+29$ & 71 \\
5 & Arsenal & 22 & 3 & 13 & 61 & 48 & $+13$ & 69 \\
6 & Manchester United & 16 & 10 & 12 & 57 & 57 & 0 & 58 \\
7 & West Ham United & 16 & 8 & 14 & 60 & 51 & $+9$ & 56 \\ \toprule
\end{tabularx}
\begin{tablenotes}
\item
\footnotesize{Pos = Position; W = Won; D = Drawn; L = Lost; GF = Goals for; GA = Goals against; GD = Goal difference; Pts = Points. All teams have played $38$ matches.}
\end{tablenotes}
\end{threeparttable}
\end{table}
Table~\ref{Table1} shows the final ranking of the 2021/22 English Premier League, which determines the qualification for the 2022/23 UEFA club competitions. Since Liverpool has won both the FA Cup and the EFL Cup, the first four teams advance to the Champions League group stage, the fifth- and sixth-placed teams advance to the Europa League group stage, and the seventh-placed team goes to the Europa Conference League play-off round. Consequently, Arsenal and Manchester United are the highest-ranked clubs in the Europa League based on ten-year coefficients: Arsenal receives 31 and Manchester United receives 32 shares from the third pillar of the revenue distribution system.
Consider the counterfactual that Arsenal would have lost away against West Ham United on 1 May 2022 instead of winning. Then Arsenal would have had 66 and West Ham United would have had 59 points. Therefore, Arsenal and West Ham United would have qualified for Europa League, and Arsenal would have received 32 shares in the third pillar as being the highest-ranked club in this competition. According to the amounts distributed in the 2021/22 season (prior to COVID-19 impact
deduction), one share corresponds to 132 thousand Euros here \citep{UEFA2021b}.
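The arithmetic behind this counterfactual can be spelled out explicitly; the sketch below uses only the points from Table~\ref{Table1}, the 31 versus 32 shares discussed above, and the approximate value of one share.
\begin{verbatim}
# Swap the result of Arsenal - West Ham (1 May 2022) and recompute the points.
arsenal, west_ham, man_united = 69, 56, 58            # actual final points
arsenal_cf, west_ham_cf = arsenal - 3, west_ham + 3   # counterfactual: Arsenal loses

standings = sorted({"Arsenal": arsenal_cf, "West Ham United": west_ham_cf,
                    "Manchester United": man_united}.items(),
                   key=lambda kv: -kv[1])
print(standings)              # Arsenal 66, West Ham 59, Manchester United 58

share_value = 132_000         # approx. EUR per coefficient share
print((32 - 31) * share_value)   # Arsenal's foregone amount
\end{verbatim}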
\section{Discussion} \label{Sec3}
Owing to the misaligned design of the revenue distribution system used in UEFA club competitions, Arsenal has lost approximately 132 thousand Euros because it has won against West Ham United in the 2021/22 Premier League. This means an unfair punishment for better performance.
What is the problem behind the allocation of coefficient-based amounts?
The ranking of participating clubs in the three UEFA competitions does not satisfy \emph{anonymity}, that is, the share of any club depends on the identity of other qualified teams. Consequently, if the place of a team is already secured (see the robust margin of Arsenal over Manchester United in Table~\ref{Table1}), it is interested to qualify together with a lower-ranked club (West Ham United is preferred to Manchester United). That can be achieved in the domestic championship if at least two teams qualify, which holds---according to the 2022/23 UEFA access list \citep{UEFA2022b}---for all UEFA member associations except for Liechtenstein. Similar instances of financial losses are not ubiquitous only because the ten-year UEFA club coefficients are good predictors of the final ranking in the European football leagues.
The problem can be solved by an anonymous allocation rule. Two alternatives are suggested for this purpose:
\begin{itemize}
\item
\emph{Rule A}: The coefficients-based amount is distributed on the basis of the position of the club among all teams from its national association.
\item
\emph{Rule B}: Modify the calculation of coefficients by labelling the teams with their country and domestic achievement instead of their name. The mathematical formula can remain unchanged but the values are ordered in a decreasing sequence for each association at the end. These coefficients are distributed among the qualified teams from a given country according to their way of qualification.
\end{itemize}
As an illustration, see how Rule A works for the English clubs in the 2022/23 season of UEFA club competitions. The first seven teams from this country in the ten-year club coefficient ranking are Chelsea (242), Manchester City (220), Liverpool (215), Manchester United (208), Arsenal (172), Tottenham Hotspur (148), and Leicester City (45). Therefore, in the third pillar of the revenue distribution system, the values $\left[ 242, 220, 215, 208, 172, 148, 45 \right]$ are assigned to the top seven teams in the 2021/22 Premier League. For example, the payment of Arsenal is based on its ``financial coefficient'' of $172$, independently of the identity of the sixth-placed team, whose ``financial coefficient'' is $148$. Some teams can qualify for the Europa League and Europa Conference League by winning a domestic cup. They can be considered in the appropriate place of the league ranking to acquire the ``financial coefficient''. As UEFA gives priority to cup winners in filling vacant slots, these teams essentially occupy the position immediately below the last club that enters the Champions League or its qualification.
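The assignment under Rule~A can be written out in a few lines; the coefficients and the final league order are the ones quoted above, while the code itself is merely our illustration.
\begin{verbatim}
# Rule A: the k-th placed English club is paid on the k-th largest English coefficient.
english_coefficients = sorted([242, 220, 215, 208, 172, 148, 45], reverse=True)
league_top7 = ["Manchester City", "Liverpool", "Chelsea", "Tottenham Hotspur",
               "Arsenal", "Manchester United", "West Ham United"]

financial_coefficient = dict(zip(league_top7, english_coefficients))
print(financial_coefficient["Arsenal"])           # 172
print(financial_coefficient["West Ham United"])   # 45, whoever finishes seventh gets it
\end{verbatim}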
Rule B has been recommended by the French mathematician \emph{Julien Guyon} for seeding in the UEFA Champions League \citep{Guyon2015b}, thus, it is only shortly outlined. For instance, four English clubs have entered the 2021/22 Champions League, where they have collected the following number of points: Manchester City (champion, 27), Manchester United (runner-up, 18), Liverpool (third-placed team, 33), and Chelsea (fourth-placed team, 25). Hence, the English champion can be said to have achieved 27 points in this season, and so on. These values are summarised as before, however, they are ordered in a decreasing sequence at the end to guarantee fairness. Otherwise, the runner-up may have a higher coefficient than the champion.
Rule A does not require a fundamental reform and does not affect other systems, while Rule B means a more radical step (but is able to solve the Champions League seeding problem, too). None of them can account for historic title points, which should depend on the identity of the teams by definition. Thus, the budget available to former titleholders needs to be distributed in a separate pillar that does not influence the share of clubs qualifying due to their latest achievements.
To conclude, our note has identified a flaw in the revenue distribution system of the UEFA club competitions: a potential negative ``reward'' for winning a match in a domestic league. Therefore, UEFA is strongly encouraged to choose an incentive compatible allocation mechanism, for example, by adopting one of the proposals above.
\section*{Acknowledgements}
\addcontentsline{toc}{section}{Acknowledgements}
\noindent
The research was supported by the MTA Premium Postdoctoral Research Program grant PPD2019-9/2019.
\bibliographystyle{apalike}
\section{Introduction}
In this paper, we consider Bunimovich stadium billiards. This was
the first type of billiards having convex (focusing) components of the
boundary $\partial \Omega$, yet enjoying the hyperbolic behavior
\cite{Bun74,Bun79}. Such boundary consists of two semicircles at the
ends, joined by segments of straight lines (see Figure~\ref{fig0}).
For those billiards, ergodicity, K-mixing and Bernoulli
property were proved in \cite{CT} for the natural measure.
\begin{figure}[h]
\begin{center}
\includegraphics[width=100truemm]{fig0.eps}
\caption{Bunimovich stadium.}\label{fig0}
\end{center}
\end{figure}
We consider billiard maps (not the flow) for two-dimensional billiard
tables. Thus, the phase space of a billiard is the product of the
boundary of the billiard table and the interval $[-\pi/2,\pi/2]$ of
angles of reflection. We will use the variables $(r,\varphi)$, where $r$
parametrizes the table boundary by the arc length, and $\varphi$ is the
angle of reflection. We mentioned the natural measure; it is
$c\cos\varphi\;dr\;d\varphi$, where $c$ is the normalizing constant. This
measure is invariant for the billiard map.
As we said, we want to study topological entropy of the billiard map.
This means that we should look at the billiard as a topological
dynamical system. However, existence of the natural measure resulted
in most authors looking at the billiard as a measure preserving
transformation. That is, all important properties of the billiard were
proved only almost everywhere, not everywhere. Additionally, the
billiard map is only piecewise continuous instead of continuous. Often
it is even not defined everywhere. All this creates problems already
at the level of definitions. We will discuss those problems in the
next section.
In view of this complicated situation, we will not try to produce a
comprehensive theory of the Bunimovich stadium billiards from the
topological point of view, but present the results on their
topological entropy that are independent of the approach. For this we
will find a subspace of the phase space that is compact and invariant,
and on which the billiard map is continuous. We will find the
topological entropy restricted to this subspace. This entropy is a
lower bound of the topological entropy of the full system, no matter
how this entropy is defined. Finally, we will find the limit of our
estimates as the length of the billiard table goes to infinity.
The reader who wants to learn more about other properties of the
Bunimovich stadium billiards can find them in many papers, in
particular \cite{BC, BK, Bun74, Bun79, Ch, CM, HC}. While some of them
contain results about topological entropy of those billiards, none of
those results can be considered completely rigorous.
The paper is organized as follows. In Section~\ref{sec-teb} we discuss
the problems connected with defining topological entropy for
billiards. In Section~\ref{sec-sac} we produce symbolic systems
connected with the Bunimovich billiards. In Section~\ref{sec-cote} we
perform actual computations of the topological entropy.
\section{Topological entropy of billiards}\label{sec-teb}
Let $\mathcal{M}=\partial\Omega\times[-\pi/2,\pi/2]$ be the phase space of a billiard
and let $\mathcal{F}:\mathcal{M}\to\mathcal{M}$ be the billiard map. We assume that the
boundary of the billiard table is piecewise $C^2$ with finite number
of pieces. In such a situation the map $\mathcal{F}$ is piecewise continuous
(in fact, piecewise smooth) with finitely many pieces. That is, $\mathcal{M}$
is the union of finitely many open sets $\mathcal{M}_i$ (of quite regular
shape) and a singular set $\mathcal{S}$, which is the union of finitely many
smooth curves, and on which the map is often even not defined. The map
$\mathcal{F}$ restricted to each $\mathcal{M}_i$ is a diffeomorphism onto its image.
This situation is very similar to that of piecewise continuous,
piecewise monotone interval maps. For those maps, the usual way of investigating
them from the topological point of view is to use \emph{coding}. We
produce the \emph{symbolic system} associated with our map by taking
sequences of symbols (numbers enumerating pieces of continuity)
according to the number of the piece to which the $n$-th image of our
point belongs. On this symbolic space we have the shift to the left.
In particular, the topological entropy of this symbolic system was
shown to be equal to the usual Bowen's entropy of the underlying
interval map (see~\cite{MZ}).
Thus, it is a natural idea to do the same for billiards. Hence, for a
point $x\in\mathcal{M}$ whose trajectory is disjoint from $\mathcal{S}$, we take its
\emph{itinerary} (code) $\omega(x)=(\omega_n)$, where $\omega_n=i$ if
and only if
$\mathcal{F}^n(x)\in\mathcal{M}_i$. The problem is that the set of itineraries obtained
in such a way is usually not closed (in the product topology). Therefore
we have to take the closure of this set. Then the question one has to
deal with is whether there is no essential dynamics (for example,
invariant measures with positive entropy) on this extra set. A
rigorous approach for coding, including the definition of topological
entropy and a proof of a theorem analogous to the one from~\cite{MZ},
can be found in the recent paper of Baladi and Demers~\cite{BD} about
Sinai billiards.
The Sinai billiard maps are simpler for
coding than the Bunimovich stadium maps. There are finitely many
obstacles on the torus, so the pieces of the boundary, used for the
coding, are pairwise disjoint. This property is
not shared by the Bunimovich stadium billiards. The stadium billiard
is hyperbolic, but not uniformly. Moreover, here we have to
deal with the trajectories that are bouncing between the straight line
segments of the boundary. To complete the list of problems, the coding
with four pieces of the boundary seems not to be sufficient (as has
been noticed in~\cite{BK}).
The papers dealing with the topological entropy of Bunimovich stadium
billiards use different definitions. In~\cite{BK} and~\cite{HC},
topological entropy is explicitly defined as the exponential growth
rate of the number of
periodic orbits of a given period. In~\cite{Ch}, first coding is
performed in a different way, using rectangles defined by stable and
unstable manifolds. This coding uses an infinite alphabet. Then
various definitions of topological entropy for the obtained symbolic
system are used. In~\cite{BD}, topological entropy is defined as the
topological entropy of the corresponding symbolic system, that is, as
the exponential growth rate of the number of nonempty cylinders of a
given length in the symbolic system. As we
mentioned, it is shown that the result is the same as when one is
using the classical Bowen's definition for the original billiard map.
In~\cite{BC}, topological entropy is not formally defined, but it
seems that the authors think of the entropy of the symbolic system.
In this paper, we will be considering a subsystem of the full billiard
map. This subsystem is a continuous map of a compact space to itself,
and is conjugate to a subshift of finite type. Thus, whether we define
the topological entropy of the full system as the entropy of the
symbolic system or as the growth rate of the number of periodic orbits,
our estimates will always be lower bounds for the topological entropy.
\section{Subsystem and coding}\label{sec-sac}
We consider the Bunimovich stadium billiard table, with the radius of
the semicircles 1, and the lengths of straight segments $\ell>1$. The
phase space of this billiard map will be denoted by $\mathcal{M}_\ell$, and
the map by $\mathcal{F}_\ell$. The subspace of $\mathcal{M}_\ell$ consisting of points
whose trajectories have no two consecutive collisions with the same
semicircle will be denoted by ${\mathcal K}_\ell$. The subspace of ${\mathcal K}_\ell$ consisting
of points whose trajectories have no $N+1$ consecutive collisions with
the straight segments will be denoted by ${\mathcal K}_{\ell,N}$. We will show that if
$\ell>2N+2$, then the map $\mathcal{F}_\ell$ restricted to ${\mathcal K}_{\ell,N}$ has very
good properties.
In general, coding for $\mathcal{F}_\ell$ needs at least six symbols. They
correspond to the four pieces of the boundary of the stadium, and
additionally on the semicircles we have to specify the orientation of
the trajectory
(whether $\varphi$ is positive or negative), see~\cite{BK}. However, in
${\mathcal K}_\ell$ this additional requirement is unnecessary, because there are no
multiple consecutive collisions with the same semicircle. This also
implies that in ${\mathcal K}_\ell$ for a given $\ell$ the angle $\varphi$ is uniformly
bounded away from $\pm\pi/2$.
While in~\cite{BC} the statements about
generating partition are written in terms of measure preserving
transformations, the sets of measure zero that have to be removed are
specified. In ${\mathcal K}_\ell$ the only set that needs to be removed is the set
of points whose trajectories are periodic of period 2, bouncing from
the two straight line segments. However, this set carries no
topological entropy, so we can ignore it. Thus, according
to~\cite{BC}, the symbolic system corresponding to $\mathcal{F}_\ell$ on ${\mathcal K}_\ell$
is a closed subshift $\Sigma_\ell$ of a subshift of finite type with 4
symbols. We say that there is a \emph{transition} from a state $i$ to
$j$ if it is possible that $\omega_n=i$ and $\omega_{n+1}=j$. In our
subshift there are some transitions that are forbidden: one cannot go
from a symbol corresponding to a semicircle to the same symbol. There
are, of course, also some multi-step transitions that are forbidden; they
depend on $\ell$.
For every element of $\Sigma_\ell$ there is a unique point of ${\mathcal K}_\ell$
with that itinerary. However, the same point of ${\mathcal K}_\ell$ may have more than one
itinerary, because there are four points on the boundary of the stadium
that belong to two pieces of the boundary each. Thus, the coding is
not one-to-one, but this is unavoidable if we want to obtain a compact
symbolic system. Another solution would be to remove codes of all
trajectories that pass through any of the four special points, and at the
end take the closure of the symbolic space.
This problem disappears when we pass to ${\mathcal K}_{\ell,N}$ with $\ell>2N+2$.
Namely, then the angle $\varphi$ at any point of ${\mathcal K}_{\ell,N}$ whose first
coordinate is on the straight line piece, is larger than $\pi/4$ in
absolute value.
Let us look at the geometry of this situation. Let $C$ be the right
unit semicircle in ${\mathbb R}^2$ (without endpoints), $A\in C$, and let
$L_1,L_2$ be half-lines emerging from $A$, reflecting from $C$ (like a
billiard flow trajectory) from inside at $A$ (see Figure~\ref{fig3}).
Assume that for $i=1,2$ the angles between $L_i$ and the horizontal
lines are less than $\pi/4$, and that $L_i$ intersects $C$ only at
$A$. Consider the argument $\arg(A)$ of $A$ (as in polar coordinates
or in the complex plane).
\begin{figure}[h]
\begin{center}
\includegraphics[width=40truemm]{fig3.eps}
\caption{Situation from Lemma~\ref{geom}.}\label{fig3}
\end{center}
\end{figure}
\begin{lemma}\label{geom}
In the above situation, $|\arg(A)|<\pi/4$. Moreover, neither $L_1$ nor
$L_2$ passes through an endpoint of $C$.
\end{lemma}
\begin{proof}
If $|\arg(A)|\ge\pi/4$, then both lines $L_1$ and $L_2$ are on the
same side of the origin, so the incidence and reflection angle cannot
be the same. Therefore, $|\arg(A)|<\pi/4$.
Suppose that $L_1$ passes through the lower endpoint of $C$ (the other
cases are similar). Then $\arg(A)<0$, so $L_2$ intersects the
semicircle also at the point with argument
\[
\arg(A)+(\arg(A)-(-\pi/2))=2\arg(A)+\pi/2,
\]
which is a point of $C$ different from $A$; this contradicts the assumption that $L_2$ intersects $C$ only at $A$.
\end{proof}
In view of the above lemma, the collision points on the semicircles
cannot be too close to the endpoints of the semicircles (including
endpoints themselves). Thus, the
correspondence between ${\mathcal K}_{\ell,N}$ and its coding system $\Sigma_{\ell,N}$
is a bijection. Standard considerations of topologies in both systems
show that this bijection is a homeomorphism, say
$\xi_{\ell,N}:{\mathcal K}_{\ell,N}\to\Sigma_{\ell,N}$. If $\sigma$ is the left shift
in the symbolic system, then by the construction we have
$\xi_{\ell,N}\circ\mathcal{F}_\ell=\sigma\circ\xi_{\ell,N}$. In such a way we
get the following lemma.
\begin{lemma}\label{conj}
If $\ell>2N+2$ then the systems $({\mathcal K}_{\ell,N},\mathcal{F}_\ell)$ and
$(\Sigma_{\ell,N},\sigma)$ are topologically conjugate.
\end{lemma}
We can modify our codings, in order to simplify further proofs. The
first thing is to identify the symbols corresponding to two
semicircles. This can be done due to the symmetry, and will result in
producing symbolic systems $\Sigma'_\ell$ and $\Sigma'_{\ell,N}$,
which are 2-to-1 factors of $\Sigma_\ell$ and $\Sigma_{\ell,N}$
respectively. Since the operation of taking a 2-to-1 factor preserves
topological entropy, this will not affect our results.
With this simplification, $\Sigma'_\ell$ is a closed, shift-invariant
subset of the phase space of a subshift of finite type $\widetilde\Sigma$. The subshift
of finite type $\widetilde\Sigma$ looks as follows. There are three states, $0,A,B$
(where 0 corresponds to the semicircles), and the only forbidden
transitions are from $A$ to $A$ and from $B$ to $B$.
Then $\Sigma'_{\ell,N}$ is a closed, shift-invariant subset of
$\Sigma'_\ell$, where additionally $n$-step transitions involving only
states $A$ and $B$ are forbidden if $n>N$. However, it pays to recode
$\Sigma'_{\ell,N}$. Namely, we replace states $A$ and $B$ by
$1,2,\dots,N$ and $-1,-2,\dots,-N$ respectively. If
$(\omega_n)\in\Sigma'_{\ell,N}$, and $\omega_k=\omega_{k+m+1}=0$, while
$\omega_n\in\{A,B\}$ for $n=k+1,k+2,\dots,k+m$, then for the recoded
sequence $(\rho_n)$ we have $\rho_k=\rho_{k+m+1}=0$ and
$(\rho_{k+1},\rho_{k+2}\dots,\rho_{k+m})$ is equal to $(1,2,\dots,m)$
if $\omega_{k+1}=A$ and $(-1,-2,\dots,-m)$ if $\omega_{k+1}=B$.
The geometric meaning of the recoding is simple. We unfold the
stadium by using reflections from the straight parts (see
Figure~\ref{fig1}). We will label the levels of the semicircles by
integers.
\begin{figure}[h]
\begin{center}
\includegraphics[width=100truemm]{fig1.eps}
\caption{Unfolded stadium.}\label{fig1}
\end{center}
\end{figure}
Our new coding translates to this picture as follows. We start at a
semicircle, then go to a semicircle on the other side and $m$ levels
up or down, etc.
For symbolic systems, recoding in such a way amounts to the
topological conjugacy of the original and recoded systems (see~\cite{K}).
This means that the system $(\Sigma'_{\ell,N},\sigma)$ is
topologically conjugate to a subsystem of $\widetilde\Sigma_N$, which is the
subshift of finite type defined as follows. The states are
$-N,-N+1,\dots,N-1,N$, and the transitions are: from 0 to every state,
from $i$ to $i+1$ and 0 if $1\le i\le N-1$, from $N$ to 0, from $-i$
to $-i-1$ and 0 if $1\le i\le N-1$, and from $-N$ to 0.
\begin{lemma}\label{level-c}
If $\ell>2N+2$ then $(\Sigma'_{\ell,N},\sigma)$ is topologically
conjugate to $(\widetilde\Sigma_N,\sigma)$.
\end{lemma}
\begin{proof}
Both sets $\Sigma'_{\ell,N}$ and $\widetilde\Sigma_N$ are closed and
$\Sigma'_{\ell,N}\subset\widetilde\Sigma_N$. Therefore, it is enough to prove that
$\Sigma'_{\ell,N}$ is dense in $\widetilde\Sigma_N$. For this we show that for
every sequence $(\rho_0,\rho_1,\dots,\rho_k)$ appearing as a block in
an element of $\widetilde\Sigma_N$ there is a point $(r_0,\varphi_0)\in{\mathcal K}_{\ell,N}$ for which
after coding and recoding a piece of trajectory of length $k+1$, we
get $(\rho_0,\rho_1,\dots,\rho_k)$. By taking a longer sequence, we
may assume that $\rho_0=\rho_k=0$.
Consider all candidates for such trajectories in the unfolded stadium,
when we do not care whether the incidence and reflection angles are
equal. That is, we consider all curves that are unions of straight
line segments from $x_0$ to $x_1$ to $x_2$ $\dots$ to $x_k$ in the
unfolded stadium, such that $x_0$ is in the left semicircle at level
0, $x_1$ is in the right semicircle at level $n_1$, $x_2$ is in the
left semicircle at level $n_1+n_2$, etc. Here $n_1,n_2,\dots$ are the
numbers of non-zero elements of the sequence
$(\rho_0,\rho_1,\dots,\rho_k)$ between a zero element and the next
zero element, where we also take into account the signs of those
non-zero elements. In other words, this curve is an approximate
trajectory (of the flow) in the unfolded stadium that would have the
recoded itinerary $(\rho_0,\rho_1,\dots,\rho_k)$. Additionally we require that
$x_0$ and $x_k$ are at the midpoints of their semicircles. The class
of such curves is a compact space with the natural topology, so there
is the longest curve in this class. We claim that this curve is a
piece of the flow trajectory corresponding to the trajectory we are
looking for.
If we look at the ellipse with foci at $x_i$ and $x_{i+2}$ to which
$x_{i+1}$ belongs, then $x_{i+1}$ has to be a point of tangency of
that ellipse and the semicircle. Since for the ellipse the angles of
incidence and reflection are equal, the same is true for the
semicircle.
Now we have to prove three properties of our curve. The first one is that
any small movement of one of the points $x_1,\dots,x_{k-1}$ gives us a
shorter curve. The second one is that none of those points lies at an
endpoint of a semicircle. The third one is that none of the segments
of the curve intersects any semicircle at any other point.
The first property follows from the fact that any ellipse with foci on
the union of the left semicircles at levels $-N$ through $N$, which is
tangent to any right semicircle, is tangent from outside. This is
equivalent to the fact that the maximal curvature of such an ellipse is
smaller than the curvature of the semicircles (which is 1). The
distance between the foci of our ellipse is not larger than $2(2N+1)$,
and the length of the large semi-axis is larger than $\ell$.
Elementary computations show that the maximal curvature of such an
ellipse is smaller than $\frac{\ell}{\ell^2-(2N+1)^2}$. Thus, this
property is satisfied if $\ell^2-\ell>(2N+1)^2$. However, by the
assumption, $\ell^2-\ell=\ell(\ell-1)\ge(2N+2)(2N+1)>(2N+1)^2$.
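In more detail, this can be seen as follows: an ellipse with semi-axes $a\ge b$ has maximal curvature $a/b^2$, attained at the endpoints of the major axis. If $2c$ denotes the distance between the foci, then $b^2=a^2-c^2$, so with $c\le 2N+1$ and $a>\ell$ we get
\[
\frac{a}{b^2}=\frac{a}{a^2-c^2}\le\frac{a}{a^2-(2N+1)^2}<\frac{\ell}{\ell^2-(2N+1)^2},
\]
since the function $a\mapsto a/(a^2-(2N+1)^2)$ is decreasing for $a>2N+1$.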
The second property is clearly satisfied, because if $x_i$ lies at an
endpoint of a semicircle, then an infinitesimally small movement of
this point along the semicircle would result in both straight segments
of the curve that end or begin at $x_i$ getting longer.
The third property follows from the observation that if $\ell\ge 2N+2$
then the angles between the segments of our curve and the straight
parts of the billiard table boundary are smaller than $\pi/4$. Suppose
that the segment from $x_i$ to $x_{i+1}$ intersects the semicircle $C$
to which $x_{i+1}$ belongs at some other point $y$ (see
Figure~\ref{fig2}). Then $x_{i+1}$ and $y$ belong to the same half of
$C$. By the argument with the ellipses, at $x_{i+1}$ the incidence and
reflection angles of our curve are equal. Therefore, the segment from
$x_{i+1}$ to $x_{i+2}$ also intersects $C$ at some other point, so
$x_{i+1}$ should belong to the other half of $C$, a contradiction.
This completes the proof.
\end{proof}
\begin{figure}[h]
\begin{center}
\includegraphics[width=40truemm]{fig2.eps}
\caption{Two intersections.}\label{fig2}
\end{center}
\end{figure}
\begin{remark}\label{Markov}
By Lemmas~\ref{conj} and~\ref{level-c} (plus the way we obtained
$\Sigma'_{\ell,N}$ from $\Sigma_{\ell,N}$) it follows that if
$\ell>2N+2$ then the natural partition of ${\mathcal K}_{\ell,N}$ into four sets is a
Markov partition.
\end{remark}
\section{Computation of topological entropy}\label{sec-cote}
In the preceding section we obtained some subshifts of finite type. Now
we have to compute their topological entropies. If the alphabet
of a subshift of finite type is $\{1,2,\dots,k\}$, then we
can write the \emph{transition matrix} $M=(m_{ij})_{i,j=1}^k$, where
$m_{ij}=1$ if there is a transition from $i$ to $j$ and $m_{ij}=0$
otherwise. Then the topological entropy of our subshift is the
logarithm of the spectral radius of $M$ (see~\cite{K, ALM}).
\begin{lemma}\label{ent-kl}
Topological entropy of the system $(\Sigma_\ell',\sigma)$ is
$\log(1+\sqrt2)$.
\end{lemma}
\begin{proof}
The transition matrix of $(\Sigma_\ell',\sigma)$ (with the states ordered as $0,A,B$) is
\[
\begin{bmatrix}
1&1&1\\
1&0&1\\
1&1&0\\
\end{bmatrix}.
\]
The characteristic polynomial of this matrix is, up to sign, $(x+1)(x^2-2x-1)$, so
the entropy is $\log(1+\sqrt2)$.
\end{proof}
In the case of larger, but not too complicated, matrices, in order to
compute the spectral radius one can use the \emph{rome method}
(see~\cite{BGMY, ALM}). For the transition matrices of $\widetilde\Sigma_N$ this
method is especially simple. Namely, if we look at the paths given by
transitions, we see that 0 is a rome: all paths lead to it. Then we
only have to identify the lengths of all paths from 0 to 0 that do not
go through 0 except at the beginning and the end. The spectral radius
of the transition matrix is then the largest zero of the function
$\sum x^{-p_i}-1$, where the sum is over all such paths and $p_i$ is
the length of the $i$-th path.
\begin{lemma}\label{ent-kln}
Topological entropy of the system $(\widetilde\Sigma_N,\sigma)$ is the logarithm of
the largest root of the equation
\begin{equation}\label{eq0}
(x^2-2x-1)=-2x^{1-N}.
\end{equation}
\end{lemma}
\begin{proof}
The paths that we mentioned before the lemma are: one path of length
1 (from 0 directly to itself), and two paths of length $2,3,\dots,N$
each. Therefore, our entropy is the logarithm of the largest zero of
the function $2(x^{-N}+\dots+x^{-3}+x^{-2})+x^{-1}-1$. We have
\[
x(1-x)\big(2(x^{-N}+\dots +x^{-3}+x^{-2})+x^{-1}-1\big)=
(x^2-2x-1)+2x^{1-N},
\]
so our entropy is the logarithm of the largest root of
equation~\eqref{eq0}.
\end{proof}
Now that we computed topological entropies of the subshifts of
finite type involved, we have to go back to the definition of the
topological entropy of billiards (and their subsystems). As we
mentioned earlier, the most popular definitions either employ the
symbolic systems or use the growth rate of the number of periodic
orbits of the given period. For subshifts of finite type that does not
make a difference, because the exponential growth rate of the number of
periodic orbits of a given period is the same as the topological
entropy (if the systems are topologically mixing, which is the case
here). As the first step, we get the following result, which follows
immediately from Lemmas~\ref{conj}, \ref{level-c} and~\ref{ent-kln}.
\begin{theorem}\label{t-ent-kln}
If $\ell>2N+2$ then topological entropy of the system
$({\mathcal K}_{\ell,N},\mathcal{F}_\ell)$ is the logarithm of the largest root of the
equation~\eqref{eq0}.
\end{theorem}
Now, independently of which definition of the entropy $h(\mathcal{F}_\ell|_{{\mathcal K}_\ell})$ of
$({\mathcal K}_\ell,\mathcal{F}_\ell)$ we choose, we get the next theorem.
\begin{theorem}\label{main}
We have
\begin{equation}\label{eq1}
\liminf_{\ell\to\infty}h(\mathcal{F}_\ell|_{{\mathcal K}_\ell})\ge \log(1+\sqrt2).
\end{equation}
\end{theorem}
\begin{proof}
Since ${\mathcal K}_{\ell,N}$ is a subset of ${\mathcal K}_\ell$, we have $h(\mathcal{F}_\ell|_{{\mathcal K}_\ell})\ge h(\mathcal{F}_\ell|_{{\mathcal K}_{\ell,N}})$ for
every $N$. Therefore, by Theorem~\ref{t-ent-kln},
\[
\liminf_{\ell\to\infty}h(\mathcal{F}_\ell|_{{\mathcal K}_\ell})\ge\lim_{N\to\infty}\log y_N,
\]
where $y_N$ is the largest root of the equation~\eqref{eq0}. The
largest root of $x^2-2x-1=0$ is $1+\sqrt2$. In its neighborhood the
right-hand side of~\eqref{eq0} goes uniformly to 0 as $N\to\infty$.
Thus, $\lim_{N\to\infty}y_N=1+\sqrt2$, so we get~\eqref{eq1}.
\end{proof}
If we choose the definition of the entropy via the entropy of the
corresponding symbolic system, then, taking into account
Lemma~\ref{ent-kl}, we get a stronger theorem.
\begin{theorem}\label{main1}
We have
\begin{equation}\label{eq2}
\lim_{\ell\to\infty}h(\mathcal{F}_\ell|_{{\mathcal K}_\ell})=\log(1+\sqrt2).
\end{equation}
\end{theorem}
Of course, the same lower estimates hold for the whole billiard.
\section*{Abstract}
\noindent \textbf{Abstract} In this work, we compare several different modeling approaches for count data applied to the scores of handball matches with regard to their predictive performances
based on all matches from the four previous IHF World Men's Handball Championships 2011 -- 2017: {\em (underdispersed) Poisson regression models}, {\em Gaussian response models} and {\em negative binomial models}. All models are based on the teams' covariate information. Within this comparison, the Gaussian response model turns out to be the best-performing prediction method on the training data and is, therefore, chosen as the final model. Based on its estimates, the IHF World Men's Handball Championship 2019 is simulated repeatedly and winning probabilities are obtained for all teams. The model clearly favors Denmark before France. Additionally, we provide survival probabilities for all teams and at all tournament stages as well as probabilities for all teams to qualify for the main round.
\bigskip
\noindent\textbf{Keywords}:
IHF World Men's Handball Championship 2019, Handball, Lasso, Poisson regression, Sports tournaments.
\section{Introduction}
Handball, a popular sport around the globe, is particularly important in Europe and South America.
As there are many different aspects that can be analyzed, in recent years handball has also raised increasing interest among researchers. For example, in \cite{UhrBro:18} a group of statisticians and sports scientists selected 59 items from the play-by-play reporting of all games of the 2017 IHF World Men's Handball Championship and the involved players were compared based on their individual game actions, independently of game systems, concepts and tactical tricks. The data were clustered and collected in a matrix, adding up to a ``PlayerScore''.
In another scientific work, the activity profile of elite adolescent players during
regular team handball games was examined, and the physical and
motor performance of players between the first and second
halves of a match was compared \citep{CheEtAl:2011}.
In this project we elaborate on a statistical model to evaluate the chances for all teams to become champion of the upcoming IHF Handball World Cup 2019 in Denmark and Germany. For this purpose, we launched a collaboration of professional statisticians and handball experts. While this task is rather popular for soccer (see, e.g., \citealp{GroSchTut:2015} or \citealp{Zeil:2014}), to the best of our knowledge this idea is new in handball.
In the following, we will compare several (regularized) regression approaches modeling the number of goals the two competing handball teams score in a match with regard to their predictive performances. We start with the classical model for count data, namely the Poisson regression model. Next, we allow for under- or overdispersion, where the latter can be captured by the {\em negative binomial model}. Furthermore, for large values of the Poisson mean $\lambda$ the corresponding Poisson distribution converges to a Gaussian distribution (with $\mu=\sigma^2=\lambda$) due to the central limit theorem, which inspired us to also apply a {\em Gaussian response model}.
Through this comparison, a best-performing model is chosen
using the matches of the IHF World Cups 2011 -- 2017 as the training data. Based on its estimates, the IHF World Cup 2019 is
simulated repeatedly and winning probabilities are calculated for all teams.
The rest of the manuscript is structured as follows: in Section~\ref{sec:data}
we describe the underlying (training) data set covering (almost) all matches of
the four preceding IHF World Cups 2011 -- 2017. Next, in Section~\ref{sec:methods} we briefly explain
four different regression approaches and compare them based on their predictive performance on the training data set. The best-performing model is then fitted to the data and used to simulate and forecast the IHF World Cup 2019 in Section~\ref{sec:prediction}.
Finally, we conclude in Section~\ref{sec:conclusion}.
\section{Data}\label{sec:data}
In this section, we briefly describe the underlying data set covering all
matches of the four preceding IHF World Men's Handball Championships 2011 -- 2017 together with several
potential influence variables\footnote{In principle, a larger data set containing more IHF World Cups together with the below-mentioned covariate information could have been constructed. However, for World Cups earlier than 2011 these data were much harder or impossible to find. For this reason we restrict the present analysis to the four IHF World Cups 2011 -- 2017.}. Basically, we use a similar set of
covariates as \citet{GroSchTut:2015} do for their soccer FIFA World Cup analysis, with certain modifications that are necessary for handball. For each participating team,
the covariates are observed either for the year of the respective World Cup
(e.g.,\ GDP per capita) or shortly before the start of the World Cup (e.g.,\ a team's IHF ranking), and, therefore, vary from one World Cup to another.
Some of the variables contain information about the recent performance and sportive success of national teams, as the current form of a national team should have an influence on the team's success in the upcoming tournament. Beside these sportive variables, also certain economic factors as well as variables describing the structure of a team's squad are collected. We shall now describe in more detail these variables.
\begin{description}
\item \textbf{Economic Factors:}
\begin{description}
\item[\it GDP per capita.] To account for the general
increase of the gross domestic product (GDP) during 2011 -- 2017, a ratio of the GDP per capita of the respective country and the worldwide average GDP per capita is used (source: \url{http://www.imf.org/external/pubs/ft/weo/2018/01/weodata/index.aspx}).
\item[\it Population.] The population size is
used in relation to the respective global population to account for the general world population growth during 2011 -- 2017 (source: \url{https://population.un.org/wpp/Download/Standard/Population/}).
\end{description}\bigskip
\item \textbf{Sportive factors:}
\begin{description}
\item[\it ODDSET probability.] We convert bookmaker odds provided by the German state betting agency ODDSET into winning probabilities. The variable hence reflects the probability for each team to win the respective World Cup.
\item[\it IHF ranking.] The IHF ranking is a ranking table of national handball federations published by the IHF (source: \url{http://ihf.info/en-us/thegame/rankingtable}). The full ranking includes results of men's, women's as well as junior and youth teams and even beach handball. The points a team receives are determined from the final rankings of World Cups of the respective sub-groups and Olympic games and strictly increase over the years, so the ranking system displays an all-time ranking of the national federations. All those results can be regarded in total or separately for each team's section. Since this project only examines men's World Cups, only the men's ranking table will be used in the following.
\item[\it IHF points.] In addition to the IHF ranking, we also include the precise number of IHF points the ranking is based on. This provides an even more exact all-time ranking of the national federations' historic performances.
\end{description}\bigskip
\item \textbf{Home advantage:}
\begin{description}
\item[\it Host.] It can be assumed that the host of a World Cup might have a home advantage, since the players experience stronger crowd support in the arena and are more familiar with the host country's cultural circumstances. Hence, a dummy is included indicating if a national team is a hosting country. Since the World Cup 2019 is jointly hosted by Germany and Denmark, both are treated equally.
\item[\it Continental federation.] The IHF is the parent organization of the different continental federations, the African Handball Confederation (CAHB), the Asian Handball Federation (AHF), the European Handball Federation (EHF), the
Oceania Continent Handball Federation (OCHF) and the Pan-American Team Handball Federation (PATHF).
On the one hand, a nation's affiliation with the same continental federation as the host could influence the team's performance similarly to the host advantage itself, due to a better habituation with the host's conventions. Additionally, supporters of those teams have a shorter journey. On the other hand, handball is not equally prevalent on every continent; especially European club handball is the most popular. To capture potential performance differences between the continental federations, two variables are added to the data set: a dummy determining whether a nation is located in {\it Europe}, and a dummy indicating whether a nation belongs to the {\it same umbrella organization as the World Cup host}.
\end{description}
\bigskip
\item \textbf{Factors describing the team's structure:}
The following variables describe the structure of the teams.
They were observed with the 16-player-squad
nominated for the respective World~Cup.\medskip
\begin{description}
\item[\it (Second) maximum number of teammates.] For each squad, both the maximum
and second maximum number of teammates playing together in the same national club are counted.
\item[\it Average age.] The average age of each squad is collected.
However, very young players might be rather inexperienced at big tournaments and some older players might lack a bit of stamina. For this reason we assume an ideal athlete's age, here represented by the average age of all squads that participated in World Cups throughout the last eight years, and record the absolute divergence between a national team's average age and that ideal age.
\item[\it Average height.] The average height of a team can possibly impact the team's power. Tall players might have an advantage over short players, as they can release a shot on goal above a defender more easily. Therefore, we include the team's average height in meters.
\item[\it Number of EHF Champions League (EHF-cup) players.]
As club handball is mainly based on the European continent, the EHF Champions League is viewed as the most attractive competition, as many of the best club teams in the world participate and only the best manage to reach the final stages of the competition. Hence, the best players also play for these clubs. For this reason we include the number of players of each country that reached the EHF Champions League semifinals in the year preceding the respective World Cup. The same data is collected for the second biggest European club competition, the EHF-cup.
\item[\it Number of players abroad/Legionnaires.] For each squad, the number of players
playing in clubs abroad is counted.
\end{description}\bigskip
\item \textbf{Factors describing the team's coach:}
The players of course constitute the most important part of a squad, but every team additionally needs a suitable coach to instruct the players. Therefore, some observable coach characteristics are gathered, namely the \textit{Age} and \textit{Tenure} of the coach plus a dummy variable that determines whether he shares the same \textit{Nationality} as his team.
\end{description}
\noindent In total, this adds up to 18 variables which were collected separately for each World Cup and each participating team. As an illustration, Table~\ref{data1} shows the results (\ref{tab:results}) and (parts of) the covariates (\ref{tab:covar}) of the respective teams, exemplarily for the first four matches of the IHF World Cup 2011. We use this data excerpt to illustrate how the final data set is constructed.
\begin{table}[h]
\small
\caption{\label{data1} Exemplary table showing the results of four matches and parts of the covariates of the involved teams.}
\centering
\subfloat[Table of results \label{tab:results}]{
\begin{tabular}{lcr}
\hline
& & \\
\hline
FRA \includegraphics[width=0.4cm]{FRA.png} & 32:19 & \includegraphics[width=0.4cm]{TUN.png} \;TUN\\
ESP \includegraphics[width=0.4cm]{ESP.png} & 33:22 & \includegraphics[width=0.4cm]{BAH.png} \;BAH\\
BAH \includegraphics[width=0.4cm]{BAH.png} & 18:38 & \includegraphics[width=0.4cm]{GER.png} \;GER\\
TUN \includegraphics[width=0.4cm]{TUN.png} & 18:21 & \includegraphics[width=0.4cm]{ESP.png} \;ESP\\
\vdots & \vdots & \vdots \\
\hline
\end{tabular}}
\hspace*{0.8cm}
\subfloat[Table of (original) covariates \label{tab:covar}]{
\begin{tabular}{llrrrrr}
\hline
World Cup & Team & Age & Rank & Oddset & \ldots \\
\hline
2011 & France & 29.0 & 5 & 0.291 & \ldots \\
2011 & Tunisia & 26.4 & 17 & 0.007 & \ldots \\
2011 & Germany & 26.9 & 1 & 0.007 & \ldots\\
2011 & Bahrain & 29.0 & 48 & 0.001 & \ldots\\
2011 & Spain & 26.8 & 7 & 0.131 & \ldots\\
\vdots & \vdots & \vdots & \vdots & \vdots & $\ddots$ \\
\hline
\end{tabular}
}
\end{table}
\noindent For the modeling techniques that we shall introduce in the following sections, all of the metric covariates are incorporated in the form of differences. For example, the final variable {\it Rank} will be the difference between the IHF ranks of both teams. The categorical variables {\it Host},
{\it Nationality} as well as the two continental federation variables, however, are included as separate variables for both competing teams.
For the variable {\it Host}, for example, this results in two columns of the corresponding design matrix denoted by
{\it Host} and {\it Host.Oppo}, where {\it Host} is indicating whether the first-named team
is a World Cup host and {\it Host.Oppo} whether its opponent is.
As we use the number of goals of each team directly as the response variable, each match corresponds to two different observations, one per team. For the covariates, we consider differences which are computed from the perspective of the first-named team. For illustration, the resulting final data structure for the exemplary matches from Table~\ref{data1} is displayed in Table~\ref{data2}.
\begin{table}[!h]
\small
\centering
\caption{Exemplary table illustrating the data structure.}\label{data2}
\begin{tabular}{rllrrrr}
\hline
Goals & Team & Opponent & Age & Rank & Oddset & ... \\
\hline
32 & France & Tunisia & 0.81 & 12 & 0.284 & ... \\
19 & Tunisia & France & -0.81 & -12 & -0.284 & ... \\
33 & Spain & Bahrain & 1.21 & -41 & 0.129 & ... \\
22 & Bahrain & Spain & -1.21 & 41 & -0.129 & ... \\
18 & Bahrain & Germany & 0.10 & 47 & -0.064 & ... \\
38 & Germany & Bahrain & -0.10 & -47 & 0.064 & ... \\
18 & Tunisia & Spain & -0.81 & 10 & -0.124 & ... \\
21 & Spain & Tunisia & 0.81 & -10 & 0.124 & ... \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & $\ddots$ \\
\hline
\end{tabular}
\end{table}
\noindent Due to some missing covariate values for a few games, altogether the final data set contains 334 out of 354 matches from the four handball World Cups 2011 -- 2017. Note that in all the models described in the next section, we incorporate all of the above-mentioned covariates. However, not all of them will be selected by the introduced penalization technique. Instead, rather sparse models will be preferred.
\section{Methods}\label{sec:methods}
In this section, we briefly describe several different regression approaches
that generally come into consideration when the goals scored in single handball matches are
directly modeled. Actually, most of them (or slight modifications thereof)
have already been used in former research on soccer data and, generally, all yielded satisfactory results. However, some adjustments are necessary for handball.
All methods described in this section can be directly applied to data in the format of Table~\ref{data2} from Section~\ref{sec:data}. Hence, each score is treated as a single observation and one obtains two observations per match.
We aim to choose the approach that has the best performance regarding prediction and then use it to predict the IHF World Men's Handball Championship 2019.
\subsection{Poisson model}\label{subsec:pois}
A traditional approach which is often applied, for example, to model soccer results is based on Poisson regression. In this case, the scores of the competing teams are treated as (conditionally) independent variables following a Poisson distribution (conditioned on certain covariates), as introduced in the seminal works of \citealp{Mah:82} and \citealp{DixCol:97}.
As already stated, each score from a match of two handball teams is treated as a single observation. Accordingly, similar to the regression approach investigated in \cite{GroEtAl:WM2018}, for $n$ teams the respective model has the form
\begin{eqnarray}
Y_{ijk}|\boldsymbol{x}_{ik},\boldsymbol{x}_{jk}&\sim &Po(\lambda_{ijk})\,,\nonumber\\
\label{lasso:model}\log(\lambda_{ijk})&=&\eta_{ijk}\,:=\,\beta_0 + (\boldsymbol{x}_{ik}-\boldsymbol{x}_{jk})^\top\boldsymbol{\beta}+\boldsymbol{z}_{ik}^\top\boldsymbol{\gamma}+\boldsymbol{z}_{jk}^\top\boldsymbol{\delta}\,,
\end{eqnarray}
where $Y_{ijk}$ denotes the score of handball team $i$ against team $j$ in tournament $k$ with $i,j\in\{1,\ldots,n\},~i\neq j$ and $\eta_{ijk}$ is the corresponding linear predictor. The metric characteristics of both competing teams are captured in the $p$-dimensional vectors $\boldsymbol{x}_{ik}, \boldsymbol{x}_{jk}$, while $\boldsymbol{z}_{ik}$ and $\boldsymbol{z}_{jk}$ capture dummy variables for the categorical covariates {\it Host},
{\it Nationality} as well as the two continental federation variables (built, for example, by reference encoding), separately for the considered teams and their respective opponents. For these variables, it is not sensible to build differences between the respective values. Furthermore, $\boldsymbol{\beta}$ is a parameter vector which captures the linear effects of all metric covariate differences and $\boldsymbol{\gamma}$ and $\boldsymbol{\delta}$ collect the effects of the dummy variables corresponding to the teams and their opponents, respectively. For notational convenience, we collect all covariate effects in the $\tilde p$-dimensional real-valued vector $\boldsymbol{\theta}^\top=(\boldsymbol{\beta}^\top, \boldsymbol{\gamma}^\top, \boldsymbol{\delta}^\top)$.
If, as in our case, several covariates of the competing teams are included in the model, it is sensible to use regularization techniques when estimating the models to allow for variable selection and to avoid overfitting. In the following, we will introduce such a basic regularization approach, namely the conventional Lasso (least absolute shrinkage and selection operator; \citealp{Tibshirani:96}).
For estimation, instead of the regular likelihood $l(\beta_0,\boldsymbol{\theta})$ the penalized likelihood
\begin{eqnarray}\label{eq:lasso}
l_p(\beta_0,\boldsymbol{\theta}) = l(\beta_0,\boldsymbol{\theta}) - \xi P(\beta_0,\boldsymbol{\theta})
\end{eqnarray}
is maximized, where $P(\beta_0,\boldsymbol{\theta})=\sum_{v=1}^{\tilde p}|\theta_v|$ denotes the ordinary Lasso penalty with tuning parameter $\xi$. The optimal value for the tuning parameter $\xi$ will be determined by 10-fold cross-validation (CV). The model will be fitted using the function \texttt{cv.glmnet} from the \texttt{R}-package \texttt{glmnet} \citep{FrieEtAl:2010}. In contrast to the similar ridge penalty \citep{HoeKen:70}, which penalizes squared parameters instead of absolute values, Lasso does not only shrink parameters towards zero, but is able to set them to exactly zero. Therefore, depending on the chosen value of the tuning parameter, Lasso also enforces variable selection.
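To make this fitting step more concrete, the following minimal \texttt{R} sketch illustrates how such a Lasso-penalized Poisson model could be estimated with \texttt{cv.glmnet}; the data frame \texttt{worldcup}, assumed to be arranged as in Table~\ref{data2}, is purely hypothetical and not part of the original analysis.
\begin{verbatim}
library(glmnet)

## design matrix: all covariates except the response and the team labels;
## the intercept column is dropped because glmnet adds its own intercept
X <- model.matrix(Goals ~ . - Team - Opponent, data = worldcup)[, -1]
y <- worldcup$Goals

set.seed(1)
cvfit <- cv.glmnet(X, y, family = "poisson", nfolds = 10)

## coefficients at the tuning parameter minimizing the 10-fold CV error
coef(cvfit, s = "lambda.min")
\end{verbatim}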
\subsection{Overdispersed Poisson model / negative binomial model}\label{subsec:NegBin}
The Poisson model introduced in the previous section is built on the rather strong assumption
$E\left[Y_{ijk}|\boldsymbol{x}_{ik},\boldsymbol{x}_{jk}\right] = Var\left(Y_{ijk}|\boldsymbol{x}_{ik},\boldsymbol{x}_{jk}\right) = \lambda_{ijk}$, i.e.\ that the expectation of the distribution equals the variance. For the case of World Cup handball matches, the (marginal) average number of goals is around 30 (for example, $\bar{y} = 27.33$ for the matches of the IHF World Cups 2011 -- 2017) and presumably the corresponding variance could differ substantially.
A case often treated in the literature is the case when $Var(Y)>E[Y]$, the so-called overdispersion. But for handball matches, the contrary could also be possible, namely that $Var(Y)<E[Y]$ holds. In both cases, one typically assumes that $Var(Y)=\phi\cdot E[Y]$ holds, where $\phi$ is called the {\it dispersion parameter} and can be estimated via
\begin{equation}\label{eq:dispersion}
\hat\phi = \frac{1}{N-df} \sum\limits_{i=1}^N r_i^2,
\end{equation}
where $N$ is the number of observations and $r_i$ the model's Pearson residuals.
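As a small illustration (not part of the original analysis), this dispersion check based on equation~\eqref{eq:dispersion} can be carried out in \texttt{R} from an unpenalized Poisson fit; the data frame \texttt{worldcup} is again hypothetical.
\begin{verbatim}
## unpenalized Poisson fit, used only to illustrate the dispersion estimate
pois_fit <- glm(Goals ~ . - Team - Opponent, data = worldcup,
                family = poisson)

## Pearson-residual based estimate of the dispersion parameter
phi_hat <- sum(residuals(pois_fit, type = "pearson")^2) /
           df.residual(pois_fit)
phi_hat   # values clearly below 1 indicate underdispersion
\end{verbatim}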
We will first focus on the (more familiar) case of overdispersion. It is well known that the overdispersed Poisson model can be obtained by using the negative binomial model. To combine this model class with the Lasso penalty from equation~\eqref{eq:lasso}, the \texttt{cv.glmregNB} function from the R-package \texttt{mpath} \citep{Zhu:2018} can be used (see also, for example, \citealp{ZhuEtAl:2018}).
\subsection{Underdispersed Poisson model}\label{subsec:under_pois}
If we fit the (regularized) Poisson model from Section~\ref{subsec:pois} to our IHF World Cup data and then estimate the dispersion parameter via equation~\eqref{eq:dispersion}, we obtain a value for $\hat\phi$ clearly smaller than one ($\hat\phi=0.74$), i.e.\ substantial underdispersion. Hence, the variance of the goals in IHF World Cup matches seems to be smaller than their mean.
To be able to simulate from an underdispersed Poisson distribution (which we would need later on to simulate matches from the IHF World Cup 2019), the \texttt{rdoublepois} function from the \texttt{rmutil} package \citep{SwiLin:2018} can be used.
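A minimal sketch of this simulation step is given below; we assume the call signature \texttt{rdoublepois(n, m, s)} (number of draws, mean, dispersion argument), and the exact parameterization of the dispersion argument should be checked in the package documentation. All numerical values are purely illustrative.
\begin{verbatim}
library(rmutil)

## draw 10,000 goal counts for an illustrative mean of 28.5 and two values of
## the dispersion argument s; comparing empirical means and variances shows
## how s controls the under-/overdispersion (see ?rdoublepois for details)
sapply(c(0.5, 2), function(s) {
  g <- rdoublepois(10000, 28.5, s)
  c(s = s, mean = mean(g), var = var(g))
})
\end{verbatim}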
\subsection{The Gaussian response model}\label{subsec:gauss}
It is well-known that for large values of the Poisson mean $\lambda$ the corresponding Poisson distribution converges to a Gaussian distribution (with $\mu=\sigma^2=\lambda$) due to the central limit theorem. In practice, for values $\lambda\approx 30$ (or larger) the approximation of the Poisson via the Gaussian distribution is already quite satisfactory.
As we have already seen in Section~\ref{subsec:NegBin}, the average number of goals in handball World Cup matches is close to 30, which inspired us to also apply a Gaussian response model.
However, instead of forcing the mean to equal the variance, we again allow for $\mu\neq\sigma^2$, i.e.\ for potential (constant) over- or underdispersion. Note here that the main difference to the over- and underdispersion models from the two preceding sections is that there each observation obtains its own variance via $Var\left(Y_{ijk}|\boldsymbol{x}_{ik},\boldsymbol{x}_{jk}\right) = \hat\phi\cdot\lambda_{ijk}$, whereas in the Gaussian response model all observations have the same
variance $\hat\sigma^2$. On our World Cup 2011 -- 2017 data, we obtain $\hat\sigma^2=20.13$, which compared to the average number of goals $\bar{y} = 27.33$ indicates a certain amount of (constant) underdispersion.
We also want to point out here that in order to be able to simulate a precise match result from the
model's distribution (and then, successively, to calculate probabilities for the three match results {\it win}, {\it draw} or {\it loss}), we round results to the nearest natural number. In general, the Lasso-regularized Gaussian response model will again be fitted using the function \texttt{cv.glmnet} from the \texttt{R}-package \texttt{glmnet} based on the linear predictor $\eta_{ijk}$ defined in equation~\eqref{lasso:model}.
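As a small illustration of this step (with purely hypothetical expected scores), the following \texttt{R} sketch turns two predicted means into probabilities for the three match outcomes by simulation; the constant variance is the estimate mentioned above.
\begin{verbatim}
## hypothetical predicted means for the two competing teams
mu1 <- 28.4; mu2 <- 25.9; sigma2 <- 20.13

set.seed(1)
g1 <- round(rnorm(100000, mean = mu1, sd = sqrt(sigma2)))
g2 <- round(rnorm(100000, mean = mu2, sd = sqrt(sigma2)))

c(win = mean(g1 > g2), draw = mean(g1 == g2), loss = mean(g1 < g2))
\end{verbatim}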
\subsection{Increase model sparsity}\label{subsec:sparse}
Note that in addition to the conventional Lasso solution minimizing the 10-fold CV error, a second, sparser solution could be used. Here, the optimal value for the tuning parameter $\xi$ is chosen by a different strategy: instead of choosing the model with the minimal CV error the most restrictive model is chosen which is within one standard error of the minimum of the CV error.
While it is directly provided by the \texttt{cv.glmnet} function from the \texttt{glmnet} package, for the \texttt{cv.glmregNB} function it had to be calculated manually.
In the following section, where all the different models from above are compared, for each model class also this sparser solution is calculated and included in the comparison.
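In terms of the illustrative \texttt{cv.glmnet} sketch shown above, this sparser solution simply corresponds to evaluating the fitted object at the larger tuning parameter stored as \texttt{lambda.1se} instead of \texttt{lambda.min}, e.g.\ via \texttt{coef(cvfit, s = "lambda.1se")}.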
\subsection{Comparing methods}\label{subsec:compare}
The four different approaches introduced in Sections~\ref{subsec:pois} - \ref{subsec:gauss} are now compared with regard to their predictive performance. For this purpose, we apply the following general procedure to the World Cup 2011 -- 2017 data which had already been applied to soccer World Cup data in \cite{GroEtAl:WM2018}:
\begin{enumerate}{\it
\item Form a training data set containing three out of four World Cups.\vspace{0.1cm}
\item Fit each of the methods to the training data.\vspace{0.1cm}
\item Predict the left-out World Cup using each of the prediction methods.\vspace{0.1cm}
\item Iterate steps 1-3 such that each World Cup is once the left-out one.\vspace{0.1cm}
\item Compare predicted and real outcomes for all prediction methods.}\vspace{-0.1cm}
\end{enumerate}
This procedure ensures that each match from the total data set is once part of the test data and we obtain out-of-sample predictions for all matches. In step~{\it 5}, several different performance measures for the quality of the predictions are investigated.
Let $\tilde y_i\in\{1,2,3\}$ be the true ordinal match outcomes for all $i=1,\ldots,N$ matches from the four considered World Cups. Additionally, let $\hat\pi_{1i},\hat\pi_{2i},\hat\pi_{3i},~i=1,\ldots,N$, be the predicted probabilities for the match outcomes obtained by one of the different methods introduced in Sections~\ref{subsec:pois} - \ref{subsec:gauss}. Further, let $G_{1i}$ and $G_{2i}$ denote the random variables representing the number of goals scored by two competing teams in match $i$. Then, the probabilities $\hat \pi_{1i}=P(G_{1i}>G_{2i}), \hat \pi_{2i}=P(G_{1i}=G_{2i})$ and $\hat \pi_{3i}=P(G_{1i}<G_{2i})$ can be computed/simulated
based on the respective underlying (conditionally) independent response distributions $F_{1i},F_{2i}$ with $G_{1i}\sim F_{1i}$ and $G_{2i}\sim F_{2i}$. The two distributions $F_{1i},F_{2i}$ depend on the corresponding linear predictors $\eta_{ijk}$ and $\eta_{jik}$ from equation~\eqref{lasso:model}.
Based on these predicted probabilities, following \cite{GroEtAl:WM2018} we use three different performance measures to compare the predictive power of the methods:
\begin{itemize}
\item the multinomial {\it likelihood}, which for a single match outcome is defined as $\hat \pi_{1i}^{\delta_{1\tilde y_i}} \hat \pi_{2i}^{\delta_{2\tilde y_i}} \hat \pi_{3i}^{\delta_{3 \tilde y_i}}$, with $\delta_{r\tilde y_i}$ denoting Kronecker's delta. It reflects the probability of a correct prediction. Hence, a large value reflects a good fit.\vspace{0.1cm}
\item the {\it classification rate}, based on the indicator functions $\mathbb{I}(\tilde y_i=\underset{r\in\{1,2,3\}}{\mbox{arg\,max }}(\hat\pi_{ri}))$, indicating whether match $i$ was correctly classified.
Again, a large value of the classification rate reflects a good fit.\vspace{0.1cm}
\item the {\it rank probability score} (RPS) which, in contrast to both measures introduced above, explicitly accounts for the ordinal structure of the responses.
For our purpose, it can be defined as $\frac{1}{3-1} \sum\limits_{r=1}^{3-1}\left( \sum\limits_{l=1}^{r}(\hat\pi_{li} - \delta_{l\tilde y_i})\right)^{2}$. As the RPS is an error measure, here a low value represents a good fit.
\end{itemize}
Odds provided by bookmakers serve as a natural benchmark for these predictive performance measures. For this purpose, we collected the so-called ``three-way'' odds for (almost) all matches of the IHF World Cups 2011 -- 2017\footnote{Three-way odds consider only the match tendency with possible results \emph{victory team 1}, \emph{draw} or \emph{defeat team 1} and are usually fixed some days before the corresponding match takes place. The three-way odds were obtained from the website \url{https://www.betexplorer.com/handball/world/}.}.
By taking the three quantities $\tilde \pi_{ri}=1/\mbox{odds}_{ri}, r\in\{1,2,3\}$, of a match $i$ and by normalizing with $c_i:=\sum_{r=1}^{3}\tilde \pi_{ri}$ in order to adjust for the bookmaker's margins, the odds can be directly transformed into probabilities using $\hat \pi_{ri}=\tilde \pi_{ri}/c_i$
\footnote{The transformed probabilities implicitly assume that the bookmaker's margins are equally distributed over the three possible match tendencies.}.
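For illustration (with hypothetical odds and outcome, not taken from the actual data), the conversion of three-way odds into probabilities and the evaluation of the three performance measures for a single match could look as follows in \texttt{R}:
\begin{verbatim}
odds  <- c(1.45, 9.0, 5.5)    # hypothetical odds: win / draw / loss of team 1
p_raw <- 1 / odds
p     <- p_raw / sum(p_raw)   # normalize to remove the bookmaker's margins

y <- 1                        # observed outcome: team 1 wins (category 1)

lik <- p[y]                               # multinomial likelihood
hit <- as.numeric(which.max(p) == y)      # classification indicator
rps <- sum((cumsum(p)[1:2] - cumsum(y == 1:3)[1:2])^2) / 2  # rank probability score
c(likelihood = lik, classified = hit, RPS = rps)
\end{verbatim}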
As we later want to predict both winning probabilities for all teams and the whole tournament course for the IHF World Cup 2019,
we are also interested in the performance of the regarded methods with respect to the prediction of the exact number of goals. In order to identify the teams that qualify during both group stages, the precise final group standings need to be determined. To be able to do so, the precise results of the matches in the group
stage play a crucial role\footnote{\label{foot:mode}The final group standings are determined by (1) the number of points, (2) head-to-head points (3) head-to-head goal difference, (4) head-to-head number of goals scored, (5) goal difference and (6) total number of goals. If no distinct decision can be taken, the decision is taken by lot.}.
For this reason, we also evaluate the different regression models' performances
with regard to the quadratic error between the observed and predicted
number of goals for each match and each team, as well as between the observed and predicted goal difference. Let now $y_{ijk}$, for $i,j=1,\ldots,n$ and $k\in\{2011,2013,2015,2017\}$,
denote the observed numbers of goals scored by team $i$ against team $j$ in tournament $k$ and
$\hat y_{ijk}$ a corresponding predicted value, obtained by one of the models from Sections~\ref{subsec:pois} - \ref{subsec:gauss}.
Then we calculate the two quadratic errors $(y_{ijk}-\hat y_{ijk})^2$ and $\left((y_{ijk}-y_{jik})-(\hat y_{ijk}-\hat y_{jik})\right)^2$ for all $N$ matches of the four IHF World Cups 2011 -- 2017. Finally, per method we calculate (mean) quadratic errors.
Table~\ref{tab:probs_old} displays the results for these five performance measures
for the models introduced in Sections~\ref{subsec:pois} -- \ref{subsec:gauss} as well as for the bookmakers, averaged over 334 matches from the four IHF World Cups 2011 -- 2017. While the bookmakers serve as a benchmark and yield the best results with respect to all ordinal criteria, the second-best method's results are highlighted in bold text. It turns out that the Poisson and the underdispersed Poisson model yield very good results with respect to the classification rate, while the Gaussian response model is (in some cases clearly) the best performer regarding all other criteria. As no overdispersion (but, in fact, underdispersion) is found in the data, the negative binomial model's results are almost indistinguishable from those of the (conventional) Poisson model.
The more sparse Lasso estimators introduced in Section~\ref{subsec:sparse} perform substantially worse in terms of prediction accuracy compared to the conventional Lasso solution.
Based on these results, we chose the regularized Gaussian response model with constant (and rather low) variance as our final model and shall use it in the next section to simulate the IHF World Cup 2019.
\begin{table}[H]
\small
\caption{\label{tab:probs_old}Comparison of the different methods for ordinal match outcomes; the second-best method's results are highlighted in bold text.}\vspace{0.2cm}
\centering
\input{result_probs2}
\end{table}
\section{Prediction of the IHF World Cup 2019}\label{sec:prediction}
Now we apply the best-performing model from Section~\ref{sec:methods}, namely the regularized Gaussian response model with constant underdispersion, to the full World Cup 2011 -- 2017 training data and will then use it to calculate winning probabilities for the World Cup 2019. For this purpose, the covariate information from Section~\ref{sec:data} has to be collected for all teams participating at the 2019 World~Cup.
It has to be stated that at the time this analysis was performed, namely on the first tournament day (January 10, 2019) right before the start of the tournament, the teams of Bahrain and Sweden had listed squads consisting of 15 players only, and one more player will probably move up soon. Hence, for those two teams the covariates corresponding to natural-numbered team structure variables (such as, e.g., the {\it number of legionnaires}) have been normalized to be comparable to 16-player squads by multiplying them with the factor $16/15$. Another special case concerns the team of Korea. As this team is formed by a selection of players from both South and North Korea, the federation was given the special approval to nominate 20 players. As it actually might be an advantage to have a larger squad, we abstained here from normalizing the covariate values of the Korean team.
The optimal tuning parameter $\xi$ of the L1-penalized Gaussian response model, which minimizes the deviance shown in Figure~\ref{fig:lasso} (left), leads to a model with
16 (out of possibly 22) regression coefficients different
from zero. The paths illustrated
in Figure~\ref{fig:lasso} (right) show that three covariates enter the model rather early. These are the {\it Rank}, the {\it Height} and the {\it Odds}, which seem to be rather important when determining the score in a handball World Cup match. The corresponding fixed effects estimates for the (scaled) covariates are shown in Table~\ref{tab:lasso_coefs}.
\begin{figure}
\includegraphics[width=0.55\textwidth]{lasso_dev_log.pdf}\hspace{-0.5cm}
\includegraphics[width=0.55\textwidth]{lasso_paths_log.pdf}\vspace{-0.3cm}
\caption{Left: 10-fold CV deviance for Gaussian response model on
IHF World Cup data 2011 -- 2017; Right: Coefficient paths vs.\ (logarithmized) penalty parameter $\xi$; optimal value of the penalty
parameter $\xi$ shown by vertical line.}\label{fig:lasso}
\end{figure}
\begin{table}[H]
\small
\caption{\label{tab:lasso_coefs}Estimates of the covariate effects for the IHF World Cups 2011 -- 2017.}\vspace{0.2cm}
\centering
\input{lasso_coefs.tex}
\end{table}
Based on the estimates from Table~\ref{tab:lasso_coefs} and the covariates of all teams for the IHF World Cup 2019,
we can now simulate all matches from the preliminary round. Next, we can simulate all resulting matches in the main round and determine those teams that reach the semi-finals and, finally, those two teams that reach the final and the World Champion. We repeat the simulation of the whole tournament 100,000 times. This way, for each of the 24 participating teams probabilities to reach the different tournament stages and, finally, to win the tournament are obtained.
\subsection{Probabilities for IHF World Cup 2019 Winner}
For each match in the World Cup 2019, the model can be used to predict an expected number of goals for both teams. Given the expected number of goals, a real result is drawn by assuming two (conditionally) independent Gaussian distributions for both scores, which are then rounded to the closest natural number. Based on these results, all 60 matches from the preliminary round can be simulated and final group standings can be calculated. Because real results are simulated, we can precisely follow the official IHF rules when determining the final group standings (see footnote~\ref{foot:mode}). This enables us to determine the matches in the main round and we can continue by simulating those matches. Again, once the final group standings are calculated, the semi-finals are fixed. Next, the semi-finals can be simulated and the final is determined. In the case of draws in ``knockout'' matches, we simulate extra time by a second simulated result. However, here we multiply the expected number of goals by the factor 1/6 to account for the shorter time to score (10 min instead of 60 min). In the case of a further draw in the first extra time, we repeat this procedure. If the second extra time still ends in a draw, we simulate the penalty shootout by a (virtual) coin flip.
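For illustration, the following Python sketch shows how a single match could be simulated under this scheme, including extra time and the penalty-shootout coin flip; the constant standard deviation, the clipping at zero and the random seed are assumptions made for the example rather than the estimates used in our analysis.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2019)

def simulate_match(mu_home, mu_away, sigma=3.0, knockout=False):
    # Scores from two (conditionally) independent Gaussians,
    # rounded to the closest natural number (clipped at zero).
    g_h = max(0, int(round(rng.normal(mu_home, sigma))))
    g_a = max(0, int(round(rng.normal(mu_away, sigma))))
    if not knockout or g_h != g_a:
        return g_h, g_a
    # Up to two extra times: expected goals scaled by 1/6 (10 of 60 min).
    for _ in range(2):
        g_h += max(0, int(round(rng.normal(mu_home / 6, sigma / 6))))
        g_a += max(0, int(round(rng.normal(mu_away / 6, sigma / 6))))
        if g_h != g_a:
            return g_h, g_a
    # Still level: penalty shootout as a (virtual) coin flip.
    return (g_h + 1, g_a) if rng.random() < 0.5 else (g_h, g_a + 1)

print(simulate_match(28.3, 25.1, knockout=True))
\end{verbatim}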
\begin{table}[!h]
\small
\caption{\label{winner_probs}Estimated probabilities (in \%) for reaching (at least) the main round or the given final ranks in the IHF World Cup 2019 for all 24 teams based on 100,000 simulation runs of the IHF World Cup together with winning probabilities based on the ODDSET odds.}\vspace{0.4cm}
\centering
\input{Winner_probs}
\end{table}
Following this strategy, a whole tournament run can be simulated, which we repeat 100,000
times. Based on these simulations, for each of the 24 participating
teams probabilities to reach (at least) the main round or the given final rank and,
finally, to win the tournament are obtained. These are summarized
in Table~\ref{winner_probs} together with the winning probabilities
based on the ODDSET odds for comparison.
Apparently, the resulting winning probabilities show some
discrepancies from the probabilities based on the bookmaker's odds.
Though the upper and lower half of the teams according to our calculated probabilities
seem to coincide quite well with the overall ranking according to the bookmaker's odds,
for single teams from the upper half, in particular, Denmark, Spain and Hungary,
the differences between our approach and the bookmaker are substantial. Based on our model, Denmark is
the clear favorite for becoming IHF World Champion~2019.
These discrepancies could be mostly explained by the fact that
the Lasso coefficient estimates from Table~\ref{tab:lasso_coefs}
include several other covariate effects besides the bookmaker's odds.
\subsection{Group rankings}
Finally, based on the 100,000 simulations, we also provide
for each team the probability of reaching the main round. These probabilities are presented in Table~\ref{tab:group}.
Obviously, there are large differences with respect
to the groups' balances. While the model forecasts for example Spain and Croatia in Group~B, Denmark and Norway in Group~C and Hungary and Sweden in Group~D with probabilities clearly larger than $90\%$ to reach the second group stage, in Group~A France followed by Germany are the main favorites, but with lower probabilities
of $89.05\%$ and $80.16\%$, respectively. Hence,
Group~A seems to be more volatile.
\begin{table}[h!]
\begin{center}
\small
\caption{Probabilities for all teams to reach the main round at the IHF World Cup 2019 based on 100,000 simulation runs.}\label{tab:group}\vspace{0.4cm}
\begin{tabular}{|cr||cr||cr||cr|}
\toprule
\multicolumn{2}{|c||}{\parbox[0pt][1.5em][c]{0cm}{} Group A} & \multicolumn{2}{|c||}{Group B} & \multicolumn{2}{|c||}{Group C} & \multicolumn{2}{|c|}{Group D} \\
\midrule
&&&&&&& \\
1. \cellcolor{lightgray} \includegraphics[width=0.4cm]{FRA.png}\enspace FRA & \cellcolor{lightgray}$89.05\%$ &
1. \cellcolor{lightgray}\includegraphics[width=0.4cm]{ESP.png}\enspace ESP & \cellcolor{lightgray}$95.96\%$ &
1. \cellcolor{lightgray}\includegraphics[width=0.4cm]{DEN.png}\enspace DEN & \cellcolor{lightgray}$99.36\%$ &
1. \cellcolor{lightgray}\includegraphics[width=0.4cm]{HUN.png}\enspace HUN & \cellcolor{lightgray}$95.52\%$\\
&&&&&&& \\[-3pt]
2. \cellcolor{lightgray}\includegraphics[width=0.4cm]{GER.png}\enspace GER & \cellcolor{lightgray}$80.16\%$ &
2. \cellcolor{lightgray}\includegraphics[width=0.4cm]{CRO.png}\enspace CRO & \cellcolor{lightgray}$91.8\%$ &
2. \cellcolor{lightgray}\includegraphics[width=0.4cm]{NOR.png}\enspace NOR & \cellcolor{lightgray}$93.49\%$ &
2. \cellcolor{lightgray}\includegraphics[width=0.4cm]{SWE.png}\enspace SWE & \cellcolor{lightgray}$93.81\%$ \\
&&&&&&& \\[-3pt]
3. \cellcolor{lightgray}\includegraphics[width=0.4cm]{RUS.png}\enspace RUS & \cellcolor{lightgray}$68.99\%$ &
3. \cellcolor{lightgray}\includegraphics[width=0.4cm]{ICE.png}\enspace ICE & \cellcolor{lightgray}$80.16\%$ &
3. \cellcolor{lightgray}\includegraphics[width=0.4cm]{AUT.png}\enspace AUT & \cellcolor{lightgray}$53.75\%$ &
3. \cellcolor{lightgray}\includegraphics[width=0.4cm]{EGY.png}\enspace EGY & \cellcolor{lightgray}$46.52\%$ \\
&&&&&&& \\[-3pt]
4. \cellcolor{lightgray}\includegraphics[width=0.4cm]{SRB.png}\enspace SRB & \cellcolor{lightgray}$52.83\%$ &
4. \cellcolor{lightgray}\includegraphics[width=0.4cm]{MAC.png}\enspace MAC & \cellcolor{lightgray}$21.38\%$ &
4. \cellcolor{lightgray}\includegraphics[width=0.4cm]{TUN.png}\enspace TUN & \cellcolor{lightgray}$50.66\%$ &
4. \cellcolor{lightgray}\includegraphics[width=0.4cm]{ARG.png}\enspace ARG & \cellcolor{lightgray}$28.45\%$ \\
&&&&&&& \\[-3pt]
5. \cellcolor{lightgray}\includegraphics[width=0.4cm]{BRA.png}\enspace BRA & \cellcolor{lightgray}$7.91\%$ &
5. \cellcolor{lightgray}\includegraphics[width=0.4cm]{JPN.png}\enspace JPN & \cellcolor{lightgray}$10.5\%$ &
5. \cellcolor{lightgray}\includegraphics[width=0.4cm]{KSA.png}\enspace KSA & \cellcolor{lightgray}$1.41\%$ &
5. \cellcolor{lightgray}\includegraphics[width=0.4cm]{KAT.png}\enspace KAT & \cellcolor{lightgray}$24.75\%$\\
&&&&&&& \\[-3pt]
6. \cellcolor{lightgray}\includegraphics[width=0.4cm]{KOR.png}\enspace KOR & \cellcolor{lightgray}$1.06\%$ &
6. \cellcolor{lightgray}\includegraphics[width=0.4cm]{BAH.png}\enspace BAH & \cellcolor{lightgray}$0.2\%$ &
6. \cellcolor{lightgray}\includegraphics[width=0.4cm]{CHI.png}\enspace CHI & \cellcolor{lightgray}$1.34\%$ &
6. \cellcolor{lightgray}\includegraphics[width=0.4cm]{ANG.png}\enspace ANG & \cellcolor{lightgray}$10.95\%$ \\
&&&&&&& \\[-3pt]
\bottomrule
\end{tabular}\vspace{0.4cm}
\end{center}
\end{table}
\section{Concluding remarks}\label{sec:conclusion}
In this work, we first compared four different regularized regression models for the scores of handball matches with regard to their predictive performances
based on all matches from the four previous IHF World Cups 2011 -- 2017, namely {\em (over- and underdispersed) Poisson regression models} and {\em Gaussian response models}.
We chose the Gaussian response model with constant and rather low variance (indicating a tendency of underdispersion) as the most promising candidate and fitted it to a training data set containing all matches of these four tournaments. Based on the corresponding estimates, we simulated the IHF World Cup 2019 100,000 times. According to these simulations, the teams from Denmark ($37.2\%$) and France ($19.4\%$)
turned out to be the top favorites
for winning the title, with a clear advantage for Denmark.
\bibliographystyle{agsm}
\section{Introduction}
Tour recommendation is an important task for tourists visiting unfamiliar places~\cite{he2017category,lim2019tour}. Tour recommendation and planning are challenging problems due to the time and locality constraints faced by tourists visiting unfamiliar cities~\cite{brilhante-ipm15,chen-cikm16,gionis-wsdm14}.
Most visitors follow guide books/websites to plan their daily itineraries or use recommendation systems that suggest places-of-interest (POIs) based on popularity~\cite{lim2019tour}. However, these are not optimized in terms of time feasibility, localities and users' preferences. Tourists visiting an unfamiliar city are usually constrained by time, for example by hotel bookings or flight itineraries.
In this paper, we propose a word-embedding approach to recommend POIs based on historical data and their popularity, with consideration of the locations and traveling time between these POIs. We combine tour recommendation with various word-embedding models, namely Skip-Gram~\cite{mikolov2013distributed}, Continuous~Bag~of~Words~\cite{mikolov2013efficient} and \textsl{FastText}~\cite{bojanowski2017enriching}. The results show that our algorithm can achieve \textsl{F1}-scores of up to 59.2\% in our experiments.
\section{Related Work}
Tour planning is an essential task for tourists.
Most visitors rely on guide books or websites to plan their daily itineraries, which can be time-consuming. Next-POI prediction~\cite{he2017category,zhao2020go} and tour planning~\cite{sohrabi2020greedy,lim2019tour} are two related problems: next-location prediction and recommendation aim to identify the next POI that a user is most likely to visit based on historical trajectories, whereas tour itinerary recommendation aims to recommend multiple POIs or locations in the form of a trajectory. Top-$k$ location recommendation also suggests multiple POIs, but it does not connect these POIs into an itinerary. Furthermore, tour itinerary recommendation has the additional challenges of planning an itinerary of connected POIs that appeal to the interest preferences of the users, while satisfying tourists' temporal and spatial constraints in the form of a limited time budget. Various works have utilized geotagged photos to determine POI-related information for making various types of tour recommendations~\cite{lim2018personalized,cai2018itinerary,kurashima2013travel,sun2017tour}.
\section{Problem Formulation and Algorithm}
\textbf{Formulation~}{
We denote a traveler $u$ visiting $k$ POIs in a city by a sequence of $(poi,time)$~tuples, $S_u = [(p_1,t_1),(p_2,t_2),\ldots,(p_k,t_k)]$, where $k=|S_u|$, $p_i \in POIs$, and $t_i$ is the timestamp of the photo taken at $p_i$. Given also a starting POI ${s_0} \in {POIs}$, the problem addressed in this paper is to recommend a sequence of POIs that travelers are \emph{likely} to visit, using word-embedding methods.
} \\
\textbf{Algorithm~~~~}{
Our algorithm is adapted from the \emph{word2vec}~model, treating POIs as analogous to words in its typical NLP application, i.e., POIs are akin to words, and itineraries to sentences. To measure POI-POI similarity, we first convert travel trajectories into input for a \textit{word2vec}~model by analyzing the past activities of $n$~users moving from $p_i$ to $p_{i+1}$, starting a new trajectory at the first event that is at least eight hours after the user's previous activity~(i.e., a minimum rest time of 8 hours). We next construct the set of \emph{sentences} of POIs used as input to the \textit{word2vec}~model~(also known as the \emph{corpus}), as shown in Algorithm~1 and sketched in the code example below. The model is then trained using different \emph{word2vec}/\textsl{FastText}~(FT) models and different hyper-parameters~(such as dimensionality and number of epochs) to describe POI-POI similarities in our travel recommendation model. We then outline the prediction algorithm using \emph{word2vec} embedding models, with some initial starting location~($POI_1$).
\begin{figure*}[h]
\centering
\includegraphics[
trim=2mm 3mm 83mm 18mm, clip, width=0.49\textwidth, clip=true] {algorithms_1-2}
\includegraphics[
trim=78mm 2mm 5mm 1mm, clip, width=0.49\textwidth, clip=true] {algorithms_1-2}
\end{figure*}
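As a minimal illustration of the corpus construction and training step, the following Python sketch (assuming the gensim library, version 4 or later) builds daily-itinerary sentences with the 8-hour rest rule and trains a Skip-Gram POI embedding; the toy data and hyper-parameter values are placeholders and not those tuned in our experiments.
\begin{verbatim}
from datetime import datetime, timedelta
from gensim.models import Word2Vec

def build_corpus(user_visits, rest_hours=8):
    # user_visits maps each user to a time-sorted list of
    # (poi_name, timestamp) tuples; a gap of at least rest_hours
    # between consecutive photos starts a new "sentence".
    corpus = []
    for visits in user_visits.values():
        sentence, prev_t = [], None
        for poi, t in visits:
            if prev_t is not None and t - prev_t >= timedelta(hours=rest_hours):
                corpus.append(sentence)
                sentence = []
            sentence.append(poi)
            prev_t = t
        if sentence:
            corpus.append(sentence)
    return corpus

# Toy data for illustration only.
user_visits = {
    "u1": [("Wellington_station", datetime(2014, 7, 1, 9)),
           ("Queens_gardens", datetime(2014, 7, 1, 11)),
           ("Perth_town_hall", datetime(2014, 7, 2, 10))],
}

# Skip-Gram POI embedding (sg=1; sg=0 would give CBOW).
model = Word2Vec(build_corpus(user_visits), vector_size=32, window=3,
                 min_count=1, sg=1, epochs=50)
\end{verbatim}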
Algorithm~2 recommends popular POIs based on the previously visited POIs and the present location by \emph{iteratively} recommending the POI that is \emph{closest}, in terms of \textsl{cosine} similarity, to the present location but \emph{farthest} from the past POIs.
We also evaluated \textsl{FastText}, which uses \emph{character-based} $n$-grams for measuring the similarity between POI vectors. Since \textsl{FastText} considers sub-words (i.e.~partial POI~names) in its embedding model, using Skip-Gram or CBOW at the character level, it can capture the meanings of suffixes/prefixes in POI names more accurately. Moreover, it is also useful for handling POIs that are not found in the \emph{corpus}.}
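Given a trained POI-embedding model such as the one sketched above, the greedy selection in the spirit of Algorithm~2 can be illustrated as follows; the exact scoring rule combining similarity to the current location and dissimilarity to past POIs is an assumption made for this sketch.
\begin{verbatim}
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def recommend_itinerary(model, start_poi, candidates, k=5):
    # Greedily add up to k POIs: each step picks the candidate most
    # similar to the current location and least similar to the POIs
    # already in the itinerary.
    itinerary, current = [start_poi], start_poi
    remaining = [p for p in candidates if p != start_poi]
    for _ in range(k):
        if not remaining:
            break
        def score(p):
            to_current = cosine(model.wv[current], model.wv[p])
            to_past = max(cosine(model.wv[q], model.wv[p]) for q in itinerary)
            return to_current - to_past
        best = max(remaining, key=score)
        itinerary.append(best)
        remaining.remove(best)
        current = best
    return itinerary
\end{verbatim}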
\section{Experiments and Results}
\subsection{Datasets and Baseline Algorithm}
We use the dataset from~\cite{lim2018personalized} comprising the travel histories of 5,654 users from Flickr in~4 different cities with meta information, such as the date/time and geo-location coordinates.
Using this dataset, we constructed the travel histories by chronologically sorting the photos, which resulted in the users' trajectories. These trajectories are then treated as sentences and used as inputs for training our POI-embedding models.
As a baseline algorithm for comparison, we use a greedy heuristic that commences a trip from a starting POI $p_1$ and iteratively chooses to visit an \emph{unvisited} POI with the largest number of photos posted~\cite{Liu-ECMLPKDD20}. The sequence of selected POIs forms the recommended itinerary based on the \emph{popularity} of the POIs. In our experiments, we used daily itineraries from users, where a day tour starts at least 8 hours after the last photo posted on the previous day.
\begin{figure*}[h]
\label{fig:route}
\centering
\subfloat{{
\includegraphics[
trim=135mm 82mm 60mm 87mm, clip, width=0.47\textwidth, clip=true] {perth_history}}}
\qquad
\subfloat{{
\includegraphics[
trim=135mm 82mm 60mm 87mm, clip, width=0.47\textwidth, clip=true] {perth_recommended}}}
\label{fig:example}%
\par
\caption{
A user's actual travel trajectory~(left) vs. Route recommended by the proposed algorithm~(right) in Perth.
\small
\emph{User's route:}
Wellington station~$\rhd $
Queens gardens~$\rhd$
Supreme court gardens~$\rhd$
Stirling gardens~$\rhd$
Perth town hall~$\rhd$
West Aust. museum~$\rhd$
Railway station~/
\emph{Recommended route:}
Wellington station~$\rhd$
West Aust. museum~$\rhd$
State Library~$\rhd$
Stirling gardens~$\rhd$
Supreme court gardens~$\rhd$
Esplanade station.
}
\end{figure*}
\subsection{Experiments and Results}
Algorithm~1 describes how the context words used to train our \textit{word2vec}~models are obtained. The travel histories, regarded as sequences of POI names, are used to train the \textsl{Skip-Gram}, \textsl{CBOW} and \textsl{FastText} models, where \textsl{FastText} treats each POI name as being composed of character-level \emph{n}-grams.
We evaluate the effectiveness of our prediction algorithm in terms of the \textsl{precision}~($T_P$), \textsl{recall}~($T_R$) and \textsl{F1-score} of the predicted POIs against the POIs in the actual travel sequences. Let $S_p$ be the predicted sequence of POIs from the algorithm and $S_u$ be the actual sequence from the users. We evaluate our algorithms based on:
$T_P(S_u,S_p) = \frac{|S_u \cap S_p|}{|S_p|}$,
$T_R(S_u,S_p) = \frac{|S_u \cap S_p|}{|S_u|}$, and
$F1\_score(S_u,S_p) = \frac{2\, T_R(\bullet)\, T_P(\bullet)}{T_R(\bullet) + T_P(\bullet)}$.
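A direct implementation of these set-based metrics is given below (order within the sequences is ignored).
\begin{verbatim}
def tour_metrics(actual, predicted):
    # Precision, recall and F1 between the actual sequence S_u and
    # the predicted sequence S_p, treated as sets of POIs.
    s_u, s_p = set(actual), set(predicted)
    hits = len(s_u & s_p)
    precision = hits / len(s_p) if s_p else 0.0
    recall = hits / len(s_u) if s_u else 0.0
    f1 = 2 * precision * recall / (precision + recall) if hits else 0.0
    return precision, recall, f1

print(tour_metrics(["A", "B", "C", "D"], ["A", "C", "E"]))
# approximately (0.667, 0.5, 0.571)
\end{verbatim}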
We summarize the results of our method using different \textit{word2vec} models and hyper-parameters, varying the number of epochs, the window size and the dimensionality, on our datasets.
%
As a use case of our recommended tour itinerary, Fig.~1 shows a route recommended by our algorithm~(right) in the city of Perth, while one user's actual itinerary is plotted on the left. In addition to recommending an itinerary with more relevant POIs, our recommendation results in an itinerary that is more compact in terms of travelling path, which translates to less time spent travelling for the tourist. Figure~2 shows a more detailed break-down of our experimental results in terms of recall, precision and F1 scores for the 4 cities. Our proposed algorithm can effectively suggest POIs based on users' present and past locations using the historical dataset.
\begin{figure*}[h]
\label{fig:results}
\includegraphics[
trim=5 5 5 1, clip, width=\textwidth, clip=true]
{word2vectable}
\caption{Average F1/Recall/Precision-scores of embedding methods \
{\small
(with different hyper-parameters) and baseline algorithm in 4~cities (Win~:~window size of \textit{word2vec} in sentence. Dim.~:~maximum dimensionality of the \textit{word2vec} vector). } }
\end{figure*} \vspace{-10px}
\section{Conclusion}
In this paper, we study the problem of tour itinerary recommendation. We propose an algorithm that translates travel trajectories into a word-vector space, followed by an iterative heuristic that constructs itineraries constrained by time and space. Our prediction algorithm reliably uncovers user preferences in a tour using a single preferred POI of the user. Our preliminary experiments show efficient and reliable predictions of popular POIs in terms of precision, recall, and \textsl{F1}-scores. In our experiments on 4~cities, our proposed POI-embedding algorithm outperforms the baseline algorithm based on POI~popularity. Future work includes further evaluation of the prediction algorithms and experiments on more cities and POI sets, as well as extending the algorithms to incorporate more dimensions into the \textit{word2vec} models, such as spatiality and popularity.
\begin{acks}
This research is funded in part by the Singapore University of Technology and Design under grant SRG-ISTD-2018-140.
The computational work was partially performed on resources of the National Supercomputing Centre, Singapore.
\end{acks}
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}\label{sec:Intro}
Composing a team of players is among the most crucial decisions a football club's manager is required to make.
In fact, the main component of a football club's costs is expenditure on players, through both wages and transfer fees \citep{DobG01}.
However, the analysis performed by \citet{KupS18} illustrates that transfer decisions are often based on myopic objectives and impulsive reactions, and are overly influenced by factors such as recent performances \citep{Lew04}, particularly in big tournaments, nationality, and even hair color. With few exceptions, football clubs are in general bad businesses, rarely making profits and most often accumulating considerable debts. The authors explain that poor management of football clubs is tolerated because, unlike in most industries, clubs in practice never go bankrupt. History shows that creditors rarely enforce their claims to the extent of causing the default of the club. In other words: \textit{``no bank manager or tax collector wants to say: The century-old local club is closing. I am turning off the lights''} \citep{KupS18}. In practice, clubs eventually find someone who bails them out, change their management, narrow their budgets, and are forced to restart by competing at lower standards (e.g., after being relegated).
According to UEFA, this dramatic scenario is changing for the better. In their \textit{Club Licensing Benchmarking Report} for the financial year 2016 \citep{UEFA18}, UEFA reports that the combined debt of Europe's top-division clubs (which includes net loaning and net player transfer balance) has decreased from 60\% of revenue in 2010 to 35\% of revenue in 2016, partially as a consequence of the introduction of UEFA Financial Fair Play rules. In addition, the report shows that profits are also increasing, generating resources to be reinvested in the football business, e.g., to reinforce the team. Furthermore, a number of football clubs are listed in stock exchanges (see, \citet{KPMG17}), with the notable case of Juventus FC entering the FTSE-MIB index which tracks the performance of the 40 main shares in the Italian stock market \citep{Sole18,Van18}. In this paper, we also advocate for the stability and good management of football clubs. Even if traditionally poorly managed clubs have often found a benefactor, more solid decision making would prevent financial distress, improve the capacity to generate resources to reinvest in the team, and eventually spare worries to the club's supporters. In particular, in this paper we introduce analytic methods for supporting transfer decisions.
In the research literature, \citet{Pan17} proposed a stochastic programming model with the aim of maximizing the expected market value of the team. The findings of the study confirm analytically the recipe provided by \citet{KupS18}: a steady growth in the team value is associated with fewer transfers, selling older players in a timely manner (they are often overrated), and investing in young prospects. However, \citet{Pan17} did not explicitly account for the players' performances. While the strategy of maximizing the expected market value of the team might benefit the club in the long run, it may also conflict with the short-term requirement of meeting competitive goals. In fact, most managers are often evaluated by matches won. \citet{PaZh19} presented a deterministic integer programming model, maximizing a weighted sum of player values, adjusted for age and rating, minus net transfer expenses. Their model covered several time periods, and was tested on twelve teams from the English Premier League.
The fact that not all football clubs act as profit maximizers has been mentioned in scientific literature on many occasions, starting with \citet{Sl71} who suggested that European clubs behave as utility maximizers, with a utility function that contains other variables in addition to profit. \citet{Ke96} introduced win maximization, with the consequence that clubs should hire the best players within the limits of their budget. \citet{Ra97} considered clubs to maximize a linear combination of wins and the profit level, with different clubs having a different weight to balance the two criteria. \citet{Ke06} concluded that most clubs are interested in more than making profit, but also that they do not want to win at any cost. This in turn translates into the requirement of hiring top-performers who can immediately contribute to on-field successes, but at the same time keeping an eye on the financial performance of the club.
Measuring on-field performance means associating a numerical value to the contribution given by the player to the team. This can been done using different methods \citep{Sz15}, one of which is the plus-minus rating. These initially consisted of recording the goals scored minus the goals conceded from the perspective of each player, and were applied to ice hockey players. \citet{winston} showed how the principle could be applied in basketball by calculating adjusted plus-minus ratings, which are determined by multiple linear regression. This allows the ratings to compensate for the teammates and opponents of each player. The next important development was to use ridge regression instead of the method of ordinary least squares to estimate the regression model, as proposed by \citet{Si10} for basketball, and later \citet{macdonald12} for ice hockey. This is sometimes known as regularized plus-minus, and was adapted to association football by \citet{SaHv15}, with later improvements by \citet{SaHv17}. \citet{Hv19} provided an overview of the different developments made for plus-minus ratings, covering association football as well as other team sports. One of the contributions of this paper is to present an improved regularized plus-minus for association football, obtained by adding several novel features.
To account for both on-field and financial performances when composing a football team, this paper provides a chance-constrained \citep{ChaC59} mixed-integer optimization model. The objective of the model is that of finding the mix of players with the highest sum of individual player ratings. The selected mix of players must provide the skills required by the coach. In addition, the total net expenditure in transfer fees must respect the given budget. Finally, the future value of the players in the team must remain above a specified threshold with a given probability. The latter condition is enforced by a chance constraint. In contrast to the model of \citet{Pan17}, the new model does not primarily maximize the market value of the team, but rather a performance-based rating.
The proposed model can support football managers in the course of transfer market sessions. In this phase, managers engage in negotiations with other clubs with the aim of transferring target players. Key parameters such as player values, transaction costs and salaries are typically updated as negotiations progress. The proposed model may be employed at every stage of the negotiation to assess the risk of the potential move and its impact on the remaining transfers. By using such models, football managers may obtain an analytical validation of their decisions and avoid the myopic and biased decision making which, as we commented above, has often characterized the transfer market. In order to use a similar model, a football club needs some sort of analytical expertise. As an example, the club must be able to compute ratings, and have an outlook on the future value of the players negotiated. While it is very likely that the majority of professional clubs are currently not ready for such a transition, it is also true that an increasing number of football clubs are starting to use some sort of analytics and data science techniques, and that vast volumes of data are being collected both on match events and on transactions. In this sense, the model we propose is one of the first attempts to build a decision support tool for aiding football clubs' financial decisions. As football clubs reach some maturity in the adoption of analytics techniques, the present model can be further extended to fully capture the complexity faced by football managers.
The contributions of this paper can be summarized as follows:
\begin{itemize}
\item A novel optimization problem which supports team composition decisions while accounting both for the on-field and financial performance of the club.
\item An improved player-rating system which significantly improves on state-of-the-art plus-minus ratings for football players.
\item An extensive computational study based on real transfer market data which highlights the results achievable with the new optimization model and rating system, as well as the differences between the solutions provided by our model and the solution to an existing model from the literature. Furthermore, the case studies used in the computational study are made available online at \url{https://github.com/GioPan/instancesFTCP} in order to facilitate future research on the topic.
\end{itemize}
This paper is organized as follows. In \Cref{sec:Problem} we provide a more thorough description of the problem and provide a mathematical formulation in the form of a chance-constrained mixed-integer program. In \Cref{sec:PlayerRatings} we introduce a novel plus-minus player-rating system. In \Cref{sec:casestudies} we introduce and explain the case studies. In \Cref{sec:results} we analyze the decisions obtained with our model based on historical English Premier League data, and compare the decisions to those obtained from the model provided by \citet{Pan17}. Finally, conclusions are drawn in \Cref{sec:Conclusions}.
\section{Problem Definition}\label{sec:Problem}
The manager of a football club has to decide how to invest the available budget $B$ to compose
a team of football players. Particularly, the manager's decisions include which players
to buy or loan from other clubs, and which players to sell or loan out to other clubs.
Let $\mc{P}$ be the set of all players considered, both those currently in the team and those
the club is considering buying or loaning. The latter will be referred to as \textit{target players}. Let parameter $Y_p$ be $1$ if the player $p$
belongs to the club at the planning phase, and $0$ otherwise. As decided by (inter)national football associations, each team must be composed of at least $N$ players.
The players in excess cannot participate in competitions and we therefore assume they must be sent to other teams on a loan agreement. The alternative would be that a player remains at the club but is not registered for official competitions. Though possible, this situation is undesirable.
The players in the team must cover a set of roles $\mc{R}$.
In particular, let $\underline{N}_r$ and $\overline{N}_r$ be the minimum and maximum number of players,
respectively, in role $r$. A role is, in general, a well defined set of technical and personal
characteristics of the player, such as the position on the field of play, the nationality, the speed, or strength.
The players required in a given role are typically decided by the coach when the role corresponds to
a technical characteristic. However, when the role defines a personal characteristic such as age or nationality,
national or international regulations may specify how many players with those characteristics a club may employ.
As an example, clubs competing in the Italian Serie A may not employ more than three non-EU citizens, and must employ at least
four players trained in the academy of an Italian club. Let $\mc{P}_r$ be the set of players having role $r\in\mc{R}$.
Notice that players might have more than one role so that $\bigcap_{r\in \mc{R}}\mc{P}_r\neq \emptyset$. Also, while in reality a player performs better in one role rather than another, here we consider the role a binary trait, that is, a player may occupy a given role or not, independently of how well they perform in that role. It is then a decision of the coach to employ the player in the role where they perform best.
For each target player $p$ the club is assumed to know the current purchase price $V^P_{p}$ and loan fee $V^{B}_{p}$. A target player may also be a player currently in the club's own youth team or second team who is considered for a promotion. In this case, the purchase price might be set to either zero or the opportunity cost generated by the lost sale of the player. Similarly,
for each player $p$ currently in the team, the club knows the current selling price $V^S_{p}$ and loan fee $V^{L}_{p}$. Observe that current purchase and selling prices as well as loan fees are known to the decision maker. They are either the result of a negotiation or information which a player's agent can obtain from the owning club. In certain cases they are even explicitly stated in contracts. Notice also that today we observe several forms of payment, such as payment in instalments and bonuses conditional on the achievement of sports results. These more involved forms of payment may play an important role for the outcome of a negotiation, for a club's financial stability and for complying with regulations such as the UEFA Financial Fair Play. However, in our context we simply consider the discounted sum of payments since we are only concerned with ensuring, with a certain probability, that the value of investments exceeds a given threshold in the future.
However, the future market value of the player is uncertain and dependent on several unpredictable factors such as fitness,
injuries and successes. Let random variable $\tilde{V}_p$ represent the market value of player $p$ at a selected future point in time, e.g., at the beginning of next season. We assume that the probability distribution of $\tilde{V}_p$ is known to the decision maker. Such a distribution may be the result of a forecast based on historical data as done in \Cref{subsec:regression}, or more simply the outcome of expert opinions.
Let $W_p$ be the rating of player $p$, corresponding to a measure of the on-field performance of the player.
The objective of the club is that of composing a team with the highest \textit{rating}, such that
the size of the team is respected, the number of players in each role is respected, the budget is not
exceeded, and that the probability that the market value of the team exceeds a threshold $V$ is higher than $\alpha$.
Let decision variable $y_p$ be equal to $1$ if player $p$ belongs to the club (not on a loan agreement) at the end of the focal \textit{Transfer Market Window} (TMW), and $0$ otherwise. A TMW is a period of time, decided by national and international football associations, during which clubs are allowed to transfer players. Variables $y^B_{p}$ and $y^S_{p}$ are equal to $1$ if player $p$ is bought or sold during the TMW, respectively, and $0$ otherwise. Similarly, variables $x^L_{p}$ and $x^B_{p}$ are equal to $1$ if player $p$ is respectively leaving or arriving in a loan agreement during the TMW, and $0$ otherwise. The \textit{Football Team Composition Problem} (FTCP) is thus expressed by the following optimization problem.
\begin{subequations}
\label{eq:FTCP}
\begin{align}
\label{eq:obj1}\max &\sum_{p\in \mc{P}}W_p(y_p+x^B_p-x^L_p)\\
\label{eq:balance0}\text{s.t. }&y_{p} - y^B_{p} + y^S_{p} = Y_p & p \in \mc{P},\\
\label{eq:max_squad_size}&\sum_{p\in \mc{P}}(y_{p}+x^B_p-x^L_{p}) = N, & \\
\label{eq:role_covering1}&\sum_{p\in \mc{P}_r}(y_{p}+x_p^B-x^L_p)\geq \underline{N}_r & r\in \mc{R},\\
\label{eq:role_covering2}&\sum_{p\in \mc{P}_r}(y_{p}+x_p^B-x^L_p)\leq \overline{N}_r & r\in \mc{R},\\
\label{eq:loan_in_if_not_owned}&x^B_p + y^B_p\leq 1-Y_{p} & p\in \mc{P},\\
\label{eq:loan_out_if_owned}&x^L_{p} +y^S_p\leq Y_p & p\in \mc{P},\\
\label{eq:budget_limit}&\sum_{p \in \mc{P}} \left(V^P_{p}y^B_{p}+V^{B}_{p}x^B_{p}-V^S_{p}y^S_{p}-V^{L}_{p}x^L_{p}\right) \leq B, & \\
\label{eq:chance}&P\bigg(\sum_{p\in \mc{P}}\tilde{V}_py_p \geq V\bigg)\geq \alpha,&\\
\label{eq:range}&y_{p},y^B_{p},y^S_{p},x^B_{p},x^L_{p} \in \{0,1\} & p\in \mc{P}.
\end{align}
\end{subequations}
The objective function \eqref{eq:obj1} represents the performance of the team
as described by the sum of the ratings of the players competing for the team. The model maximizes the total rating of the entire team, irrespective of the lineup chosen for each specific match. That is, the model provides the coach the best possible team, and then it is the coach's duty to decide lineups for each match day.
Constraints \eqref{eq:balance0} ensure that a player belongs to the team if he
has been bought or if the player was in the team before the opening of the TMW
and has not been sold. Constraints \eqref{eq:max_squad_size} ensure that the club
registers exactly $N$ players for competitions, while constraints \eqref{eq:role_covering1}
and \eqref{eq:role_covering2} ensure that each role is covered by the necessary number of players.
Constraints \eqref{eq:loan_in_if_not_owned} and \eqref{eq:loan_out_if_owned} ensure that
a player is loaned only if not owned, and sent on a loan only if owned, respectively.
Constraints \eqref{eq:budget_limit} ensure that expenses for obtaining
players can be financed by players sold, players loaned out, or a separate budget. Constraint \eqref{eq:chance}
ensures that the probability that the future value of the team exceeds a threshold $V$
(e.g., the current value of the team) is higher than $\alpha$, with $\alpha$ reflecting
the financial risk attitude of the club. Finally, constraints \eqref{eq:range} set the
domain for the decision variables. In addition, similarly to \citet{Pan17}, it is
possible to fix the value of a subset of the variables to indicate whether a player cannot
be sold, bought, or moved on loan.
A possible limitation of model \eqref{eq:FTCP} is that it does not explicitly take into account the salary of players, which may also play an important role in the composition of a team. As an example, salaries may consume the available budget $B$. In this case, it is possible to modify constraints \eqref{eq:budget_limit} by subtracting from the budget the salaries of the players in the team (purchased or in a loan agreement), and adding the salaries of the players sold or leaving in a loan agreement for the next season. Similarly, salaries may affect the chance constraint \eqref{eq:chance}. As an example, a club may want to enforce, with a certain probability, that the future value of the players, net of the salaries paid, exceeds a certain threshold $V$. Furthermore, in principle, the model allows an uneven distribution of ratings among the team. That is, it is possible that the resulting team is composed of a few highly rated players while the remaining players have very low ratings. However, this scenario is extremely unlikely under the reasonable assumption that target lists are somewhat homogeneous in terms of rating. That is, we do not expect that a team targets both a top performer and a very poor performer. Nevertheless, extensions of the model might be considered which either distinguish between the ratings of the ideal starting line-up and the ratings of the available substitutes, or ensures a fair distribution of ratings across the entire team, or across roles, in order to foster internal competition. We leave these possible extensions to future research and, in what follows, we employ model \eqref{eq:FTCP} without any modifications.
\section{Player Ratings}\label{sec:PlayerRatings}
Multiple linear regression models, as used to calculate adjusted plus-minus ratings, are typically stated using $y$ to denote the dependent variable, and $y_i$ being the value of the dependent variable in observation $i$. A set $\mc{V}$ of independent variables, denoted by $x_{j}$ for $j \in \mc{V}$ and with values $x_{ij}$ for observation $i$, are assumed to be related to the dependent variable such that
\begin{align}
\sum_{j \in \mc{V}} \beta_{j} x_{ij} = y_{i} + \epsilon_{i} \nonumber
\end{align}
\noindent where $\beta_j$ are parameters describing the relationship between the independent variables and the dependent variable, and $\epsilon_i$ is an error term. When using ordinary least squares to estimate the values of $\beta_i$, one is essentially solving the following unconstrained quadratic program, with $n$ being the number of observations:
\begin{align}
\min_{\beta} \left \{ \sum_{i=1}^n ( \sum_{j \in \mc{V}} x_{ij} \beta_{j} - y_{i} )^2 \right \} \nonumber
\end{align}
For regularized plus-minus ratings, Tikhonov regularization, also known as ridge regression, is employed instead of ordinary least squares. The main purpose of this is to avoid overfitting the model as a result of collinearity, by shrinking all regression coefficients towards zero \citep{Si10}. For example, with standard adjusted plus-minus ratings, players with few minutes played are prone to being assigned very high or very low ratings. Using a regularization coefficient $\lambda$, the estimation can be performed by solving the following unconstrained quadratic program:
\begin{align}
\min_{\beta} \left \{ \sum_{i=1}^n ( \sum_{j \in \mc{V}} x_{ij} \beta_{j} - y_{i} )^2 + \sum_{j \in \mc{V}} (\lambda \beta_j)^2 \right \} \nonumber
\end{align}
In the context of plus-minus ratings for soccer players, let $\mc{M}$ be a set of matches. Each match $m \in \mc{M}$ can be divided in a number of segments $s \in \mc{S}_m$, where each player on the field during the segment is playing for the whole segment. One possibility is to split into segments for each substitution and for each time a player is sent off with a red card. The duration of segment $s$ of match $m$ is $d(m,s)$ minutes. For a given segment, let $f^{LHS}(m,s)$ be the left hand side of a row in the regression model, and let $f^{RHS}(m,s)$ be the right hand side. Let the regularization term for variable $j$ be denoted by $f^{REG}(\beta_j)$. Furthermore, let $w(m,s)$ be the importance of segment $s \in \mc{S}_m$ of match $m \in \mc{M}$, allowing different segments to be weighted differently when estimating the parameters of the model. Regularized plus-minus ratings can then be described by the following unconstrained quadratic program:
\begin{align}
\min Z(\beta) = \sum_{m \in \mc{M}, s \in \mc{S}_m} \left ( w(m,s) f^{LHS}(m,s) - w(m,s) f^{RHS}(m,s) \right ) ^2 + \sum_{j \in \mc{V}} \left ( f^{REG}(\beta_j) \right )^2 \nonumber
\end{align}
\noindent which by specifying the details of $f^{LHS}$, $f^{RHS}$, $f^{REG}$, and $w(m,s)$, provides a specific variant of adjusted plus-minus or regularized plus-minus ratings.
\subsection{Regularized plus-minus ratings}
\label{sec:rapm}
To obtain a plain regularized plus-minus rating, define the following. Let $h(m)$ and $a(m)$ be the two teams involved in match $m$, and let $\mc{P}_{m,s,h}$ and $\mc{P}_{m,s,a}$ be the sets of players involved in segment $s$ of match $m$ for team $h = h(m)$ and $a = a(m)$, respectively. During segment $s$ of match $m$, the number of goals scored by team $a$ and $h$ is given by $g(m,s,a)$ and $g(m,s,h)$, and the goal difference $g(m,s) = g(m,s,h) - g(m,s,a)$ is measured in favor of the home team. Then define:
\begin{align}
f^{LHS}(m,s) & = \frac{d(m,s)}{90} \left ( \sum_{p \in \mc{P}_{m,s,h}} \beta_p- \sum_{p \in \mc{P}_{m,s,a}} \beta_p \right ) \nonumber \\
f^{RHS}(m,s) & = g(m,s) \nonumber \\
f^{REG}(\beta_j) & = \lambda \beta_j \nonumber \\
w(m,s) & = 1 \nonumber
\end{align}
The above regularized plus-minus rating does not take into account players sent off. Hence, it seems fair to discard segments where any team does not have a full set of eleven players. This can be done by a simple redefinition of $\mc{S}_m$. A version of regularized plus-minus taking into account red cards, home advantage, and the recency of observations was presented by \citet{SaHv17}.
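To make the estimation concrete, the following Python sketch fits such plain regularized plus-minus ratings with scikit-learn; the segment data structure is an assumption made for the example.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import Ridge

def fit_plain_rapm(segments, players, lam=1.0):
    # segments: list of dicts with keys "home", "away" (lists of player
    # ids on the pitch), "duration" (minutes) and "goal_diff" (home
    # goals minus away goals in the segment).
    idx = {p: j for j, p in enumerate(players)}
    X = np.zeros((len(segments), len(players)))
    y = np.zeros(len(segments))
    for i, seg in enumerate(segments):
        scale = seg["duration"] / 90.0
        for p in seg["home"]:
            X[i, idx[p]] += scale
        for p in seg["away"]:
            X[i, idx[p]] -= scale
        y[i] = seg["goal_diff"]
    # Ridge minimizes ||y - X b||^2 + alpha ||b||^2, which matches the
    # (lambda * beta)^2 penalty above with alpha = lambda^2.
    model = Ridge(alpha=lam ** 2, fit_intercept=False).fit(X, y)
    return dict(zip(players, model.coef_))
\end{verbatim}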
\subsection{Novel regularized plus-minus ratings} \label{sec:PlayerRatings:novel}
\label{sec:pmpr}
The following describes a novel regularized plus-minus rating for football players, using an improved method to model home field advantages, an improved method to take into account red cards, and letting the ratings of players depend on their age. The method is also extended in a new way to improve the handling of players appearing in different leagues or divisions, and by introducing a more effective scheme for setting segment weights. The rating model aims to explain the observation defined as $f^{RHS}(m,s)$ using variables and parameters expressed as $f^{LHS}(m,s)$. The latter can be divided into 1) components that depend on the players $p$ involved in the segment, $f^{PLAYER}(m,s,p)$, 2) components that depend on the segment $s$ but not the players, $f^{SEGMENT}(m,s)$, and 3) components that depend on the match $m$, but not the segment, $f^{MATCH}(m)$. Thus we can write:
\begin{align}
\frac{90}{d(m,s)} f^{LHS}(m,s) = & \sum_{p \in \mc{P}_{m,s,h}} f^{PLAYER}(m,s,p) \nonumber \\
& - \sum_{p \in \mc{P}_{m,s,a}} f^{PLAYER}(m,s,p) \nonumber \\
& + f^{SEGMENT}(m,s) \nonumber \\
& + f^{MATCH}(m) \nonumber
\end{align}
\noindent where setting $f^{SEGMENT}(m,s) = f^{MATCH}(m) = 0$ and $f^{PLAYER}(m,s,p) = \beta_p$ gives the plain regularized plus-minus as defined earlier. However, instead of just following the structure of a regularized regression model and making improvements to the variables included, the novel ratings also exploit that the ratings are described by an unconstrained quadratic program. In particular, the regularization terms for some of the variables are replaced by more complex expressions.
The home field advantage may vary between different league systems. For example, since the home field advantage is measured in terms of the goal difference per 90 minutes, it may be that the advantage is different in high scoring and low scoring tournaments. Let $c(m)$ be the country or competition type in which match $m$ takes place. Home field advantage is then modelled by setting
\begin{align}
f^{MATCH}(m) & = \left \{
\begin{array}{lr}
\beta^H_{c(m)} & \textrm{if team~} h(m) \textrm{~has home advantage} \\
0 & \textrm{otherwise}
\end{array}
\right. \nonumber
\end{align}
To correctly include the effect of players being sent off after red cards, the average rating of the players left on the pitch is used as the baseline to which additional variables corresponding to the effect of red cards on the expected goal differences are added. To this end, $f^{LHS}$ is first redefined as follows:
\begin{align}
\frac{90}{d(m,s)} f^{LHS}(m,s) = & \frac{11}{|\mc{P}_{m,s,h}|} \sum_{p \in \mc{P}_{m,s,h}} f^{PLAYER}(m,s,p) \nonumber \\
& - \frac{11}{|\mc{P}_{m,s,a}|} \sum_{p \in \mc{P}_{m,s,a}} f^{PLAYER}(m,s,p) \nonumber \\
& + f^{SEGMENT}(m,s) \nonumber\\
& + f^{MATCH}(m) \nonumber
\end{align}
Now, define $r(m,s,n)= 1$ if team $h$ has received $n$ red cards and team $a$ has not, $r(m,s,n)= -1$ if team $a$ has received $n$ red cards and team $h$ has not, and $r(m,s,n)= 0$ otherwise. Then, red card variables are introduced, where a difference is made between the value of a red card for the home team and for the away team, by rewriting $f^{SEGMENT}(m,s)$ as:
\begin{align}
f^{SEGMENT}(m,s) & = \sum_{n = 1}^4 r(m,s,n) \beta^{HOMERED}_n, & \textrm{if~} \sum_{n = 1}^4 r(m,s,n) \geq 0 \nonumber \\
f^{SEGMENT}(m,s) & = \sum_{n = 1}^4 r(m,s,n) \beta^{AWAYRED}_n, & \textrm{if~} \sum_{n = 1}^4 r(m,s,n) < 0 \nonumber
\end{align}
Playing strength is not constant throughout a player's career. In particular, being too young and inexperienced or too old and physically deteriorated, may both be seen as disadvantageous. In a paper devoted to studying the peak age of football players, \citet{De16} took performance ratings as given (calculated by a popular web page for football statistics), and fit different models to estimate the age effects. In that study, the peak age of players was estimated to between 25 and 27 years, depending on the position of the players.
To include an age effect, the player rating component of the model, $f^{PLAYER}(m,s,p)$, is modified. Let $t = t(m)$ denote the time when match $m$ is played, and let $T$ denote the time that the ratings are calculated. Let $t(m,p)$ be the age of player $p$ at time $t(m)$. The ages of players, $t(m,p)$, are measured in years. In addition to considering quadratic and cubic functions to describe the effect of a player's age, \citet{De16} introduced separate dummy variables for each age, year by year. In the regularized plus-minus model, this can be mimicked by representing the age effect as a piecewise linear function. To accomplish this, define a set of ages, $\mc{Y} = \{y^{min}, y^{min}+1, \ldots, y^{max}\}$. For a given match $m$ and player $p$, let $\max\{\min\{t(m,p),y^{max}\}, y^{min}\} = \sum_{y \in \mc{Y}} u(y,t(m),p) y$, where $\sum_{y \in \mc{Y}} u(y,t(m),p) = 1$, $0 \leq u(y,t(m),p) \leq 1$, at most two values $u(y,t(m),p)$ are non-zero, and if there are two non-zero values $u(y,t(m),p)$ they are for consecutive values of $y$.
The player component can then be stated as:
\begin{align}
f^{PLAYER}(m,s,p) = & \beta_p + \sum_{y \in \mc{Y}} u(y,t(m),p) \beta^{AGE}_y \nonumber
\end{align}
If all players in a match have the exact same age, the age variables cancel out. However, when players are of different age, the corresponding effects of the age difference can be estimated. As a result, players are not assumed to have a constant rating over the entire time horizon of the data set, but are instead assumed to have a rating that follows an estimated age curve.
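The piecewise-linear weights $u(y,t(m),p)$ are easy to compute; a small sketch (with assumed grid bounds) is given below.
\begin{verbatim}
def age_weights(age, y_min=16, y_max=42):
    # Weights u(y) on the age grid such that the clamped age equals
    # sum_y u(y) * y, with at most two consecutive non-zero weights.
    a = min(max(age, y_min), y_max)
    lo = int(a)
    hi = min(lo + 1, y_max)
    frac = a - lo
    weights = {y: 0.0 for y in range(y_min, y_max + 1)}
    weights[lo] = 1.0 - frac
    weights[hi] += frac
    return weights

w = age_weights(26.3)
assert abs(sum(u * y for y, u in w.items()) - 26.3) < 1e-9
\end{verbatim}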
The regularization terms are not strictly necessary for variables other than the player rating variables, $\beta_p$. However, for smaller data sets, it seems beneficial to include the regularization terms also for additional variables, such as for the home field advantage and the red card effects. For the age variables, $\beta^{AGE}_y$, a different scheme is chosen, as it seems beneficial to make sure that the estimates for each age are somehow smoothed. This can be accomplished by the following replacements for the regularization terms:
\begin{align}
f^{REG}(\beta^{AGE}_y) & = \lambda \left(\beta^{AGE}_y - (\beta^{AGE}_{y-1} + \beta^{AGE}_{y+1})/2 \right), & y \in \mc{Y} \setminus \{y^{min}, y^{max}\} \nonumber \\
f^{REG}(\beta^{AGE}_y) & = 0, & y \in \{y^{min}, y^{max}\} \nonumber
\end{align}
For players with few minutes of recorded playing time, the standard regularization ensures that the players' ratings are close to zero. \citet{SaHv17} included a tournament factor in the player ratings, thus allowing players making their debut in a high level league to obtain a higher rating than players making their debut in low level leagues. This tournament factor is generalized here, as follows. Let $\mc{B}$ be a set of different leagues, and let $\mc{B}_p \subseteq \mc{B}$ be the set of leagues in which player $p$ has participated. The player component is then further refined to become
\begin{align}
f^{PLAYER}(m,s,p) = & \beta_p + \sum_{y \in \mc{Y}} u(y,t(m),p) \beta^{AGE}_y + \frac{1}{|\mc{B}_p|} \sum_{b \in \mc{B}_p} \beta^{B}_b \nonumber
\end{align}
This helps to discriminate players from different leagues. However, a further refinement of this is achieved by modifying the regularization terms. Instead of always shrinking a player's individual rating component $\beta_p$ towards 0, as in the plain regularized plus-minus ratings, the whole expression providing the current rating of a player is shrunk towards a value that depends on a set of similar players. Let $\mc{P}^{SIMILAR}_p$ be a set of players that are assumed to be similar to player $p$. In this work, the set is established by using the teammates of $p$ that have been on the pitch together with $p$ for the highest number of minutes. Let $t(p, p \prime)$ be the time of the last match where players $p$ and $p \prime$ appeared on the pitch for the same team. Now, define the following auxiliary expression, where $w^{AGE}$ is a weight for the influence of the age factor:
\begin{align}
f^{AUX}(p,t,w^{AGE}) = & \beta_p + w^{AGE} \sum_{y \in \mc{Y}} u(y,t,p) \beta^{AGE}_y + \frac{1}{|\mc{B}_p|} \sum_{b \in \mc{B}_p} \beta^{B}_b \nonumber
\end{align}
The rating of player $p$ at time $T$ is then equal to $f^{AUX}(p,T,1)$, and it is this value that will be shrunk towards a value that depends on the teammates of $p$, rather than towards 0. To this end, the regularization term for player $p$ is replaced by the following:
\begin{align}
f^{REG}(\beta_p) & = \lambda \left ( f^{AUX}(p,T,1) - \frac{ w^{SIMILAR} }{|\mc{P}^{SIMILAR}_p|} \left ( \sum_{p \prime \in \mc{P}^{SIMILAR}_p} f^{AUX} (p \prime, t(p,p \prime), w^{AGE}) \right ) \right ) \nonumber
\end{align}
\noindent where $w^{SIMILAR} \leq 1$ is another weight that controls the emphasis of shrinking the rating of player $p$ towards the rating of similar players versus shrinking towards 0. This replacement of the regularization terms for player rating components, together with the modified regularization terms for the age components, makes the full model incompatible with the framework of regularized linear regression models. Instead, the model is interpreted as an unconstrained quadratic program.
The model estimation is performed by minimizing the sum of squared deviations between observed goal differences and a linear expression of player ratings and additional factors. The sum is taken over all segments from all matches included in the data. However, not all of these segments are equally informative, and better ratings can be obtained by changing the relative weight $w(m,s)$ of different segments.
The weights used here have three components. The first component emphasizes that more recent matches are more representative of the current strength of players. Hence, a factor $w^{TIME}(m) = e^{\rho_1(T-t(m))}$ is included, where a negative value of $\rho_1$ leads to smaller weights for older matches. The second component focuses on the duration of a segment, with longer segments being more important than shorter segments. Given two parameters $\rho_2$ and $\rho_3$, and the duration of a segment, $d(m,s)$, a factor of the form $w^{DURATION}(m,s) = (d(m,s) + \rho_2) / \rho_3$ is included. The third component takes into account the goal difference at the beginning of the segment, $g^0(m,s)$, as well as the goal difference at the end of the segment, $g^1(m,s) = g^0(m,s)+g(m,s)$, introducing the factor $w^{GOALS}(m,s) = \rho_{4}$ if $|g^0(m,s)| \geq 2$ and $|g^1(m,s)| \geq 2$, and $w^{GOALS}(m,s) = 1$ otherwise.
\begin{align}
w(m,s) = w^{TIME}(m,s) w^{DURATION}(m,s) w^{GOALS}(m,s). \nonumber
\end{align}
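A compact sketch of this weighting scheme is shown below; the parameter values are placeholders only and not the tuned values used in our experiments.
\begin{verbatim}
import math

def segment_weight(t_match, T, duration, g0, g1,
                   rho1=-0.002, rho2=30.0, rho3=100.0, rho4=0.5):
    # Combined weight w(m,s) = w_time * w_duration * w_goals; with
    # rho1 < 0 older matches (larger T - t_match, e.g. in days) get
    # smaller weights.
    w_time = math.exp(rho1 * (T - t_match))
    w_duration = (duration + rho2) / rho3
    w_goals = rho4 if abs(g0) >= 2 and abs(g1) >= 2 else 1.0
    return w_time * w_duration * w_goals
\end{verbatim}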
\section{Case Studies}
\label{sec:casestudies}
In this section we describe a number of case studies used to test model \eqref{eq:FTCP}. The case studies consist of the 20 clubs competing in the English Premier League (EPL) during the 2013/14 season. Each club is characterized by the current team composition and a list of target players, and we use model \eqref{eq:FTCP} to address the transfer market of summer 2014, in preparation for season 2014/15. The data of the case studies is made available online at \url{https://github.com/GioPan/instancesFTCP}.
In \Cref{subsec:Instances} we describe the clubs and their current and target players. In \Cref{subsec:regression} we introduce a model of the market value of the player which allows us to obtain an empirical probability distribution. Given the complexity of solving model \eqref{eq:FTCP} with the original empirical distribution, in \Cref{subsec:model:saa} we introduce its Sample Average Approximation. In \Cref{subsec:ratings} we provide some statistics about the ratings of the players in the case studies. The case studies are subsequently used to perform a number of tests which will be thoroughly described in \Cref{sec:results}.
\subsection{Clubs and players}
\label{subsec:Instances}
The case studies used for testing are adapted from those introduced by \citet{Pan17} based on the English Premier League (EPL). The case studies describe the 20 teams in the EPL 2013/2014 dealing with the summer 2014 transfer market. Each team is characterized by a budget, a set of players currently owned and the set of target players. Given a focal team among the 20 available, the set $\mc{P}$ consists of the set of current player and the set of target players. Each player is characterized by age, role, current value, purchase and sale price, loan fees, and whether the player can be purchased, sold, or temporarily change club in a loan agreement.
In addition to the above mentioned data, we set $N=25$ in accordance with EPL rules. Furthermore, we test different formations, where a formation determines the number of players required for each role. Thus, for each role $r\in \mc{R}$ we set $\underline{N}_r$ according to \Cref{tab:cs:formations}, and $\overline{N}_r=\infty$, implying that it is allowed to have more than $\underline{N}_r$ players covering role $r$. The roles used here are simply player positions: goalkeeper (GK), right-back (RB), centre-back (CB), left-back (LB), right winger (RW), centre midfielder (CM), left winger (LW), attacking midfielder (AM), and forward (FW). Finally, we set $V$ equal to the initial market value of the team (i.e., the club wishes to ensure a non-decreasing value of the team) and we use a $7\%$ discount factor. Uncertain values and ratings are discussed in \Cref{subsec:regression} and \Cref{subsec:ratings}, respectively.
\begin{table}[htb]
\centering
\caption{Formations and players required in each role ($\underbar{N}_r$). }
\label{tab:cs:formations}
\begin{tabular}{cccccccccc}
\toprule
& \multicolumn{9}{|c}{$r\in\mc{R}$} \\
Formation & GK & RB & CB & LB & RW & CM & LW & AM & FW \\
\midrule
442 & 3 & 2 & 4 & 2 & 2 & 4 & 2 &0 &4 \\
433 & 3 & 2 & 4 & 2 & 0 & 6 & 0 &0 &6 \\
4312 & 3 & 2 & 4 & 2 & 0 & 6 & 0 &2 &4 \\
352 & 3 & 0 & 6 & 0 & 2 & 6 & 2 &0 &4 \\
343 & 3 & 0 & 6 & 0 & 2 & 4 & 2 &0 &6 \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Modeling the Uncertainty}
\label{subsec:regression}
Uncertain future player values are modeled using a linear regression model whose parameters were estimated using English Premier League data for the seasons 2011/2012 through 2016/2017. Consistently with \citet{Pan17}, we use the current value of the player and the player's role as
explanatory variables and the value after one season as the dependent variable. Unlike in \citet{Pan17}, different
regression models are used for different age intervals, as this allows us to capture the higher volatility in the value of younger players.
In this exercise, we aimed to obtain a sufficiently good model of the available data, indicated by a high $R^2$ value and no evidence of non-linear patterns in the residual analysis. Therefore, the resulting model should not be understood as a good prediction tool, as no out-of-sample analysis was performed.
For each player $p\in \mc{P}$, and scenario $s\in\mc{S}$, the future value
$V_{ps}$ is thus obtained as
\begin{equation}
\label{eq:cs:regressionmodel}
V_{ps}=\bigg(\alpha_{a}\sqrt[4]{V_{p}^C}+\sum_{r\in\mc{R}}\beta_{ar}\delta(p,\mc{P}_r)\bigg)^4\cdot(1+\epsilon_{as})
\end{equation}
where $V^C_p$ is the current value of player $p$, $\delta(p,\mc{P}_{r})$
is an indicator function of the membership of player $p$ in $\mc{P}_r$, i.e., it is equal to $1$ if player
$p$ has role $r$ and $0$ otherwise, and $\epsilon_{as}$ is an i.i.d. sample from the empirical prediction error distribution of the regression model for the specific age group. Notice that $\alpha_a$, $\beta_{ar}$ and the distribution of $\epsilon_{as}$ are estimated separately for each age
interval $a\in \{(\cdot,20],[21,22],[23,24],[25,26],[27,28],[29,30],[31,32],[33,\cdot)\}$. While model \eqref{eq:cs:regressionmodel} does not include a global constant term, the indicator function provides a role-specific constant term. \Cref{tab:R2} reports the $R^2$ coefficient for the regression model in each age group.
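For concreteness, a minimal Python sketch of the scenario generation implied by \eqref{eq:cs:regressionmodel} is given below; the coefficient values, the residual pool, and the current value are placeholders rather than the fitted quantities used in the paper.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
S = 70                                       # number of scenarios
alpha_a = 1.02                               # placeholder slope for one age group
beta_ar = 0.15                               # placeholder role-specific constant
residuals = rng.normal(0.0, 0.08, size=500)  # stand-in for the empirical error pool

V_current = 12.0                             # current market value of the player
eps = rng.choice(residuals, size=S)          # i.i.d. resample of prediction errors
V_future = (alpha_a * V_current ** 0.25 + beta_ar) ** 4 * (1.0 + eps)
\end{verbatim}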
\begin{table}[h!]
\centering
\caption{$R^2$ for regression model \eqref{eq:cs:regressionmodel} in each age group.}
\label{tab:R2}
\begin{tabular}{cc}
\toprule
Age group&$R^2$\\
\midrule
$(\cdot,20]$&$0.97$\\
$[21,22]$&$0.98$\\
$[23,24]$&$0.98$\\
$[25,26]$&$0.99$\\
$[27,28]$&$0.99$\\
$[29,30]$&$0.99$\\
$[31,32]$&$0.98$\\
$[33,\cdot)$&$0.98$ \\
\bottomrule
\end{tabular}
\end{table}
Not all roles were statistically significant predictors (at the $0.05$ level). In particular, the roles that were not statistically significant for some age groups are reported in \Cref{tab:pvalues}.
\begin{table}[h!]
\centering
\caption{Roles statistically insignificant as predictors.}
\label{tab:pvalues}
\begin{tabular}{cc}
\toprule
Role & Age group in which the role is statistically insignificant as a predictor \\
\midrule
Secondary striker & $(\cdot,20]$, $[29,30]$ \\
Left midfielder & $[21,22]$, $[27,28]$, $[29,30]$, $[31,32]$\\
Right midfielder & $[23,24]$, $[25,26]$, $[27,28]$, $[31,32]$\\
Left wing &$[31,32]$\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Sample Average Approximation}
\label{subsec:model:saa}
The empirical prediction error distribution for regression model \eqref{eq:cs:regressionmodel} is a discrete distribution with a very large support. Since the distribution is discrete, model \eqref{eq:FTCP} can be written as a mixed-integer linear program. However, solving the model directly with the original discrete distribution is impractical due to the large number of realizations of the player values. Therefore, we solve its \textit{Sample Average Approximation} (SAA), see e.g., \citet{KleS02,Sha03,PagAS09}.
Let $\mc{S}=\{1,\ldots,S\}$ and let $(V_{ps})_{s\in \mc{S}}$ be an $|\mc{S}|$-dimensional i.i.d. sample of $\tilde{V}_p$ for all $p\in \mc{P}$. Furthermore, for all $s\in \mc{S}$ let $w_s$ be a binary variable which is equal to $1$ if the team value exceeds the threshold $V$ under scenario $s$, and $0$ otherwise. That is, $w_s = 1 \implies \sum_{p\in \mc{P}}V_{ps}y_p \geq V$.
Constraint \eqref{eq:chance} can be approximated by constraints
\eqref{eq:chanceSAA1}-\eqref{eq:chanceSAA3}
\begin{align}
\label{eq:chanceSAA1} &\sum_{p\in \mc{P}}V_{ps}y_p + M(1-w_s) \geq V &~~ s\in \mc{S}\\
\label{eq:chanceSAA2} &\sum_{s\in \mc{S}}\frac{1}{|\mc{S}|}w_s \geq \alpha &\\
\label{eq:chanceSAA3} & w_s \in \{0,1\} & ~~s\in \mc{S}
\end{align}
where $M$ is a suitable upper bound on $V-\sum_{p\in \mc{P}}V_{ps}y_p$. Since player values are non-negative, this quantity is bounded above by $M=V$, with the bound attained only when $\sum_{p\in \mc{P}}V_{ps}y_p = 0$, i.e., when no player is selected (an infeasible team composition).
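The following Python sketch illustrates how constraints \eqref{eq:chanceSAA1}--\eqref{eq:chanceSAA3} can be written with an off-the-shelf MILP modeling layer (PuLP). It is a toy stand-in for the CPLEX-based implementation used in the paper: the data, the squad-size constraint, and the rating-maximizing objective are purely illustrative.
\begin{verbatim}
import numpy as np
import pulp

rng = np.random.default_rng(0)
P, S, alpha = 8, 70, 0.8                     # players, scenarios, probability level
V_ps = rng.lognormal(1.0, 0.3, size=(P, S))  # sampled future player values
rating = rng.uniform(size=P)                 # stand-in for player ratings
V_target = 0.75 * 4 * V_ps.mean()            # illustrative team-value threshold
M = V_target                                 # big-M, valid since values are non-negative

prob = pulp.LpProblem("ftcp_saa", pulp.LpMaximize)
y = [pulp.LpVariable(f"y_{p}", cat="Binary") for p in range(P)]
w = [pulp.LpVariable(f"w_{s}", cat="Binary") for s in range(S)]
prob += pulp.lpSum(rating[p] * y[p] for p in range(P))   # toy objective
prob += pulp.lpSum(y) == 4                               # toy squad-size constraint
for s in range(S):                                       # (chanceSAA1)
    prob += (pulp.lpSum(V_ps[p, s] * y[p] for p in range(P))
             + M * (1 - w[s]) >= V_target)
prob += pulp.lpSum(w) >= alpha * S                       # (chanceSAA2)
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status], [int(v.value()) for v in y])
\end{verbatim}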
We approximate the original empirical distribution by means of a sample of size $70$. Numerical tests show that this sample size ensures both in-sample stability of the objective function (i.e., a negligible standard deviation of the SAA optimal objective value across different samples of size $70$) and out-of-sample satisfaction of the chance constraint, assessed on a sample of $1000$ scenarios.
That is, we observe that the variation of the optimal objective value across different samples of size $70$ is relatively small. In \Cref{tab:iss}, which reports the results for the cases with $\alpha=0.2$ and $0.8$, we notice that the standard deviation of the optimal objective value is typically at least two orders of magnitude smaller than the mean. We also observe that the solutions which satisfy the chance constraint with an i.i.d. sample of size $70$ also satisfy the chance constraint with a sample of size $1000$. \Cref{tab:oos} reports some statistics about the out-of-sample probability obtained with $\alpha=0.2$ and $0.8$. It can be seen that the out-of-sample probability is always greater than $\alpha$. Stability tests have been run with formation 433 which, as explained later, is used throughout the computational study.
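Checking out-of-sample satisfaction of the chance constraint then amounts to re-sampling a larger set of scenarios and measuring the fraction in which the selected team meets the value target, as in the short sketch below (the candidate solution and data are hypothetical).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
y = np.array([1, 0, 1, 1, 0, 1, 0, 0])           # a candidate first-stage solution
V_oos = rng.lognormal(1.0, 0.3, size=(8, 1000))  # fresh sample of 1000 scenarios
V_target = 0.75 * 4 * V_oos.mean()
out_of_sample_prob = np.mean(V_oos.T @ y >= V_target)
print(out_of_sample_prob)                        # compare against alpha
\end{verbatim}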
\begin{table}[htb]
\centering
\caption{Results of the in-sample stability test for SAA obtained with samples of size $70$ and $\alpha=0.2$ and $0.8$. The mean and standard deviation of the objective values were calculated by solving $10$ different SAAs, each obtained with a sample of size $70$.}
\label{tab:iss}
\begin{tabular}{c|cccc}
\toprule
& \multicolumn{2}{c}{$\alpha=0.2$} & \multicolumn{2}{c}{$\alpha=0.8$}\\
Team & Avg. obj. value & St. dev & Avg. obj. value & St. dev\\
\midrule
Arsenal FC & $4.02$ & $ 0.00 $ & $ 4.01 $ & $7.72 \times 10^{-3}$\\
Aston Villa & $2.68$ & $1.98 \times 10^{-4} $ & $2.67$ & $ 3.81 \times 10^{-3}$\\
Cardiff City & $2.08 $ & $0.00 $ & $2.05 $ & $6.11 \times 10^{-4}$\\
Chelsea FC & $4.21$ &$ 8.56 \times 10^{-3} $& $ 4.03$ & $2.74 \times 10^{-2}$\\
Crystal Palace & $1.95 $ & $2.34 \times 10^{-16}$ & $ 1.95 $& $2.34 \times 10^{-16}$\\
Everton FC & $2.81 $ & $4.68 \times 10^{-16}$ & $2.81$ & $4.68 \times 10^{-16}$\\
Fulham FC & $2.45 $ & $1.73 \times 10^{-3}$ & $2.45$ & $ 2.82 \times 10^{-3}$\\
Hull City & $2.18$ & $0.00 $ & $2.18 $ & $0.00$\\
Liverpool FC & $3.74 $ & $1.49 \times 10^{-3}$ & $3.74 $ &$ 0.00$\\
Manchester City & $4.50 $ & $6.59 \times 10^{-3} $ & $4.016$ & $1.28 \times 10^{-1}$\\
Manchester United& $4.40 $ & $3.54 \times 10^{-3} $& $ 4.40$ & $ 6.62 \times 10^{-16}$\\
Newcastle United & $2.22$ & $4.31 \times 10^{-3}$ & $ 2.22$ & $ 0.00$\\
Norwich City & $2.37$ & $4.22 \times 10^{-3} $ & $2.22$ & $2.69 \times 10^{-2}$\\
Southampton FC & $2.74$ & $0.00 $ & $2.69 $ & $6.86 \times 10^{-3}$\\
Stoke City & $2.16 $ & $ 4.68 \times 10^{-16}$ & $ 2.14$& $ 5.79 \times 10^{-3}$\\
Sunderland AFC & $2.26 $ & $0.00 $ &$ 2.26$ &$0.00$\\
Swansea City & $2.70 $ & $ 8.28 \times 10^{-4}$ & $2.67$& $8.14 \times 10^{-3}$\\
Tottenham Hotspur & $3.03 $ & $0.00$ & $2.90$ & $2.09 \times 10^{-2}$\\
West Bromwich Albion& $2.18 $ & $0.00$ & $ 2.18 $ & $0.00$\\
West Ham United & $2.23$ & $4.68 \times 10^{-16}$ & $2.23$ &$4.68 \times 10^{-16}$\\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[htb]
\centering
\caption{Statistics on the out-of-sample probability of satisfying the $\alpha=0.2$ and $\alpha=0.8$ chance constraint, measured with a sample of size $1000$. The tests were run with formation $433$.}
\label{tab:oos}
\begin{tabular}{c|cccc}
\toprule
& \multicolumn{4}{c}{Out-of-sample probability}\\
$\alpha$ & Min & Average & St. dev. & Max \\
\midrule
$0.2$ &0.27 & 0.79 & 0.23 & 1.00\\
$0.8$ & 0.80 & 0.94& 0.04 &1.00\\
\bottomrule
\end{tabular}
\end{table}
When solving the multistage stochastic program introduced by \citet{Pan17}, scenario trees are also sampled from the empirical prediction error distribution based on regression model \eqref{eq:cs:regressionmodel}. The in-sample stability test according to \cite{KauW07} was performed by \citet{Pan17}, who report that the model was in-sample stable when drawing 18 conditional realizations per stage.
That is, given current player information (age, value, and role), the market value distribution at the next TMW (second stage) is approximated by 18 realizations. Then, for each of the 18 realizations in the second stage, we draw 18 conditional realizations for the third stage, and so on. The procedure is sketched in \Cref{fig:tree}. We adopt the same scenario tree structure.
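A sketch of the branching scheme is given below: starting from a player's current value, 18 conditional realizations are drawn per stage by re-applying the one-step value model, yielding $18\times 18=324$ leaves for a three-stage tree. The regression coefficients and the error distribution are placeholders.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
BRANCH, STAGES = 18, 3

def one_step_values(v_parent, alpha_a=1.02, beta_ar=0.15, sigma=0.08):
    # conditional realizations of the next-TMW value given the parent node
    eps = rng.normal(0.0, sigma, size=BRANCH)
    return (alpha_a * v_parent ** 0.25 + beta_ar) ** 4 * (1.0 + eps)

def build_tree(value, stage=1):
    if stage == STAGES:
        return {"value": value, "children": []}
    return {"value": value,
            "children": [build_tree(v, stage + 1) for v in one_step_values(value)]}

tree = build_tree(12.0)
n_leaves = BRANCH ** (STAGES - 1)   # 324 scenarios for three stages
\end{verbatim}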
\begin{figure}[h!]
\hspace*{-4cm}
\includegraphics[width=1.4\textwidth]{tree.pdf}
\caption{Qualitative description of a three-stage scenario tree. Each node represents a possible realization of the player value at the corresponding TMW, and is obtained using the age, role, and value at the previous TMW, according to model \eqref{eq:cs:regressionmodel}.}
\label{fig:tree}
\end{figure}
\clearpage
\subsection{Ratings}
\label{subsec:ratings}
The player ratings used in this study are calculated using the model outlined in \Cref{sec:PlayerRatings:novel}. Data from more than 84,000 matches were collected from online sources for use in the calculations. The matches come from national leagues of 25 different countries, as well as from international tournaments for club teams and national teams. There are in total 60,484 players in the data set. When calculating ratings as of July 1, 2014, the unconstrained quadratic program has 60,592 variables and 598,697 squared terms. \Cref{tab:top10} shows the ten highest ranked players as of July 1, 2014, considering only players with at least one match played during the last year.
\begin{table}[h!]
\centering
\caption{Ten highest ranked players as of July 1, 2014.}
\begin{tabular}{rrccrrr}
\toprule
\multicolumn{1}{c}{Rank} & \multicolumn{1}{c}{Player} & Age & Position & \multicolumn{1}{c}{Nationality} & \multicolumn{1}{c}{Team} & \multicolumn{1}{c}{Rating} \\
\midrule
1 & Cristiano Ronaldo & 29 & FW & Portugal & Real Madrid & 0.317 \\
2 & Ga\"{e}l Clichy & 28 & LB & France & Manchester City & 0.296 \\
3 & Lionel Messi & 27 & FW & Argentina & Barcelona & 0.286 \\
4 & Karim Benzema & 26 & FW & France & Real Madrid & 0.286 \\
5 & Thomas M\"{u}ller & 24 & FW & Germany & Bayern M\"{u}nchen & 0.281 \\
6 & Mesut \"{O}zil & 25 & AM & Germany & Arsenal & 0.277 \\
7 & Arjen Robben & 30 & FW & Netherlands & Bayern M\"{u}nchen & 0.270 \\
8 & J\'{e}r\^{o}me Boateng & 25 & LB & Germany & Bayern M\"{u}nchen & 0.270 \\
9 & Cesc F\`{a}bregas & 27 & CM & Spain & Barcelona & 0.269 \\
10 & Marcelo & 26 & RB & Brazil & Real Madrid & 0.267 \\
\bottomrule
\end{tabular}%
\label{tab:top10}%
\end{table}%
Parameter values for the rating calculations were determined using a different data set, containing more recent results but far fewer leagues. Using an ordered logit regression model to predict match results based on the difference in average player ratings of the two teams involved, parameters were set to minimize the quadratic loss of predictions on out-of-sample matches. This resulted in the following parameter values. The age variables are defined for $\mc{Y} = \{y^{min}=16, \ldots, y^{max}=42\}$. Observations are discounted over time with a factor of $\rho_1 = 0.1$, and are weighted using $\rho_2 = 300.0$, $\rho_3 = 300.0$, and $\rho_{4} = 2.5$. The general regularization parameter is $\lambda = 16.0$, and to ensure that each player is assumed to be somewhat similar to his most frequent teammates, we use $w^{SIMILAR} = 0.85$ and $w^{AGE} = 0.35$, with the maximum number of teammates considered (for $|\mc{P}^{SIMILAR}_p|$) being 35. Deviating from these parameter settings led to worse predictions of match outcomes on the selected out-of-sample matches.
The new rating system is compared to two previous versions of regularized adjusted plus-minus ratings and a naive benchmark in \Cref{fig:evaluation}. The evaluation is performed along two axes: the first is the average quadratic loss on 13,800 match forecasts based on an ordered logit regression model, and the second is the Pearson correlation coefficient for ratings calculated after randomly splitting the training data into two halves, averaged over twenty repetitions. While the former represents the validity of the ratings, with lower prediction loss indicating more meaningful ratings, the latter represents the reliability of the ratings, with higher correlation indicating more consistent ratings.
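The two criteria can be computed as in the sketch below, where the quadratic loss is taken to be the squared distance between the ordered-logit forecast and the one-hot encoded match result (our assumption of the exact definition), and reliability is the correlation between ratings fitted on two random halves of the data.
\begin{verbatim}
import numpy as np

def quadratic_loss(forecast, outcome):
    # forecast: probabilities for (home win, draw, away win); outcome: result index
    target = np.zeros(len(forecast))
    target[outcome] = 1.0
    return float(np.sum((np.asarray(forecast) - target) ** 2))

def split_half_reliability(ratings_half_a, ratings_half_b):
    # Pearson correlation between ratings estimated on two halves of the matches
    return float(np.corrcoef(ratings_half_a, ratings_half_b)[0, 1])
\end{verbatim}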
\begin{figure}[htb]
\centering
\includegraphics[width=0.9\textwidth]{ratings_evaluation_graph.pdf}
\caption{Evaluation of the new plus-minus ratings compared to previously published ratings. The axes are oriented so that better values are found in the upper right.}\label{fig:evaluation}
\end{figure}
\Cref{fig:ageProfile} shows how the rating model estimates the effect of a player's age on his performance. As in \citep{De16}, it is found that the peak age is around 25--27 years. There are few observations of players aged above 40 years, and in combination with a survival bias, the estimated age curve is unreliable for relatively old players.
\begin{figure}[htb]
\centering
\includegraphics[width=0.8\textwidth]{age_profiles.pdf}
\caption{The effect of age on player ratings, as estimated by the rating model.}\label{fig:ageProfile}
\end{figure}
The rating model also provides estimates for the value of the home field advantage and the effect of red cards. The home field advantage is allowed to vary between countries, and the average effect corresponds to 0.25 goals per 90 minutes. The home field advantage is highest in the Champions League and the Europa League, with 0.41 goals per 90 minutes. The values of red cards differ between home teams and away teams in the rating model. The first red card is worth more when the away team has a player sent off. In that case, the advantage for the home team is 1.07 goals per 90 minutes, whereas the effect is 0.83 goals per 90 minutes when the home team is reduced by one man. As in \citep{SaHv17}, it is found that subsequent red cards have smaller consequences.
\section{Results and discussion}
\label{sec:results}
In this section we present the results of a number of tests performed on the case studies based on the EPL 2013/14. The scope of the tests is to illustrate the team composition strategies obtainable with model \eqref{eq:FTCP}, and in particular to compare them with the strategies of a club that maximizes the team value. A team value-maximizer is modeled by means of the multistage stochastic program from \citet{Pan17}.
Note that the two models are expected to provide different team composition strategies; the scope of the computational study is therefore to examine such differences.
Furthermore, we assess the impact of different financial risk tolerances of the clubs. Unless otherwise specified, we show the results obtained using a 4-3-3 formation. In \Cref{app:formation} we show that our findings are to a large extent insensitive to the formation chosen. In what follows, we refer to the chance constrained model \eqref{eq:FTCP} as CC and to the multistage stochastic program from \citet{Pan17} as MSP.
\subsection{Maximizing team value vs maximizing ratings}\label{sec:results:risk}
The solutions to the CC model are compared to those obtained by solving the MSP model. This corresponds to comparing the maximization of the ratings (subject to probabilistic constraints on the market value) to the maximization of market values regardless of player ratings. For the MSP model we consider three stages and generate 18 conditional realizations at each stage as in \citep{Pan17}, resulting in 324 scenarios (see \Cref{subsec:model:saa} for details on the scenario generation).
\begin{figure}[htb]
\includegraphics[width=\textwidth]{ccVSmsp2}
\caption{Total rating of the team composed by the MSP and by the CC model with different levels of $\alpha$.}\label{fig:ccVSmspRating}
\end{figure}
\Cref{fig:ccVSmspRating} reports, for a sample of four clubs, the total rating of the teams composed by the MSP model and by the CC model with different values of $\alpha$. The same findings apply to the teams not shown in the figure. The rating for the MSP model is calculated for the team obtained after the first TMW. The rating of the team obtained by the CC model is consistently higher than that of the MSP model, and in most cases significantly higher. The MSP model does not find any value in signing top performers per se. Rather, it looks for players whose value is likely to increase in the future as a consequence of their age, role, and current valuation. Very often, these players are not yet top performers.
On the other hand, the CC model looks primarily for top rated players, that is players whose performances have provided a solid contribution to their respective team's victories in past matches.
For several clubs, similar to the case of Arsenal-FC in \Cref{fig:ccVSmspRating}, the rating is insensitive to the value of $\alpha$. This issue is discussed in more detail in \Cref{sec:results:ratingvstolerance}.
\begin{figure}[htb]
\includegraphics[width=\textwidth]{ccVSmspETV2}
\caption{Expected value after one year of the team composed by maximizing ratings (CC model) and of the team composed by maximizing team value (MSP model).}\label{fig:ccVSmspValue}
\end{figure}
Let us now turn our attention to the expected market value of the teams provided by the two models after one season, reported, for a sample of four clubs, in \Cref{fig:ccVSmspValue}. We can observe that the MSP model yields a significantly higher expected team value after one season. This is to be expected since the MSP maximizes market values. On the other hand, the CC model simply ensures that the market value of the team does not decrease after one season. Thus, the decision maker does not seek a return on the capital employed in the team, but simply wants to ensure that the investment keeps its value. In \Cref{fig:ccVSmspValue} we can also find a case for which the CC model provides a higher one-year expected team value than the MSP model (see Tottenham-Hotspur). This is due to the fact that the MSP model maximizes the average expected team value over a three-year period. Therefore, it is possible that the model suggests investments that do not necessarily yield the highest team value after one season, as long as the average over three seasons is maximized.
\subsection{Team rating and risk tolerance}\label{sec:results:ratingvstolerance}
We illustrate the impact of the risk tolerance $\alpha$ on team ratings. We consider both the standard case in which the club wants to ensure a non-decreasing team value, and the case in which the club wants to ensure a growth of the value of the team of either $10$, $20$, or $30\%$. This corresponds to multiplying the constant $V$ by a factor $R=1.1$, $1.2$, and $1.3$, respectively, in constraint \eqref{eq:chance}. In the default case, $V$ represents the initial market value of the team.
\begin{figure}[htb]
\includegraphics[width=\textwidth]{RatingVsAlpha2}
\caption{Total rating of the resulting team for different degrees of risk tolerance $\alpha$ and for different values of $R$.}\label{fig:RatingVsAlpha}
\end{figure}
\Cref{fig:RatingVsAlpha} shows, for a sample of six clubs, an intuitive general trend: as the required probability of meeting financial goals increases, the total team rating tends to decrease. As $\alpha$ increases we impose a higher probability of satisfying a purely financial measure. Consequently, the club has less freedom to sign top-performers, and is bound to find players that ensure a sufficient growth of the team value. Small $\alpha$ values represent clubs that are primarily interested in the here-and-now performance, and less concerned about the financial aspects. In this case, the decision maker has more freedom to choose top performers.
For a number of clubs, pursuing a team value growth is incompatible with securing top performers for the team (see, e.g., the case of Manchester City in \Cref{fig:RatingVsAlpha} with a required growth greater than $10\%$). However, a few clubs are rather insensitive to $\alpha$, especially for low values of $R$.
In the latter case, the players that ensure the highest rating are, in general, the same players that ensure financial goals are met with sufficiently high probability. This is indeed a favorable situation, and it depends on the initial composition of the team as well as on the list of targets, and thus on the players available on the market. Notice, for example, how Chelsea FC, Manchester City, and Manchester United show a similar high sensitivity to $\alpha$, as they share, in our case studies, the same list of target players. The same applies to Liverpool FC, Newcastle United, and Everton FC.
\begin{figure}[h!]
\centering
\begin{subfigure}{0.8\textwidth}
\includegraphics[width=\textwidth]{own.pdf}
\caption{Transfers of own players}
\label{fig:own02}
\end{subfigure}\\
\begin{subfigure}{0.8\textwidth}
\includegraphics[width=\textwidth]{tar.pdf}
\caption{Transfers of target players}
\label{fig:tar02}
\end{subfigure}
\caption{Distribution of transfers of Chelsea FC with $\alpha=0.2$ and $0.8$, and $R=1.2$. EVI represents the Expected Value Increase of a player, calculated as the expectation of (future value $-$ current value)/current value.}\label{fig:chelsea}
\end{figure}
Let us zoom in on the case of Chelsea FC as a representative of the clubs that are most sensitive to the probability $\alpha$. \Cref{fig:chelsea} reports the rating and the expected market value increase for the suggested transfers with $\alpha=0.2$ and $\alpha=0.8$ (assuming $R=1.2$). With $\alpha=0.2$ the club keeps most of the high-rating players, loans out most of the players with high expected growth that are not yet top performers, and sells some top performers with low expected value increase. On the other hand, when $\alpha=0.8$ the club will keep more of the high-expected-growth players and sell most of the high-rating players with low expected growth. Regarding inbound transfers, when $\alpha=0.2$ the club tends to buy, or sign on a loan agreement, players with above-average ratings and relatively low expected growth. However, with $\alpha=0.8$ the club signs the players with the highest expected value increase and fewer players with high ratings. That is, as the club becomes more concerned with financial stability, it will tend to build a team of high-potential players at the cost of a reduced here-and-now performance. However, when the club is less concerned about finances, it will tend to keep its top performers and sign new high-rating players, despite the limited expected market value growth.
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{PurchaseAgeVsAlpha2.pdf}
\caption{Average purchase age for different values of $\alpha$ and $R$.}\label{fig:purchaseVsAlpha}
\end{figure}
\clearpage
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{SalesAgeVsAlpha2.pdf}
\caption{Average sales age for different values of $\alpha$ and $R$.}\label{fig:salesVsAlpha}
\end{figure}
\clearpage
As the requirement of meeting financial goals becomes stricter, the model suggests that Chelsea FC should buy younger players and sell older players, as illustrated in \Cref{tab:cs:chelseaAge}. The market value of younger players is, in general, expected to grow more than that of older players. Consequently, the model will tend to discard players that do not contribute to fulfilling the financial constraints. The insight we obtain from the solution for Chelsea FC is consistent with a more general trend that sees the average purchase age decrease with $\alpha$, and the average sales age increase with $\alpha$. These trends, illustrated in \Cref{fig:purchaseVsAlpha} and \Cref{fig:salesVsAlpha} for a sample of four clubs, are, however, somewhat erratic, as purchases and sales need to generate a team composition that is feasible not only with respect to the chance constraints, but also with respect to the other constraints of the problem.
\begin{table}
\centering
\caption{Average age for the solution of Chelsea FC for $\alpha=0.2$ and $0.8$ and $R=1.2$. }
\label{tab:cs:chelseaAge}
\begin{tabular}{ccccc}
\toprule
$\alpha$ & Purchases & Hired on a loan agreement & Sales & Sent on a loan agreement \\
\midrule
0.2 & 26.0 & 28.0 &28.2 &20.6\\
0.8 & 24.3 & -- &30.7 &23.0\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Loaning strategies}\label{sec:results:loans}
\begin{figure}[htb]
\centering
\includegraphics[width=\textwidth]{BudgetVsLoans.pdf}
\caption{Number of players hired on a loan agreement for different budgets and values of $R$.}\label{fig:budget-loan}
\end{figure}
Hiring players on a loan agreement is a typical strategy for mid- and low-tier clubs to ensure a team of acceptable quality with a low budget to spend on the market. On the other hand, top clubs tend to purchase the players they need, very often due to more generous budgets. As shown in \Cref{fig:budget-loan}, the results obtained with model \eqref{eq:FTCP} are consistent with this general trend. The clubs that hire most players on a loan agreement are those with smaller budgets.
\subsection{Problem size and complexity}\label{sec:results:size}
Finally, we report on the size and complexity of the resulting optimization models.
The size of the problems we solved, which is determined by the size of the sets $\mc{P}$, $\mc{R}$ and $\mc{S}$, is reported in \Cref{tab:size}.
\begin{table}[h]
\centering
\caption{Size of the problems. All variables are binary.}
\label{tab:size}
\begin{tabular}{c|cc}
\toprule
Team & \# Variables & \# Constraints \\
\midrule
Arsenal-FC&275&272\\
Aston-Villa&345&329\\
Cardiff-City&375&348\\
Chelsea-FC&335&308\\
Crystal-Palace&370&344\\
Everton-FC&295&293\\
Fulham-FC&325&314\\
Hull-City&345&331\\
Liverpool-FC&300&296\\
Manchester-City&255&261\\
Manchester-United&280&275\\
Newcastle-United&320&308\\
Norwich-City&320&315\\
Southampton-FC&320&314\\
Stoke-City&320&312\\
Sunderland-AFC&325&314\\
Swansea-City&345&330\\
Tottenham-Hotspur&310&302\\
West-Bromwich-Albion&325&319\\
West-Ham-United&325&315 \\
\bottomrule
\end{tabular}
\end{table}
All models were solved using the Java libraries of Cplex 12.6.2 on a machine equipped with two 2.4 GHz six-core AMD Opteron 2431 CPUs and 24 GB RAM. All problems were solved to a target 0.5\% optimality gap (parameter \texttt{EpGap} $0.5/100$) and using Cplex's default $10^{-6}$ feasibility tolerance and $10^{-5}$ integrality tolerance. These tolerances assume that violations below $10^{-6}$ or $10^{-5}$ are either caused by rounding errors or do not significantly change the optimization results. We set a time limit of $3600$ seconds (parameter \texttt{TimeLimit} $3600$) and put emphasis on proving optimality (parameter \texttt{MIPEmphasis} $3$). Descriptive statistics about the solution times across all our tests are reported in \Cref{tab:solution_time}. It can be noticed that the solution time is relatively small for the great majority of the instances, leaving room for further enhancements of the model.
\begin{table}[h]
\centering
\caption{Solution time statistics for the CC model across all the tests performed (640 model runs).}
\label{tab:solution_time}
\begin{tabular}{cc}
\toprule
Statistic & Value\\
\midrule
Average& 8.21 sec.\\
St. dev.& 16.16 sec.\\
Minimum & 0.04 sec.\\
25th percentile & 0.25 sec.\\
50th percentile & 1.32 sec.\\
75th percentile & 10.53 sec.\\
Maximum & 224.72 sec.\\
\bottomrule
\end{tabular}
\end{table}
\section{Concluding remarks}
\label{sec:Conclusions}
This article introduced a chance-constrained optimization model for assisting football clubs in transfer market decisions. Furthermore, it presented a new rating system which is able to measure numerically the on-field performance of football players. Such a measure is necessary in order to arrive at an objective assessment of football players and thus limit observer bias.
The model and rating system have been extensively tested on case studies based on real-life English Premier League market data. The results illustrate that the model contributes to reducing bias in transfer market decisions by complementing football managers' expertise with analytic support and validation. Furthermore, the model can adapt to different levels of financial concern and thus support football decision makers with tailor-made analytic suggestions.
There is still room for improvements and enhancements of the model and the rating system. For example, the rating system only considers the performance of players while they are on the pitch. However, some players may be prone to injuries or suspensions, so that their availability to the team may be limited.
\subsection*{Acknowledgements}
The authors wish to thank two anonymous reviewers, whose comments helped to improve the contents and presentation of this manuscript.
\section{Introduction}
In our last work on the topic of NCAA basketball \cite{zimmermann2013predicting}, we speculated about the existence of a ``glass ceiling'' in (semi-)professional sports match outcome prediction, noting that season-long accuracies in the mid-seventies seemed to be the best that could be achieved for college basketball, with similar results for other sports. One possible explanation for this phenomenon is that we are lacking the attributes to properly describe sports teams, having difficulties capturing player experience or synergies, for instance. While we still intend to explore this direction in future work,\footnote{Others in the sports analytics community are hard at work doing just that, especially for ``under-described'' sports such as European soccer or NFL football.} we consider a different question in this paper: \emph{the influence of chance on match outcomes}.
Even if we were able to accurately describe sports teams in terms of their performance statistics, the fact remains that athletes are humans, who might make mistakes and/or have a particularly good/bad day, that matches are refereed by humans, see before, that injuries might happen during the match, that the interaction of balls with obstacles off which they ricochet quickly becomes too complex to even model etc. Each of these can affect the match outcome to varying degrees and especially if we have only static information from before the match available, it will be impossible to take them into account during prediction.
While this may be annoying from the perspective of a researcher in sports analytics, from the perspective of sports leagues and betting operators, this is a feature, not a bug. Matches of which the outcome is effectively known beforehand do not create a lot of excitement among fans, nor will they motivate bettors to take risks.
Intuitively, we would expect that chance has a stronger effect on the outcome of a match if the two opponents are roughly of the same quality, and if scoring is relatively rare: since a single goal can decide a soccer match, one (un)lucky bounce is all it needs for a weaker team to beat a stronger one. In a fast-paced basketball game, in which the total number of points can number in the two hundreds, a single basket might be the deciding event between two evenly matched teams but probably not if the skill difference is large.
For match outcome predictions, a potential question is then: ``\emph{How strong is the impact of chance for a particular league?}'', in particular since quantifying the impact of chance also makes it possible to identify the ``glass ceiling'' for predictions. The topic has been explored for the NFL in \cite{burke07LuckNFL01}, which reports
\begin{quote}
The actual observed distribution of win-loss records in the NFL is indistinguishable from a league in which 52.5\% of the games are decided at random and not by the comparative strength of each opponent.
\end{quote}
Using the same methodology, \ea{Weissbock} \cite{DBLP:conf/ai/WeissbockI14} derive that 76\% of matches in the NHL are decided by chance. As we will argue in the following section, however, the approach used in those works is not applicable to NCAA basketball.
\section{Identifying the impact of chance by Monte Carlo simulations}
The general idea used by Burke and Weissbock\footnote{For details of Weissbock's work, we direct the reader to \cite{weissbock13mlForNHL02}.} is the following:
\begin{enumerate}
\item A chance value $c \in [0,1]$ is chosen.
\item Each out of a set of virtual teams is randomly assigned a strength rating.
\item For each match-up, a value $v \in [0,1]$ is randomly drawn from a uniform distribution.
\begin{itemize}
\item If $v \geq c$, the stronger team wins.
\item Otherwise, the winner is decided by throwing an unweighted coin.
\end{itemize}
\item The simulation is re-iterated a large number of times (e.g. $10,000$) to smooth results.
\end{enumerate}
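A compact Python version of this procedure is sketched below; pairings are drawn at random each round, which mirrors the assumption that schedules are not systematically imbalanced. The default number of iterations is reduced here for speed (the figures in this paper use $10,000$).
\begin{verbatim}
import numpy as np

def simulate_win_percentages(c, n_teams=340, n_games=40, n_iter=1000, seed=0):
    """Win percentages when a fraction c of games is decided by a coin flip."""
    rng = np.random.default_rng(seed)
    results = []
    for _ in range(n_iter):
        strength = rng.random(n_teams)              # step 2: random strength ratings
        wins = np.zeros(n_teams)
        for _ in range(n_games):                    # one random pairing per round
            order = rng.permutation(n_teams)
            a, b = order[: n_teams // 2], order[n_teams // 2:]
            by_chance = rng.random(a.size) < c      # step 3: v < c -> coin flip
            a_wins = np.where(by_chance, rng.random(a.size) < 0.5,
                              strength[a] > strength[b])
            wins[a] += a_wins
            wins[b] += 1 - a_wins
        results.append(wins / n_games)
    return np.concatenate(results)
\end{verbatim}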
Figure \ref{pure-curves} shows the distribution of win percentages for $340$ teams, $40$ matches per team (roughly the settings of an NCAA basketball season including playoffs), and $10,000$ iterations for $c = 0.0$ (pure skill), $c = 1.0$ (pure chance), and $c=0.5$.
\begin{figure}[ht]
\centering
\includegraphics[angle=270,width=0.7\linewidth]{pure-curves.eps}
\caption{MC simulated win percentage distributions for different amounts of chance\label{pure-curves}}
\end{figure}
By using a goodness of fit test -- $\chi^2$ in the case of Burke's work, an \emph{F-Test} in the case of Weissbock's -- the $c$-value is identified for which the simulated distribution fits the empirically observed one best, leading to the values reproduced in the introduction. The identified $c$-value can then be used to calculate the upper limit on predictive accuracy in the sport: since the stronger team wins in a fraction $1-c$ of the matches, and a predictor that always picks the stronger team can be expected to be correct in half of the remaining cases in the long run, the upper limit lies at:
$$(1-c) + c/2\text{,}$$
leading in the case of
\begin{itemize}
\item the NFL to: $0.475 + 0.2625 = 0.7375$, and
\item the NHL to: $0.24 + 0.38 = 0.62$
\end{itemize}
Any predictive accuracy that lies above those limits is due to the statistical quirks of the observed season: theoretically it is possible that chance always favors the stronger team, in which case predictive accuracy would actually be $1.0$. As we will argue in the following section, however, NCAA seasons (and not only they) are likely to be quirky indeed.
\section{Limitations of the MC simulation for NCAA basketball}
\label{limitations}
A remarkable feature of Figure \ref{pure-curves} is the symmetry and smoothness of the resulting curves. This is an artifact of the distribution assumed to model the theoretical distribution of win percentages -- the Binomial distribution -- together with the large number of iterations. This can be best illustrated in the ``pure skill'' setting: even if the stronger team were always guaranteed to win a match, real-world sports schedules do not guarantee that any team actually plays against a representative mix of teams both weaker and stronger than itself. A reasonably strong team could still lose every single match, and a weak one could win at a reasonable clip. One league where this is almost unavoidable is the NFL, which consists of 32 teams, each of which plays 16 regular season matches (plus at most 4 post-season matches), and ranking ``easiest'' and ``hardest'' schedules in the NFL is an every-season exercise. Burke himself worked with an empirical distribution that showed two peaks, one below a $0.5$ win percentage and one above. He argued that this is due to the small sample size (five seasons).
\begin{figure}[ht]
\centering
\includegraphics[angle=270,width=0.7\linewidth]{2008-2013-observed.eps}
\caption{Observed distribution of win percentages in the NCAA, 2008--2013\label{observed-ncaa-curves}}
\end{figure}
The situation is even more pronounced in NCAA basketball, where 340+ Division I teams play at most 40 matches each. Figure \ref{observed-ncaa-curves} shows the empirical distribution of win percentages in NCAA basketball for six seasons (2008--2013).\footnote{The choice of seasons is purely due to availability of data at the time of writing and we intend to extend our analysis in the future.} While there is a pronounced peak at a win percentage of $0.5$ for 2008 and 2012, the situation is different for 2009, 2010, 2011, and 2013. Even for the former two seasons, the rest of the distribution does not have the shape of a Binomial distribution. Instead it seems to be that of a \emph{mix} of distributions -- e.g. ``pure skill'' for match-ups with large strength disparities overlaid over ``pure chance'' for approximately evenly matched teams.
NCAA scheduling is subject to conference memberships and teams will try to pad out their schedules with relatively easy wins, violating the implicit assumptions made for the sake of MC simulations.
This also means that the ``statistical quirks'' mentioned above are often the norm for any given season, not the exception. Taken to its logical conclusion, this means the results that can be derived from the Monte Carlo simulation described above are purely theoretical: if one could observe an {\bf effectively unlimited} number of seasons, during which schedules are {\bf not systematically imbalanced}, the overall attainable predictive accuracy would be bounded by the limit that can be derived from the simulation. For a given season, however, and the question how well a learned model performed w.r.t. the specificities of that season, this limit might be too high (or too low).
\begin{figure}
\centering
\includegraphics[angle=270,width=0.7\linewidth]{2008-versions.eps}
\caption{Distribution of win percentages 2008\label{2008}}
\end{figure}
As an illustration, consider Figure \ref{2008}.\footnote{Other seasons show similar behavior, so we treat 2008 as a representative example.} The MC simulation that matches the observed proportion of teams having a win percentage of $0.5$ is derived by setting $c=0.42$, implying that a predictive accuracy of $0.79$ should be possible. The MC simulation that fits the observed distribution best according to the Kolmogorov-Smirnov (KS) test (while overestimating the proportion of teams having a win percentage of $0.5$ along the way) is derived from $c=0.525$ (same as Burke's NFL analysis), setting the predictive limit to $0.7375$. Both curves have visually nothing in common with the observed distribution, yet the null hypothesis -- that both samples derive from the same distribution -- is not rejected at the 0.001 level by the KS test for sample comparison. This hints at the weakness of using such tests to establish similarity: CDFs and standard deviations might simply not provide enough information to decide whether a distribution is appropriate.
\section{Deriving limits for specific seasons}
The ideal case derived from the MC simulation does not help us very much in assessing how close a predictive model comes to the best possible prediction. Instead of trying to answer the theoretical question \emph{What is the expected limit to predictive accuracy for a given league?}, we therefore want to answer the practical question \emph{Given a specific season, what was the highest possible predictive accuracy?}.
To this end, we still need to find a way of estimating the impact of chance on match outcomes, while \emph{taking the specificities of scheduling into account}. The problem with estimating the impact of chance stays the same, however: for any given match, we need to know the relative strength of the two teams, but if we knew that, we would have no need to learn a predictive model in the first place. If one team has a lower adjusted \underline{offensive} efficiency than the other (i.e. scoring less), for example, but also a lower adjusted \underline{defensive} efficiency (i.e. giving up fewer points), should it be considered weaker, stronger, or of the same strength?
Learning a model for relative strength and using it to assess chance would therefore feed the model's potential errors back into that estimate. What we \emph{can} attempt to identify, however, is which teams are \emph{similar}.
\subsection{Clustering team profiles and deriving match-up settings}
\begin{table}[ht]
\centering
\begin{scriptsize}
\begin{tabular}{c|c||c|c}
\multicolumn{2}{c||}{Offensive stats}&\multicolumn{2}{c}{Defensive stats}\\\hline
AdjOEff & Points per 100 possessions scored, & AdjDEff & Points per 100 possessions allowed,\\
& adjusted for opponent's strength & & adjusted for opponent's strength\\\hline
OeFG\% & Effective field goal percentage& DeFG\% & eFG\% allowed\\\hline
OTOR & Turnover rate& DTOR& TOR forced\\\hline
OORR& Offensive rebound rate&DORR& ORR allowed\\\hline
OFTR&Free throw rate&DFTR& FTR allowed\\
\end{tabular}
\end{scriptsize}
\caption{Team statistics\label{profile-stats}}
\end{table}
We describe each team in terms of its adjusted efficiencies and its Four Factors, adopting Ken Pomeroy's representation \cite{kenpom}. Each statistic is present both in its offensive form -- how well the team performed -- and in its defensive form -- how well it allowed its opponents to perform (Table \ref{profile-stats}). We use the averaged end-of-season statistics, leaving us with approximately 340 data points per season. Clustering daily team profiles to identify finer-grained relationships and teams' development over the course of the season is left as future work. As a clustering algorithm, we used the WEKA \cite{weka} implementation of the EM algorithm with default parameters. This involves EM selecting the appropriate number of clusters by internal cross-validation, with the second row of Table \ref{number-clusters} showing how many clusters were found per season.
\begin{table}[ht]
\centering
\begin{tabular}{c|c|c|c|c|c|c}
Season & 2008 & 2009 & 2010 & 2011 & 2012 & 2013\\\hline
Number of Clusters & 5 & 4 & 6 & 7 & 4 & 3\\
Cluster IDs in Tournament & 1,5 & 4 & 2,6 & 1,2,5 & 3,4 & 2\\
\end{tabular}
\caption{Number of clusters per season and clusters represented in the NCAA tournament\label{number-clusters}}
\end{table}
As can be seen, depending on the season, the EM algorithm does not separate the 340 teams into many different statistical profiles. Additionally, as the third row shows, only certain clusters, representing relatively strong teams, make it into the NCAA tournament, with the chance to eventually play for the national championship (and one cluster dominates, like Cluster 5 in 2008). These are strong indications that the clustering algorithm does indeed discover similarities among teams that allow us to abstract ``relative strength''. Using the clustering results, we can re-encode a season's matches in terms of the clusters to which the playing teams belong, capturing the specificities of the season's schedule.
\begin{table}[ht]
\centering
\begin{tabular}{c|c|c|c|c|c||c}
& Cluster 1 & Cluster 2 & Cluster 3 & Cluster 4 & Cluster 5 & Weaker opponent\\\hline
Cluster 1 & 76/114 & 161/203 & 52/53 & 168/176 & 65/141 & 381/687 (0.5545)\\
Cluster 2&100/176&298/458&176/205&429/491&91/216&705/1546 (0.4560)\\
Cluster 3&7/32&55/170&47/77&119/194&4/40&119/513 (0.2320)\\
Cluster 4&22/79&161/379&117/185&463/769&28/145&117/1557 (0.0751)\\
Cluster 5 & 117/154 & 232/280 & 78/83 & 232/247 & 121/198 & 659/962 (0.6850)\\
\end{tabular}
\caption{Wins and total matches for different cluster pairings, 2008\label{schedule}}
\end{table}
Table \ref{schedule} summarizes the re-encoded schedule for 2008. The re-encoding allows us to flesh out the intuition mentioned in the introduction some more: teams from the same cluster can be expected to have approximately the same strength, increasing the impact of chance on the outcome. Since we want to take all non-chance effects into account, we encode pairings in terms of which team has home court. The left margin indicates which team has home court in the pairing: this means, for instance, that while teams from Cluster 1 beat teams from Cluster 2 almost 80\% of the time when they have home court advantage, teams from Cluster 2 prevail almost 57\% of the time if home court advantage is theirs. The effect of home court advantage is particularly pronounced on the diagonal, where unconditional winning percentages by definition should be at approximately 50\%. Instead, home court advantage pushes them always above 60\%. One can also see that in the majority of cases teams were matched up with a team stronger than (or as strong as) themselves. Table \ref{schedule} is the empirical instantiation of our remark in Section \ref{limitations}: instead of a single distribution, 2008 seems to have been a weighted mixture of 25 distributions.\footnote{Although some might be similar enough to be merged.} None of these specificities can be captured by the unbiased MC simulation.
\subsection{Estimating chance}
The re-encoded schedule includes all the information we need to assess the effects of chance. The win percentage for a particular cluster pairing indicates which of the two clusters should be considered the stronger one in those circumstances, and from those matches that are lost by the stronger team, we can calculate the chance involved.
Consider, for instance, the pairing \emph{Cluster 5 -- Cluster 2}. When playing at home, teams from Cluster 5 win this match-up in 82.85\% of the cases! This is the practical limit to predictive accuracy in this setting for a model that always predicts the stronger team to win, and in the same way we used $c$ to calculate that limit above, we can now invert the process: $c = 2\cdot(1-0.8285) = 0.343$. When teams from Cluster 5 welcomed teams from Cluster 2 on their home court in 2008, the overall outcome is indistinguishable from 34.3\% of matches having been decided by chance.
The impact of chance for each cluster pairing, together with the number of matches played in each particular setting, finally allows us to calculate the effect of chance on the entire season and, using this result, the upper limit for predictive accuracy that could have been reached in a particular season.
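The computation is a one-liner per pairing followed by a match-weighted average over the season; the sketch below uses four of the 2008 pairings from Table \ref{schedule} purely as an example.
\begin{verbatim}
import numpy as np

# wins[i, j], games[i, j] for four example pairings (home cluster x away cluster),
# taken from the 2008 re-encoded schedule
wins  = np.array([[232., 117.], [100., 298.]])
games = np.array([[280., 154.], [176., 458.]])

p_strong = np.maximum(wins / games, 1.0 - wins / games)  # stronger side's win rate
chance = 2.0 * (1.0 - p_strong)                          # c = 2 * (1 - p_strong)
season_chance = np.average(chance, weights=games)        # weighted by matches played
accuracy_limit = (1.0 - season_chance) + season_chance / 2.0
\end{verbatim}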
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}\hline
Season & 2008 & 2009 & 2010 & 2011 & 2012 & 2013\\\hline
\multicolumn{7}{|c|}{Unconstrained EM}\\\hline
KS & 0.0526 & 0.0307 & 0.0506 & 0.0327 & 0.0539 & 0.0429\\
Chance & 0.5736 & 0.5341 & 0.5066 & 0.5343 & 0.5486 & 0.5322 \\
Limit for predictive accuracy & 0.7132 & 0.7329 & 0.7467 & 0.7329 & 0.7257 & 0.7339\\\hline
\multicolumn{7}{|c|}{Optimized EM (Section \ref{optimization})}\\\hline
KS & 0.0236 & 0.0307 & 0.0396 & 0.0327 & 0.0315 & 0.0410\\
Chance & 0.4779 & 0.5341 & 0.4704 & 0.5343 & 0.4853 & 0.5311\\
Limit & 0.7610 & 0.7329 & 0.7648 & 0.7329& 0.7573 & 0.7345\\\hline\hline
KenPom prediction & 0.7105 & 0.7112 & 0.7244 & 0.7148 & 0.7307 & 0.7035\\\hline
\end{tabular}
\caption{Effects of chance on different seasons' matches and limit on predictive accuracy (for team encoding shown in Table \ref{profile-stats})\label{limits}}
\end{table}
The upper part of Table \ref{limits} shows the resulting effects of chance and the limits regarding predictive accuracy for the six seasons under consideration. Notably, the last row shows the predictive accuracy when using the method described in \cite{kenpom}: the Log5-method, with Pythagorean expectation to derive each team's win probability, and the adjusted efficiencies of the home (away) team improved (deteriorated) by 1.4\%. This method effectively always predicts the stronger team to win and should therefore show behavior similar to the observed outcomes. Its accuracy is always close to the limit and in one case (2012) actually exceeds it. One could explain this by the use of daily instead of end-of-season statistics, but there is also another aspect at play. To describe that aspect, we need to discuss simulating seasons.
\section{Simulating seasons\label{simulating}}
With the scheduling information and the impact of chance for different pairings, we can simulate seasons in a similar manner to the Monte Carlo simulations we have discussed above, but with results that are much closer to the distribution of observed seasons. Figure \ref{2008} shows that while the simulated distribution is not equivalent to the observed one, it shows very similar trends. In addition, while the KS test does not reject any of the three simulated distributions, the distance of the one resulting from our approach to the observed one is lower than for the two Monte Carlo simulated ones.
The figure shows the result of simulating the season $10,000$ times, leading to the stabilization of the distribution. For fewer iterations, e.g., $100$ or fewer, distributions that diverge more from the observed season can be created. In particular, this allows the exploration of counterfactuals: if certain outcomes were due to chance, how would the model change if they came out differently? Finally, the information encoded in the different clusters -- means of statistics and covariance matrices -- allows the generation of synthetic team instances that fit the cluster (similar to value imputation), which in combination with scheduling information could be used to generate wholly synthetic seasons to augment the training data used for learning predictive models. We plan to explore this direction in future work.
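A sketch of the season simulation is shown below: each match is encoded by the two teams and their cluster pairing, the home side wins with the empirically observed probability for that pairing, and the resulting win-percentage distribution can then be compared to the observed one, e.g., with \texttt{scipy.stats.ks\_2samp}. The toy schedule and probabilities are illustrative only.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def simulate_seasons(matches, p_home_win, n_teams, n_iter=10000):
    # matches: rows (home_team, away_team, home_cluster, away_cluster)
    # p_home_win[i, j]: observed home-win rate of home cluster i vs. away cluster j
    home, away = matches[:, 0], matches[:, 1]
    p = p_home_win[matches[:, 2], matches[:, 3]]
    games = (np.bincount(home, minlength=n_teams)
             + np.bincount(away, minlength=n_teams))
    pct = []
    for _ in range(n_iter):
        home_wins = rng.random(len(matches)) < p
        wins = (np.bincount(home, weights=home_wins, minlength=n_teams)
                + np.bincount(away, weights=~home_wins, minlength=n_teams))
        pct.append(wins / games)
    return np.concatenate(pct)

# toy example: 6 teams in 2 clusters, full home-and-away round robin
p_home_win = np.array([[0.65, 0.83], [0.46, 0.62]])
cluster = np.array([0, 0, 0, 1, 1, 1])
matches = np.array([(h, a, cluster[h], cluster[a])
                    for h in range(6) for a in range(6) if h != a])
simulated = simulate_seasons(matches, p_home_win, n_teams=6, n_iter=100)
\end{verbatim}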
\section{Finding a good clustering\label{optimization}}
Coming back to predictive limits, there is no guarantee that the number of clusters found by the unconstrained EM will actually result in a distribution of win percentages that is necessarily close to the observed one. Instead, we can use the approach outlined in the preceding section to find a good clustering to base our chance and predictive accuracy limits on:
\begin{enumerate}
\item We let EM cluster teams for a fixed number of clusters (we evaluated 4--20)
\item For a derived clustering, we simulate 10,000 seasons
\item The resulting distribution is compared to the observed one using the Kolmogorov-Smirnov score
\end{enumerate}
The full results of this optimization are too extensive to show here, but two observations stand out: a) increasing the number of clusters does not automatically lead to a better fit with the observed distribution, and b) clusterings with different numbers of clusters occasionally lead to the same KS score, validating our comment in Footnote 5.
Based on the clustering with the lowest KS score, we calculate chance and predictive limit and show them in the second set of rows of Table \ref{limits}. There are several seasons for which EM already found the optimal assignment of teams to clusters (2009, 2011). Generally speaking, optimizing the fit allows us to lower the KS score quite a bit and leads to lower estimated chance and higher predictive limits. For both categories, however, the fact remains that different seasons were influenced by chance to differing degrees and therefore different limits exist. Furthermore, the limits we have found stay significantly below 80\% and are different from the limits that can be derived from MC simulation.
Those results obviously come with some caveats:
\begin{enumerate}
\item Teams were described in terms of adjusted efficiencies and Four Factors -- adding or removing statistics could lead to different numbers of clusters and different cluster memberships.
\item Predictive models that use additional information, e.g. experience of players, or networks models for drawing comparisons between teams that did not play each other, can exceed the limits reported in Table \ref{limits}.
\end{enumerate}
The table also indicates that it might be less than ideal to learn from preceding seasons to predict the current one (the approach we have chosen in our previous work): having a larger element of chance (e.g. 2009) could bias the learner against relatively stronger teams and lead it to underestimate a team's chances in a more regular season (e.g. 2010).
\section{Summary and conclusions}
In this paper, we have considered the question of the impact of chance on the outcome of (semi-)professional sports matches in more detail. In particular, we have shown that the unbiased MC simulations used to assess chance in the NFL and NHL are not applicable to the college basketball setting. We have argued that the resulting limits on predictive accuracy rest on simplifying and idealized assumptions and therefore do not help in assessing the performance of a predictive model on a particular season.
As an alternative, we propose clustering teams' statistical profiles and re-encoding a season's schedule in terms of which clusters play against each other. Using this approach, we have shown that college basketball seasons violate the assumptions of the unbiased MC simulation, giving higher estimates for chance as well as tighter limits for predictive accuracy.
There are several directions that we intend to pursue in the future. First, as we have argued above, NCAA basketball is not the only setting in which imbalanced schedules occur. We would expect similar effects in the NFL, and even in the NBA, where conference membership has an effect. What is needed to explore this question is a good statistical representation of teams, something that is easier to achieve for basketball than football/soccer teams.
In addition, as we have mentioned in Section \ref{simulating}, the exploration of counterfactuals and generation of synthetic data should help in analyzing sports better. We find a recent paper \cite{ohgraphical} particularly inspirational, in that the authors used a detailed simulation of substitution and activity patterns to explore alternative outcomes for an NBA playoff series.
Finally, since we can identify different cluster pairings and the differing impact of chance therein, separating those cases and training classifiers independently for each could improve classification accuracy. To achieve this, however, we will need to solve the problem of clustering statistical profiles over the entire season -- which should also allow us to identify certain trends over the course of seasons.
\bibliographystyle{splncs03}
\section{Environments and Setups}\label{sec:env}
We consider two multi-agent domains, Google Research Football~\citep{kurach2019google} (GRF) and DeepMind Lab 2D~\citep{dmlab2d} (DMLab2D), with a focus on the more dynamically complex GRF.
\subsection{Google Research Football}\label{sec:grf}
Google Research Football~\citep{kurach2019google} (GRF) is a FIFA-like environment. Home player agents play against the opponent agents in a game of 5-vs-5 football. All agents are individually controlled except for the goalie, which by default is operated under the built-in rules. Each agent picks 1 out of 19 actions to execute at each step, with a total of 3000 frames per game. We compete against two types of opponent policies: the built-in AI and the self-play AI.
The built-in AI is a rule-based policy~\citep{gameplayfootball}.\footnote{To add more challenge when training against the built-in AI, we also take control of the goalie.} Its behavioral patterns reflect simple and logical football strategies, but the logic can be exploited easily. For instance, the video in the Supplementary Materials shows that the built-in AI does not have programmed rules to prevent getting into an offside position. Therefore, the learned policy takes advantage of this flaw and switches possession by tricking the built-in AI into offside positions instead of actually defending.
The other, more robust and generalized opponent policy is trained via self-play (see Table~\ref{tab:tournament} and Appendix 3 for more details). It requires more advanced cooperative strategies to win against the self-play AI.
Additionally, in our ablation studies, we use another, single-agent setup: 11-vs-11 ``Hard Stochastic''~\citep{kurach2019google}. In this case, one active player is controlled at a time.
For all setups, the home agents are all rewarded ${+}1$ after scoring and ${-}1$ if a goal is conceded. To speed up training, we reward the agent in possession of the ball an additional ${+}0.1$ if it advances towards the opponent's goal~\citep{kurach2019google}.
Observations are in the Super Mini Map (SMM)~\citep{kurach2019google} format. An SMM is a 2D bit map of spatial dimension $72\times 96$, covering the entire football field. It is composed of four channels: positions of the home players, the opponents, the ball and the active agent. SMMs across four time steps are stacked to convey information such as velocity. When learning from observations, we predict the height and width locations of an agent on the SMM by outputting $72$- and $96$-dim heatmap vectors separately.
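For clarity, the sketch below shows the shape bookkeeping involved: four SMM frames of four channels each are stacked into a single $72\times 96\times 16$ input, and the position of an agent is turned into separate one-hot targets for the 72-way height head and the 96-way width head. The channel ordering and stacking axis are our own convention here, not necessarily those of the actual implementation.
\begin{verbatim}
import numpy as np

def stack_smm(frames):
    # frames: list of 4 arrays of shape (72, 96, 4) with channels
    # [home players, opponents, ball, active player]
    return np.concatenate(frames, axis=-1)   # -> (72, 96, 16)

def location_targets(row, col):
    # one-hot targets for the height (72-way) and width (96-way) prediction heads
    height = np.zeros(72)
    width = np.zeros(96)
    height[row] = 1.0
    width[col] = 1.0
    return height, width

obs = stack_smm([np.zeros((72, 96, 4), dtype=np.uint8) for _ in range(4)])
h_target, w_target = location_targets(30, 48)
\end{verbatim}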
\subsection{DeepMind Lab 2D}
\vspace{-0.05in}
\begin{wrapfigure}{r}{0.3\textwidth}
\vspace{-.7cm}
\begin{center}
\includegraphics[width=0.25\textwidth]{cleanup.png}
\end{center}
\vspace{-0.1in}
\caption{DMLab2D ``Cleanup''}\label{fig:cleanup}
\end{wrapfigure}
DeepMind Lab 2D (DMLab2D)~\citep{dmlab2d} is a framework supporting multi-agent 2D puzzles. We experiment on the task ``Cleanup'' (Figure~\ref{fig:cleanup}). At the top of the screen, mud is randomly spawned in the water, and at the bottom apples are spawned at a rate inversely proportional to the amount of mud. The agents are rewarded ${+}1$ for each apple eaten. Four agents must cooperate by balancing cleaning up the water and eating apples. The list of actions is in Appendix 2.
There are 1000 $72\times 96$ RGB frames per episode. The state input for each agent colors the agent itself blue and the other agents red. Frames across four time steps are stacked for temporal information.
\subsection{Implementation Details}\label{sec:implement}
For all MARL experiments, we sweep the initial learning rate over the set of values $(0.00007, 0.00014, 0.00028)$ and sweep the auxiliary loss coefficient over $(0.0001, 0.0005)$. The loss coefficient for value approximation is $0.5$. We use TensorFlow~\cite{abadi2016tensorflow} default values for other parameters in ADAM~\cite{kingma2014adam}. The unroll length is $32$, the discount factor $0.99$, and the entropy coefficient in V-trace $0.0005$. The multi-agent experiments for GRF are run on 16 TPU cores, 10 of which are for training and 6 for inference, and 2400 CPU cores for actors with 18 actors per core. The batch size is 120. We train 500M frames when playing against the built-in AI and 4.5G frames when playing against the self-play AI. The single-agent experiments (Section~\ref{sec:single}) for GRF are run on 32 TPU cores, 20 of which are for training and the rest for inference, and 480 CPU cores for actors with 18 actors per core. The batch size is 160. We train 900M frames for the single-agent RL tasks. The DMLab2D experiments are run on 8 TPU cores, 5 of which are for training and the others for inference, and 1472 CPU cores with 18 actors per core.
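For reference, the hyperparameters above can be collected into a single configuration sketch; the dictionary layout and key names below are ours and are not taken from any released code.
\begin{verbatim}
MARL_GRF_CONFIG = {
    "learning_rate_sweep": [0.00007, 0.00014, 0.00028],
    "aux_loss_coef_sweep": [0.0001, 0.0005],
    "value_loss_coef": 0.5,
    "entropy_coef": 0.0005,      # V-trace entropy coefficient
    "unroll_length": 32,
    "discount": 0.99,
    "batch_size": 120,           # 160 for the single-agent GRF runs
    "frames_vs_builtin": 500e6,
    "frames_vs_selfplay": 4.5e9,
}
\end{verbatim}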
All learning curves presented in the paper are of 3 seeds. Our environments are relatively stable to random seeds. We display all 3 runs for selected models on GRF self-play and DMLab2D as examples in Figure~\ref{fig:seeds}. Therefore, we plot the average over 3 seeds and omit the standard error bars for clearer exposition.
\begin{figure*}[]
\centering
\includegraphics[width=0.24\textwidth]{self_play_aux_CNN.pdf}
\includegraphics[width=0.24\textwidth]{self_play_aux_ACNN.pdf}
\includegraphics[width=0.24\textwidth]{dmlab_aux_CNN.pdf}
\includegraphics[width=0.24\textwidth]{dmlab_acnnPrACNN.pdf}
\caption{We plot all 3 runs for selected models on GRF self-play and DMLab2D to showcase that our environments are relatively stable to random seeds.}
\label{fig:seeds}
\vspace{-0.1in}
\end{figure*}
\begin{figure*}[]
\centering
\includegraphics[width=0.28\textwidth]{built_in_aux.pdf}\hspace{0.35in}
\includegraphics[width=0.28\textwidth]{self_play_aux.pdf}\hspace{0.35in}
\includegraphics[width=0.28\textwidth]{dmlab_aux.pdf}
\caption{Comparison of {CNN} and {ACNN} models trained without and with the auxiliary loss. On the most challenging task, self-play GRF, which requires more advanced relational reasoning, attention becomes crucial\textemdash without it, the average return does not pass the zero mark. On both GRF tasks, the auxiliary loss improves performance and sample efficiency. Plots are averaged over 3 random seeds.}
\label{fig:aux}
\end{figure*}
\section{Experiments and Discussions}
This section starts by describing our model architectures, the implementation details for pre-training from observations, and the implementation details for MARL. Next, we analyse the efficacy of agent-centric representations for MARL. Finally, we conduct two additional ablation studies: comparing agent-centric representation learning against an agent-agnostic alternative for MARL, and evaluating agent-centric representation learning on single-agent RL.
\subsection{Model Architecture}
The baseline in our experiments is a convolutional neural network (CNN) followed by a fully connected (FC) layer (Figure~\ref{fig:baseline}). The policy, value and height/width predictive heads are immediately after the FC layer. In experiments with the attention module, the FC layer is replaced by a 2-head attention module as illustrated in Figure~\ref{fig:model} and Equation~\ref{eq:attn}. The detailed architectures are adapted from~\cite{espeholt2019seed} and attached in the Supplementary Materials.
\begin{figure*}[]
\centering
\subfloat[Comparison of {CNN}, {CNN} using initialization, and {PrCNN}.]{\label{fig:init1}{
\includegraphics[width=0.28\textwidth]{built_in_cnn.pdf}\hspace{0.35in}
\includegraphics[width=0.28\textwidth]{self_play_cnn.pdf}\hspace{0.35in}
\includegraphics[width=0.28\textwidth]{dmlab_cnn.pdf}}}\\
\subfloat[Comparison of {ACNN}, {ACNN} using initialization, and {PrACNN}.]{\label{fig:init2}{
\includegraphics[width=0.28\textwidth]{built_in_acnn.pdf}\hspace{0.35in}
\includegraphics[width=0.28\textwidth]{self_play_acnn.pdf}\hspace{0.35in}
\includegraphics[width=0.28\textwidth]{dmlab_acnn.pdf}
}}\\
\subfloat[Comparison of {CNN}, {CNN} using initialization, and {PrCNN} when trained with auxiliary loss.]{\label{fig:init3}{
\includegraphics[width=0.28\textwidth]{built_in_cnn_aux.pdf}\hspace{0.35in}
\includegraphics[width=0.28\textwidth]{self_play_cnn_aux.pdf}\hspace{0.35in}
\includegraphics[width=0.28\textwidth]{dmlab_cnn_aux.pdf}}}\\
\vspace{-0.1in}
\subfloat[Comparison of {ACNN}, {ACNN} using initialization, and {PrACNN} when trained with auxiliary loss.]{\label{fig:init4}{
\includegraphics[width=0.28\textwidth]{built_in_acnn_aux.pdf}\hspace{0.35in}
\includegraphics[width=0.28\textwidth]{self_play_acnn_aux.pdf}\hspace{0.35in}
\includegraphics[width=0.28\textwidth]{dmlab_acnn_aux.pdf}
}}\\
\caption{Comparison of training from scratch, using pre-trained models as initialization and as a frozen column in progressive networks, on top of (a) {CNN}, (b) {ACNN} models as well as (c) {CNN} and (d) {ACNN} with auxiliary loss.
For (a) and (b), initialization and progressive networks show better or comparable performance as well as sample efficiency, with more evident effects on the simpler tasks, built-in AI and DMLab2D. The best performing model for 2 out of 3 tasks is {PrACNN}.
The same trend is observed with auxiliary loss (c) and (d) but less evident, as the auxiliary loss and the pre-training loss likely carry overlapping information.
Plots are averaged over 3 random seeds.
}
\label{fig:init}
\end{figure*}
\subsection{Representation Learning from Observations}\label{sec:observationlearning}
We collect replays without action labels for the \emph{unsupervised} representation learning from observations.
For GRF built-in AI and DMLab2D ``Cleanup'', we record replays evenly over the course of the baseline RL training, from the early stage of training to convergence.
For GRF self-play, we record checkpoints evenly throughout self-play training (details in Appendix 3) and sample checkpoints to play against each other.
We collect approximately 300K frames from each task.
In principle, one can utilize \emph{any} reasonable replays. Unlike learning from demonstration, the unsupervised approaches we study do not require action annotations, enabling potential usage of observations for which it would be infeasible to label actions~\citep{schmeckpeper2019learning}.
We learn two predictive models from observations as described in Section~\ref{sec:observation}, based on {CNN} and {ACNN}. The predictive objectives are optimized via negative log-likelihood (NLL) minimization using Adam~\citep{kingma2014adam} with default parameters in TensorFlow~\citep{abadi2016tensorflow} and a sweep over learning rate.
The batch size is 32, and we train till convergence on the validation set.
More training details and NLL results are summarized in Appendix 4.
The attention module does not offer significant advantage in terms of NLL.
We conjecture that because the location of each agent is predicted independently, the {CNN} architecture has sufficient model complexity to handle the tasks used in our experiments.
\subsection{MARL and Main Results}\label{sec:mainexp}
The RL training procedures closely follow SEED RL~\citep{espeholt2019seed}. When adding the auxiliary loss, we sweep its weighting coefficient over a range of values as illustrated in Section~\ref{sec:implement}. Figures~\ref{fig:aux} and~\ref{fig:init} show plots of our core results, where the horizontal axis is the number of frames and the vertical axis the episode return. We train on 500M frames for GRF built-in AI, 4.5G for self-play AI and 100M for DMLab2D. All plots are averaged over 3 random seeds of the best performing set of hyperparameters. The rest of this section examines and discusses the experiments.
\subsubsection{Essential Roles of Attention}
Attention is empirically shown to be essential to form complex strategies among agents. Figure~\ref{fig:aux} compares the baseline {CNN} with the attention-based {ACNN}, all trained from random initialization. Figure~\ref{fig:init} compares {CNN} and {ACNN} with pre-training. For \emph{2 out of 3} environments, namely GRF self-play AI and DMLab2D ``Cleanup'', {PrACNN} is the best performing model.
For DMLab2D, {PrACNN} converges significantly faster and more stably than its CNN counterpart without the attention module.
For GRF, note that the most critical phase here is when the agent transitions from passively defending to actively scoring, i.e. when the scores cross the 0 mark. This transition signifies the agent's understanding of the opponent's offensive as well as defensive policy. After the transition, the agent may excessively exploit the opponent's weaknesses. By examining the replays against the built-in AI and self-play AI (attached in the Supplementary Materials), we find it is possible to exploit the built-in AI with very simple tactics by tricking it into an offside position. Thus for the built-in AI, the CNN models' scoring more than ACNN in absolute terms merely means the former is better at exploiting this weakness. On the other hand, when playing against the self-play AI, the ACNN models can pass the 0 mark and start winning, while none of the CNN models is able to do so. This strongly indicates that the attention module plays an essential role in providing the reasoning capacity to form complex cooperative strategies.
Moreover, Figure~\ref{fig:visualattention} takes two players from {ACNN} trained against the self-play AI and visualizes their attention patterns. Green dots are the active player agents, and yellow ones are the other agents on the home team. We visualize the attention weights from one of the two heads. The intensity of the red circles surrounding the home agents reflects the weights. The most watched area is in the vicinity of the ball. E.g. in the 5th frame, one agent (top row) focuses on the player in possession of the ball, whereas the other agent (bottom row) is looking at the player to whom it is passing the ball. Full replays are in the Supplementary Materials.
\subsubsection{Effects of Agent-Centric Prediction}
\paragraph{The agent-centric auxiliary loss complements MARL.} Figure~\ref{fig:aux} compares the effects of adding the agent-centric auxiliary loss as described in Section~\ref{sec:aux} to the CNN and ACNN model, both trained from random initialization. As RL is sensitive to tuning, an incompatible auxiliary loss can hinder its training.
In our experiments, however, the auxiliary loss for the most part improves the models' performance and efficiency, particularly for the CNN models. This suggests the agent-centric loss is supportive of the reward structure in the game.
\paragraph{Unsupervised pre-training improves sample-efficiency.}
We compare training the {CNN} (Figure~\ref{fig:init1}) and {ACNN} (Figure~\ref{fig:init2}) from scratch with the two ways of integrating unsupervised pre-training to MARL, namely weight initialization and progressive neural networks.
For a fair comparison, we use the same hyperparameter tuning protocol from baseline training to tune models involving pre-training.
Both ways of integration provide significant improvements to sample-efficiency, especially on the simpler DMLab2D and GRF with the built-in AI. For some cases, the progressive models can achieve better performance and efficiency than those with the weight initialization. Even when the impact from pre-training is limited, it does not hurt the performance of MARL.
Hence in practice it can be beneficial to perform a simple pre-training step with existing observation recordings to speed up downstream MARL.
We repeat the same control experiments on RL models trained with the auxiliary loss in Figure~\ref{fig:init3} and Figure~\ref{fig:init4}. The same trend is observed, albeit to a lesser degree, which is as expected because the auxiliary objectives and pre-training carry overlapping information.
\begin{figure*}[]
\centering
\includegraphics[width=0.24\textwidth]{single1.pdf}
\includegraphics[width=0.24\textwidth]{single2.pdf}
\includegraphics[width=0.24\textwidth]{single3.pdf}
\includegraphics[width=0.24\textwidth]{single4.pdf}
\caption{Results of incorporating agent-centric inductive biases to \emph{single-agent RL} (GRF 11-vs-11 Hard Stochastic). The attention module is no longer optimal as cooperation matters less for single-agent RL. The auxiliary loss and pre-training from observations still help albeit to a lesser degree.}
\label{fig:single}
\end{figure*}
\begin{figure*}[]
\centering
\includegraphics[width=0.28\textwidth]{observe1.pdf}\hspace{0.35in}
\includegraphics[width=0.28\textwidth]{observe2.pdf}\hspace{0.35in}
\includegraphics[width=0.28\textwidth]{observe3.pdf}
\caption{Comparison between the agent-centric objective and the observe-all objective that predicts all agents' locations at once for MARL on GRF Built-in AI. The former exhibits a clear advantage.}
\label{fig:observeall}
\end{figure*}
\subsection{Agent-Centric Representation Learning for Single-Agent RL}\label{sec:single}
We have demonstrated the efficacy of agent-centric representation learning for multi-agent RL. To evaluate whether similar conclusions hold for single-agent RL, we use the 11-vs-11 ``Hard Stochastic'' task~\citep{kurach2019google}, where only one player is controlled at a time (similarly to FIFA).
Although the game logic has much in common between single and multiple players, the experimental outcome is different, as shown in Figure~\ref{fig:single}. The attention module brings down baseline performance, likely because cooperation matters less here\textemdash the rest of the home players are controlled by the built-in AI\textemdash and the attention module is harder to optimize. The agent-centric prediction task still helps as an auxiliary loss or for pre-training, but to a much more limited extent.
\subsection{Agent-agnostic Observe-All Representation Learning for MARL}\label{sec:observeall}
Finally, to verify the necessity for MARL representations to be \emph{agent-centric}, we implement an alternative agent-agnostic observation prediction objective, referred to as \emph{observe-all}, and test it when playing against the GRF built-in AI.
Concretely, the height and width predictive heads for observe-all output $72$- and $96$-dimensional binary masks respectively, where 1 indicates player occupation and 0 vacancy, covering \emph{all agents}. In this way, the prediction is agnostic of agent identity and can predict for all players at once.
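A shape-level sketch of the observe-all target construction follows (our code, written directly from the stated mask semantics):
\begin{verbatim}
import numpy as np

def observe_all_targets(locations, height=72, width=96):
    """locations: (h, w) positions of all players in the next frame.
    Returns binary occupancy masks over rows and columns, agnostic of identity."""
    mask_h, mask_w = np.zeros(height), np.zeros(width)
    for h, w in locations:
        mask_h[h] = 1.0   # 1 = some player occupies this row
        mask_w[w] = 1.0   # 1 = some player occupies this column
    return mask_h, mask_w
\end{verbatim}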
First, we use the observe-all objective as an auxiliary loss to RL and train from scratch. Next we apply the observe-all objective to pre-training from observations, which is then used for MARL, either via weight initialization or as a progressive frozen column. Figure~\ref{fig:observeall} clearly shows the agent-centric auxiliary loss is more competitive than the observe-all one in all setups. It confirms that the agent-centric nature of the prediction task is indeed important for MARL.
\begin{table*}[t]
\begin{minipage}[b]{0.7\linewidth}\centering
\resizebox{0.99\linewidth}{!}{
\begin{tabular}{ccccc|cccc|c}
& {CNN} & +aux. & +init. & +prgs. & \footnotesize{{ACNN}} & +aux. & +init. & +prgs. &rating\\
{CNN} & \backslashbox[7mm]{}{}& 5/8/7 & 9/1/10 & 7/5/8 & 4/4/12 & 5/2/13 & 2/7/11 & 3/5/12 & ${-}323$ \\
+aux. & 7/8/5 & \backslashbox[7mm]{}{} & 6/4/10 & 6/1/13 & 3/4/13 & 2/4/14 & 2/8/10 & 2/5/13 &${-}420$\\
+init. & 10/1/9 & 10/4/6 & \backslashbox[7mm]{}{} & 6/5/9 & 5/5/10 & 2/5/13 & 3/6/11 & 4/6/10 &${-}283$\\
+prgs. & 8/5/7 & 13/1/6 & 9/5/6 & \backslashbox[7mm]{}{} & 2/8/10 & 2/4/14 & 5/1/14 & 2/5/13 &${-}322$\\ \hline
\footnotesize{{ACNN}} & 12/4/4 & 13/4/3 & 10/5/5 & 10/8/2 & \backslashbox[7mm]{}{} & 8/4/8 & 7/7/6 & 10/6/4 &$\mathbf{147}$\\
+aux. & 13/2/5 & 14/4/2 & 13/5/2 & 14/4/2 & 8/4/8 & \backslashbox[7mm]{}{} & 11/3/6 & 7/5/8 &$\mathbf{182}$\\
+init & 11/7/2 & 10/8/2 & 11/6/3 & 14/1/5 & 6/7/7 & 6/3/11 & \backslashbox[7mm]{}{} & 8/4/8 &$24$\\
+prgs. & 12/5/3 & 13/2/5 & 10/6/4 & 13/5/2 & 4/6/10& 8/5/7 & 8/4/8 & \backslashbox[7mm]{}{} &$35$\\
\end{tabular}}
\end{minipage}
\begin{minipage}[b]{0.28\linewidth}
\centering
\resizebox{0.99\linewidth}{!}{\begin{tabular}{cc|c}
\multicolumn{3}{c}{GRF Leaderboard Results} \\ \hline
Agent&Opponent&Rating\\
\hline
{ACNN}+aux &CNN-v1/v2&$\mathbf{1992}$\\
{ACNN}&CNN-v1/v2&$1841$\\
{CNN}-v1& itself &$1659$\\
{CNN}-v2& itself &$1630$\\
{CNN}+aux&Built-in&$1048$\\
Built-in&NA&$1000$\\
\end{tabular}}
\end{minipage}
\caption{
\small{\textbf{Left:} selected agents play 20 matches against each other, all trained against the self-play AI. Each entry records win/tie/loss between row agent and column agent. Ratings are estimated ELO scores. {ACNN}s show clear superiority over {CNN}s. {ACNN} trained from scratch generalizes better. \textbf{Right:} GRF Multi-Agent public leaderboard results as of the time of model submission. Opponent refers to the policy the agent trained against. Each submitted agent plays 300 games. {ACNN}s trained against the self-play AI perform the best. The {CNN} trained against the built-in AI performs poorly and does not generalize. The self-play AI policies used in our experiments, {CNN}-v1/v2, are clearly superior to the built-in AI.}
}\label{tab:tournament}
\end{table*}
\section{GRF Agent Tournament and Public Leaderboard}\label{sec:lb}
Table~\ref{tab:tournament} (Left) investigates how various agent-centric representation learning components generalize by hosting a tournament among selected agents. We include agents trained from scratch, from scratch plus auxiliary loss, from initialization plus auxiliary loss, and with progressive column plus auxiliary loss.
All are trained against the self-play AI for a total of 4.5 billion frames, and each pair of agents plays 20 matches against each other.
Each entry records win/tie/loss between row agent (home) and column agent (opponent).
We also estimate their ELO ratings (for details see Appendix 6). Clearly, {ACNN} based models outperform {CNN} models, corroborating the claim that the agent-centric attention module enhances generalization. Meanwhile, the {ACNN} models using pre-training are inferior to the ones trained from scratch.
It suggests that although pre-training speeds up convergence, it can also limit the model's ability to generalize.
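For concreteness, a standard Elo update is one plausible way to turn such win/tie/loss records into ratings; the paper's exact estimation procedure is described in its Appendix 6 and is not reproduced here.
\begin{verbatim}
def elo_update(r_a, r_b, score_a, k=32):
    """score_a: 1.0 for a win by A, 0.5 for a tie, 0.0 for a loss; k is a step size."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return r_a_new, r_b_new
\end{verbatim}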
Finally, we upload our best performing agent trained against the built-in AI, i.e. {CNN} with auxiliary loss, and our best agents trained against the self-play AI, i.e. {ACNN} and {ACNN} with auxiliary loss, to the public GRF Multi-agent League~\citep{leaderboard}. For comparison, we also upload the two self-play AI models used in our experiments, i.e. {CNN}-v1 and {CNN}-v2. Each of the submitted agents plays 300 games against agents submitted by other participants, and their ELO ratings are listed in Table~\ref{tab:tournament} (Right).
The self-play AI agents deliver a decent performance, showing clear advantages over the built-in AI. The agents trained against the self-play AI overall perform the best. In contrast, the agent trained against the built-in AI, although dominating the built-in AI, is extremely fragile against other agents. This supports our observation that the built-in AI can be exploited with little cooperation. It is also worth mentioning that, at the time of submission, our {ACNN} agent with auxiliary loss ranked first on the leaderboard.
\section{Introduction}
Human perception and understanding of the world is structured around objects. Inspired by this cognitive foundation, many recent efforts have successfully built strong object-centric inductive biases into neural architectures and algorithms to tackle relational reasoning tasks, from robotics manipulation~\citep{devin2018deep}, to visual question answering~\citep{shi2019explainable}.
Yet one problem class involving relational reasoning that still remains under-explored is multi-agent reinforcement learning (MARL).
This work studies how agent-centric representations can benefit model-free MARL where each agent generates its policy independently. We consider a fully cooperative scenario, which can be modeled as a Multi-Agent Markov Decision Process (MAMDP), an extension of the single-agent MDP~\citep{boutilier1996planning}.
In light of recent advances in model-free RL and neural relational reasoning~\citep{jaderberg2016reinforcement,zambaldi2018relational}, we study two ways of incorporating agent-centric inductive biases into our algorithm.
First, we introduce an attention module~\citep{vaswani2017attention} with explicit connections across the decentralized agents.
Existing RL works~\citep{zambaldi2018relational, mott2019towards, liu2019pic} have adapted similar self-attention modules on top of a single network.
In our setup, the agents share a model to generate their actions individually to ensure scalability when the number of agents increases.
The proposed attention module is then implemented \emph{across} intermediate features from forward passes of the agents, explicitly connecting them.
As we will show in experiments, this leads to the emergence of more complex cooperation strategies as well as better generalization.
The closest approach to ours is Multi-Actor-Attention-Critic (MAAC)~\citep{iqbal2018actor}, which also applies a shared encoder across agents and aggregates features for an attention module.
However, each agent has its own unique critic that takes actions and observations of all agents through the attention features.
Secondly, we develop an unsupervised trajectory predictive task\textemdash i.e. without using action labels\textemdash for pre-training and/or as an auxiliary task in RL.
Observations, without action labels, are often readily available, which is a desirable property for pre-training.
Unlike prior works in multi-agent trajectory forecasting~\citep{yeh2019diverse, sun2019stochastic}, we consider an \emph{agent-centric} version where the location of each agent is predicted separately.
This task encourages the model to reason over an agent's internal states such as its velocity, intent, etc.
We explore two ways to leverage the pre-trained models in RL: (1) as weight initialization and (2) as a frozen column inspired by Progressive Neural Networks~\citep{rusu2016progressive}.
Furthermore, we investigate whether the agent-centric predictive objective serves as a suitable auxiliary loss for MARL.
Auxiliary tasks have been used to facilitate RL representation learning in terms of stability and efficiency~\citep{oord2018representation, jaderberg2016reinforcement}.
Our key contributions are as follows:
\begin{enumerate}
\item We introduce an agent-centric attention module for MARL to encourage complex cooperation strategies and generalization. We are the first to incorporate such an attention module into an on-policy MARL algorithm.
\item We employ an agent-centric predictive task as an auxiliary loss for MARL and/or for pre-training to improve sample efficiency. To our knowledge, we are the first to study auxiliary tasks in the context of MARL.
\item We assess incorporating agent-centric inductive biases on MARL using the proposed approaches on challenging tasks from Google Research Football and DeepMind Lab 2D.
\end{enumerate}
\section{Conclusions}
\vspace{-0.1in}
We propose to integrate novel agent-centric representation learning components, namely the agent-centric attention module and the agent-centric predictive objective, to multi-agent RL. In experiments, we show that the attention module leads to complex cooperative strategies and better generalization. In addition, leveraging the agent-centric predictive objectives as an auxiliary loss and/or for unsupervised pre-training from observations improves sample efficiency.
\section*{Acknowledgements}
We would like to thank Charles Beattie for help on DMLab2D and Piotr Stanczyk on GRF. We would also like to thank Thomas Kipf, Alexey Dosovitskiy, Dennis Lee, and Aleksandra Faust for insightful discussions.
\section{Methods}
Section~\ref{sec:MARL} describes our problem setup, fully cooperative multi-agent reinforcement learning (MARL), in a policy gradient setting. Section~\ref{sec:attention} introduces the agent-centric attention module. Section~\ref{sec:observation} motivates the agent-centric prediction objective to learn from observations as an unsupervised pre-training step and/or as an auxiliary loss for MARL.
\begin{figure*}[]
\centering
\includegraphics[width=0.99\textwidth]{visual_attention.pdf}
\caption{Visualization of attention for two different home player agents. Green is the active agent being controlled, yellow are the other home agents, the red borders signify attention intensity, blue the opponents and white the ball. Attention is mostly on the agents around the ball, but the attention weights of the two agents differ. E.g. in the 5th frame, one looks at the agent possessing the ball, the other at the agent to whom it is passing the ball. Complete video replays of all players are on the project page.}
\label{fig:visualattention}
\end{figure*}
\subsection{Multiagent Reinforcement Learning}\label{sec:MARL}
We consider a group of $N$ agents, denoted by $\mathcal{N}$, operating cooperatively in a shared environment towards a common goal. It can be formalized as a multi-agent Markov decision process (MAMDP)~\citep{boutilier1996planning}. A MAMDP is a tuple $(S, \{A^i\}_{i\in\mathcal{N}}, P, \{R^i\}_{i\in\mathcal{N}})$ where $S$ is the shared state space, $A^i$ the set of actions for the $i$-th agent, $\mathbf{A} = A^1 \times \cdots \times A^N$, $P: S\times \mathbf{A} \times S \to [0 , 1]$ the transition function, and $R: S \times \mathbf{A} \to \mathbb{R} $ the reward function. In our experiments, the states in $S$ are 2D maps of the environments.
We adapt an actor-critic method, V-trace~\citep{espeholt2018impala} implemented with SEED RL~\citep{espeholt2019seed}. The actor and critic are parameterized by deep neural networks. Each agent receives a state input $s^i$, covering the global $s$ and a specification of the agent's location, and generates its policy $\pi^i = \pi(s^i)$ and state value $V^i=V(s^i)$.
The model is shared between agents, adding scalability when the number of agents grows larger and the environment more complex~\citep{iqbal2018actor, jiang2018learning, jeon2020scalable}.
It also potentially alleviates instability due to the non-stationary nature of multi-agent environments by sharing the same embedding space~\citep{lowe2017multi}.
The goal for all agents is to maximize the expected long term discounted global return
\begin{equation}
J(\theta)=\mathbb{E}_{s\sim d, \mathbf{a} \sim \pi}\left[ \sum_{t\geq 0} \gamma^t R(s_t,\mathbf{a}_t)\right] = \mathbb{E}_{s\sim d} \left[ V(s)\right] ,
\end{equation}
where $0\leq \gamma \leq 1$ is the discount factor and $\mathbf{a} {=} (a^1,{\cdots}, a^N)$,
\begin{equation}
V(s) = \mathbb{E}_\pi\left[\sum_{t\geq 0} \gamma^t R(s_t,\mathbf{a}_t)\,|\, s_0 = s\right]
\end{equation}
the state value, and $\theta$ the parameterization for policy and value functions.
In a decentralized cooperative setting, \cite{zhang2018fully} proves the policy gradient theorem
\begin{equation}
\nabla_\theta J(\theta) {=} \mathbb{E}_{d,\pi} \left[\nabla \log \pi^i(s, a^i) Q(s, \mathbf{a}) \right] {=} \mathbb{E}_{d, \pi} \left[\nabla \log \pi(s, \mathbf{a}) A(s, \mathbf{a}) \right],\nonumber
\end{equation}
where $Q(s,\mathbf{a}) {=} \mathbb{E}_\pi \left[R(s_t,\mathbf{a}_t) {+} \gamma V(s_{t+1})\right]$ is the state-action value and $A(s,\mathbf{a}){=}Q(s,\mathbf{a}){-}V(s)$ the advantage for variance reduction.
In theory, $V^i {\approx} V$ and we can directly apply V-trace for each individual agent with the policy gradient above.
In practice, we slightly shape each agent's reward (see Section~\ref{sec:env}) but the policy gradient direction is approximately retained (proof in Appendix 1).
\subsection{Agent-Centric Attention Module}\label{sec:attention}
In the MARL baseline {CNN} (Figure~\ref{fig:baseline}),
an agent makes independent decisions without considering the high-level representations from other agents.
This can hinder the formation of more complex cooperation strategies.
To address the issue, we propose a novel attention module built upon the multi-head self-attention mechanism~\citep{vaswani2017attention} to explicitly enable relational reasoning across agents.
As shown in Figure~\ref{fig:attention} and Equation~\ref{eq:attn}, the forward pass of each agent produces the key, query and value independently, which are aggregated to output the final features for each agent. These features are then sent to the RL value and policy heads.
The attention module allows easy model interpretation~\citep{mott2019towards}, see Figure~\ref{fig:visualattention} with a detailed explanation in Section~\ref{sec:mainexp}.
We term the model {ACNN}.
The following summarizes the operations of our attention module:
\begin{align}\label{eq:attn}
&z^i {=} f_{\mathrm{fc}}(f_{\mathrm{cnn}}(s^i)), q^i {=} q\left(\mathrm{LN}(z^i)\right), k^i {=}k\left(\mathrm{LN}(z^i)\right), v^i {=} v\left(\mathrm{LN}(z^i)\right);\nonumber\\
&K {=} (k^1 \cdots k^N), Q {=}(q^1\cdots q^N), V {=} (v^1 \cdots v^N), \nonumber \\
&\widetilde{V} = \mathrm{Attn}(Q,K,V){=}\sigma(\frac{QK^T}{\sqrt{d_k}})V; \nonumber \\
&\tilde{z}^i = \mathrm{LN}(z^i + \tilde{v}^i) \longrightarrow \pi^i = f_\pi(\tilde{z}^i),\; V^i = f_V(\tilde{z}^i).
\end{align}
We find a proper placement of Layer Normalization~\citep{ba2016layer} within the attention module is crucial.
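A minimal single-head TensorFlow sketch of Equation~\ref{eq:attn} follows; the actual module uses two heads, the layer sizes and names are illustrative assumptions, and the Layer Normalization placement mirrors the equation rather than any released code.
\begin{verbatim}
import tensorflow as tf

class AgentCentricAttention(tf.keras.layers.Layer):
    """Single-head sketch of the cross-agent attention; the paper uses 2 heads."""
    def __init__(self, d_model=256, d_k=64):
        super().__init__()
        self.ln_in = tf.keras.layers.LayerNormalization()
        self.ln_out = tf.keras.layers.LayerNormalization()
        self.q = tf.keras.layers.Dense(d_k)
        self.k = tf.keras.layers.Dense(d_k)
        self.v = tf.keras.layers.Dense(d_model)
        self.d_k = float(d_k)

    def call(self, z):
        # z: [batch, num_agents, d_model], per-agent features after CNN + FC.
        zn = self.ln_in(z)
        q, k, v = self.q(zn), self.k(zn), self.v(zn)
        logits = tf.matmul(q, k, transpose_b=True) / tf.sqrt(self.d_k)
        weights = tf.nn.softmax(logits, axis=-1)   # attention across agents
        v_tilde = tf.matmul(weights, v)
        return self.ln_out(z + v_tilde)            # residual + LayerNorm
\end{verbatim}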
\subsection{Multi-agent Observation Prediction}\label{sec:observation}
When developing a new skill, humans often extract knowledge by simply observing how others approach the problem. We do not necessarily pair the observations with well-defined granular actions but grasp the general patterns and logic.
Observations without action labels are often readily available, such as recordings of historical football matches and team-based FPS gameplay videos.
Therefore, in this work we explore how to transfer knowledge from existing observations to downstream MARL without action labels being present.
A useful supervision signal from observation is the agent's location. Even when not directly accessible, existing techniques~\citep{he2017mask,ren2015faster} can in many cases extract this information. We thus adopt an agent's future location as a prediction target. It is worth noting that the same action can lead to different outcomes for different agents depending on their internal states such as velocity. Therefore, we expect the location predictive objective provides cues, independent of actions taken, for the model to comprehend an agent's intent and its internal states.
Recent works~\cite{yeh2019diverse, sun2019stochastic, zhan2018generative} develop models that predict trajectories over all agents at once.
We, on the other hand, task each agent to predict its future location, arriving at an agent-centric predictive objective, as illustrated in Figure~\ref{fig:baseline}.
The motivation for the agent-centric objective is two-fold.
For one, as discussed later in this section, the agent-centric loss can be integrated as an auxiliary loss to MARL in a straightforward manner.
And secondly, if the model predicts for all agents at the same time, it can overwhelm the RL training, as the comparison in Section~\ref{sec:observeall} later shows.
Concretely, after collecting observation rollouts, we minimize a prediction loss over the observations instead of maximizing return as in RL training (for details see Section~\ref{sec:observationlearning}). Our observation is in a 2D map format, and the prediction task for each agent consists of predicting location heatmaps along height $\sigma_h^i$ and width $\sigma_w^i$ as softmax vectors. We train these predictions by minimizing the negative log likelihood via cross-entropy loss,
\begin{equation}
\arg\!\min_\theta \mathbb{E}_{\sigma ^i_h}\left[- \log \sigma^i_h[h^i_{t+1}]\right], \arg\!\min_\theta \mathbb{E}_{\sigma ^i_w}\left[- \log \sigma^i_w[w^i_{t+1}]\right],
\end{equation}
where $h^i_{t+1}$ and $w^i_{t+1}$ are the ground truth next step locations of the $i$th agent.
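A sketch of this per-agent loss is given below; the tensor names are ours, with \texttt{logits\_h} and \texttt{logits\_w} denoting the $72$- and $96$-dim predictive heads before the softmax.
\begin{verbatim}
import tensorflow as tf

def agent_centric_location_loss(logits_h, logits_w, h_next, w_next):
    """h_next, w_next: integer ground-truth next-step locations of one agent."""
    nll_h = tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=h_next, logits=logits_h)
    nll_w = tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=w_next, logits=logits_w)
    return tf.reduce_mean(nll_h + nll_w)
\end{verbatim}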
The pre-training is applied to both the {CNN} and {ACNN} architectures.
Next, we investigate two ways to transfer knowledge from the pre-trained models to MARL.
\textbf{Weight Initialization} Transfer via weight initialization is a long-standing method for transfer learning in many domains~\citep{donahue2013decaf, devlin2018bert}. Often, the initialization model is trained on a larger related supervised task~\citep{donahue2013decaf,carreira2017quo}, but unsupervised pre-training~\citep{he2019momentum, devlin2018bert} has also made much progress. In RL, some prior works initialize models with weights trained for another relevant RL task, but general pre-training through a non-RL objective has been explored less.
\textbf{Progressive Neural Networks} Progressive Neural Networks~\citep{rusu2016progressive} are originally designed to prevent catastrophic forgetting and to enable transfer between RL tasks. They build lateral connections between features from an existing RL model\textemdash on a relevant task\textemdash to the RL model in training. The setup is also a promising candidate for transferring knowledge from our pre-trained models to MARL. Specifically, a pre-trained model becomes a \emph{frozen column} from which the intermediate features, after a non-linear function, are concatenated to the corresponding activation from the RL model, as shown in Figure~\ref{fig:progressive}. We experiment with Progressive Networks combined with {CNN} and {ACNN}, called {PrCNN} and {PrACNN}.
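A sketch of the frozen-column wiring follows; the adapter layer and the choice of ReLU as the non-linearity are our assumptions, not details from the paper.
\begin{verbatim}
import tensorflow as tf

def progressive_features(rl_encoder, frozen_encoder, adapter, obs):
    """Concatenate trainable RL features with lateral features from a frozen,
    pre-trained column, as in a progressive network."""
    h_rl = rl_encoder(obs)                            # trainable column
    h_frozen = tf.stop_gradient(frozen_encoder(obs))  # frozen pre-trained column
    lateral = tf.nn.relu(adapter(h_frozen))           # non-linear lateral connection
    return tf.concat([h_rl, lateral], axis=-1)
\end{verbatim}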
\subsubsection{Multi-agent Observation Prediction as Auxiliary Task for MARL}\label{sec:aux}
The benefits of auxiliary objectives for single-agent RL, in terms of stability, efficiency and generalization, have been widely studied~\citep{oord2018representation, jaderberg2016reinforcement}.
In light of prior works, we assess using the agent-centric prediction objective as an auxiliary task for MARL.
Thanks to the convenient formulation of the agent-centric objective, we can simply add prediction heads in juxtaposition with the policy and value heads as in Figure~\ref{fig:baseline}.
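As a sketch of how the pieces combine (function and argument names are ours), the overall training objective is a weighted sum of the V-trace losses and the agent-centric prediction loss:
\begin{verbatim}
def total_loss(rl_losses, aux_loss, aux_coef=0.0005):
    """rl_losses: dict with 'policy', 'value' and 'entropy' loss scalars from V-trace.
    Coefficients follow Section 3 (sec:implement); aux_coef is swept over
    {0.0001, 0.0005}."""
    return (rl_losses["policy"]
            + 0.5 * rl_losses["value"]
            + 0.0005 * rl_losses["entropy"]
            + aux_coef * aux_loss)
\end{verbatim}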
\section{Introduction}
\label{intro-sec}
Research and advanced development labs in both industry and academia
are actively building a new generation of conversational assistants,
to be deployed on mobile devices or on in-home smart speakers, such as
Google Home. None of these conversational assistants
can currently carry on a coherent multi-turn conversation in support
of a complex decision task such as choosing a hotel, where there are
many possible options and the user's choice may involve making trade-offs among complex
personal preferences and
the pros and cons of different options.
For example, consider the hotel description in the InfoBox in
Figure~\ref{CS-desc}, the search result for the typed query {\it ``Tell
me about Bass Lake Taverne''}. These descriptions are written by
human writers within Google Content Studio and cover more than 200
thousand hotels worldwide. The descriptions are designed to provide
travelers with quick, reliable and accurate information that they may
need when making booking decisions, namely a hotel's amenities,
property, and location. The writers implement many of the decisions
that a dialogue system would have to make: they make decisions about
content selection, content structuring, attribute groupings and the
final realization of the content
\cite{RambowKorelsky92}. They access multiple
sources of information, such as user reviews and the hotels' own web
pages. The descriptions cannot be longer than 650 characters and are
optimized for visual scanning. There is currently no method for
delivering this content to users via a conversation other than reading
the whole InfoBox aloud, or reading individual sections of it.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=3.0in]{bass-lake}
\vspace{-.1in}
\caption{InfoBox Hotel Description for Bass Lake Taverne \label{CS-desc}}
\end{center}
\vspace{-.3in}
\end{figure}
Structured data is also available for each hotel, which includes
information about the setting of a hotel and its grounds, the feel of
the hotel and its rooms, points of interest nearby, room
features, and amenities
such as restaurants and swimming pools. Sample structured data
for the Bass Lake Taverne is in
Figure~\ref{bass-struct}.\footnote{The publicly available Yelp
dataset\footnote{\url{https://www.yelp.com/dataset/challenge}} has
around 8,000 entries for US hotels, providing around 80 unique
attributes.} The type of information available in the structured
data varies a great deal according to the type of hotel: for
specialized hotels it includes highly distinctive low-frequency
attributes for look-and-feel such as ``feels swanky'' ``historical
rooms'' or amenities such as ``direct access to beach'', ``has hot
tubs'', or ``ski-in, ski-out''.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=3.2in]{yelp-structured-2}
\caption{Sample of Hotel Structured Data for ``Bass Lake Taverne''
\label{bass-struct}}
\end{center}
\end{figure}
Research on dialogue systems for hotel information has existed
for many years, in some cases producing shared dialogue corpora that
include hotel bookings
\cite{devillers2004french,Walkeretal02a,Rudnickyetal99,villaneau2009deeper,bonneau2006results,hastie2002automatic,lemon2006evaluating}.
Historically, these systems have greatly simplified the richness of
the domain, and supported highly restricted versions of the hotel
booking task, by limiting the information that the system can talk
about to a small number of attributes, such as location, number of
stars, room type, and price. Data collection involved users being
given specific tasks where they simply had to find a hotel in a
particular location, rather than satisfy the complex
preferences that users may have when booking hotels.
This reduction in content simplifies the decisions that a dialogue
manager has to make, and it also reduces the complexity of the natural
language generator, since a few pre-constructed templates may suffice
to present the small number of attributes that the system knows about.
It is also important to note that the challenges for the hotel
domain are not unique. Dialogue systems for movies, weather reports,
real estate and restaurant information also have access to rich
content, and yet previous work and current conversational assistants
reduce this content down to just a few attributes.
This paper takes several steps toward solving the challenging problem
of building a conversational agent that can flexibly deliver richer
content in the hotels domain.\footnote{This version contains updates to the version published at LREC '18 \cite{Walkeretal18}, including updated results.} Section~\ref{background-overview} first
reviews possible methods that could be applied, and describes several
types of data collection experiments that can inform an initial
design. After motivating these data collection experiments, the rest
of the paper describes them and their results
(Section~\ref{paraphrase-exp}, Section~\ref{generation-exp}, and
Section~\ref{dialogue-exp}). Our results show, not surprisingly, that
both the hotel utterances and the complete dialogues that we
crowdsource are very different in style from the original written
InfoBox hotel descriptions. We compare different data collection
methods and quantify the stylistic features that characterize their
differences. The resulting corpora are available at {\tt
nlds.soe.ucsc.edu/hotels}.
\section{Background and Experimental Overview}
\label{background-overview}
Current methods for supporting dialogue about hotels revolve either
around search or around using a structured dialogue flow. Neither of
these methods on their own support fully natural dialogue, and there
is not yet an architecture for conversational agents that flexibly
combines unstructured information, such as that found in the InfoBox
or in reviews or other textual forms, and structured information such
as that in Figure~\ref{bass-struct}.
Search methods could focus on the content in the current InfoBox, and
carry out short (1-2 turn) conversations by applying compression
techniques on sentences to make them
more conversational \cite{andor2016globally,krause2017redundancy}. For
example, when asked ``Tell me about Bass Lake Taverne'', Google Home
currently produces an utterance providing its location and how far it
is from the user's location. When asked about hotels in a location,
Google Home reads out parts of the information in the Infobox, but it
does not engage in further dialogue that explores individual content
items. Moreover, the
well-known differences between written and oral language
\cite{Biber91} means that selected spans from written descriptions may not sound
natural when spoken in conversation, and techniques may be needed to
adapt the utterance to the dialogic context. Our first experiment,
described in Section~\ref{paraphrase-exp} asks crowdworkers to (1)
indicate which sentences in the InfoBox are most important, and (2)
write dialogic paraphrases for the selected sentences
in order to explore some of these issues.
Another approach is to train an end-to-end dialogue system for the
hotels domain using a combination of simulation, reinforcement
learning and neural generation methods
\cite{Nayaketal17,shah2018building,liu2017end,gavsic2017spoken}. This
requires first developing a user-system simulation to produce
simulated dialogues, crowdsourcing utterances for each system and user
turn, and then using the resulting data to (1) optimize the dialogue
manager using reinforcement learning,
(2) train the natural language understanding from the user utterances,
and (3) train the natural language generation from the crowd-sourced
system utterances. Currently however it is not clear how to build a
user-system simulation for the hotels domain that would allow more of
the relevant content to be exchanged, and there are no corpora
available with example dialogue flows and generated
utterances.
To build a simulation for such complex, rich content, we first need a
model for how the dialogue manager (DM) should (1) order the content
across turns, and (2) select and group the content in each individual
turn. Our assumption is that the most important information should be
presented earlier in the dialogue, so one way to do this is to apply
methods for inducing a {\it ranking} on the content attributes.
Previous work has developed a model of user preferences to solve this
problem \cite{careninimoore00b}, and shown that users
prefer systems whose dialogue behaviors are based on such customized
content selection and presentation
\cite{Stentetal02,Polifronietal03,Walkeretal07}. These
preferences (ranking on attributes) can be acquired directly from the
user, or can be inferred from their past behavior. Here we try two
other methods. First, in Section~\ref{paraphrase-exp}, we ask Turkers
to select the most important sentence from the InfoBox descriptions.
We then tabulate which attributes are in the selected sentences, and
use this to induce a ranking. After using this tabulation to collect
additional conversational utterances generated from meaning
representations (Section~\ref{generation-exp}), we carry out an
additional experiment (Section~\ref{dialogue-exp})
where we collect
whole dialogues simulating the exchange of information
between a user and a conversational agent, given particular
attributes to be communicated. We report how information is
ordered and grouped across these dialogues.
An end-to-end training method also needs a corpus for training for the
Natural Language Generator (NLG). Thus we also explore which
crowdsourcing design yields the
best conversational utterance data for training the NLG. Our first
experiment yields conversationalized paraphrases that match the
information in individual sentences in the original Infobox. Our
second experiment (Section~\ref{generation-exp}) uses content
selection preferences inferred from the paraphrase experiment and
collects utterances generated to match meaning representations.
Our third
experiment (Section~\ref{dialogue-exp}), crowdsources
whole dialogues for selected hotel attributes: the
utterances collected using this method are sensitive to the
context while the other two methods yield utterances that can be used
out of context.
To measure how conversational our collected utterances
are, we build on previous research that
counts linguistic features that vary across different situations of
language use \cite{Biber91}, and tabulates the effect of variables
like the mode of language as well as its setting. We use the
linguistic features tabulated by the Linguistic Inquiry and
Word Count (LIWC) tool \cite{pennebaker2015development}. See
Table~\ref{tab-features}. We select features to pay attention to
using the counts provided with the LIWC manual that distinguish
natural speech (Column 4) from articles in the New York Times (Column
5). Our hotel descriptions are not an exact genre match to the New
York Times, but they are editorial in nature. For example,
Table~\ref{tab-features} shows that spoken conversation has
shorter, more common words (Sixltr), more function words, fewer
articles and prepositions, and more affective and social language.
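As a rough illustration of what these scores measure (this is not the LIWC tool itself, which relies on licensed category dictionaries with wildcarded word stems), a category score is essentially the percentage of tokens matching that category's lexicon:
\begin{verbatim}
def category_percentage(tokens, lexicon):
    """Percentage of tokens whose lowercased form appears in the category lexicon."""
    hits = sum(1 for t in tokens if t.lower() in lexicon)
    return 100.0 * hits / max(len(tokens), 1)

def words_per_sentence(sentences):
    """Summary variable WPS: mean number of whitespace-separated words per sentence."""
    return sum(len(s.split()) for s in sentences) / max(len(sentences), 1)
\end{verbatim}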
\begin{table}[h!t]
\begin{scriptsize}
\begin{tabular}{llp{.65in}rr} \toprule
Category & Abbrev & Examples & Speech & NYT \\ \toprule
\multicolumn{5}{l}{\cellcolor[gray]{0.9} \bf Summary Language Variables} \\
Words/sentence & WPS & - & - & 21.9 \\
Words \textgreater 6 letters & Sixltr & - & 10.4 & 23.4 \\
\multicolumn{5}{l}{\cellcolor[gray]{0.9} \bf Linguistic Dimensions} \\
Total function words & funct & it, to, no, very & 56.9 & 42.4 \\
Total pronouns & pronoun & I, them, itself & 20.9 & 7.4 \\
Personal pronouns & ppron & I, them, her & 13.4 & 3.6 \\
1st pers singular & i & I, me, mine & 7.0 & .6 \\
2nd person & you & you, your, thou & 4.0 & .3 \\
Impersonal pronouns & ipron & it, it's, those & 7.5 & 3.8 \\
Articles & article & a, an, the & 4.3 & 9.1 \\
Prepositions & prep & to, with, above & 10.3 & 14.3 \\
Auxiliary verbs & auxverb & am, will, have & 12.0 & 5.1 \\
Common Adverbs & adverb & very, really & 7.7 & 2.8 \\
Conjunctions & conj & and, but, whereas & 6.2 & 4.9 \\
Negations & negate & no, not, never & 2.4 & .6 \\
Common verbs & verb & eat, come, carry & 21.0 & 10.2 \\
\multicolumn{5}{l}{\cellcolor[gray]{0.9} \bf Psychological Processes} \\
Affective processes & affect & happy, cried & 6.5 & 3.8 \\
Social processes & social & mate, talk, they & 10.4 & 7.6 \\
Cognitive processes & cogproc & cause, know, ought & 12.3 & 7.5 \\
\multicolumn{5}{l}{\cellcolor[gray]{0.9} \bf Other} \\
Affiliation & affiliation & friend, social & 2.0 & 1.7 \\
Present focus & focuspresent & today, is, now & 15.3 & 5.1 \\
Informal language & informal & - & 7.1 & 0.3 \\
Assent & assent & agree, OK, yes & 3.3 & 0.1 \\
Nonfluencies & nonflu & er, hm, umm & 2.0 & 0.1 \\
Fillers & filler & Imean, youknow & 0.5 & 0.0\\ \bottomrule
\end{tabular}
\end{scriptsize}
\caption{LIWC Categories with Examples and Differences between Natural Speech and the New York Times \label{tab-features}}
\end{table}
The experiments use Turkers with a high level of qualification
and we ensure that Turkers make at least minimum wage on our tasks.
For the paraphrase and single-turn HITs for properties and
rooms, we ask for at least a 90\% approval rate and at least 100
(sometimes 500) HITs approved, and we always apply a location restriction
(English-speaking locations). For the dialog HITs we paid \$0.90 per HIT,
and restricted Turkers to those with a 95\% acceptance rate and at
least 1000 HITs approved. We also elicited the dialogs over multiple
rounds and excluded Turkers who had failed to include all 10
attributes on previous HITs.
We present a summary of all of our experiments in
Table~\ref{all-liwc-table} and then discuss the relevant columns in
each section. A scan of the whole table is highly
informative however, because \newcite{Biber91} makes the point that
differences across language use situations are not dichotomous,
i.e. there is not one kind of oral language and one kind of written
language. Rather language variation occurs continuously and on a
scale, so that language can be ``more or less'' conversational. The
overall results in Table~\ref{all-liwc-table} demonstrates this scalar
variation, with different methods resulting in more or less
conversationalization of the content in each utterance.
\section{Paraphrase Experiment}
\label{paraphrase-exp}
The overall goal of the Paraphrase experiment is to evaluate the
differences between monologic and dialogic content
that contains the same or similar information. These experiments are
valuable because the original content is given in unordered lists that
facilitate visual scanning, as opposed to a conversation in which the
dialogue system needs to decide the order in which to present information
and whether to leave some information out.
We ask Turkers to both select ``the most important'' content out
of the hotel descriptions, and then to paraphrase that content in a
conversational style. We use this data to induce an importance ranking
on content and we also measure how the conversational paraphrases of
that content differ from the original phrasing. We used a randomly
selected set of 1,000 hotel descriptions from our corpus of 200K, with
instructions to Turkers to:
\begin{small}
\begin{itemize}
\item Select the sentence out of the description that has the most important information
to provide in response to a user query to ``tell me about HOTEL-NAME''.
\item Cut and paste that sentence into the ``Selected Sentence'' box.
\item Rewrite your selected sentence so that it sounds conversational, as a turn in dialogue. You may need to reorder the content or convert your selected sentence to multiple sentences in order to make it sound natural.
\end{itemize}
\end{small}
\begin{figure}[ht!]
\begin{center}
\begin{small}
\begin{tabular}{|c|p{2.75in}|}
\hline
S1 & The elegant rooms, decorated in warm tones, feature high ceilings
and lots of natural light, plus Turkish marble bathrooms, Bose sound
systems, HDTVs and designer toiletries; some have views of the
park. \\
S2 & Suites include living rooms and soaking tubs; some have city
views. \\
S3 & Grand suites offer personal butler service. \\
S4& Open since 1930,
this opulent landmark sits across the street from Central Park on New
York's famed 5th Avenue. \\ \hline
\end{tabular}
\end{small}
\end{center}
\caption{\label{another-hotel-desc} An InfoBox description for the hotel {\it The Pierre, A Taj Hotel, New York}, split
into sentences and labeled. }
\vspace{.1in}
\end{figure}
\begin{figure}[ht!]
\begin{center}
\begin{small}
\begin{tabular}{|c|p{2.65in}|}
\hline
T1 & This hotel's elegant rooms are decorated in warm tones. They feature high ceilings with lots of natural light. The rooms feature Turkish marble bathrooms, designer toiletries, high-definition televisions and Bose sound systems. Some rooms even offer views of the park. \\
T2 & Located on 5th Avenue, this landmark hotel is located across the street from Central Park and dates back to 1930.\\
T3 & Each room is elegantly decorated in warm tones. You will enjoy high ceilings and natural light. The bathrooms are done in Turkish marble and have designer toiletries. For entertainment, you will find HDTVs and Bose sound systems. There are views of Central Park from some rooms. \\
\hline
\end{tabular}
\end{small}
\caption{Turker generated paraphrases of the hotel description shown in Table \ref{another-hotel-desc}. The Turkers T1 and T3 selected S1 as containing the most important information and Turker T2 selected S4.
\label{another-hotel-utts}}
\end{center}
\end{figure}
\begin{table}[ht!]
\begin{center}
\begin{scriptsize}
\begin{tabular}{|c|cp{.1in}|}
\hline
\bf attribute & \bf F& \\
\hline \hline
locale\_mountain & 1.0 & \\ \hline
has\_bed\_wall\_in\_rooms & .67 & \\ \hline
has\_wet\_room & .67 & \\ \hline
feels\_quaint & .61 & \\ \hline
has\_crib & .50 & \\ \hline
feels\_artsy & .44 & \\ \hline
is\_whitewashed & .44 & \\ \hline
has\_private\_bathroom\_outside\_room & .44 & \\ \hline
feels\_nautical & .42 & \\ \hline
has\_luxury\_bedding & .40 & \\ \hline
welcomes\_children & .39 & \\ \hline
is\_dating\_from & .38 & \\ \hline
feels\_retro & .38 & \\ \hline
all\_inclusive & .34 & \\ \hline
has\_casino & .33 & \\ \hline
has\_heated\_floor & .33 & \\ \hline
has\_city\_views & .33 & \\ \hline
has\_boardwalk & .33 & \\ \hline
has\_hammocks & .33 & \\ \hline
has\_onsite\_barbecue\_area & .33 & \\ \hline
\end{tabular}
\end{scriptsize}
\caption{\label{content-selection-counts} Turkers' Top 20 Attributes,
shown with their frequency $F$ of selection when present in the content.}
\end{center}
\end{table}
For each of the descriptions, three Turkers performed this HIT,
yielding a total of 3,000 triples consisting of the original
description, the selected sentence, and the human-generated dialogic
paraphrases. For example, for the hotel description in Figure~\ref{another-hotel-desc}, two Turkers selected S1 and the other
selected S4. These sentences have different content, so for each
attribute realized we increase its count as part of our goal to induce
a ranking indicating the importance of different attributes. The
dialogic paraphrases the same Turkers produced are shown in
Figure~\ref{another-hotel-utts}. The paraphrases contain fewer words
per sentence, more use of anaphora, and more use of subjective phrases
taking the listener's perspective such as {\it you will enjoy}.
\begin{table*}[t!hb]
\begin{scriptsize}
\begin{center}
\begin{tabular}{l|rrr||rrcc} \toprule
Category & InfoBox & Paraphrase & p-val & Props+Rooms & Dialogues & p-val & p-val \\
&& & & & & Props+Rooms vs. Para & Props+Rooms vs. Dial \\ \toprule
Impersonal Pronouns & 0.97 & 3.80 & 0.00 & 3.36 & 5.19 & 0.00 & 0.00 \\
Adverbs & 0.97 & 3.41 & 0.00 & 3.57 & 6.25 & 0.12 & 0.00 \\
Affective Processes & 4.98 & 4.81 & 0.14 & 8.09 & 8.55 & 0.00 & 0.26 \\
Articles & 8.08 & 9.06 & 0.00 & 11.54 & 7.62 & 0.00 & 0.00 \\
Assent & 0.02 & 0.04 & 0.00 & 0.07 & 1.13 & 0.11 & 0.00 \\
Auxiliary Verbs & 1.69 & 6.12 & 0.00 & 8.02 & 11.81 & 0.00 & 0.00 \\
Common Verbs & 3.64 & 7.94 & 0.00 & 10.97 & 15.07 & 0.00 & 0.00 \\
Conjunctions & 8.07 & 8.13 & 0.54 & 7.33 & 6.52 & 0.00 & 0.00 \\
First Person Singular & 0.01 & 0.02 & 0.00 & 0.41 & 3.41 & 0.00 & 0.00 \\
Negations & 0.03 & 0.07 & 0.00 & 0.27 & 0.44 & 0.00 & 0.00 \\
Personal Pronouns & 0.06 & 1.15 & 0.00 & 3.87 & 10.17 & 0.00 & 0.00 \\
Second Person & 0.02 & 0.45 & 0.00 & 2.43 & 5.63 & 0.00 & 0.00 \\
Six Letter Words & 22.21 & 19.15 & 0.00 & 20.74 & 15.50 & 0.00 & 0.00 \\
Social Processes & 4.81 & 5.63 & 0.00 & 8.53 & 14.66 & 0.00 & 0.00 \\
Total Pronouns & 1.03 & 4.94 & 0.00 & 7.23 & 15.36 & 0.00 & 0.00 \\
Words Per Sentence & 22.86 & 14.69 & 0.00 & 14.52 & 10.90 & 0.46 & 0.00 \\
Affiliation & 1.18 & 0.95 & 0.00 & 1.13 & 5.97 & 0.08 & 0.00 \\
Cognitive Processes & 2.58 & 3.11 & 0.00 & 9.18 & 10.47 & 0.00 & 0.00 \\
Focus present & 3.64 & 7.91 & 0.00 & 9.35 & 14.40 & 0.00 & 0.00 \\
Function & 26.79 & 37.62 & 0.00 & 44.58 & 53.08 & 0.00 & 0.00 \\
Informal & 0.42 & 0.35 & 0.03 & 0.51 & 1.76 & 0.00 & 0.00 \\
nonflu & 0.37 & 0.28 & 0.00 & 0.42 & 0.63 & 0.00 & 0.00 \\
prep & 9.65 & 8.50 & 0.00 & 9.83 & 9.88 & 0.00 & 0.72 \\
\bottomrule
\end{tabular}
\end{center}
\end{scriptsize}
\caption{\label{all-liwc-table} Conversational LIWC features across
all Utterance Types/Data Collection Methods}
\vspace{-.2in}
\end{table*}
\noindent{\bf Results.} We build a ranked ordering of hotel attribute
importance using the selected sentences from each hotel
description. We count the number of times each attribute is realized
within a sentence selected as being the most informative or
relevant. We count the number of hotels for which each attribute
applies. The attribute frequency $F$ is given as the number of times
an attribute is selected divided by the product of the number of
hotels to which the attribute applies and the number of Turkers
that were shown those hotel descriptions. Finally, the attributes
are sorted and ranked by largest $F$.
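
As an illustration of this tabulation, the following Python sketch computes $F$ and the resulting ranking; it assumes that the attributes realized in each selected sentence and the per-attribute hotel counts are already available, and all identifiers are hypothetical.
\begin{verbatim}
from collections import Counter

def rank_attributes(selected_attrs, applicable_hotels,
                    turkers_per_hotel=3):
    """selected_attrs: list of sets, one per selected sentence, holding
    the attributes realized in that sentence.
    applicable_hotels: dict mapping attribute -> number of hotels to
    which the attribute applies."""
    counts = Counter()
    for attrs in selected_attrs:
        counts.update(attrs)
    freq = {a: counts[a] / (applicable_hotels[a] * turkers_per_hotel)
            for a in counts}
    # sort attributes by decreasing frequency F
    return sorted(freq.items(), key=lambda kv: kv[1], reverse=True)
\end{verbatim}
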
Table~\ref{content-selection-counts} illustrates how the tabulation of
the Turkers' selected sentences provides information on the ranking of
attributes that we can use in further experimentation. However, the
frequencies reported are conditioned on the relevant attribute being
available to select in the Infobox description, and many of the
attributes are both low frequency and highly distinctive, e.g. the
attribute {\tt locale\_mountain}. A reliable importance ranking using
this method would need a larger sample than 1,000 hotels.
It is also possible that attribute importance should be directly
linked to how distinctive the attribute is, with less frequent
attributes always mentioned earlier in the dialogue.
Columns 2--4 of Table~\ref{all-liwc-table} summarize the
stylistic differences between the original Infobox sentences and the
collected paraphrases; Column 4 provides the p-values showing that
many differences are statistically significant. Differences indicating
that the paraphrases are more similar to oral language (as in Speech,
column 4 of Table~\ref{tab-features}) include the increased use of
adverbs and common verbs, and a reduced number of words per sentence.
Examples of expected differences that are not realized include
increases in affective language and significantly greater use of
conjunctions. So while this method improves the conversational style of
the content realization, we will see that our other methods produce
{\it more} conversational utterances. While this method is inexpensive
and does not require especially skilled Turkers, the utterances
collected may only be useful for systems that do not use structured
data and therefore need more conversational paraphrases of the original
Infobox descriptions.
\section{Generation from Meaning Representations}
\label{generation-exp}
The second experiment aims to determine whether we get higher
quality utterances if we ask crowdworkers to generate utterances directly
from a meaning representation, in the context of a conversation,
rather than by selecting from the original Infobox hotel descriptions.
Utterances generated in this way should not be influenced by the
original phrasing and sentence planning in the hotel descriptions.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=3.2in]{hit-room-attrs}
\vspace{-.2in}
\caption{Instructions for Room Attributes HIT \label{hit-fig}}
\end{center}
\end{figure}
Instructions for our second experiment are shown in
Figure~\ref{hit-fig}. Here we give Turkers specific content tables
and ask them to generate utterances that realize that content. Note
that the original hotel descriptions, as illustrated in
Figure~\ref{CS-desc}, consist of three blocks of content: property,
rooms, and amenities. For each hotel in a random selection of 200
hotels from the paraphrase experiment, we selected content for both
rooms (4 attributes) and properties (6 attributes) by picking the
attributes with the highest scores (as illustrated for a small set of
attributes in Table~\ref{content-selection-counts}). Thus each hotel
has two unique content tables assigned to it, one pertaining to the
hotel's rooms, and the other to the hotel grounds. Each hotel content
table was given to three Turkers, which results in a total of 1,200
collected utterances. Turkers were instructed to make the utterances
as conversational as possible.
\noindent{\bf Results.} Sample utterances for both properties and
rooms are shown in Figure~\ref{prop-room-utts}.
\begin{figure}[h!bt]
\begin{center}
\begin{small}
\begin{tabular}{|c|p{2.62in}|}
\hline
Prop & A good choice is 1 Hotel South Beach in Miami Beach. It's luxurious, lively, upscale, and chic, with beach access and a bar onsite. \\
Prop & I think that 1 Hotel South Beach will meet your needs. It's a chic luxury hotel with beach access and a bar. Very lively. \\
Room & One of the excellent hotels Miami Beach has to offer is the 1 Hotel South Beach. The upgraded rooms are full featured, including a kitchen and a desk for work. Each room also has a balcony. \\
Room & The 1 Hotel South Beach doesn't mess around. When you come to stay here you won't want to leave. Each upgraded room features a sunny balcony and personal kitchen. You also can expect to find a lovely writing desk for your all correspondence needs. \\
\hline
\end{tabular}
\end{small}
\caption{Example utterances generated by Turkers in the second experiment. Turkers were given specific content tables from which to generate dialogue utterances that realize that content. \label{prop-room-utts}}
\end{center}
\end{figure}
Column 5 in Table~\ref{all-liwc-table} shows the frequencies of LIWC's
conversationalization features for the utterances collected in this
experiment, and Column 7 reports statistical significance (p-values)
for comparing these collected utterances to the paraphrases collected
in Experiment 1, using an unpaired t-test on the two datasets. Several
of the features associated with conversational style indicate that this
method yields more conversational utterances: there are significantly
more auxiliary verbs and common verbs. There is greater use of first
person and second person pronouns, as well as of words indicating
affective, social and cognitive processes. Counts of function words and
of words focused on the present are also higher, as would be expected
of more conversational language.
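
For concreteness, the significance values in Table~\ref{all-liwc-table} can be reproduced along the following lines with an unpaired t-test; this is only a sketch that assumes the per-utterance LIWC scores for each category have already been computed.
\begin{verbatim}
from scipy import stats

def liwc_significance(scores_a, scores_b):
    """scores_a, scores_b: dicts mapping a LIWC category to the list of
    per-utterance scores in the two collections being compared."""
    pvals = {}
    for cat in scores_a:
        t, p = stats.ttest_ind(scores_a[cat], scores_b[cat])
        pvals[cat] = p
    return pvals
\end{verbatim}
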
\section{Dialogue Collection Experiment}
\label{dialogue-exp}
The final data collection experiment focuses on utterance generation
in an explicitly dialogic setting. In order to collect dialogues about
our hotel attributes, we employ a technique called ``self-dialogue''
collection, which to our knowledge was pioneered by
\newcite{Krause17}, who claim that the results are surprisingly
natural. We ask individual Turkers to write a full dialogue between
an agent and a customer, where the Turker writes both sides of the
dialogue. The customer is looking for a hotel for a trip, and the
agent has access to a description table with a list of 10 attributes
for a single hotel. The agent is tasked with describing the hotel to
the customer. Figure \ref{hit-fig-dialog} shows our HIT instructions
that provided a sample dialogue as part of the instructions to the
Turkers.
This experiment utilizes 74 unique hotels, a subset of those used in
the property and room experiments above (Section
\ref{generation-exp}), where we have both 6 property attributes and 4
room attributes. We aimed to collect 3 dialogues per hotel (from
unique Turkers), but because some Turkers failed to follow the
instructions, the final corpus consists of 205 dialogues (comprising
58 hotels with 3 dialogues each, 15 hotels with 2 dialogues each, and
1 hotel with only 1 dialogue).
\begin{figure}[htb]
\begin{center}
\includegraphics[width=3.2in]{hit-hotel-dialogs}
\vspace{-.2in}
\caption{Instructions for Hotel Dialog HIT\label{hit-fig-dialog}}
\end{center}
\end{figure}
Table~\ref{sit-hotel-dialogues} and Table
\ref{straight-hotel-dialogues} provide sample dialogues from the
corpus, with the 10 required attributes shown in bold,
and the agent (A) and customer (C) turns shown with their respective
turn numbers. In Dialogue 1, we see an example of a creative dialogue
in which the Turker designs a situational context for the dialogue: the
customer is looking for a hotel for a bachelorette party weekend
and has specific requirements about flooring and amenities. We note
that in this dialogue, the agent only begins to discuss the hotel in
their third turn. In Dialogue 2, we see a much more basic dialogue,
where the agent begins to list property and room attributes earlier
on in the dialogue (at Agent Turn 2), and the full list of attributes
is exhausted halfway through the conversation, at Agent Turn 3.
\begin{table}[ht!]
\begin{center}
\begin{small}
\begin{tabular}{|c|p{2.7in}|}
\hline
A1 & Hi! How can I help you today?\\
C1 & I am planning a trip to New York for a bachelorette party weekend and need help finding a hotel.\\
A2 & OK, what will you girls be planning to do?\\
C2 & We're going to a Broadway show but other than that just going to dinner and hitting some bars.\\
A3 & OK, I think the Hotel Indigo in the Chelsea section would be great! It's {\bf upscale} and has a great {\bf hip} and {\bf contemporary} vibe with that {\bf buzzy} New York City energy feel.\\
C3 & That sounds like what we're looking for. I know this is a weird question but one of the girls sometimes has an allergy to carpet, is there any type of option for a non carpeted room?\\
A4 & Actually, this hotel has {\bf hardwood floors in the rooms}.\\
C4 & Great! I think we may be bringing some snacks and maybe some of our own alcohol. Can we arrange for a fridge or do they just have ice buckets?\\
A5 & The rooms have {\bf mini bars} as well as {\bf coffee} if you girls need some help waking up for your time out. There is also a {\bf bar} on site so you can start the party before you even head out.\\
C5 & Great! One more question, one of the girls does need to keep in touch with work. Do you offer WiFi? \\
A6 & The hotel has {\bf desks in every room} and offers a {\bf business center} if she needs anything like a printer or a desktop computer. \\
C6 & I think we'll go ahead and book this. It sounds perfect!\\ \bottomrule
\end{tabular}
\end{small}
\caption{\label{sit-hotel-dialogues} Situational Context for Content Hotel Dialogue}
\end{center}
\end{table}
\begin{table}[ht!]
\begin{center}
\begin{small}
\begin{tabular}{|c|p{2.7in}|}
\hline
A1 & Good evening, how can i help you?\\
C1 & I am looking for a good hotel to have a business conference in the Brooklyn area.\\
A2 & Sure, let me see what i can find. Hotel Indigo Brooklyn may be just what you are looking for. It has a {\bf hip} feel with an {\bf onsite bar}, {\bf Business center}, {\bf restaurant}, {\bf free wifi}. Its got it all.\\
C2 & That sounds excellent. What room amenities are offered?\\
A3 & There is {\bf coffee} in the rooms and a {\bf mini fridge}. All the rooms have been recently {\bf upgraded} and did i mention it has a {\bf fitness room}? I has full {\bf room service} as well.\\
C3 & Wow, that sounds great.Whats the address? I need to make sure its in the right area for me.\\
A4 & Sure, its 229 Duffield Street, Brooklyn, NY 11201, USA.\\
C4 & Thanks, thats just the right spot. Go ahead and make me a reservation for next Tuesday.\\
A5 & excellent! Its done!\\
C5 & Thanks you have been extremely helpful! \\ \bottomrule
\end{tabular}
\end{small}
\caption{\label{straight-hotel-dialogues} Straightforward Attribute Listing Hotel Dialogue}
\end{center}
\end{table}
\noindent{\bf Results.} We begin by analyzing information that
both the dialogue manager and the natural language generator
would need to know, namely how frequently attributes are grouped in a
single turn in our collected dialogues, by counting how often keywords
related to the attributes are mentioned together in a turn. Table
\ref{attribute-grouping} lists the attribute groups that occur at least
4 times in the dataset, together with their frequency counts. We note
that the attributes within the groups generally: 1) are semantically
similar, e.g. ``modern'' and ``contemporary''; 2) describe the same
aspect, e.g. ``feels elegant'' and ``feels upscale''; or 3) describe the
same general attribute, e.g. ``has breakfast buffet'', ``has free
breakfast'', and ``has free breakfast buffet''. It is interesting to note
that the semantic similarity is not always completely obvious (for
example, ``has balcony in rooms'' and ``has fireplace'' may be used to
emphasize more luxurious amenities that are a rare find).
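
A minimal sketch of the co-occurrence tabulation behind Table~\ref{attribute-grouping} is given below; it assumes each agent turn has already been mapped to the set of attributes it mentions (the keyword matching itself is omitted), and counting every subset of size two or more is one plausible reading of the tabulation.
\begin{verbatim}
from collections import Counter
from itertools import combinations

def count_attribute_groups(dialogues, min_count=4):
    """dialogues: list of dialogues; each dialogue is a list of agent
    turns; each turn is the set of attributes mentioned in that turn."""
    group_counts = Counter()
    for dialogue in dialogues:
        for turn in dialogue:
            for size in range(2, len(turn) + 1):
                for group in combinations(sorted(turn), size):
                    group_counts[group] += 1
    return [(g, c) for g, c in group_counts.most_common()
            if c >= min_count]
\end{verbatim}
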
\begin{table}[ht!]
\begin{center}
\begin{small}
\begin{tabular}{|p{2.5in}|c|} \toprule
\bf Attribute Group & \bf Count \\ \toprule
{ (has\_business\_center, has\_meeting\_rooms) } & 13 \\ \hline
{ (has\_bar\_onsite, has\_restaurant) } & 9 \\\hline
{ (feels\_contemporary, feels\_modern) } & 9 \\\hline
{ (has\_bar\_onsite, has\_business\_center) } & 8 \\\hline
{ (has\_business\_center, has\_desk\_in\_rooms) } & 7 \\\hline
{ (feels\_casual, feels\_contemporary, feels\_modern) } & 6 \\\hline
{ (feels\_modern, has\_business\_center) } & 6 \\\hline
{ (has\_business\_center, has\_convention\_center) } & 6 \\\hline
{ (feels\_elegant, feels\_upscale) } & 4 \\\hline
{ (has\_bar\_onsite, has\_bar\_poolside) } & 4 \\\hline
{ (has\_microwave\_in\_rooms, has\_minifridge\_in\_rooms) } & 4 \\\hline
{ (feels\_contemporary, feels\_elegant, feels\_modern) } & 4 \\\hline
{ (feels\_contemporary, feels\_upscale) } & 4 \\\hline
{ (has\_balcony\_in\_rooms, has\_fireplace) } & 4 \\\hline
{ (feels\_chic, feels\_upscale) } & 4 \\\hline
{ (has\_business\_center, \newline has\_desk\_in\_rooms, \newline has\_wi\_fi\_free) } & 4 \\\hline
{ (has\_breakfast\_buffet, \newline has\_free\_breakfast, \newline has\_free\_breakfast\_buffet) } & 4 \\\hline
{ (has\_coffee\_in\_rooms, \newline has\_desk\_in\_rooms, \newline has\_microwave\_in\_rooms, has\_minifridge\_in\_rooms) } & 4 \\ \bottomrule
\end{tabular}
\end{small}
\vspace{-.1in}
\caption{\label{attribute-grouping} Attributes Frequently Grouped in a Single Turn}
\end{center}
\end{table}
Our assumption is that more important attributes should be presented
earlier in the dialogue, and that a user-system dialogue simulation
system design \cite{shah2018building,liu2017end,gavsic2017spoken}
would require such information to be available. Thus, in order to
provide more information on the importance of particular attributes,
we analyze where in the conversation (i.e. first or second half)
certain types of attributes are mentioned. For example, we observe
that attributes describing the ``feel'', such as ``feels chic'' or ``feels
upscale'', are mentioned around 700 times, and that for 80\% of those
times they appear in the first half of the conversation as opposed to
the second half, showing that they are often used as general hotel
descriptors before diving into detailed attributes. Attributes
describing room amenities, on the other hand, such as ``has kitchen in
rooms'' or ``has minifridge'', were mentioned around 530 times, with a
more even distribution of 53\% in the first half of the conversation,
and 47\% in the second half.
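
The first-half versus second-half statistic can be computed as in the following sketch, again assuming that each turn has been annotated with the attributes it mentions; identifiers are illustrative only.
\begin{verbatim}
def first_half_fraction(dialogues, attribute_set):
    """Fraction of mentions of the given attributes that fall in the
    first half of their dialogue."""
    first, total = 0, 0
    for dialogue in dialogues:
        half = len(dialogue) / 2.0
        for i, turn in enumerate(dialogue):
            hits = len(turn & attribute_set)
            total += hits
            if i < half:
                first += hits
    return first / total if total else 0.0
\end{verbatim}
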
We also observe that most attributes are first
introduced into the conversation by the agent, but that a small number
of attributes are more frequently first introduced by the customer,
specifically: {\it has\_swimming\_pool\_indoor,
popular\_with\_business\_travelers, has\_onsite\_laundry,
welcomes\_families, has\_convention\_center, has\_ocean\_view,
has\_free\_breakfast\_buffet, has\_swimming\_pool\_saltwater}.
Next, we compare our collected dialogues to the single-turn utterances
described in Section \ref{generation-exp}. Specifically,
we focus on the ``agent'' turns of our dialogues, as they are more
directly comparable to the property and room turns.
Table \ref{table-compare-prop-room-dialog} reports the total number of
turns and the average number of sentences, words, and attributes per
turn for the property, room, and agent dialogue turns. We note that the
average number of sentences, words, and attributes per turn for our
property and room descriptions is in general higher than for the agent
turns in our dialogues, because the dialogues allow the agent to
distribute the required content across multiple turns.
\begin{table}[ht!]
\begin{center}
\begin{small}
\begin{tabular}{|c|c|c|c|} \toprule
&\bf Properties & \bf Rooms & \bf Dialogues \\ \toprule
\bf Number of turns & 600 & 600 & 1227 \\
\multicolumn{4}{l}{\cellcolor[gray]{0.9} \bf Sentences per turn } \\
Average & 2.80 & 2.55 & 1.80 \\
\multicolumn{4}{l}{\cellcolor[gray]{0.9} \bf Words per turn } \\
Average & 41.37 & 39.81 & 21.45\\
\multicolumn{4}{l}{\cellcolor[gray]{0.9} \bf Attributes per turn } \\
Average & 6 & 4 & 1.62 \\
\end{tabular}
\end{small}
\caption{\label{table-compare-prop-room-dialog} Comparing Property, Room, and Agent Dialogue Turns}
\end{center}
\end{table}
Column 6 (Dialogues) of Table \ref{all-liwc-table} reports the
frequencies for conversational features in the collected data, with
p-values in Column 8 comparing the dialogic utterances to the
property+room utterances collected in Experiment 2
(Section~\ref{generation-exp}). The dialogic data collection results in
utterances that are more conversational according to these counts,
with higher use of impersonal pronouns and adverbs, auxiliary verbs
and common verbs, and first person and second person pronouns. We
also see increases in words indicative of affective, social and
cognitive processes, more informal language, reduced use of six-letter
words, fewer words per sentence, and greater use of language focused on
the present. Thus these utterances are clearly much more
conversational, and provide information on attribute ordering across
turns as well as possible ways of grouping attributes. The
utterances collected in this way might also be useful for template
induction, especially if the induced templates could be indexed for
appropriate use in context.
Finally, Table \ref{table:liwc-examples} presents examples of
utterances from each dataset for four LIWC categories where we see significant differences across
the sets, specifically, common verbs, personal pronouns, social processes, and focus present. From the table,
we can see that for all of these categories, the average LIWC score increases steadily as the data source becomes
more dialogic.
\begin{table*}[h!]
\begin{scriptsize}
\begin{tabular}
{@{} p{.3in}|p{.3in}|p{.2in}|p{5in} @{}}
\hline
\bf \scriptsize LIWC Cat. & \bf \scriptsize Dataset & \bf \scriptsize LIWC Avg. & \bf \scriptsize Example \\ \midrule
Common Verbs & ORIG & 3.64 & The property has a high-end restaurant and a rooftop bar, as well as 4 outdoor pools, a fitness center and direct beach access with umbrellas. \\\hline
& PARA & 7.94 & A basic hostel that has dorms with four to 8 beds in them. Males and females dorm together in mixed rooms. There is also a roof terrace and pool table. \\\hline
& PROP+\newline ROOM & 10.97 & Argonaut Hotel is a spectacular historic hotel located just a stones throw from the coast. It has a casual and relaxing yet elegant feel with a touch of an nautical atmosphere. You can take a stroll along the boardwalk to access Pier 39, or check out Fishermans's Wharf and see all that is surrounding this historic area. \\\hline
& DIAL & 15.07 & Then this hotel would be perfect for you. They have recently updated the place to be more contemporary and have a relaxing touch to it. Also, if you need to meet with your co-workers, they have meeting rooms you can reserve or an on-site restaurant if you need to convene with them. \\\midrule
Personal\newline Pronouns & ORIG & 0.06 & Straightforward dorm-style \& private rooms with free WiFi, plus a casual bar \& a business center. \\\hline
& PARA & 1.15 & The hotel rooms are airy with ocean views and have reclaimed driftwood lining the walls. The rooms come with cotton sheets, live edge wood desks and Nespresso machines. For your entertainment needs, there is free Wi-Fi and 55 inch flat-screen TVs. \\\hline
& PROP+\newline ROOM & 3.87 & You might want to look at Ocean Resort Fort Lauderdale. It's nautical in theme, but feels modern. It has an onsite bar, a restaurant, and free wifi. \\\hline
& DIAL & 10.17 & Comfort Inn offers roadside lodging so it will be very convenient for you. We also have free parking. \\\midrule
Social\newline Processes & ORIG & 4.81 & Budget hotel in a converted warehouse The bright, simple rooms come with en suite bathrooms, complimentary Wi-Fi and cable TV. Children age 18 and under stay free with a parent. \\\hline
& PARA & 5.63 & The rooms in this hotel are nautical-style and have exposed brickwork. Also, they have flat-screen TVs, coffeemakers and yoga mats. \\\hline
& PROP+\newline ROOM & 8.53 & The Aston Waikiki Beach Hotel would be the perfect choice. This casual and relaxing hotel offers a host of amenities. It features an onsite, highly-rated restaurant, an outdoor swimming pool, and colorful and fully-outfitted suites. \\\hline
& DIAL & 14.66 & Well, the hotel also has event space for meetings, and there is a bar onsite for winding down with clients afterwards. \\\midrule
Focus Present & ORIG & 3.64 & The polished rooms and suites provide flat-screen TVs, minifridges and Nespresso machines, as well as free Wi-Fi and 24-hour room service \\\hline
& PARA & 7.91 & The hostel offers a relaxed feel and is with-in a few km from several popular destinations. Near by areas include Munich Hauptbahnhof U-Bahn and S-Bahn stations, Oktoberfest, and Marienplatz public square. \\\hline
& PROP+\newline ROOM & 9.35 & A good choice would be the Ames Boston Hotel, Curio Collection by Hilton. It is historic but has a very modern and chic feeling to it. They offer free wi-fi and a business center for your use. \\\hline
& DIAL & 14.4 & The rooms are relaxing and well appointed. They come with an kitchenette and are equipped with a microwave and a minifridge. Of course there is coffee supplied as well. Are you traveling for business or pleasure? \\ \bottomrule
\end{tabular}
\end{scriptsize}
\centering \caption{\label{table:liwc-examples} {Examples for Significantly Different LIWC Categories across the Datasets}}
\end{table*}
\section{Conclusion and Future Work}
This paper presents a new corpus that contributes to defining the
requirements and providing training data for a conversational agent that
can talk about all the rich content available in the hotel domain.
All of the data we collect in all of the experiments is available at
{\tt nlds.soe.ucsc.edu/hotels}.
After completing three different types of data collection, we posit
that the self-dialogue collection might produce the best utterances
but at the highest cost, with the most challenges for direct re-use.
The generation from meaning representations produces fairly high
quality utterances, but they are not sensitive to the context, and
our results from the dialogic collection suggest that it might
be useful to collect additional utterances using this method that
sample different combinations of attributes, and select fewer
attributes for each turn.
In future work, we plan to use these results in two ways.
First, we can train a ``conversational style ranker'' based on the data
we collected, so that it can retrieve pre-existing utterances that
have good conversational properties. The features that this ranker
will use are the linguistic features we have identified so far, as
well as new features we plan to develop related to context. Second,
we will experiment with directly using the collected utterances in a
dialogue system, first templatizing them by removing specific
instantiations of attributes, and then indexing them for use in
particular contexts.
\section{References}
\bibliographystyle{lrec}
\section{Introduction}
RoboCup simulated soccer has been conceived and is widely accepted as a
common platform to address various challenges in artificial intelligence and
robotics research.
Here, we consider a subtask of the full problem, namely the {\em keepaway} problem. In
{\em keepaway} we have two smaller teams: one team (the `keepers') must try to maintain
possession of the ball for as long as possible while staying within a small region of the full soccer field.
The other team (the `takers') tries to gain possession of the ball.
\citet{AB05} initially formulated keepaway as benchmark problem for reinforcement learning (RL);
the keepers must individually {\em learn} how to maximize the time they control the ball as a team
against the team of opposing takers playing a fixed strategy.
The central challenges to overcome are, for one, the high
dimensionality of the state space (each observed state is a vector of 13 measurements), meaning
that conventional approaches to function approximation in RL, like grid-based tilecoding, are infeasible;
second, the
stochasticity due to noise and the uncertainty in control due to the multi-agent nature imply that
the dynamics of the environment are unknown and cannot be obtained easily. Hence we need
model-free methods. Finally, the underlying soccer server expects an action every 100 msec,
so efficient methods that are able to learn in real-time are necessary.
\citet{AB05} successfully applied RL to {\em keepaway}, using the
textbook approach with online Sarsa($\lambda$) and
tilecoding as underlying function approximator \citep{sutton98introduction}. However, tilecoding is
a local method and places parameters (i.e.\ basis functions) in a regular fashion throughout the
entire state space,
such that the number of parameters grows exponentially with the dimensionality of the space. In \citep{AB05} this very
serious shortcoming
was addressed by exploiting
problem-specific knowledge of how the various state variables interact. In particular, each state variable
was considered independently from the rest. Here, we will demonstrate that one can also learn using the full
(untampered) state information, without resorting to simplifying assumptions.
In this paper we propose a (non-parametric) kernel-based approach to approximate the value function.
The rationale for doing this is that by representing the solution through the data and not by some
basis functions chosen before the data becomes available, we can better adapt to the complexity of
the unknown function we are trying to estimate. In particular, parameters
are not `wasted' on parts of the input space that are never visited. The hope is that thereby the
exponential growth of parameters
is bypassed. To solve the RL problem
of optimal control we consider the framework of approximate policy iteration with the related
least-squares based
policy evaluation methods LSPE($\lambda$) proposed by \citet{nedicbert2003LSPE} and LSTD($\lambda$) proposed by \citet{boyan99lstd}. Least-squares based policy evaluation is
ideally suited for the use with linear models and is a very sample-efficient variant of RL. In this paper
we provide a unified and concise formulation of LSPE and LSTD; the
approximated value function is obtained from a regularization network
which is
effectively the mean of the posterior obtained by GP regression \citep{raswil06gpbook}. We use the subset of regressors method \citep{smola2000SGMA,luowahba97has} to approximate
the kernel using a much reduced subset of basis functions.
To select this subset we employ greedy online selection, similar to
\citep{csato2001sparse,engel2003gptd}, that adds a
candidate basis function based on its distance to the span of the previously chosen ones. One improvement is
that we consider a {\em supervised} criterion for the selection of the relevant basis functions
that takes into account the reduction of the cost in the original learning task
in addition to reducing the error incurred from
approximating the kernel. Since the per-step complexity during training and prediction depends
on the size of the subset, making sure that no unnecessary basis functions are selected ensures
more efficient usage of otherwise scarce resources. In this way learning in real-time
(a necessity for {\em keepaway}) becomes possible.
This paper is structured in three parts: the first part (Section~\ref{sec:background})
gives a brief introduction on reinforcement learning and carrying out general regression with regularization networks.
The second part (Section~\ref{sec:pe with rn}) describes and derives an efficient recursive implementation of the proposed
approach, particularly suited for online learning. The third
part describes the RoboCup-keepaway problem in more detail (Section~\ref{sec:robocup}) and contains
the results we were able to achieve (Section~\ref{sec:experiments and results}).
A longer discussion of related work is deferred to the end of the paper; there we
compare the similarities of our work with that of \citet{engel2003gptd,engel2005rlgptd,engel2005octopus}.
\section{Background}
\label{sec:background}
In this section we briefly review the subjects of RL and regularization networks.
\subsection{Reinforcement Learning}
Reinforcement learning (RL) is a simulation-based form of approximate dynamic programming, e.g. see
\citep{bert96neurodynamicprogram}. Consider a
discrete-time dynamical system with states $\mathcal S=\{1,\ldots,N\}$ (for ease of exposition we
assume the finite case). At each time step $t$, when the system is in state $s_t$, a decision
maker chooses a control-action $a_t$ (again, selected from a finite set $\mathcal A$ of admissible actions)
which changes probabilistically the state of the system to $s_{t+1}$, with distribution $P(s_{t+1}|s_t,a_t)$.
Every such transition yields an immediate reward $r_{t+1}=R(s_{t+1}|s_t,a_t)$. The ultimate goal
of the decision-maker is to choose a course of actions such that the long-term performance, a measure of the
cumulated sum of rewards, is maximized.
\subsubsection{Model-free Q-value function and optimal control}
Let $\pi$ denote a decision-rule (called the policy) that maps states to actions. For a fixed
policy $\pi$ we want to evaluate the state-action value function (Q-function) which for every state $s$ is
taken to be the expected infinite-horizon discounted sum of rewards obtained from starting in state $s$,
choosing action $a$ and then proceeding to select actions according to $\pi$:
\begin{equation}
\label{eq: Definition von Q}
Q^\pi(s,a):= E^\pi \left\{ \sum_{t\ge0} \gamma^t r_{t+1} |s_0=s, a_0=a \right\} \quad \forall s,a
\end{equation}
where $s_{t+1} \sim P(\cdot \ |s_t,\pi(s_t))$ and $r_{t+1}=R(s_{t+1}|s_t,\pi(s_t))$. The parameter
$\gamma\in (0,1)$ denotes a discount factor.
Ultimately, we are not directly interested in $Q^\pi$; our true goal is optimal control, i.e.\ we seek an
optimal policy $\pi^*=\mathop{\mathrm{argmax}}_\pi Q^\pi$. To accomplish
that, policy iteration interleaves the two steps policy evaluation and policy improvement:
First, compute
$Q^{\pi_k}$ for a fixed policy $\pi_k$. Then, once $Q^{\pi_k}$ is known, derive an improved policy $\pi_{k+1}$
by choosing the greedy policy with respect to $Q^{\pi_k}$, i.e.\ by
choosing in every state the action $\pi_{k+1}(s)=\mathop{\mathrm{argmax}}_a Q^{\pi_k}(s,a)$
that achieves the best Q-value. Obtaining the best action is trivial if we employ the
Q-notation, otherwise we would need the transition probabilities and reward function (i.e.\ a `model').
To compute the Q-function, one exploits the fact that $Q^\pi$ obeys the fixed-point relation $Q^\pi=\mathcal T_\pi Q^\pi$,
where $\mathcal T_\pi$ is the Bellman operator
\[
\bigl(\mathcal T_\pi Q\bigr)(s,a):= E_{s'\sim P(\cdot \ |s,a)} \left\{R(s'|s,a)+\gamma Q(s',\pi(s')) \right\}.
\]
In principle, it is possible to calculate $Q^\pi$
exactly by solving the corresponding linear system of equations, provided that the transition probabilities $P(s'|s,a)$
and rewards $R(s'|s,a)$ are known in advance and the number of states is finite and small.
However, in many practical situations this is not the case. If the number of states is very large or infinite, one
can only operate with an approximation of the Q-function, e.g. a linear approximation
$\tilde Q(s,a;\mathbf w)=\boldsymbol \phi_m(s,a)\trans\mathbf w$, where $\boldsymbol \phi_m(s,a)$ is an $m$-dimensional feature vector
and $\mathbf w$ the adjustable weight vector. To approximate the unknown expectation value one employs
simulation (i.e.\ an agent interacts with the environment) to generate a large
number of observed transitions. Figure~\ref{fig: API} depicts the resulting approximate policy iteration
framework: using only a parameterized $\tilde Q$ and sample transitions to emulate
application of $\mathcal T_\pi$ means that we can carry out the policy evaluation step only approximately.
Also, deriving a new policy from an approximation of $Q^{\pi_k}$ does not necessarily mean that
the new policy actually is an improved one; oscillations in policy space are possible. In practice, however,
approximate policy iteration is a fairly sound procedure that either converges or oscillates
with bounded suboptimality \citep{bert96neurodynamicprogram}.
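
To make the policy improvement step concrete, the following sketch evaluates a linearly parameterized Q-function and returns the greedy action for a given state; the feature map {\tt phi} and the finite action set are assumed to be supplied by the application, and the snippet is a generic illustration rather than the keepaway-specific implementation.
\begin{verbatim}
import numpy as np

def greedy_action(s, actions, phi, w):
    """Greedy policy improvement for Q(s,a;w) = phi(s,a)^T w."""
    q_values = [phi(s, a) @ w for a in actions]
    return actions[int(np.argmax(q_values))]
\end{verbatim}
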
Inferring a parameter vector $\mathbf w_k$ from sample transitions such that $\tilde Q(\cdot \ ;\mathbf w_k)$
is a good approximation to $Q^{\pi_k}$ is therefore the central problem addressed by
reinforcement learning. Chiefly two questions need to be answered:
\begin{enumerate}
\item By what method do we choose the parametrisation of $\tilde Q$ and carry out regression?
\item By what method do we learn the weight vector $\mathbf w$ of this approximation, given sample transitions?
\end{enumerate}
The latter can be solved by the family of temporal difference learning methods, with TD($\lambda$), initially
proposed by \citet{sutton88td}, being its most prominent member. Using a linearly parametrized
value function, it was shown in \citep{tsivanroy97convergece_of_td} that TD($\lambda$)
converges to the true value function (under certain technical assumptions).
\begin{figure}
\psfrag{pik}{{\small $\pi_k$}}
\psfrag{pikk}{{\small $\pi_{k+1}$}}
\psfrag{wk}{{\small $\mathbf w_{k}$}}
\psfrag{Q}{{\tiny $\tilde Q(\cdot \ ;\mathbf w_k) \approx Q^{\pi_k}$}}
\psfrag{max}{{\tiny $\tilde Q(\cdot \ ;\mathbf w_k)$}}
\psfrag{s0a0}{{\tiny $\{s_i,a_i,r_{i+1},s_{i+1}\}$}}
\centering
\includegraphics[width=0.7\textwidth]{api_eng.eps}
\caption{Approximate policy iteration framework.}
\label{fig: API}
\end{figure}
\subsubsection{Approximate policy evaluation with least-squares methods}
In what follows we will discuss three related algorithms for approximate policy evaluation that share most of the advantages
of TD($\lambda$) but converge much faster,
since they are based on solving a least-squares problem in closed form, whereas TD($\lambda$) is based on
stochastic gradient descent. All three methods assume that an (infinitely) long\footnote{If we are dealing with
an episodic learning task with designated terminal states, we can generate an infinite trajectory
in the following way: once an episode ends, we set the discount factor $\gamma$ to zero
and make a zero-reward transition from the terminal state to the start state of the next (following)
episode.} trajectory of states and rewards is generated using a simulation of the system (e.g. an
agent interacting with its environment). The trajectory starts from an initial state $s_0$ and
consists of tuples $(s_0,a_0),(s_1,a_1),\ldots$ and rewards $r_1,r_2,\ldots$ where action $a_i$ is
chosen according to $\pi$ and successor states and associated rewards are sampled from the underlying
transition probabilities. From now on, to abbreviate these state-action tuples, we will understand $\mathbf x_t$ as
denoting $\mathbf x_t := (s_t,a_t)$. Furthermore, we assume that the Q-function is parameterized by
$\tilde Q^\pi(\mathbf x;\mathbf w)=\boldsymbol \phi_m(\mathbf x)\trans\mathbf w$ and that $\mathbf w$ needs to be determined.
\paragraph{The LSPE($\lambda$) method.}
The method $\lambda$-least squares policy evaluation LSPE($\lambda$) was
proposed by \citet{nedicbert2003LSPE,BBN04@improved_td} and proceeds by making incremental changes to
the weights $\mathbf w$. Assume that at time $t$ (after having observed $t$ transitions) we have a current
weight vector $\mathbf w_t$ and observe a new transition from $\mathbf x_t$ to $\mathbf x_{t+1}$ with
associated reward $r_{t+1}$. Then we compute the solution $\hat \mathbf w_{t+1}$ of the least-squares problem
\begin{equation}
\label{eq:LSPE1}
\mathbf{\hat w}_{t+1}=\mathop{\mathrm{argmin}}_{\mathbf w} \sum_{i=0}^t\left\{ \boldsymbol \phi_m(\mathbf x_i)\trans\mathbf w - \boldsymbol \phi_m(\mathbf x_i)\trans\mathbf w_t -
\sum_{k=i}^t (\lambda \gamma)^{k-i} d(\mathbf x_k,\mathbf x_{k+1};\mathbf w_t) \right\}^2
\end{equation}
where
\[
d(\mathbf x_k,\mathbf x_{k+1};\mathbf w_t) := r_{k+1} + \gamma \boldsymbol \phi_m(\mathbf x_{k+1})\trans\mathbf w_t
- \boldsymbol \phi_m(\mathbf x_{k})\trans\mathbf w_t.
\]
The new weight vector $\mathbf w_{t+1}$ is obtained by setting
\begin{equation}
\label{eq:LSPE1b}
\mathbf w_{t+1}=\mathbf w_t + \eta_t (\mathbf{\hat w}_{t+1} - \mathbf w_t)
\end{equation}
where $\mathbf w_0$ is the initial weight vector and $0<\eta_t\le 1$ is a diminishing step size.
\paragraph{The LSTD($\lambda$) method.}
The least-squares temporal difference method LSTD($\lambda$) proposed by
\citet{bradtke96lstd} for $\lambda=0$ and by \citet{boyan99lstd} for general $\lambda \in [0,1]$ does not proceed
by making incremental changes to the weight vector $\mathbf w$. Instead, at time $t$
(after having observed $t$ transitions), the weight vector $\mathbf w_{t+1}$
is obtained by solving the fixed-point equation
\begin{equation}
\label{eq:LSTD1}
\mathbf{\hat w}=\mathop{\mathrm{argmin}}_{\mathbf w} \sum_{i=0}^t \left\{ \boldsymbol \phi_m(\mathbf x_i)\trans\mathbf w - \boldsymbol \phi_m(\mathbf x_i)\trans\mathbf{\hat w} -
\sum_{k=i}^t (\lambda \gamma)^{k-i} d(\mathbf x_k,\mathbf x_{k+1};\mathbf{\hat w}) \right\}^2
\end{equation}
for $\mathbf{\hat w}$, where
\[
d(\mathbf x_k,\mathbf x_{k+1};\mathbf{\hat w}) := r_{k+1} + \gamma \boldsymbol \phi_m(\mathbf x_{k+1})\trans\mathbf{\hat w}
- \boldsymbol \phi_m(\mathbf x_{k})\trans\mathbf{\hat w} ,
\]
and setting $\mathbf w_{t+1}$ to this unique solution.
\paragraph{Comparison of LSPE and LSTD.}
The similarities and differences between LSPE($\lambda$) and LSTD($\lambda$) are listed in Table~\ref{tab:vergleich_von_lspe_lstd}.
Both LSPE($\lambda$) and LSTD($\lambda$) converge to the same limit \citep[see][]{BBN04@improved_td}, which
is also the limit to which TD($\lambda$) converges (the initial iterates may be vastly different though).
Both methods rely on the solution of a least-squares problem (either explicitly
as is the case in LSPE or implicitly as is the case in LSTD) and can be efficiently implemented using
recursive computations. Computational experiments in \citep{Ioffe96@lambdapi} or \citep{lagoudakis2003lspi}
indicate that both approaches can perform much better than TD($\lambda$).
LSPE and LSTD differ in the role they play within the approximate policy iteration framework.
LSPE can take advantage of previous estimates of the weight vector and can hence be used in
the context of optimistic policy iteration (OPI), i.e.\ the policy under consideration gets improved
following very few observed transitions. For LSTD this is not possible; here a more rigid
actor-critic approach is called for.
LSPE and LSTD also differ in their relation to standard least-squares
regression. LSPE directly minimizes a quadratic objective function. Using this function it is possible to carry out `supervised' basis selection, where the reduction of the cost (the quantity we are trying to minimize) is taken into account when selecting basis functions. For LSTD this is not possible; here
we are in fact solving a fixed point equation that employs least-squares only implicitly
(to carry out the projection).
\begin{table}
{\small
\begin{center}
\begin{tabular}{|lll|}
\hline
\bfseries BRM& \bfseries LSTD & \bfseries LSPE\\
\hline
Corresponds to TD($0$) & Corresponds to TD($\lambda$) & Corresponds to TD($\lambda$)\\
Deterministic transitions only& Stochastic transitions possible & Stochastic transitions possible \\
No OPI & No OPI & OPI possible\\
Explicit least-squares & Least-squares only implicitly & Explicit least-squares\\
$\Rightarrow$ Supervised basis selection & $\Rightarrow$ No supervised basis selection & $\Rightarrow$ Supervised basis selection\\
\hline
\end{tabular}
\caption{Comparison of least-squares policy evaluation}
\label{tab:vergleich_von_lspe_lstd}
\end{center}
}
\end{table}
\paragraph{The BRM method.}
A third approach, related to LSTD(0) is the
direct minimization of the Bellman residuals (BRM), as proposed in \citep{baird95residual,lagoudakis2003lspi}.
Here, at time $t$, the weight vector $\mathbf w_{t+1}$ is obtained from solving the least-squares problem
\[
\mathbf w_{t+1}=\mathop{\mathrm{argmin}}_{\mathbf w} \sum_{i=0}^t \left\{ \boldsymbol \phi_m(\mathbf x_i)\trans\mathbf w - \sum_{s'} P(s'|s_i,\pi(s_i))
\left[ R(s'|s_i,\pi(s_i)) + \gamma \boldsymbol \phi_m(s',\pi(s'))\trans\mathbf w \right] \right\}^2
\]
Unfortunately, the transition probabilities cannot be approximated by using single samples from
the trajectory; one would need `doubled' samples to obtain an unbiased estimate \citep[see][]{baird95residual}. Thus this method
is only applicable to tasks with deterministic state transitions or known state dynamics, two
conditions which are both violated
in our application to RoboCup-keepaway. Nevertheless
we will treat the deterministic case first in all our derivations, since LSPE and
LSTD require only very minor changes to the resulting implementation. Using BRM with
deterministic transitions amounts to solving the least-squares problem
\begin{equation}
\label{eq:BRM}
\mathbf w_{t+1}=\mathop{\mathrm{argmin}}_{\mathbf w} \sum_{i=0}^t \left\{ \boldsymbol \phi_m(\mathbf x_i)\trans\mathbf w - r_{i+1} - \gamma \boldsymbol \phi_m(\mathbf x_{i+1})\trans\mathbf w \right\}^2
\end{equation}
\subsection{Standard regression with regularization networks}
From the foregoing discussion we have seen that
(approximate) policy evaluation can amount to a
traditional function approximation problem. For this purpose we will here consider
the family of regularization networks \citep{RN95}, which are functionally
equivalent to kernel ridge regression and Bayesian regression with Gaussian
processes \citep{raswil06gpbook}. Here however, we will introduce them
from the non-Bayesian regularization perspective as in \citep{smola2000SGMA}.
\subsubsection{Solving the full problem}
Given $t$ training examples $\{\mathbf x_i,y_i\}_{i=1}^t$ with inputs $\mathbf x_i$ and observed outputs $y_i$,
to reconstruct the underlying function,
one considers candidates from a function space $\mathcal H_k$,
where $\mathcal H_k$ is a reproducing kernel Hilbert space with reproducing
kernel $k$ \citep[e.g.][]{wahba1990}, and
searches among all possible candidates for
the function $f \in \mathcal H_k$ that achieves the minimum in the risk functional $\sum (y_i-f(\mathbf x_i))^2
+ \sigma^2 \norm{f}_{\mathcal H_k}$. The scalar $\sigma^2$ is a regularization parameter.
Since solutions to this variational problem may be represented through the data alone \citep{wahba1990}
as $f(\cdot)=\sum k(\mathbf x_i,\cdot) w_i$, the unknown weight vector $\mathbf w$ is obtained from solving the quadratic
problem
\begin{equation}
\label{eq:full_problem}
\min_{\mathbf w \in \mathbb R^t} \ (\mathbf K \mathbf w - \mathbf y)\trans(\mathbf K \mathbf w -\mathbf y) + \sigma^2 \mathbf w\trans\mathbf K \mathbf w
\end{equation}
The solution to \eqref{eq:full_problem} is $\mathbf w=(\mathbf K+\sigma^2 \mathbf I)^{-1} \mathbf y$, where
$\mathbf y=\rowvector{y_1,\ldots y_t}$ and $\mathbf K$ is the $t \times t$ kernel matrix $[\mathbf K]_{ij}=k(\mathbf x_i,\mathbf x_j)$.
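
For illustration, a minimal sketch of the full regularization network with a Gaussian kernel follows; the kernel width and regularization parameter are arbitrary illustrative choices.
\begin{verbatim}
import numpy as np

def rbf_kernel(X, Y, width=1.0):
    """Gaussian kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_full_rn(X, y, sigma2=0.1, width=1.0):
    """Solve w = (K + sigma^2 I)^{-1} y for the full problem."""
    K = rbf_kernel(X, X, width)
    return np.linalg.solve(K + sigma2 * np.eye(len(X)), y)

def predict_full_rn(Xstar, X, w, width=1.0):
    """f(x_*) = sum_i k(x_i, x_*) w_i."""
    return rbf_kernel(Xstar, X, width) @ w
\end{verbatim}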
\subsubsection{Subset of regressor approximation}
\label{sec:subset of regressors}
Often, one is not willing to solve the full $t$-by-$t$ problem in \eqref{eq:full_problem} when
the number of training examples $t$ is large and instead considers means of approximation. In
the subset of regressors (SR) approach \citep{poggiogirosi90rn,luowahba97has,smola2000SGMA} one chooses
a subset
$\{\mathbf{\tilde x}_i\}_{i=1}^m$ of the data, with $m \ll t$, and approximates the kernel
for arbitrary $\mathbf x,\mathbf x'$ by taking
\begin{equation}
\label{eq:kernel_approx}
k(\mathbf x,\mathbf x')=\bkm{\mathbf x}\trans \bKmmi \bkm{\mathbf x'}.
\end{equation}
Here $\bkm{\mathbf x}$ denotes the $m \times 1$ feature vector
$\bkm{\mathbf x}=\rowvector{k(\mathbf{\tilde x}_1,\mathbf x),\ldots,k(\mathbf{\tilde x}_m,\mathbf x)}$
and the $m \times m$ matrix $\bKmm$ is the submatrix $[\bKmm]_{ij}=k(\mathbf{\tilde x}_i,\mathbf{\tilde x}_j)$ of
the full kernel matrix $\mathbf K$. Replacing the kernel in \eqref{eq:full_problem} by expression
\eqref{eq:kernel_approx} gives
\[
\min_{\mathbf w \in \mathbb R^m} (\mathbf K_{tm} \mathbf w - \mathbf y)\trans(\mathbf K_{tm} \mathbf w -\mathbf y) + \sigma^2 \mathbf w\trans\bKmm \mathbf w
\]
with solution
\begin{equation}
\label{eq:SR_solution}
\mathbf w_t=\bigl( \mathbf K_{tm}\trans \mathbf K_{tm} + \sigma^2 \bKmm\bigr)^{-1} \mathbf K_{tm}\trans \mathbf y
\end{equation}
where $\mathbf K_{tm}$ is the $t \times m$ submatrix $[\mathbf K_{tm}]_{ij}=k(\mathbf x_i,\mathbf{\tilde x}_j)$ corresponding
to the $m$ columns of the data points in the subset. Learning
the weight vector $\mathbf w_t$ from \eqref{eq:SR_solution} costs $\mathcal O(tm^2)$ operations.
Afterwards, predictions for unknown test points $\mathbf x_*$ are made by
$f(\mathbf x_*)=\bkm{\mathbf x_*}\trans\mathbf w$ at $\mathcal O(m)$ operations.
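
The corresponding sketch for the subset of regressors solution \eqref{eq:SR_solution}, reusing the {\tt rbf\_kernel} helper from the previous sketch and taking the dictionary as given, reads:
\begin{verbatim}
import numpy as np

def fit_sr(X, y, X_bv, sigma2=0.1, width=1.0):
    """w = (Ktm^T Ktm + sigma^2 Kmm)^{-1} Ktm^T y."""
    Ktm = rbf_kernel(X, X_bv, width)      # t x m
    Kmm = rbf_kernel(X_bv, X_bv, width)   # m x m
    A = Ktm.T @ Ktm + sigma2 * Kmm
    return np.linalg.solve(A, Ktm.T @ y)

def predict_sr(Xstar, X_bv, w, width=1.0):
    """f(x_*) = k_m(x_*)^T w."""
    return rbf_kernel(Xstar, X_bv, width) @ w
\end{verbatim}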
\subsubsection{Online selection of the subset}
\label{sect:SOG}
To choose the subset of relevant basis functions (termed the dictionary or set of basis vectors $\mathcal{BV}$),
many different approaches are possible; typically they can be distinguished as being unsupervised or supervised.
Unsupervised approaches like random selection \citep{williams01nystroem} or the incomplete Cholesky decomposition
\citep{fine01efficientsvmtrainin} do not use information about the task we want to solve, i.e.\ the response variable
we wish to regress upon. Random selection does not use any information at all whereas incomplete Cholesky
aims at reducing the error incurred from approximating the kernel matrix. Supervised choice of the subset
does take into account the response variable and usually proceeds by greedy forward selection, using e.g.
matching pursuit techniques \citep{smola01sparsegpr}.
However, none of these approaches are directly applicable for sequential learning, since the complete set
of basis function candidates must be known from the start. Instead, assume that the data becomes available
only sequentially at $t=1,2,\ldots$ and that only one pass over the data set is possible, so that we
cannot select the subset $\mathcal{BV}$ in advance. Working in the context of Gaussian process regression,
\citet{csato2001sparse} and later
\citet{engel2003gptd} have proposed a sparse greedy online approximation: start from an empty set of $\mathcal{BV}$ and
examine at every time step $t$ if the new
example needs to be included in $\mathcal{BV}$ or if it can be processed without augmenting $\mathcal{BV}$.
The criterion they employ to make that decision is an unsupervised one: at every time step $t$ compute
for the new data point $\mathbf x_t$ the error
\begin{equation}
\label{eq:ALD-test}
\delta_t=k(\mathbf x_t,\mathbf x_t) - \bkm{\mathbf x_t}\trans \bKmmi \bkm{\mathbf x_t}
\end{equation}
incurred from approximating the new data point using the current $\mathcal{BV}$. If $\delta_t$ exceeds a given threshold
then it is considered as sufficiently different and added to the dictionary $\mathcal{BV}$. Note that only the current
number of elements in $\mathcal{BV}$ at a given time $t$ is considered, the contribution from basis functions
that will be added at a later time is ignored.
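
A sketch of this unsupervised online test \eqref{eq:ALD-test} is shown below; the inverse kernel matrix of the dictionary is kept explicitly and grown by the standard partitioned-inverse formula when an example is admitted (the supervised refinement discussed later adds a second, loss-based test, which is omitted here). The caller appends the new input to the dictionary whenever the function reports it as admitted.
\begin{verbatim}
import numpy as np

def ald_step(x_new, BV, Kmm_inv, kernel, threshold=1e-2):
    """Returns (admitted, new_Kmm_inv) for one candidate input."""
    if not BV:
        return True, np.array([[1.0 / kernel(x_new, x_new)]])
    k_vec = np.array([kernel(x, x_new) for x in BV])
    a = Kmm_inv @ k_vec
    delta = kernel(x_new, x_new) - k_vec @ a
    if delta > threshold:
        m = len(BV)
        new_inv = np.zeros((m + 1, m + 1))
        new_inv[:m, :m] = Kmm_inv + np.outer(a, a) / delta
        new_inv[:m, m] = -a / delta
        new_inv[m, :m] = -a / delta
        new_inv[m, m] = 1.0 / delta
        return True, new_inv
    return False, Kmm_inv
\end{verbatim}
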
In this case, it might be instructive to visualize what happens to the $t \times m$ data
matrix $\mathbf K_{tm}$ once $\mathcal{BV}$ is augmented. Adding the new element $\mathbf x_t$ to $\mathcal{BV}$ means adding a new
basis function (centered on $\mathbf x_t$) to the model and consequently adding a new associated column
$\mathbf q=\rowvector{k(\mathbf x_1,\mathbf x_t),\ldots,k(\mathbf x_t,\mathbf x_t)}$ to $\mathbf K_{tm}$. With sparse online approximation all
$t-1$ past entries in $\mathbf q$ are given
by $k(\mathbf x_i,\mathbf x_t)\approx \bkm{\mathbf x_i}\trans \bKmmi \bkm{\mathbf x_t}$, $i=1\ldots,t-1$, which is exact for the
$m$ basis-elements and an approximation for the remaining $t-m-1$ non-basis elements. Hence, going
from $m$ to $m+1$ basis functions, we have that
\begin{equation}
\mathbf K_{t,m+1}=\begin{bmatrix} \mathbf K_{tm} & \mathbf q \end{bmatrix}=
\begin{bmatrix}
\mathbf K_{t-1,m} & \mathbf K_{t-1,m} \mathbf a_t \\ \bkm{\mathbf x_t}\trans & k(\mathbf x_t,\mathbf x_t)
\end{bmatrix}.
\end{equation}
where $\mathbf a_t := \bKmmi \bkm{\mathbf x_t}$.
The overall effect is that now we do not need to access the full data set any longer. All costly
$\mathcal O(tm)$ operations that arise from adding a new column, i.e.\ adding a new basis function,
computing the reduction of error during greedy forward selection of basis functions, or computing
predictive variance with augmentation as in \citep{rqc2005healing},
now become a more affordable $\mathcal O(m^2)$.
This is exploited in \citep{mein_icann2006}; here a simple modification of the
selection procedure is presented, where in addition to the unsupervised criterion
from \eqref{eq:ALD-test} the contribution to the reduction of the error (i.e.\ the
objective function one is trying to minimize) is taken into
account. Since the per-step complexity during training and then later during prediction critically
depends on the size $m$ of the subset $\mathcal{BV}$, making sure that no unnecessary basis functions are
selected ensures more efficient usage of otherwise scarce resources and makes learning in real-time
(a necessity for keepaway) possible.
\section{Policy evaluation with regularization networks}
\label{sec:pe with rn}
We now present an efficient online implementation for least-squares-based policy evaluation
(applicable to the methods LSPE, LSTD, BRM) to be used in the framework of approximate
policy iteration (see Figure~\ref{fig: API}). Our implementation
combines the aforementioned automatic
selection of basis functions (from Section~\ref{sect:SOG}) with a recursive computation of the weight vector
corresponding to the regularization network (from Section~\ref{sec:subset of regressors}) to represent the
underlying Q-function.
The goal is to infer an approximation $\tilde Q(\cdot \ ;\mathbf w)$ of $Q^\pi$, the unknown Q-function
of some given policy $\pi$. The training examples are taken from an observed trajectory
$\mathbf x_0, \mathbf x_1, \mathbf x_2,\ldots$ with associated rewards $r_1,r_2,\ldots$ where $\mathbf x_i$
denotes state-action tuples $\mathbf x_i := (s_i,a_i)$
and action $a_i=\pi(s_i)$ is selected according to policy $\pi$.
\subsection{Stating LSPE, LSTD and BRM with regularization networks}
First, express each of the three problems LSPE in eq.~\eqref{eq:LSPE1}, LSTD in eq.~\eqref{eq:LSTD1} and
BRM in eq.~\eqref{eq:BRM} in more compact
matrix form using regularization networks from \eqref{eq:SR_solution}. Assume that the dictionary $\mathcal{BV}$ contains
$m$ basis functions.
Further assume that at time $t$ (after having observed $t$ transitions) a new transition from $\mathbf x_t$ to $\mathbf x_{t+1}$
under reward $r_{t+1}$ is observed. From now on we will use a double index (also for vectors) to indicate the
dependence in the number of examples $t$ and the number of basis functions $m$.
Define the matrices:
\begin{gather}
\mathbf K_{t+1,m}:=
\begin{bmatrix}
\bkm{\mathbf x_0}\trans \\
\vdots \\
\bkm{\mathbf x_t}\trans
\end{bmatrix}, \quad
\mathbf H_{t+1,m}:=
\begin{bmatrix}
\bkm{\mathbf x_0}\trans-\gamma \bkm{\mathbf x_1}\trans \\
\vdots \\
\bkm{\mathbf x_t}\trans-\gamma \bkm{\mathbf x_{t+1}}\trans
\end{bmatrix} \bigskip \nonumber \\
\mathbf r_{t+1}:= \begin{bmatrix} r_1 \\ \vdots\\ r_{t+1} \end{bmatrix}, \quad
\boldsymbol \Lambda_{t+1}:=
\begin{bmatrix}
1 & (\lambda \gamma)^1 & \cdots &(\lambda \gamma)^t \\
0 & \ddots & & \vdots \\
\vdots & \ddots & 1 & (\lambda \gamma)^1 \\
0 & \cdots & 0 & 1
\end{bmatrix}
\label{eq:define_data_matrix}
\end{gather}
where, as before, $m \times 1$ vector $\bkm{\cdot}$ denotes
$\bkm{\cdot}=\rowvector{k(\cdot,\mathbf{\tilde x}_1),\ldots,k(\cdot,\mathbf{\tilde x}_m)}$.
\subsubsection{The LSPE($\lambda$) method}
Then, for LSPE($\lambda$), the least-squares problem \eqref{eq:LSPE1} is stated as
($\bw_{tm}$ being the weight vector of the previous step):
\begin{eqnarray*}
\mathbf{\hat w}_{t+1,m}&=&\mathop{\mathrm{argmin}}_{\mathbf w} \ \Bigl\{ \norm{\mathbf K_{t+1,m}\mathbf w - \mathbf K_{t+1,m} \bw_{tm} - \boldsymbol \Lambda_{t+1}
\bigl(\mathbf r_{t+1} - \mathbf H_{t+1,m}\bw_{tm}\bigr)}^2 \nonumber \\
& & \qquad \qquad + \sigma^2 (\mathbf w-\bw_{tm})\trans \bKmm (\mathbf w-\bw_{tm}) \Bigr\}
\end{eqnarray*}
Computing the derivative wrt $\mathbf w$ and setting it to zero, one obtains for $\mathbf{\hat w}_{t+1,m}$:
\[
\mathbf{\hat w}_{t+1,m} =\bw_{tm} + \bigl(\mathbf K_{t+1,m}\trans \mathbf K_{t+1,m} + \sigma^2 \bKmm\bigr)^{-1}
\bigl( \mathbf Z_{t+1,m}\trans \mathbf r_{t+1} - \mathbf Z_{t+1,m}\trans\mathbf H_{t+1,m}\bw_{tm} \bigr)
\]
where in the last line we have substituted $\mathbf Z_{t+1,m}:= \boldsymbol \Lambda_{t+1}\trans \mathbf K_{t+1,m}$.
From \eqref{eq:LSPE1b} the next iterate $\bw_{t+1,m}$ for the weight vector in LSPE($\lambda$)
is thus obtained by
\begin{eqnarray}
\label{eq:LSPE3}
\bw_{t+1,m}&=&\bw_{tm} + \eta_t (\mathbf{\hat w}_{t+1,m} - \bw_{tm})=\bw_{tm}+\eta_t
\bigl(\mathbf K_{t+1,m}\trans \mathbf K_{t+1,m} + \sigma^2 \bKmm\bigr)^{-1} \nonumber \\
& &
\bigl( \mathbf Z_{t+1,m}\trans \mathbf r_{t+1} - \mathbf Z_{t+1,m}\trans\mathbf H_{t+1,m} \bw_{tm} \bigr)
\end{eqnarray}
\subsubsection{The LSTD($\lambda$) method}
Likewise, for LSTD($\lambda$), the fixed point equation \eqref{eq:LSTD1} is stated as:
\begin{eqnarray*}
\mathbf{\hat w} &=&\mathop{\mathrm{argmin}}_{\mathbf w} \ \Bigl\{ \norm{\mathbf K_{t+1,m}\mathbf w - \mathbf K_{t+1,m} \mathbf{\hat w} - \boldsymbol \Lambda_{t+1}
\bigl(\mathbf r_{t+1} - \mathbf H_{t+1,m}\mathbf{\hat w}\bigr)}^2 \nonumber \\
& & \qquad \qquad + \sigma^2 \mathbf w\trans \bKmm \mathbf w \Bigr\}.
\end{eqnarray*}
Computing the derivative with respect to $\mathbf w$ and setting it to zero, one obtains
\[ \bigl( \mathbf Z_{t+1,m}\trans \mathbf H_{t+1,m} + \sigma^2 \bKmm \bigr) \mathbf{\hat w}=\mathbf Z_{t+1,m}\trans \mathbf r_{t+1}. \]
Thus the solution $\bw_{t+1,m}$ to the fixed point equation in LSTD($\lambda$) is
given by:
\begin{equation}
\label{eq:LSTD3}
\bw_{t+1,m}=\bigl( \mathbf Z_{t+1,m}\trans\mathbf H_{t+1,m} + \sigma^2 \bKmm \bigr)^{-1} \mathbf Z_{t+1,m}\trans \mathbf r_{t+1}
\end{equation}
\subsubsection{The BRM method}
Finally, for the case of BRM, the least-squares problem \eqref{eq:BRM} is stated as:
\[
\bw_{t+1,m}=\mathop{\mathrm{argmin}}_\mathbf w \ \Bigl\{ \norm{\mathbf r_{t+1}- \mathbf H_{t+1,m}\mathbf w }^2 + \sigma^2 \mathbf w\trans \bKmm \mathbf w \Bigr\}
\]
Thus again, one obtains the weight vector $\bw_{t+1,m}$ by
\begin{equation}
\label{eq:BRM2}
\bw_{t+1,m}=\bigl( \mathbf H_{t+1,m}\trans\mathbf H_{t+1,m} + \sigma^2 \bKmm \bigr)^{-1} \mathbf H_{t+1,m}\trans \mathbf r_{t+1}
\end{equation}
\subsection{Outline of the recursive implementation}
Note that all three methods amount to solving a very similarly structured set of linear
equations in eqs. \eqref{eq:LSPE3},\eqref{eq:LSTD3},\eqref{eq:BRM2}. Overloading the notation,
these can be compactly stated as follows (a small batch reference implementation is sketched right after the list):
\begin{itemize}
\item {\bfseries LSPE:} solve
\begin{equation}
\bw_{t+1,m}=\bw_{tm} + \eta \mathbf P_{t+1,m}^{-1}(\mathbf b_{t+1,m}-\mathbf A_{t+1,m}\bw_{tm}) \tag{\ref{eq:LSPE3}'}
\end{equation}
where
\begin{itemize}
\item $\mathbf P_{t+1,m}^{-1} := (\mathbf K_{t+1,m}\trans\mathbf K_{t+1,m} + \sigma^2 \bKmm)^{-1}$
\item $\mathbf b_{t+1,m} := \mathbf Z_{t+1,m}\trans\mathbf r_{t+1}$
\item $\mathbf A_{t+1,m} := \mathbf Z_{t+1,m}\trans\mathbf H_{t+1,m}$
\end{itemize}
%
\item {\bfseries LSTD:} solve
\begin{equation}
\bw_{t+1,m}=\mathbf P_{t+1,m}^{-1}\mathbf b_{t+1,m} \tag{\ref{eq:LSTD3}'}
\end{equation}
where
\begin{itemize}
\item $\mathbf P_{t+1,m}^{-1} := (\mathbf Z_{t+1,m}\trans\mathbf H_{t+1,m} + \sigma^2 \bKmm)^{-1}$
\item $\mathbf b_{t+1,m} := \mathbf Z_{t+1,m}\trans\mathbf r_{t+1}$
\end{itemize}
\item {\bfseries BRM:} solve
\begin{equation}
\bw_{t+1,m}=\mathbf P_{t+1,m}^{-1}\mathbf b_{t+1,m} \tag{\ref{eq:BRM2}'}
\end{equation}
where
\begin{itemize}
\item $\mathbf P_{t+1,m}^{-1} := (\mathbf H_{t+1,m}\trans\mathbf H_{t+1,m} + \sigma^2 \bKmm)^{-1}$
\item $\mathbf b_{t+1,m} := \mathbf H_{t+1,m}\trans\mathbf r_{t+1}$
\end{itemize}
\end{itemize}
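For concreteness, the following is a minimal NumPy sketch of the three batch solutions stated above; it is meant purely as a reference for checking the recursive implementation derived below (variable names mirror the matrices of \eqref{eq:define_data_matrix}; this is not the code used in our experiments).
\begin{verbatim}
import numpy as np

def solve_lspe(w, K, H, Z, r, Kmm, sigma2, eta):
    # (LSPE3'): w <- w + eta * P^{-1} (b - A w)
    P = K.T @ K + sigma2 * Kmm
    return w + eta * np.linalg.solve(P, Z.T @ r - (Z.T @ H) @ w)

def solve_lstd(H, Z, r, Kmm, sigma2):
    # (LSTD3'): w = (Z^T H + sigma^2 K_mm)^{-1} Z^T r
    return np.linalg.solve(Z.T @ H + sigma2 * Kmm, Z.T @ r)

def solve_brm(H, r, Kmm, sigma2):
    # (BRM2'): w = (H^T H + sigma^2 K_mm)^{-1} H^T r
    return np.linalg.solve(H.T @ H + sigma2 * Kmm, H.T @ r)
\end{verbatim}
Re-solving these systems from scratch at every time step is exactly what the recursive updates below avoid.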
Each time a new transition from $\mathbf x_{t}$ to $\mathbf x_{t+1}$ under reward $r_{t+1}$ is observed, the goal is to recursively
\begin{enumerate}
\item update the weight vector $\bw_{tm}$, and
\item possibly augment the model, adding a new basis function (centered on $\mathbf x_{t+1}$) to the set of
currently selected basis functions $\mathcal{BV}$.
\end{enumerate}
More
specifically, we will perform one or both of the following update operations:
\begin{enumerate}
\item {\em Normal step}: Process $(\mathbf x_{t+1},r_{t+1})$ using the current fixed set of
basis functions $\mathcal{BV}$.
\item {\em Growing step}: If the new example is sufficiently different from the previous examples in $\mathcal{BV}$
(i.e.\ the reconstruction error in (\ref{eq:ALD-test}) exceeds a given threshold)
and strongly contributes to the solution of the problem (i.e.\ the decrease of the
loss when adding the new basis function is greater than a given threshold)
then the current example is added to $\mathcal{BV}$ and the number of basis functions
in the model is increased by one.
\end{enumerate}
The update operations work along the lines of recursive least squares (RLS), i.e.\
propagate forward the inverse\footnote{A better alternative (from the standpoint of
numerical implementation) would be to not propagate forward the inverse, but instead to work with
the Cholesky factor. For this paper we chose the first method in the first place because it gives consistent
update formulas for all three considered problems (note that for LSTD the cross-product matrix is not symmetric) and overall allows a better exposition. For details on the second way, see e.g. \citep{sayed03adaptivefiltering}.} of the $m \times m$ cross product matrix $\mathbf P_{tm}$.
Integral to the derivation of these updates are two well-known matrix identities for recursively computing the
inverse of a matrix: (for matrices with compatible dimensions)
\begin{equation}
\label{eq:SMW}
\text{if } \mathbf B_{t+1}=\mathbf B_t+\mathbf b\bb\trans
%
\text{ then }
\mathbf B_{t+1}^{-1}=\mathbf B_t^{-1} - \frac{\mathbf B_t^{-1}\mathbf b\bb\trans\mathbf B_t^{-1}}{1+\mathbf b\trans \mathbf B_t^{-1} \mathbf b}
\end{equation}
which is used when adding a row to the data matrix. Likewise,
\begin{equation}
\label{eq:PMI}
\text{if } \mathbf B_{t+1}= \begin{bmatrix} \mathbf B_t & \mathbf b \\ \mathbf b\trans & b^* \end{bmatrix}
\text{ then }
\mathbf B_{t+1}^{-1}=
\begin{bmatrix}\mathbf B_t^{-1} & \mathbf 0 \\ \mathbf 0 & 0 \end{bmatrix} + \frac{1}{\Delta_b}
%
\begin{bmatrix}-\mathbf B_t^{-1}\mathbf b \\ 1\end{bmatrix}
\begin{bmatrix}-\mathbf B_t^{-1}\mathbf b \\ 1\end{bmatrix}\trans
\end{equation}
with $\Delta_b=b^*-\mathbf b\trans \mathbf B_t^{-1}\mathbf b$. This second update is used when adding a column
to the data matrix.
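Both identities are easy to state in code; the following NumPy sketch also includes a self-check against direct inversion (the random test matrix is for illustration only).
\begin{verbatim}
import numpy as np

def rank_one_inverse_update(B_inv, b):
    # (eq. SMW): inverse of B + b b^T, given B^{-1}
    Bb = B_inv @ b
    return B_inv - np.outer(Bb, Bb) / (1.0 + b @ Bb)

def block_inverse_update(B_inv, b, b_star):
    # (eq. PMI): inverse of [[B, b], [b^T, b*]], given B^{-1} (B symmetric)
    Bb = B_inv @ b
    delta_b = b_star - b @ Bb
    v = np.append(-Bb, 1.0)
    m = B_inv.shape[0]
    out = np.zeros((m + 1, m + 1))
    out[:m, :m] = B_inv
    return out + np.outer(v, v) / delta_b

# self-check on a random symmetric positive definite matrix
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)); B = A @ A.T + np.eye(4)
b = rng.normal(size=4); b_star = 5.0
assert np.allclose(rank_one_inverse_update(np.linalg.inv(B), b),
                   np.linalg.inv(B + np.outer(b, b)))
M = np.block([[B, b[:, None]], [b[None, :], np.array([[b_star]])]])
assert np.allclose(block_inverse_update(np.linalg.inv(B), b, b_star),
                   np.linalg.inv(M))
\end{verbatim}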
An outline of the general implementation applicable to all three of the methods LSPE, LSTD, and BRM is
sketched in Figure~\ref{fig:algorithm}. To avoid unnecessary repetitions we will here only derive the update equations
for the BRM method; the other two are obtained with very minor modifications and are summarized in
the appendix.
\begin{figure*}[tbh]
{\footnotesize
\begin{center}
\begin{tabular}{|p{0.95\textwidth}|}
\hline
{\bf Relevant symbols:} \\
\hspace*{1.4cm} \begin{tabular}{rrl}
//& $\pi$: & Policy, whose value function $Q^\pi$ we want to estimate\\
//& $t$: & Number of transitions seen so far \\
//& $m$: & Current number of basis functions in $\mathcal{BV}$ \\
//& $\mathbf P_{tm}^{-1}$: & Cross product matrix used to compute $\bw_{tm}$ \\
//& $\bw_{tm}$: & Weights of $\tilde Q(\cdot \ ;\bw_{tm})$, the current approximation to $Q^\pi$\\
//& $\bKmmi$: & Used during approximation of kernel\\
\end{tabular} \\
\smallskip
{\bf Initialization:}\\
\hspace*{1.4cm} \begin{tabular}{p{0.8\textwidth}}
Generate first state $s_0$. Choose action $a_0=\pi(s_0)$. Execute $a_0$ and observe
$s_1$ and $r_1$. Choose $a_1=\pi(s_1)$. Let $\mathbf x_0:=(s_0,a_0)$ and
$\mathbf x_1:=(s_1,a_1)$. Initialize the set of basis functions
$\mathcal{BV}:=\{\mathbf x_0,\mathbf x_1\}$ and $\mathbf K_{2,2}^{-1}$. Initialize $\mathbf P_{1,2}^{-1}$,
$\mathbf w_{1,2}$ according to either LSPE, LSTD or BRM. Set $t:= 1$ and
$m := 2$.
\end{tabular} \smallskip \\
{\bf Loop:} {\bfseries For} $t=1,2,\ldots$ \\
\hspace*{1.4cm} \begin{tabular}{p{0.8\textwidth}}
Execute action $a_t$ (simulate a transition). \\
Observe next state $s_{t+1}$ and reward $r_{t+1}$. \\
Choose action $a_{t+1}=\pi(s_{t+1})$. Let $\mathbf x_{t+1}:=(s_{t+1},a_{t+1})$. \\ \smallskip
{\bfseries Step 1:} Check, if $\mathbf x_{t+1}$ should be added to the set of basis functions. \\
\hspace*{0.4cm} Unsupervised basis selection: return true if \eqref{eq:ALD-test}$>\texttt{TOL1}$.\\
\hspace*{0.4cm} Supervised basis selection: return true if \eqref{eq:ALD-test}$>\texttt{TOL1}$\\
\hspace*{4.4cm} and additionally if either \eqref{eq:xitmm} or (\ref{eq:xitmm}'')$>\texttt{TOL2}$. \\ \smallskip
{\bfseries Step 2: Normal step} \\
\hspace*{0.4cm} Obtain $\mathbf P_{t+1,m}^{-1}$ from either
\eqref{eq:normal Pittm},(\ref{eq:normal Pittm}'), or (\ref{eq:normal Pittm}''). \\
\hspace*{0.4cm} Obtain $\bw_{t+1,m}$ from either
\eqref{eq:normal wttm},(\ref{eq:normal wttm}'), or (\ref{eq:normal wttm}''). \\ \smallskip
{\bfseries Step 3: Growing step} (only when step 1 returned true) \\
\hspace*{0.4cm} Obtain $\mathbf P_{t+1,m+1}^{-1}$ from either
\eqref{eq:Pitmm},(\ref{eq:Pitmm}'), or (\ref{eq:Pitmm}''). \\
\hspace*{0.4cm} Obtain $\bw_{t+1,m+1}$ from either
\eqref{eq:bbetatmm},(\ref{eq:bbetatmm}'), or (\ref{eq:bbetatmm}''). \\
\hspace*{0.4cm} Add $\mathbf x_{t+1}$ to $\mathcal{BV}$ and obtain $\mathbf K_{m+1,m+1}$ from
\eqref{eq: aktualisiere Kmm}. \\
\hspace*{0.4cm} $m:=m+1$ \\ \smallskip
%
$t:=t+1$, $s_t:=s_{t+1}$, $a_t:=a_{t+1}$\\
\end{tabular} \\
\hline
\end{tabular}
\end{center}
}
\caption{Online policy evaluation with growing regularization networks. This pseudo-code applies to BRM, LSPE
and LSTD, see the appendix for the exact equations. The computational complexity per observed
transition is $\mathcal O(m^2)$.}
\label{fig:algorithm}
\end{figure*}
\subsection{Derivation of recursive updates for the case BRM}
Let $t$ be the current time step, $(\mathbf x_{t+1},r_{t+1})$ the currently observed input-output pair and assume
that from the past $t$ examples $\{(\mathbf x_i,r_i)\}_{i=1}^t$ the $m$
examples $\{\mathbf{\tilde x}_i\}_{i=1}^m$ were selected into the dictionary $\mathcal{BV}$. Consider the penalized
least-squares problem that is BRM (restated here for clarity)
\begin{equation}
\label{eq:BRM4}
\min_{\mathbf w \in \mathbb R^m} J_{tm}(\mathbf w) = \norm{\mathbf r_t - \mathbf H_{tm}\mathbf w}^2 + \sigma^2 \mathbf w\trans \bKmm \mathbf w
\end{equation}
with $\mathbf H_{tm}$ being the $t \times m$ data matrix and $\mathbf r_t$ being the $t \times 1$ vector of the observed output values from \eqref{eq:define_data_matrix}. Defining the $m \times m$ cross product matrix
$\mathbf P_{tm}=(\mathbf H_{tm}\trans \mathbf H_{tm} + \sigma^2 \bKmm)$, the solution to (\ref{eq:BRM4}) is given
by
\[
\bw_{tm}=\mathbf P_{tm}^{-1} \mathbf H_{tm}\trans \mathbf r_t.
\]
Finally, introduce the costs $\xi_{tm}=J_{tm}(\bw_{tm})$. Assuming that
$\{\mathbf P_{tm}^{-1},\ \bw_{tm},\ \xi_{tm}\}$ are known from previous computations, every time a new transition
$(\mathbf x_{t+1},r_{t+1})$ is observed, we will perform one or both of the following update operations:
\subsubsection{Normal step: from $\{\mathbf P_{tm}^{-1},\bw_{tm},\xi_{tm}\}$ to $\{\mathbf P_{t+1,m}^{-1},\bw_{t+1,m},\xi_{t+1,m}\}$}
\label{sect:normalstep}
With $\mathbf h_{t+1}$ defined as $\mathbf h_{t+1}:= \rowvector{\bkm{\mathbf x_{t}}-\gamma \bkm{\mathbf x_{t+1}}}$, one gets
\[
\mathbf H_{t+1,m}=\begin{bmatrix} \mathbf H_{tm} \\ \mathbf h_{t+1}\trans \end{bmatrix} \quad \text{and} \quad
\mathbf r_{t+1}=\begin{bmatrix} \mathbf r_t \\ r_{t+1} \end{bmatrix}.
\]
Thus $\mathbf P_{t+1,m}=\mathbf P_{tm}+\mathbf h_{t+1}\bhtt\trans$ and we obtain from (\ref{eq:SMW}) the
well-known RLS updates
\begin{equation}
\label{eq:normal Pittm}
\mathbf P_{t+1,m}^{-1} = \mathbf P_{tm}^{-1} - \frac{\mathbf P_{tm}^{-1}\mathbf h_{t+1} \mathbf h_{t+1}\trans\mathbf P_{tm}^{-1}}{\Delta}
\end{equation}
with scalar $\Delta=1+\mathbf h_{t+1}\trans \mathbf P_{tm}^{-1} \mathbf h_{t+1}$ and
\begin{equation}
\label{eq:normal wttm}
\bw_{t+1,m} = \bw_{tm} + \frac{\varrho}{\Delta} \mathbf P_{tm}^{-1} \mathbf h_{t+1}
\end{equation}
with scalar $\varrho=r_{t+1} - \mathbf h_{t+1}\trans \bw_{tm}$. The costs become
$\xi_{t+1,m} = \xi_{tm} + \frac{\varrho^2}{\Delta}$. The set of basis
functions $\mathcal{BV}$ is not altered during this step. Operation complexity is $\mathcal O(m^2)$.
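In code, the normal step is a handful of lines (NumPy sketch; \texttt{h} stands for $\mathbf h_{t+1}$, \texttt{r} for $r_{t+1}$, and the variable names are otherwise illustrative):
\begin{verbatim}
import numpy as np

def brm_normal_step(P_inv, w, xi, h, r):
    # eqs. (normal Pittm), (normal wttm) and the cost update; O(m^2)
    Ph = P_inv @ h
    Delta = 1.0 + h @ Ph
    rho = r - h @ w
    P_inv_new = P_inv - np.outer(Ph, Ph) / Delta
    w_new = w + (rho / Delta) * Ph
    xi_new = xi + rho**2 / Delta
    return P_inv_new, w_new, xi_new, Delta, rho   # Delta, rho reused below
\end{verbatim}
Returning $\Delta$ and $\varrho$ makes it easy to reuse them in a subsequent growing step.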
\subsubsection{Growing step: from $\{\mathbf P_{t+1,m}^{-1},\bw_{t+1,m},\xi_{t+1,m}\}$ to $\{\mathbf P_{t+1,m+1}^{-1},\bw_{t+1,m+1},\xi_{t+1,m+1}\}$}
\label{sect:growing step}
\paragraph{How to add a $\mathcal{BV}$.}
When adding a basis function (centered on $\mathbf x_{t+1}$) to
the model,
we augment the set $\mathcal{BV}$ with $\mathbf{\tilde x}_{m+1}$ (note that $\mathbf{\tilde x}_{m+1}$ is the same as $\mathbf x_{t+1}$ from above).
Define $\mathbf k_{t+1}:= \bkm{\mathbf{\tilde x}_{m+1}}$, $k^{*}_t := k(\mathbf x_{t},\mathbf x_{t+1})$,
and $k^{*}_{t+1}:= k(\mathbf x_{t+1},\mathbf x_{t+1})$.
Adding a basis function means appending a new $(t+1) \times 1$ vector $\mathbf q$ to the data matrix and appending
$\mathbf k_{t+1}$ as row/column to the penalty matrix $\bKmm$, thus
\[
\mathbf P_{t+1,m+1}=\begin{bmatrix} \mathbf H_{t+1,m} & \mathbf q \end{bmatrix}\trans \begin{bmatrix} \mathbf H_{t+1,m} & \mathbf q \end{bmatrix}
+ \sigma^2 \begin{bmatrix} \bKmm & \mathbf k_{t+1} \\ \mathbf k_{t+1}\trans & k^{*}_{t+1} \end{bmatrix}.
\]
Invoking (\ref{eq:PMI}) we obtain the updated inverse $\mathbf P_{t+1,m+1}^{-1}$ via
\begin{equation}
\label{eq:Pitmm}
\mathbf P_{t+1,m+1}^{-1}=\begin{bmatrix} \mathbf P_{t+1,m}^{-1} & \mathbf 0 \\ \mathbf 0 & 0 \end{bmatrix} +
\frac{1}{\Delta_b} \begin{bmatrix} -\mathbf w_b \\ 1 \end{bmatrix}
\begin{bmatrix} -\mathbf w_b \\ 1 \end{bmatrix}\trans
\end{equation}
where simple vector algebra reveals that
\begin{align}
\label{eq:wbdeltab}
\mathbf w_b&=\mathbf P_{t+1,m}^{-1} (\mathbf H_{t+1,m}\trans \mathbf q + \sigma^2 \mathbf k_{t+1}) \nonumber \\
\Delta_b&=\mathbf q\trans \mathbf q + \sigma^2 k^{*}_{t+1} - (\mathbf H_{t+1,m}\trans \mathbf q + \sigma^2 \mathbf k_{t+1})\trans \mathbf w_b.
\end{align}
Without sparse online approximation this step
would require us to recall all $t$ past examples and would come at the
undesirable price of $\mathcal O(tm)$ operations.
However, we are going to get away with merely $\mathcal O(m)$ operations and
only need to access the $m$ past examples in the memorized $\mathcal{BV}$.
Due to the sparse online approximation, $\mathbf q$ is actually of the form
$ \mathbf q= \begin{bmatrix} (\mathbf H_{tm} \mathbf a_{t+1})\trans & \ \ h^{*}_{t+1} \end{bmatrix}\trans$
with $h^{*}_{t+1} := k^{*}_t-\gamma k^{*}_{t+1}$ and $\mathbf a_{t+1}=\bKmm^{-1} \mathbf k_{t+1}$
(see Section~\ref{sect:SOG}). Hence new information is injected
only through the last component. Exploiting this special structure of $\mathbf q$ equation
(\ref{eq:wbdeltab}) becomes
\begin{align}
\label{eq:wbdeltab2}
\mathbf w_b& =\mathbf a_{t+1} + \frac{\delta_h}{\Delta} \mathbf P_{tm}^{-1} \mathbf h_{t+1} \nonumber \\
\Delta_b& = \frac{\delta_h^2}{\Delta}+\sigma^2\delta_h
\end{align}
where $\delta_h=h^{*}_{t+1} - \mathbf h_{t+1}\trans \mathbf a_{t+1}$. If we cache
and reuse those terms already computed in the preceding step
(see Section~\ref{sect:normalstep}) then we can obtain $\mathbf w_b, \Delta_b$ in
$\mathcal O(m)$ operations.
To obtain the updated coefficients $\bw_{t+1,m+1}$ we postmultiply (\ref{eq:Pitmm})
by
$\mathbf H_{t+1,m+1}\trans \mathbf r_{t+1}=\begin{bmatrix} \mathbf H_{t+1,m}\trans \mathbf r_{t+1} & \ \mathbf q\trans\mathbf r_{t+1}\end{bmatrix}\trans$,
getting
\begin{equation}
\label{eq:bbetatmm}
\bw_{t+1,m+1}=\begin{bmatrix} \bw_{t+1,m} \\ 0 \end{bmatrix} + \kappa
\begin{bmatrix} -\mathbf w_b \\ 1 \end{bmatrix}
\end{equation}
where scalar $\kappa$ is defined by $\kappa=\mathbf r_{t+1}\trans(\mathbf q-\mathbf H_{t+1,m}\mathbf w_b) / \Delta_b$.
Again we can now exploit the special structure of $\mathbf q$ to show that $\kappa$
is equal to
\[
\kappa=-\frac{\delta_h\varrho}{\Delta_b\Delta}
\]
And again we can reuse terms computed in the previous step (see Section~ \ref{sect:normalstep}).
Skipping the computations, we can show that the reduced (regularized)
cost $\xi_{t+1,m+1}$ is recursively obtained from $\xi_{t+1,m}$ via the expression:
\begin{equation}
\label{eq:xitmm}
\xi_{t+1,m+1}=\xi_{t+1,m} - \kappa^2 \Delta_b.
\end{equation}
Finally, each time we add an example to the $\mathcal{BV}$ set we must also update the
inverse kernel matrix $\bKmm^{-1}$ needed during the computation of $\mathbf a_{t+1}$ and
$\delta_h$. This can be done using the formula for partitioned matrix inverses
(\ref{eq:PMI}):
\begin{equation}
\label{eq: aktualisiere Kmm}
\mathbf K_{m+1,m+1}^{-1}=\begin{bmatrix} \bKmmi & \mathbf 0 \\ \mathbf 0\trans & 0\end{bmatrix}
+\frac{1}{\delta}\begin{bmatrix} -\mathbf a_{t+1} \\ 1 \end{bmatrix}
\begin{bmatrix} -\mathbf a_{t+1} \\ 1 \end{bmatrix}\trans.
\end{equation}
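Putting the pieces together, the growing step can be sketched as follows (NumPy; \texttt{Delta} and \texttt{rho} are the scalars cached from the preceding normal step, \texttt{a} is $\mathbf a_{t+1}$ and \texttt{delta\_h} is $\delta_h$; the analogous update of $\bKmmi$ from (\ref{eq: aktualisiere Kmm}) is omitted for brevity):
\begin{verbatim}
import numpy as np

def brm_growing_step(P_t1m_inv, w_t1m, xi_t1m, P_tm_inv, h, a,
                     delta_h, Delta, rho, sigma2):
    # (eq. wbdeltab2)
    w_b = a + (delta_h / Delta) * (P_tm_inv @ h)
    Delta_b = delta_h**2 / Delta + sigma2 * delta_h
    kappa = -delta_h * rho / (Delta_b * Delta)
    v = np.append(-w_b, 1.0)
    m = P_t1m_inv.shape[0]
    # (eq. Pitmm): grow the inverse cross-product matrix to (m+1) x (m+1)
    P_new = np.zeros((m + 1, m + 1))
    P_new[:m, :m] = P_t1m_inv
    P_new += np.outer(v, v) / Delta_b
    # (eq. bbetatmm) and (eq. xitmm)
    w_new = np.append(w_t1m, 0.0) + kappa * v
    xi_new = xi_t1m - kappa**2 * Delta_b
    return P_new, w_new, xi_new, kappa, Delta_b
\end{verbatim}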
\medskip
\paragraph{When to add a $\mathcal{BV}$.}
To decide whether or not the current example $\mathbf x_{t+1}$ should be added to the $\mathcal{BV}$ set,
we employ the supervised two-part criterion from \citep{mein_icann2006}.
The first part measures the `novelty' of the current
example: only examples that are `far' from those already stored in the $\mathcal{BV}$ set
are considered for inclusion. To this end we compute as in
\citep{csato2001sparse} the squared norm of the residual from
projecting (in RKHS) the example onto the span of the current $\mathcal{BV}$ set, i.e.\
we compute, restated from (\ref{eq:ALD-test}),
$\delta=k^{*}_{t+1}-\mathbf k_{t+1}\trans\mathbf a_{t+1}$.
If $\delta<\mathtt{TOL1}$ for a given threshold $\mathtt{TOL1}$,
then $\mathbf x_{t+1}$ is well represented by the given $\mathcal{BV}$ set
and its inclusion would not contribute much to reduce the error from approximating
the kernel by the reduced set. On the other hand, if $\delta>\mathtt{TOL1}$ then
$\mathbf x_{t+1}$ is not well represented by the current $\mathcal{BV}$ set and leaving it
behind could incur a large error in the approximation of the kernel.
Aside from novelty, we consider as second part
of the selection criterion the `usefulness' of a basis function candidate.
Usefulness is taken to be its contribution to the reduction of the regularized
costs $\xi_{tm}$, i.e.\ the term $\kappa^2\Delta_b$ from (\ref{eq:xitmm}). Both parts together
are combined into one rule: only if $\delta > \mathtt{TOL1}$ and
$ \delta \kappa^2 \Delta_b > \mathtt{TOL2}$,
then the current example will become a new basis function and will be added to $\mathcal{BV}$.
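In code the combined rule is a one-liner (stated here as in the main text; the summary in the appendix checks $\kappa^2\Delta_b$ alone against \texttt{TOL2}):
\begin{verbatim}
def should_add_basis(delta, kappa, Delta_b, TOL1, TOL2):
    # novelty: residual of projecting x_{t+1} onto span(BV) exceeds TOL1
    # usefulness: contribution to the reduction of the regularized cost
    return (delta > TOL1) and (delta * kappa**2 * Delta_b > TOL2)
\end{verbatim}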
\section{RoboCup-keepaway as RL benchmark}
\label{sec:robocup}
The experimental work we carried out for this article uses the publicly available\footnote{Sources are available
from {\tt http://www.cs.utexas.edu/users/AustinVilla/sim/keepaway/}.} keepaway
framework from \citep{AB05}, which is built on top of the standard RoboCup soccer simulator
also used for official competitions \citep{noda98ss}. Agents in RoboCup are autonomous entities; they
sense and act independently and asynchronously, run as individual processes and cannot communicate directly.
Agents receive visual perceptions every 150 msec and may act once every 100 msec.
The state description consists of relative distances and angles
to visible objects in the world, such as the ball, other agents or fixed beacons
for localization. In addition, random noise affects both the agents' sensors and their actuators.
In keepaway, one team of `keepers' must learn how to maximize the time they can control the ball within a
limited region of the field against an opposing team of `takers'. Only the keepers are allowed to learn,
the behavior of the takers is governed
by a fixed set of hand-coded rules. However, each keeper only learns {\em individually}
from its own (noisy) actions
and its own (noisy) perceptions of the world. The decision-making happens at an intermediate level
using multi-step macro-actions; the keeper currently controlling the ball must decide between holding
the ball or passing it to one of its teammates. The remaining keepers automatically try to position
themselves so as to best receive a pass. The task is episodic; it starts
with the keepers controlling the ball and continues as long as neither the ball leaves the
region nor the takers succeed in gaining control. Thus the goal for RL is to maximize the overall duration
of an episode. The immediate reward is the time that passes between individual calls
to the acting agent.
For our work, we consider as in \citep{AB05} the special 3vs2
keepaway problem (i.e.\ three learning keepers against two takers) played in a 20x20m field.
In this case the continuous state space has dimensionality 13, and the discrete action space consists of the
three different actions {\em hold, pass to teammate-1, pass to teammate-2} (see Figure~\ref{fig:keepaway}).
More generally, larger instantiations of keepaway would also be possible, like e.g. 4vs3, 5vs4 or more,
resulting in even larger state- and action spaces.
\begin{figure}[tb]
\centering
\includegraphics[height=4.5cm]{keepaway.eps}
\caption{Illustrating {\em keepaway}. The various lines and angles indicate the 13 state variables
making up each sensation as provided by the keepaway benchmark software.}
\label{fig:keepaway}
\end{figure}
\section{Experiments}
\label{sec:experiments and results}
In this section we are finally ready to apply our proposed approach to the keepaway problem. We implemented
and compared two different variations of the basic algorithm in a policy iteration based framework:
(a) Optimistic policy iteration using LSPE($\lambda$) and (b) Actor-critic policy iteration using LSTD($\lambda$).
As baseline method we used Sarsa($\lambda$) with tilecoding, which we re-implemented from \citep{AB05}
as faithfully as possible. Initially, we also tried to employ BRM instead of LSTD in the actor-critic framework. However,
this set-up did not fare well in our experiments because of the stochastic state-transitions in keepaway
(resulting in highly variable outcomes) and BRM's inability to deal with this situation adequately. Thus, the results
for BRM are not reported here.
\paragraph{Optimistic policy iteration.}
Sarsa($\lambda$) and LSPE($\lambda$) paired with optimistic policy iteration is an on-policy learning method, meaning that the
learning procedure estimates the Q-values from and for the current policy being executed by the agent. At the same time,
the agent continually updates the policy according to the changing estimates of the Q-function. Thus
policy evaluation and improvement are tightly interwoven. Optimistic policy iteration (OPI) is an online method
that immediately processes the observed transitions as they become available from the agent interacting with
the environment \citep{bert96neurodynamicprogram,sutton98introduction}.
\paragraph{Actor-critic.}
In contrast, LSTD($\lambda$)
paired with actor-critic is an off-policy learning method adhering with more rigor to the policy iteration framework.
Here the learning procedure estimates the Q-values
for a fixed policy, i.e.\ a policy that is not continually modified to reflect the changing estimates of Q.
Instead, one collects a large number of state transitions under the same policy and estimates Q from these
training examples. In OPI, where the most recent version of the Q-function is used to derive the next control action,
only one network is required to represent Q and make the predictions. In contrast, the actor-critic framework maintains
two instantiations of regularization networks: one (the actor) is used to represent the Q-function learned during the
previous policy evaluation step and which is now used to represent the current policy, i.e.\ control actions are
derived using its predictions. The second network (the critic) is used to represent the current Q-function and
is updated regularly.
One advantage of the actor-critic approach is that we can reuse the same set of
observed transitions to evaluate different policies, as proposed in \citep{lagoudakis2003lspi}.
We maintain an ever-growing list of all transitions observed from the learning agent (irrespective
of the policy), and use it to evaluate the current policy with LSTD($\lambda$). To reflect the real-time nature of learning in RoboCup, where
we can only carry out a very small amount of computations during one single function call to the agent, we evaluate the
transitions in small batches (20 examples per step). Once we have completed evaluating all training examples in the list, the
critic network is copied to the actor network and we can proceed to the next iteration, starting anew to process the examples,
using this time a new policy.
\paragraph{Policy improvement and $\varepsilon$-greedy action selection.}
To carry out policy improvement, every time we need to determine a control action for an arbitrary state
$s^*$, we choose the action $a^*$ that achieves the maximum Q-value; that is, given weights $\mathbf w_k$
and a set of basis functions $\{\mathbf{\tilde x}_1,\ldots,\mathbf{\tilde x}_m\}$, we choose
\[
a^*=\mathop{\mathrm{argmax}}_{a} \ \tilde Q(s^*,a;\mathbf w_k)=\mathop{\mathrm{argmax}}_{a} \ \bkm{s^*,a}\trans\mathbf w_k.
\]
Sometimes however, instead of choosing the best (greedy) action, it is recommended to try out
an alternative (non-greedy) action to ensure sufficient exploration. Here we employ the
$\varepsilon$-greedy selection scheme; we choose a random action with a small probability
$\varepsilon$ (here $\varepsilon=0.01$); otherwise we pick the greedy action with probability
$1-\varepsilon$. Taking a random action usually means choosing among all possible actions
with equal probability.
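A minimal sketch of this selection scheme is given below; the helper \texttt{k\_vec}, which evaluates $\bkm{s,a}$ against the current $\mathcal{BV}$ set, is assumed to be provided by the learner.
\begin{verbatim}
import numpy as np

def select_action(s, actions, w, k_vec, eps=0.01, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    # with probability eps: explore with a uniformly random action
    if rng.random() < eps:
        return actions[rng.integers(len(actions))]
    # otherwise: greedy action, argmax_a  k_m(s,a)^T w
    q_values = [k_vec(s, a) @ w for a in actions]
    return actions[int(np.argmax(q_values))]
\end{verbatim}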
Under the standard assumption for errors in Bayesian regression
\citep[e.g., see][]{raswil06gpbook}, namely that the observed target values differ from
the true function values by an additive noise term (i.i.d.
Gaussian noise with zero mean and uniform variance), it is also possible to obtain an
expression for the `predictive variance' which measures the uncertainty associated
with value predictions. The availability of such confidence intervals (which is possible
for the direct least-squares problems LSPE and also BRM) could be used, as suggested
in \citep{engel2005rlgptd}, to guide the choice of actions during exploration and to increase
the overall performance. For the purpose of solving the keepaway problem
however, our initial experiments showed no measurable increase in performance
when including this additional feature.
\paragraph{Remaining parameters.}
Since the kernel is defined for state-action tuples, we employ a product kernel $k([s,a],[s',a'])=k_S(s,s')k_A(a,a')$ as suggested
by \citet{engel2005rlgptd}. The action kernel $k_A(a,a')$ is taken to be the Kronecker delta, since the actions in keepaway
are discrete and disparate. As state kernel $k_S(s,s')$ we chose the Gaussian RBF $k_S(s,s')=\exp(-h\norm{s-s'}^2)$
with uniform length-scale $h^{-1}=0.2$. The other
parameters were set to: regularization $\sigma^2=0.1$, discount factor for
RL $\gamma=0.99$, $\lambda=0.5$, and LSPE step size $\eta_t=0.5$. The novelty parameter for basis selection was set to
$\texttt{TOL1}=0.1$.
For the usefulness part we tried out different values to examine the effect supervised basis selection has;
we started with $\texttt{TOL2}=0$ corresponding to the unsupervised case and then began increasing the tolerance, considering
alternatively the settings $\texttt{TOL2}=0.001$ and $\texttt{TOL2}=0.01$.
Since in the case of LSTD we are not directly solving a least-squares problem, we use the associated BRM formulation to
obtain an expression for the error reduction in the supervised basis selection.
Due to the very long runtime of the simulations (simulating one hour in the soccer server roughly takes one hour
real time on a standard PC) we could not try out many different parameter combinations.
The parameters governing RL were set according to our experiences with smaller problems and are in the range
typically reported in the literature.
The parameter governing the choice of the kernel (i.e.\ the length-scale of the Gaussian RBF) was chosen such
that for the unsupervised case ($\texttt{TOL2}=0$) the number of selected basis functions approaches
the maximum number of basis functions the CPU used for these experiments was able to process in real-time. This number
was determined to be $\sim 1400$ (on a standard 2 GHz PC).
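For reference, the product kernel with the settings above can be written compactly as follows (state-action tuples \texttt{(s, a)} with a NumPy state vector; only the length-scale is taken from the actual experiments, the rest is illustrative):
\begin{verbatim}
import numpy as np

def kernel(x, xp, h_inv=0.2):
    # k([s,a],[s',a']) = k_S(s,s') * k_A(a,a')
    (s, a), (sp, ap) = x, xp
    if a != ap:                      # Kronecker delta on discrete actions
        return 0.0
    d = np.asarray(s) - np.asarray(sp)
    return float(np.exp(-(d @ d) / h_inv))   # Gaussian RBF, h^{-1} = 0.2
\end{verbatim}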
\begin{figure}[hp]
\centering
\includegraphics[width=0.47\textwidth]{new_lstd0.eps}
\includegraphics[width=0.47\textwidth]{new_lstd0_001.eps}
\includegraphics[width=0.47\textwidth]{new_lstd0_01.eps}
\includegraphics[width=0.47\textwidth]{new_lspe.eps}
\includegraphics[width=0.47\textwidth]{new_cmac.eps}
\caption{From left to right: Learning curves for our approach with LSTD ({\tt TOL2}=0), LSTD ({\tt TOL2}=0.001),
LSTD ({\tt TOL2}=0.01), and LSPE. At the bottom we show the curves for Sarsa with tilecoding corresponding to \citep{AB05}.
We plot the average time the keepers
are able to control the ball (quality of learned behavior) against the
training time. After interacting for
15 hours the performance does not increase any more and the agent has experienced roughly 35,000 state
transitions. }
\label{fig:results}
\end{figure}
\paragraph{Results.}
We evaluate every algorithm/parameter configuration using 5 independent runs. The learning curves for these runs are shown in
Figure~\ref{fig:results}. The curves plot the average time the keepers are able to keep the ball
(corresponding to the performance) against the
simulated time the keepers were learning (roughly corresponding to the observed
training examples). Additionally, two horizontal lines indicate the scores for the two
benchmark policies random behavior and optimized
hand-coded behavior used in \citep{AB05}.
The plots show that generally RL is able to learn policies that are at least as effective as the optimized hand-coded
behavior. This is indeed quite an achievement, considering that the latter is the product of considerable manual effort. Comparing
the three approaches Sarsa, LSPE and LSTD we find that the performance of LSPE is on par with Sarsa. The curves of LSTD tell a
different story however; here we are outperforming Sarsa by 25\% in terms of performance (in Sarsa the best
performance is about $15$ seconds,
in LSTD the best performance is about $20$ seconds). This gain is even
more impressive when we consider the time scale at which this behavior is learned; just after a mere 2 hours we are already
outperforming hand-coded control. Thus our approach needs far fewer state transitions to discover good behavior.
The third observation shows the effectiveness of our proposed
supervised basis function selection; here we show that our supervised approach performs as well as the unsupervised one, but requires
significantly fewer basis functions to achieve that level of performance ($\sim$ 700 basis functions at {\tt TOL2}$=0.01$
against 1400 basis functions at {\tt TOL2}$=0$).
Regarding the unexpectedly weak performance of LSPE in comparison with LSTD, we conjecture that this strongly depends on the underlying
architecture of policy iteration (i.e.\ OPI vs. actor-critic) as well as the specific learning problem. In a related set of
experiments carried out with the octopus arm benchmark\footnote{From the ICML06 RL benchmarking page: \newline {\tt http://www.cs.mcgill.ca/dprecup/workshops/ICML06/octopus.html}}
we made exactly the opposite observation \citep[not discussed here in more detail, see][]{mein_ieee_adprl2007}.
\section{Discussion and related work}
\vspace*{-0.125cm}
We have presented a kernel-based approach for least-squares based policy evaluation in RL using regularization networks
as underlying function approximator. The key point is an efficient supervised basis selection mechanism, which is used to
select a subset of relevant basis functions directly from the data stream.
The proposed method was particularly devised with high-dimensional, stochastic control tasks for RL in mind;
we prove its effectiveness using the RoboCup keepaway benchmark. Overall the results indicate that kernel-based online learning in
RL is very well possible and recommendable. Even the rather few simulation runs we made clearly show that our approach is superior to conventional
function approximation in RL using grid-based tilecoding. What could be even more important is that the kernel-based approach
only requires the setting of some fairly general parameters that do not depend on the specific control problem one wants to solve.
On the other hand, using tilecoding or a fixed basis function network in high dimensions requires considerable manual effort on part of
the programmer to carefully devise problem-specific features and manually choose suitable basis functions.
\citet{engel2003gptd,engel2005rlgptd} initially advocated using kernel-based methods in RL and proposed the
related GPTD algorithm. Our method using regularization
networks develops this idea further. Both methods have in common the online selection of relevant
basis functions based on \citep{csato2001sparse}. As opposed to the unsupervised selection in GPTD,
we use a supervised criterion to further reduce the number of relevant basis functions selected.
A more fundamental difference is the policy
evaluation method addressed by the respective formulation; GPTD models the Bellman residuals and corresponds to the BRM approach
(see Section 2.1.2). Thus, in its original formulation GPTD can only be applied to RL problems with deterministic state transitions.
In contrast, we provide a unified and concise formulation of LSTD and LSPE which can deal with stochastic state transitions as well.
Another difference is the type of benchmark problem used to showcase the respective method;
GPTD was demonstrated by learning to control a simulated octopus arm, which was posed as an 88-dimensional control
problem \citep{engel2005octopus}. Controlling the octopus arm is a deterministic control problem with known state transitions
and was solved there using model-based RL. In contrast, 3vs2 keepaway is only a 13-dimensional problem; here however,
we have to deal with
stochastic and unknown state transitions and need to use model-free RL.
\acks
The authors wish to thank the anonymous reviewers for their useful comments and suggestions.
\begin{appendix}
\section{A summary of the updates}
Let $\mathbf x_{t+1}=(s_{t+1},a_{t+1})$ be the next state-action tuple and $r_{t+1}$ be the reward associated
with the transition from the previous state $s_t$ to $s_{t+1}$ under $a_t$. Define the abbreviations:
\begin{flalign*}
\mathbf k_{t} &:= \bkm{\mathbf x_t} & \mathbf k_{t+1} &:= \bkm{\mathbf x_{t+1}} & \mathbf h_{t+1}&:= \mathbf k_{t} - \gamma \mathbf k_{t+1}\\
k^*_{t}&:= k(\mathbf x_{t},\mathbf x_{t+1}) & k^*_{t+1}&:= k(\mathbf x_{t+1},\mathbf x_{t+1})& h^*_{t+1}&:= k^*_{t}-\gamma k^*_{t+1}
\end{flalign*}
and $\mathbf a_{t+1} := \bKmmi \mathbf k_{t+1}$.
\subsection{Unsupervised basis selection}
We want to test if $\mathbf x_{t+1}$ is well represented by the current basis functions in the dictionary
or if we need to add $\mathbf x_{t+1}$ to the basis elements. Compute
\[
\delta=k^{*}_{t+1} - \mathbf k_{t+1}\trans \mathbf a_{t+1}. \tag{\ref{eq:ALD-test}}
\]
If $\delta > \texttt{TOL1}$, then add $\mathbf x_{t+1}$ to the dictionary, execute the growing step (see below) and update
\[
\mathbf K_{m+1,m+1}^{-1}=\begin{bmatrix} \bKmmi & \mathbf 0 \\ \mathbf 0\trans & 0\end{bmatrix}
+\frac{1}{\delta}\begin{bmatrix} -\mathbf a_{t+1} \\ 1 \end{bmatrix}
\begin{bmatrix} -\mathbf a_{t+1} \\ 1 \end{bmatrix}\trans. \tag{\ref{eq: aktualisiere Kmm}}
\]
\subsection{Recursive updates for BRM}
\begin{itemize}
\item Normal step $\{t,m\} \mapsto \{t+1,m\}$:
\begin{enumerate}
\item \[ \mathbf P_{t+1,m}^{-1} = \mathbf P_{tm}^{-1} - \frac{\mathbf P_{tm}^{-1}\mathbf h_{t+1} \mathbf h_{t+1}\trans\mathbf P_{tm}^{-1}}{\Delta}
\tag{\ref{eq:normal Pittm}} \]
with $\Delta=1+\mathbf h_{t+1}\trans \mathbf P_{tm}^{-1} \mathbf h_{t+1}$.
\item \[ \bw_{t+1,m} = \bw_{tm} + \frac{\varrho}{\Delta} \mathbf P_{tm}^{-1} \mathbf h_{t+1}
\tag{\ref{eq:normal wttm}} \]
with $\varrho=r_{t+1} - \mathbf h_{t+1}\trans \bw_{tm}$.
\end{enumerate}
\item Growing step $\{t+1,m\} \mapsto \{t+1,m+1\}$
\begin{enumerate}
\item \[ \mathbf P_{t+1,m+1}^{-1}=\begin{bmatrix} \mathbf P_{t+1,m}^{-1} & \mathbf 0 \\ \mathbf 0 & 0 \end{bmatrix} +
\frac{1}{\Delta_b} \begin{bmatrix} -\mathbf w_b \\ 1 \end{bmatrix}
\begin{bmatrix} -\mathbf w_b \\ 1 \end{bmatrix}\trans
\tag{\ref{eq:Pitmm}}
\]
where
\[
\mathbf w_b =\mathbf a_{t+1} + \frac{\delta_h}{\Delta} \mathbf P_{tm}^{-1} \mathbf h_{t+1}, \qquad
\Delta_b = \frac{\delta_h^2}{\Delta}+\sigma^2\delta_h, \qquad
\delta_h=h^{*}_{t+1} - \mathbf h_{t+1}\trans \mathbf a_{t+1}
\]
\item \[
\bw_{t+1,m+1}=\begin{bmatrix} \bw_{t+1,m} \\ 0 \end{bmatrix} + \kappa
\begin{bmatrix} -\mathbf w_b \\ 1 \end{bmatrix}
\tag{\ref{eq:bbetatmm}}
\]
where $\kappa=-\frac{\delta_h\varrho}{\Delta_b\Delta}$.
\end{enumerate}
\item{Reduction of regularized cost when adding $\mathbf x_{t+1}$ (supervised basis selection)}:
\[ \xi_{t+1,m+1}=\xi_{t+1,m}- \kappa^2 \Delta_b \tag{\ref{eq:xitmm}} \]
For supervised basis selection we additionally check if
$\kappa^2 \Delta_b > \texttt{TOL2}$.
\end{itemize}
\subsection{Recursive updates for LSTD($\lambda$)}
\newcommand{\bwb^{(1)}}{\mathbf w_b^{(1)}}
\newcommand{\bwb^{(2)}}{\mathbf w_b^{(2)}}
\newcommand{\delta^{(1)}}{\delta^{(1)}}
\newcommand{\delta^{(2)}}{\delta^{(2)}}
\begin{itemize}
\item Normal step $\{t,m\} \mapsto \{t+1,m\}$:
\begin{enumerate}
\item \[ \mathbf z_{t+1,m}=(\gamma\lambda)\mathbf z_{tm}+\mathbf k_{t} \]
\item \[ \mathbf P_{t+1,m}^{-1} = \mathbf P_{tm}^{-1} - \frac{\mathbf P_{tm}^{-1}\mathbf z_{t+1,m} \mathbf h_{t+1}\trans\mathbf P_{tm}^{-1}}{\Delta}
\tag{\ref{eq:normal Pittm}'} \]
with $\Delta=1+\mathbf h_{t+1}\trans \mathbf P_{tm}^{-1} \mathbf z_{t+1,m}$.
\item \[ \bw_{t+1,m} = \bw_{tm} + \frac{\varrho}{\Delta} \mathbf P_{tm}^{-1} \mathbf z_{t+1,m}
\tag{\ref{eq:normal wttm}'} \]
with $\varrho=r_{t+1} - \mathbf h_{t+1}\trans \bw_{tm}$.
\end{enumerate}
\item Growing step $\{t+1,m\} \mapsto \{t+1,m+1\}$
\begin{enumerate}
\item \[
\mathbf z_{t+1,m+1}=\begin{bmatrix} \mathbf z_{t+1,m}\trans & z_{t+1,m}^* \end{bmatrix}\trans
\]
where $z_{t+1,m}^*=(\gamma\lambda)\mathbf z_{tm}\trans\mathbf a_{t+1} + k^*_t$.
\item \[ \mathbf P_{t+1,m+1}^{-1}=\begin{bmatrix} \mathbf P_{t+1,m}^{-1} & \mathbf 0 \\ \mathbf 0 & 0 \end{bmatrix} +
\frac{1}{\Delta_b} \begin{bmatrix} -\bwb^{(1)} \\ 1 \end{bmatrix}
\begin{bmatrix} -\bwb^{(2)} & 1 \end{bmatrix}
\tag{\ref{eq:Pitmm}'} \]
where
\begin{flalign*}
\bwb^{(1)} & =\mathbf a_{t+1} + \frac{\delta^{(1)}}{\Delta} \mathbf P_{tm}^{-1} \mathbf z_{t+1,m} &
\delta^{(1)} &=h^*_{t+1} - \mathbf a_{t+1}\trans\mathbf h_{t+1} \\
\bwb^{(2)} & =\mathbf a_{t+1}\trans + \frac{\delta^{(2)}}{\Delta} \mathbf h_{t+1}\trans\mathbf P_{tm}^{-1} &
\delta^{(2)} &=z_{t+1,m}^* - \mathbf a_{t+1}\trans\mathbf z_{t+1,m} \\
\end{flalign*}
and $\Delta_b=\frac{\delta^{(1)} \delta^{(2)}}{\Delta} + \sigma^2 (k^*_{t+1}-\mathbf k_{t+1}\trans\mathbf a_{t+1})$.
\item \[
\bw_{t+1,m+1}=\begin{bmatrix} \bw_{t+1,m} \\ 0 \end{bmatrix} + \kappa
\begin{bmatrix} -\bwb^{(1)} \\ 1 \end{bmatrix} \tag{\ref{eq:bbetatmm}'}
\]
where $\kappa=-\frac{\delta^{(2)}\varrho}{\Delta_b\Delta}$.
\end{enumerate}
\end{itemize}
\subsection{Recursive updates for LSPE($\lambda$)}
\begin{itemize}
\item Normal step $\{t,m\} \mapsto \{t+1,m\}$:
\begin{enumerate}
\item \begin{eqnarray*}
\mathbf z_{t+1,m} &=&(\gamma\lambda)\mathbf z_{tm}+\mathbf k_{t+1} \\
\mathbf A_{t+1,m} &=& \mathbf A_{tm}+\mathbf z_{t+1,m}\mathbf h_{t+1}\trans \\
\mathbf b_{t+1,m} &=& \mathbf b_{tm}+\mathbf z_{t+1,m} r_{t+1}
\end{eqnarray*}
\item \[ \mathbf P_{t+1,m}^{-1} = \mathbf P_{tm}^{-1} - \frac{\mathbf P_{tm}^{-1}\mathbf k_{t+1} \mathbf k_{t+1}\trans\mathbf P_{tm}^{-1}}{\Delta}
\tag{\ref{eq:normal Pittm}''} \]
with $\Delta=1+\mathbf k_{t+1}\trans \mathbf P_{tm}^{-1} \mathbf k_{t+1}$.
\item \[ \bw_{t+1,m} = \bw_{tm} + \eta \mathbf P_{t+1,m}^{-1} (\mathbf b_{t+1,m} - \mathbf A_{t+1,m} \bw_{tm})
\tag{\ref{eq:normal wttm}''} \]
\end{enumerate}
%
\item Growing step $\{t+1,m\} \mapsto \{t+1,m+1\}$
\begin{enumerate}
\item \begin{gather*}
\mathbf z_{t+1,m+1}=\begin{bmatrix} \mathbf z_{t+1,m} \\ z_{t+1,m}^* \end{bmatrix}
%
\qquad \qquad \mathbf b_{t+1,m+1}=\begin{bmatrix}
\mathbf b_{t+1,m} \\ \mathbf a_{t+1}\trans\mathbf b_{tm} + z_{t+1,m}^* r_{t+1}
\end{bmatrix} \\
%
\mathbf A_{t+1,m+1}=\begin{bmatrix}
\mathbf A_{t+1,m} & \mathbf A_{tm}\mathbf a_{t+1}+\mathbf z_{t+1,m} h^* \\
\mathbf a_{t+1}\trans\mathbf A_{tm}+z_{t+1,m}^*\mathbf h_{t+1}\trans &\mathbf a_{t+1}\trans\mathbf A_{tm}\mathbf a_{t+1}+ z_{t+1,m}^* h^*
\end{bmatrix}
\end{gather*}
where $z_{t+1,m}^*=(\gamma\lambda)\mathbf z_{tm}\trans\mathbf a_{t+1} + k^*_t$.
\item \[ \mathbf P_{t+1,m+1}^{-1}=\begin{bmatrix} \mathbf P_{t+1,m}^{-1} & \mathbf 0 \\ \mathbf 0 & 0 \end{bmatrix} +
\frac{1}{\Delta_b} \begin{bmatrix} -\mathbf w_b \\ 1 \end{bmatrix}
\begin{bmatrix} -\mathbf w_b \\ 1 \end{bmatrix}\trans
\tag{\ref{eq:Pitmm}''}
\]
where
\[
\mathbf w_b =\mathbf a_{t+1} + \frac{\delta}{\Delta} \mathbf P_{tm}^{-1} \mathbf k_{t+1}, \qquad
\Delta_b = \frac{\delta^2}{\Delta}+\sigma^2\delta, \qquad
\delta=k^{*}_{t} - \mathbf k_{t}\trans \mathbf a_{t+1}
\]
and $\Delta_b=\frac{\delta^{(1)} \delta^{(2)}}{\Delta} + \sigma^2 (k^*_{t+1}-\mathbf k_{t+1}\trans\mathbf a_{t+1})$.
\item \[
\bw_{t+1,m+1}=\begin{bmatrix} \bw_{t+1,m} \\ 0 \end{bmatrix} + \kappa
\begin{bmatrix} -\bwb^{(1)} \\ 1 \end{bmatrix}\tag{\ref{eq:bbetatmm}''}
\]
where $\kappa=-\frac{\delta^{(2)}\varrho}{\Delta_b\Delta}$.
\end{enumerate}
%
\item{Reduction of regularized cost when adding $\mathbf x_{t+1}$ (supervised basis selection)}:
\[ \xi_{t+1,m+1}=\xi_{t+1,m} - \Delta_b^{-1}(c-\mathbf w_b\trans \mathbf d)^2 \tag{\ref{eq:xitmm}''} \]
where $c=\mathbf a_{t+1}\trans(\mathbf b_{tm}-\mathbf A_{tm}\bw_{tm})+z^*_{t+1,m}(r_{t+1}-\mathbf h_{t+1}\trans\bw_{tm})$ and
$\mathbf d=\mathbf b_{t+1,m} - \mathbf A_{t+1,m}\bw_{tm}$. For supervised basis selection we additionally check if
$\Delta_b^{-1}(c-\mathbf w_b\trans \mathbf d)^2 > \texttt{TOL2}$.
\end{itemize}
\end{appendix}
\bibliographystyle{plainnat}
\input{gpip.bbl}
\end{document}
| {
"attr-fineweb-edu": 1.959961,
"attr-cc_en_topic": 0,
"domain": "arxiv"
} |
BkiUbAw5ixsDMJPI2QsU | \section{INTRODUCTION}
Dialogue Robot Competition 2022 is a travel agency dialogue task, which aims to develop a dialogue service with hospitality.
Detailed conditions are described in the papers published by the organizers \cite{Higashinaka2022,Minato2022}.
Our system honors customers' preferences and assists their decision-making.
Points of interest (POIs) are classified into a sightseeing type and an experience type.
An interview with the customer obtains demographic information and determines which type of POI is preferable for the customer.
The system recommends a suitable POI and explains the grounds for the recommendation based on the customer's demographic attributes or travel conditions.
The system answers customers' questions in two ways: by picking up a corresponding question-answer pair via keyword search, or by generating an answer with a neural-based system.
In addition, to make the provided information more attractive, nearby POIs are searched and the POI with the best reputation is recommended.
\section{DIALOGUE FLOW}
\subsection{Overview of system}
Fig.~\ref{fig:overview} shows an overview of the system, which is composed of two elements.
The first element recommends a POI considering the customer's demographic attributes and preferences estimated from the interview.
To estimate the customer's preferences, the system uses the collected information and emotion recognition results.
The second element is a question-and-answer part.
Two types of methods are used to produce answers: a rule-based one and neural dialogue generation.
Instead of searching the entire question-and-answer database, the system first estimates the category of the customer's question and then collates questions belonging to that category.
If an appropriate answer cannot be found among the question-and-answer pairs, neural dialogue generation produces an answer.
In addition, nearby POIs are searched if the customer is interested.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.96\linewidth]{overview.png}
\end{center}
\caption{Overview of our system}
\label{fig:overview}
\end{figure}
\subsection{Investigation of demographic information}
Demographic information is extracted from an interview with a customer.
First, our system asks a name of customers to make the customer feel familiar with the system.
To avoid misunderstanding, if answer is recognized as famous family names (top 5,000 in Japan), the recognition result is adopted and this is used for calling customers.
Second, to clarify the customer's demographic information and preference, following questions are asked to the customers.
\begin{enumerate}
\item How many times did you visit Odaiba?
\item How many people are you accompanying with?
\item Which types of travel is favorite? (sightseeing type: watching exhibition as you like; experience type: experience something by yourself)
\item (In the case of experience type and if the number of accompanying person is 3 and more,) do you accompany small children?
\item (If recommended POIs can allow visitors to accompany pet,) do you intend to accompany pets?
\end{enumerate}
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.96\linewidth]{recommendation.png}
\end{center}
\caption{Example of recommendation by customer's demographic information and preference}
\label{fig:recommendation}
\end{figure}
\subsection{Recommendation}
Recommendations are made with explicit grounds.
Fig.~\ref{fig:recommendation} shows an example of a recommendation.
The system derives the grounds of a recommendation from demographic attributes such as age, accompanying persons, and preferences.
There are two types of POIs: the sightseeing type and the experience type.
The user's preferences are matched with a POI type, and the matched POI is recommended together with the grounds.
POI information is collected by web search (Jalan\footnote{\url{https://www.jalan.net/kankou/}} or Google Maps).
When our system explains the POI information and the recommendation grounds, the default speech synthesis is monotone and unnatural.
Thus, our system emphasizes important words in the explanations.
Important words are identified by a BM25 calculation (a weighting of term importance) \cite{Robertson2009}.
The top three important words are emphasized by slowing down the utterance speed.
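A minimal sketch of this term-weighting step is shown below (pure-Python Okapi BM25 over tokenized POI descriptions; the tokenization and the choice of reference corpus are assumptions on our part, and the real system of course operates on Japanese text).
\begin{verbatim}
import math
from collections import Counter

def bm25_top_terms(doc_tokens, corpus_tokens, k1=1.5, b=0.75, top_n=3):
    # rank the terms of one POI description by BM25 weight against a
    # reference corpus of descriptions; the top terms get emphasized in TTS
    N = len(corpus_tokens)
    avgdl = sum(len(d) for d in corpus_tokens) / N
    df = Counter(t for d in corpus_tokens for t in set(d))
    tf = Counter(doc_tokens)
    dl = len(doc_tokens)
    scores = {}
    for t, f in tf.items():
        idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
        scores[t] = idf * f * (k1 + 1) / (f + k1 * (1 - b + b * dl / avgdl))
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
\end{verbatim}
The returned terms are then marked for a slower utterance speed in the speech-synthesis request.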
\subsection{Question and answer (Estimation of question category and rule-based answering)}
\label{ss:rule_QA}
For a dialogue system that uses natural language, users ask a wide variety of questions.
Direct collation against related questions is therefore difficult, because the number of anticipated question-answer pairs that would have to be prepared is large.
It is effective to estimate the category of a customer's question before searching for a matching question-answer pair, because the matching accuracy can be high once the category of the question has been narrowed down.
Table~\ref{tab:category} shows the fourteen categories to be classified.
Each question is classified into one category.
For classification, BERT \cite{Devlin2019} and Wikipedia2Vec \cite{Yamada2020} are employed.
For training, question-answer pairs are constructed by extending the POI information provided by the organizers with additional questions taken from the frequently-asked-questions pages of websites,
for example: ``Are there restaurants nearby?'' (cafe, restaurant, and service), ``When is the closed day?'' (open hours), ``If I come by car, is there any parking?'' (access information), and ``Is it possible to view the exhibition in English?'' (information of exhibition and experience).
For some categories, only a small amount of data is available, so additional questions are collected via crowdsourcing.
The system picks up candidate questions from the collection that belong to the estimated question category.
After this pickup, the corresponding answer is found by a rule-based keyword search.
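As a rough illustration of this two-stage pipeline (category prediction followed by keyword matching restricted to that category), the sketch below uses a TF-IDF plus logistic-regression classifier as a lightweight stand-in for the BERT/Wikipedia2Vec classifier actually employed; the toy training pairs, the whitespace keyword matching, and the dictionary layout are our own simplifications.
\begin{verbatim}
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_q = ["Are there restaurants nearby?", "When is the closed day?",
           "If I come by car, is there any parking?",
           "Is it possible to view the exhibition in English?"]
train_c = ["cafe, restaurant, and service", "open hours",
           "access information",
           "information of exhibition and experience"]

clf = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
                    LogisticRegression(max_iter=1000))
clf.fit(train_q, train_c)

def answer(question, qa_pairs_by_category):
    # restrict the keyword search to QA pairs of the predicted category
    category = clf.predict([question])[0]
    candidates = qa_pairs_by_category.get(category, [])
    hits = [(q, a) for q, a in candidates
            if any(w in question for w in q.split())]
    return hits[0][1] if hits else None   # None -> fall back to generation
\end{verbatim}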
\begin{table}[tb]
\centering
\tabcolsep 1pt
\caption{Question categories}
\label{tab:category}
\begin{tabular}{c||c}
\hline
cafe, restaurant, and service & accessibility \\
museum shop & rules \\
assistance of education & information of institution \\
open hours & access information \\
nearby POI & equipment \\
group admission & information of exhibition and experience \\
reservation & price \\
\hline
\end{tabular}
\end{table}
\begin{table*}[tb]
\centering
\caption{Example of responses to ``先週は神戸に行ったよ (Last week I went to Kobe)''}
\label{tab:response}
\begin{tabular}{c|c}
\hline
Model & Response\\
\hline
RNN (Chiebukuro/slack/dialogue book) & 神戸には行ったことがありますか (Did you go to Kobe?) \\
Transformer1 (Chiebukuro/slack/dialogue book) & そうなんですね。私も神戸に行きました (Really? I also went to Kobe) \\
Transformer2 (NTT released data) & どうだった?楽しかった? (How? Did you enjoy it?) \\
RNN+Transformer1 & そうなんですね。私も神戸に行きました (I see. I went to Kobe) \\
RNN+Transformer\{1+2\} & そうなんですね。私も行ってみたいです (I see. I'd like to go to Kobe)\\
\hline
\end{tabular}
\end{table*}
\subsection{Question and answer (Deep-learning-based system)}
It is impossible to answer all questions in the manner of Section~\ref{ss:rule_QA}.
To answer questions for which question-answer pairs cannot be prepared, deep-learning-based methods such as the sequence-to-sequence (seq2seq) method \cite{Sutskever2014} have been used.
A seq2seq model is a translation model from a user utterance to the system response \cite{Vinyals2015}.
OpenNMT \cite{opennmt} was used for answer generation.
Such models can generate responses to arbitrary user utterances, but they require a large amount of training data.
A Transformer \cite{Vaswani2017} was used in addition to an RNN encoder-decoder model.
First, we pre-trained the models on the open2ch and Twitter datasets.
Second, we fine-tuned the models on Yahoo! Chiebukuro, simulated dialogues collected internally via Slack, and the dialogue data attached to a book \cite{Higashinaka2020}.
Third, we further fine-tuned transformer model on the dialogue data released from NTT \footnote{\url{https://github.com/nttcslab/japanese-dialog-transformers}} and question and answer pairs from ``AI王''(AI king)\footnote{\url{https://sites.google.com/view/project-aio/dataset}}.
The RNN and the Transformers trained on the two types of data were combined to generate answers.
Table~\ref{tab:response} shows example responses to ``Last week, I went to Kobe''.
The RNN's responses are frequently too similar to the customer's utterance.
The Transformer trained on the NTT data generates varied responses, but they are too casual.
The combination can generate more reasonable responses.
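One simple way to realize such a combination is to let each model propose a candidate response and then rerank the candidates with lightweight heuristics; the scoring below (penalizing near-copies of the user utterance and replies of extreme length) is our own illustration, not the exact procedure used in the system.
\begin{verbatim}
import difflib

def pick_response(user_utt, candidates):
    # prefer replies that neither parrot the user nor drift in length
    def score(resp):
        overlap = difflib.SequenceMatcher(None, user_utt, resp).ratio()
        length_pen = abs(len(resp) - 15) / 15.0
        return -2.0 * overlap - 0.5 * length_pen
    return max(candidates, key=score)
\end{verbatim}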
\subsection{Search for recommended nearby point-of-interest}
Questions about nearby POIs are processed with the Google Places API\footnote{\url{https://developers.google.com/maps/documentation/places/web-service/overview}}, which can collect information on nearby POIs within less than 800~m.
If a customer specifies a genre of POI such as restaurant, cafe, or park, our system searches for POIs of the specified genre.
Using the Distance Matrix API\footnote{\url{https://developers.google.com/maps/documentation/distance-matrix/overview}}, the walking distances from the target POI to the found nearby POIs are sorted in ascending order.
Starting from the nearest POIs, our system collects their reviews and introduces the one with the highest rating, thereby avoiding negative comments.
If a review is too long, only its first two sentences are extracted.
If the customer did not ask any questions and dialogue time remains, the system proactively introduces a nearby POI from a genre that has not yet been introduced, even without being asked about nearby POIs.
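A condensed sketch of this lookup is given below; the endpoints and parameter names follow the public documentation of the Google Places and Distance Matrix web services, while the API key, the candidate limit, and the tie-breaking by rating are placeholders or simplifications on our part.
\begin{verbatim}
import requests

API_KEY = "YOUR_GOOGLE_API_KEY"   # placeholder

def recommend_nearby(lat, lng, genre="restaurant", radius=800):
    places = requests.get(
        "https://maps.googleapis.com/maps/api/place/nearbysearch/json",
        params={"location": f"{lat},{lng}", "radius": radius,
                "type": genre, "key": API_KEY}).json().get("results", [])[:20]
    if not places:
        return None
    dests = "|".join(f"{p['geometry']['location']['lat']},"
                     f"{p['geometry']['location']['lng']}" for p in places)
    dist = requests.get(
        "https://maps.googleapis.com/maps/api/distancematrix/json",
        params={"origins": f"{lat},{lng}", "destinations": dests,
                "mode": "walking", "key": API_KEY}).json()
    elems = dist["rows"][0]["elements"]
    walk = lambda i: elems[i].get("distance", {}).get("value", float("inf"))
    order = sorted(range(len(places)), key=walk)   # nearest first
    # among the nearest candidates, introduce the best-rated one
    best = max((places[i] for i in order[:5]),
               key=lambda p: p.get("rating", 0))
    return best["name"], best.get("rating")
\end{verbatim}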
\subsection{Robot control}
The facial expression and head movement of the android robot used in the competition can be controlled.
To give the customer a feeling of ease, the basic facial expression is a smile; occasionally, a neutral facial expression is used to convey a feeling of tension.
When a customer answers a question, the android nods.
When the robot explains the description of a POI, she gazes at the photo on the monitor.
Because the customer's age is provided by the organizers, the utterance speed is adjusted accordingly: faster for younger customers and slower for older ones.
\section{CONCLUSION}
We developed a dialogue system that recommends a POI matching the user's demographic attributes and preferences, which the system obtains through an interview with the customer.
To answer the various questions customers may ask, the system combines rule-based keyword search, neural answer generation, and a search for nearby POIs.
The facial expression, head movement, and utterance speed of the android robot are adjusted for a natural and comfortable dialogue.
| {
"attr-fineweb-edu": 1.74707,
"attr-cc_en_topic": 0,
"domain": "arxiv"
} |
BkiUcw04eIXhqTbAZczV | \section*{Introduction}
\vspace{-0.2in}
Individual achievement in competitive endeavors -- such as professional sports \cite{petersen2008distribution,saavedra2010mutually,petersen2011methods,radicchi2011best,mukherjee2012identifying,mukherjee2014quantifying,yucesoy2016untangling}, academia \cite{radicchi2009diffusion,petersen2011quantitative,petersen2012persistence,petersen_citationinflation_2018} and other competitive arenas \cite{radicchi2012universality,schaigorodsky2014memory,liu2018hot,barabasi2018formula} -- depends on many factors. Importantly, some factors are time dependent whereas others are not. Time dependent factors can derive from overall policy (rule changes) and biophysical shifts (improved nutrition and training techniques), to competitive group-level determinants (e.g. talent dilution of players from league expansion, and shifts in the use of backup players) and individual-specific enhancements (performance enhancing drugs (PEDs) \cite{mitchell2007report,mazanov2010rethinking} and even cognitive enhancing drugs (CEDs) \cite{sahakian2007professor,maher2008poll,greely2008towards}). Accounting for era-specific factors in cross-era comparison (e.g. ranking ) and decision-making (e.g. election of players to the Hall of Fame) is a challenging problem for cultural heritage management in the present-day multi-billion dollar industry of professional sports.
Here we analyze two prominent and longstanding sports leagues -- Major League Baseball (MLB) and the National Basketball Association (NBA) -- which feature rich statistical game data, and consequently, record-oriented fanbases \cite{ward1996baseball,simmons2010book}. Each sport has well-known measures of greatness, whether they are single-season benchmarks or career records, that implicitly assume that long-term trends in player ability are negligible. However, this is frequently not the case, as a result of time-dependent endogenous and exogenous performance factors underlying competitive advantage and individual success in sport. Take for example the home run in baseball, for which the frequency (per-at-bat) has increased 5-fold from 1919 (the year that Babe Ruth popularized the achievement and took hold of the single-season record for another 42 years) to 2001 (when Barry Bonds hit 73 home runs, roughly 2.5 times as many as Ruth's record of 29 in 1919 \cite{petersen2008distribution}). Yet as this example illustrates, there is a measurement problem challenging the reverence of such {\it all-time} records, because it is implicitly assumed that the underlying success rates are stationary (i.e. the average, standard deviation and higher-order moments of success rates are time-independent), which is likely not the case -- especially when considering the entire history of a sport.
Indeed, this fundamental measurement problem is further compounded when considering career metrics, which for many great athletes span multiple decades of play, and thus possibly span distinct eras defined by specific events (e.g. the 1969 lowering of the pitching mound in Major League Baseball which notably reduced the competitive advantage of pitchers, and the introduction of the 3-point line to the NBA in 1979). By way of example, consider again the comparison of Barry Bonds (career years 1986-2007) and Babe Ruth (1914-1935). Despite the fact that Barry Bonds is also the career-level home-run leader (762 home runs total; see Supplementary Material Appendix Table S1), one could argue that, since other contemporaneous sluggers during the `steroids era' (the era during which Bonds primarily played) were also hitting home-runs at relatively high rates, these nominal achievements were relatively less outstanding -- in a statistical sense -- compared to players from other eras when baseline home-run rates were lower. Thus, if the objective is to identify achievements that are outstanding relative to both contemporaneous peers and all historical predecessors, then standardized measures of achievement that account for the time-dependent performance factors are needed.
In general, we argue that in order to compare human achievements from different time periods, success metrics should be {\it renormalized} to a common index (also termed `detrended' or `deflated' in other domains \cite{petersen2011methods,petersen_citationinflation_2018,petersen2018mobility}), so that the time dependent factors do not bias statistical comparison.
Hence, we address this measurement problem by leveraging the annual distributions of individual player achievement derived from comprehensive player data comprised of more than 21,000 individual careers spanning the entire history of both MLB and the NBA through the late 2000s for which data is collected \cite{BaseballData,BasketballData}. More specifically, we apply an intuitive statistical method that neutralizes time-dependent factors by renormalizing players' annual achievements to an annual inter-temporal average measuring characteristic player {\it prowess} -- operationalized as ability per in-game opportunity. In simple terms, this method corresponds to a simple rescaling of the achievement metric baseline. We show that this method succeeds in part due to the relatively stable functional form of the annual performance distributions for the seven performance metrics we analyzed: batter home runs (HR), batter hits (H), pitcher strikeouts (K) and pitcher wins (W) for MLB; and points scored (Pts.), rebounds (Reb.) and assists (Ast.) for the NBA. As a result, the outputs of our renormalization method are self-consistent achievement metrics that are more appropriate for comparing and evaluating the relative achievements of players from different historical eras.
In order to make our statistical analysis accessible, we use the most natural measures for accomplishment -- the statistics that are listed in typical box-scores and on every baseball and basketball card, so that the results are tangible to historians and casual fans interested in reviewing and discussing the ``all-time greats.'' Without loss of generality, our method can readily be applied to more sophisticated composite measures that are increasingly prevalent in sports analytics (e.g. `Win Shares' in baseball \cite{james2002win}). However, other sophisticated measures that incorporate team-play data (e.g. Box Plus Minus for basketball) or context-specific play data (e.g. Wins Above Replacement for baseball) are less feasible due to the difficulty in obtaining the necessary game-play information, which that is typically not possible to reconstruct from crude newspaper boxscores, and thus limits the feasibility of performing comprehensive historical analysis.
Notwithstanding these limitations, this study addresses two relevant questions:
\begin{enumerate}
\item How to quantitatively account for economic, technological, and social factors that influence the rate of achievement in competitive professions.
\item How to objectively compare individual career metrics for players from distinct historical eras. By way of example, this method could facilitate both standard and
retroactive induction of athletes into Halls of Fame. This is particularly relevant given the `inflation' in the home run rate observed in Major League Baseball during the `steroids era' \cite{petersen2008distribution,mitchell2007report}, and the overarching challenges of accounting for PEDs and other paradigm shifts in professional sports.
\end{enumerate}
\noindent This work contributes to an emerging literature providing a complex systems perspective on sports competitions and people analytics, in particular by highlighting the remarkable level of variation in annual and career performance metrics. Such high levels of variability point to the pervasive role of non-linear dynamics underlying the evolution of both individual and team competition.
\vspace{-0.2in}
\section*{Methods}
\vspace{-0.2in}
We define prowess as an individual player's ability to succeed in achieving a specific outcome $x$ (e.g. a HR in MLB or a Reb. in the NBA) in any given opportunity $y$ (here defined to be an at-bat (AB) or Inning-Pitched-in-Outs (IPO), for batters and pitchers respectively, in MLB; or a minute played in the NBA). Thus, our method implicitly accounts for a fundamental source of variation over time, which is growth in league size and games per season, since all outcome measures analyzed are considered on a per-opportunity basis.
\begin{figure}
\centering{\includegraphics[width=0.99\textwidth]{Fig1_HR_Pts_Prowess_LowRes.pdf}}
\caption{ \label{figure:F1} {\bf Non-stationary evolution of player prowess in professional Baseball and Basketball.} The seasonal prowess $\langle P(t) \rangle$ measures the relative success per opportunity rate using appropriate measures for a given sport. By normalizing accomplishments with respect to $\langle P(t) \rangle$, we objectively account for variations in prowess derived from endogenous and exogenous factors associated with the evolution of each sport. (A) The home-run prowess shows a significant increasing trend since 1920, reflecting the emergence of the modern ``slugger'' in MLB. Physiological, technological, economic, demographic and social factors have played significant roles in MLB history \cite{ward1996baseball}, and are responsible for sudden upward shifts observed for $\langle P_{HR}(t) \rangle$. (B) Scoring prowess exhibits a non-monotonic trend. Horizontal dashed lines correspond to the average value of each curve, $\overline{P}$, calculated over the entire period shown. See subpanels in Figure 4 for the prowess time series calculated for all 7 metrics analyzed. }
\end{figure}
Figure \ref{figure:F1} shows the evolution of home run prowess in MLB over the 139-year period 1871-2009 and the evolution of scoring prowess in the NBA over the 58-year period 1951-2008. It was beyond the scope of this work to update the performance data to the present, which is a clear limitation of our analysis, but such a right-censoring issue is unavoidable with every passing year. Regardless, with data extending to the beginning of each league, our analysis accounts for several major paradigm shifts in each sport that highlight the utility of the method.
Indeed, while HR prowess has increased in era-specific bursts, point-scoring prowess shows different non-monotonic behavior that peaked in the early 1960s. Taken together, these results demonstrate the non-stationary evolution of player prowess over time with respect to the specific achievement metrics.
What this means from practical game, season and career perspectives, is that the occurrence of a home run in 1920 was much more significant from a statistical perspective (as it was relatively rarer per opportunity) than a home run at the turn of the 21st century, which was the peak period of HR prowess (during which numerous players were implicated by the Mitchell Report \cite{mitchell2007report} regarding an investigation into performance-enhancing drug use in MLB). By way of economic analogy, while the nominal baseball ticket price in the early 20th century was around 50 cents, the same ticket might nominally cost 100 times as much in present-day US dollars, which points to the classic problem of comparing crude nominal values. To address this measurement problem, economists developed the `price deflator' to account for the discrepancy in nominal values by mapping values recorded in different periods to their `real' values, a procedure that requires measuring price values relative to a common baseline year. Hence, in what follows, our approach is a generalization of the common method used in economics to account for long-term inflation, and readily extends to other metric-oriented domains biased by persistent secular growth, such as scientometrics \cite{petersen_citationinflation_2018,petersen2018mobility}.
Thus, here the average prowess serves as a baseline `deflator index' for comparing accomplishments achieved in different years and thus distinct historical eras. We conjecture that the changes in the average prowess are related to league-wide factors which can be quantitatively neutralized (also referred to as `detrended' or `deflated') by renormalizing individual accomplishments by the average prowess for a given season. To achieve this renormalization we first calculate the prowess $P_{i}(t)$ of an individual player $i$ as $P_{i}(t) \equiv x_{i}(t) / y_{i}(t)$, where $x_{i}(t)$ is an individual's total number of successes out of his/her total number $y_{i}(t)$ of opportunities in a given year $t$.
To compute the league-wide average prowess, we then compute the aggregate prowess as the success rate across all opportunities,
\begin{equation}
\langle P(t) \rangle \equiv \frac{\sum_{i} x_{i}(t)}{\sum_{i} y_{i}(t)} \ .
\end{equation}
In practical terms, we apply the summation across $i$ only over players with at least $y_c$ opportunities during year $t$; as such, the denominator represents the total number of opportunities across the subset of $N_{c}(t)$ players in year $t$. We implemented thresholds of $y_{c} \equiv$ 100 AB (batters), 100 IPO (pitchers), and 24 Min. (basketball players) to discount statistical fluctuations arising from players with very short seasons. The results of our renormalization method are robust to reasonable choices of $y_{c}$ that exclude primarily just the trivially short seasons, with a relatively large subset of $N_{c}(t)$ players remaining.
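For illustration, the seasonal average prowess could be computed from player-season records along the following lines; this is a minimal sketch assuming a pandas DataFrame with hypothetical columns \texttt{year}, \texttt{successes} and \texttt{opportunities}, and is not the exact pipeline used to produce the results reported here.
\begin{verbatim}
import pandas as pd

def average_prowess(df: pd.DataFrame, y_c: int) -> pd.Series:
    """Seasonal average prowess <P(t)> = sum_i x_i(t) / sum_i y_i(t),
    restricted to players with at least y_c opportunities in year t."""
    eligible = df[df["opportunities"] >= y_c]
    totals = eligible.groupby("year")[["successes", "opportunities"]].sum()
    return totals["successes"] / totals["opportunities"]

# Example: home-run prowess with the 100-AB threshold used in the text,
# where df holds one row per player-season.
# P_t = average_prowess(df, y_c=100)
\end{verbatim}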
Finally, the renormalized achievement metric for player $i$ in year $t$ is given by
\begin{equation}
x^{D}_{i}(t) \equiv x_{i}(t) \ \frac {P_{\text{baseline}}}{\langle P(t) \rangle} \ ,
\label{xdsingle}
\end{equation}
where $P_{\text{baseline}}$ is the arbitrary value applied to all $i$ and all $t$, which establishes a common baseline.
For example, in prior work \cite{petersen2011methods} we used $P_{\text{baseline}} \equiv\overline{P}$, the average prowess calculated across all years (corresponding to the dashed horizontal lines in Fig. \ref{figure:F1}). Again, because the choice of baseline is arbitrary, in this work we renormalize HR statistics in MLB relative to the most recent prowess value, $P_{\text{baseline}} \equiv \langle P(2009) \rangle$, a choice that facilitates contrasting with the results reported in \cite{petersen2011methods}; and for all 6 other performance metrics we normalize using $P_{\text{baseline}} \equiv \overline{P}$.
Applying this method we calculated renormalized metrics at both the single season level, corresponding to $x_{i}^{D}(t)$, and the total career level, corresponding to the aggregate player tally given by
\begin{eqnarray}
X^{D}_{i} &=& \sum_{s=1}^{L_{i}}x^{D}_{i}(s) \ ,
\label{XD}
\end{eqnarray}
where $s$ is an index for player season and $L_{i}$ is the player's career length measured in seasons.
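As a minimal sketch of Eqs. (\ref{xdsingle})-(\ref{XD}), the renormalized season metrics and career totals could then be computed as follows; the DataFrame layout, the \texttt{player} column, and the choice of \texttt{P\_baseline} are assumptions carried over from the previous snippet rather than the exact implementation used here.
\begin{verbatim}
def renormalize(df, P_t, P_baseline):
    """Season-level x^D_i(t) = x_i(t) * P_baseline / <P(t)>, then
    career-level X^D_i as the sum over a player's seasons."""
    out = df.copy()
    out["x_D"] = out["successes"] * P_baseline / out["year"].map(P_t)
    careers = out.groupby("player")[["successes", "x_D"]].sum()
    return out, careers.rename(columns={"successes": "X", "x_D": "X_D"})

# Baseline anchored to the most recent season, as done for HR in this work:
# seasons, careers = renormalize(df, P_t, P_baseline=P_t.loc[2009])
# or anchored to the all-year average prowess:
# seasons, careers = renormalize(df, P_t, P_baseline=P_t.mean())
\end{verbatim}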
\vspace{-0.2in}
\section*{Results}
\vspace{-0.2in}
We applied our renormalization method to two prominent and historically relevant North American professional sports leagues, using comprehensive player data comprised of roughly 17,000 individual careers spanning more than a century of league play in the case of MLB (1871-2009) and roughly 4,000 individual careers spanning more than a half-century (1946-2008) of league play in the case of the NBA. Together, these data represent roughly 104,000 career years and millions of in-game opportunities consisting of more than 13.4 million at-bats and 10.5 million innings-pitched-in-outs in the MLB, and 24.3 million minutes played in the NBA through the end of the 2000s decade.
\begin{figure}
\centering{\includegraphics[width=0.75\textwidth]{Fig2_AveDetAve_LowRes.pdf}}
\caption{ \label{figure:F2} {\bf Renormalizing player performance metrics addresses systematic performance-inflation bias.} The annual league average for HR (MLB) and Pts. (NBA) calculated before versus after renormalizing the performance metrics -- i.e., panels (A,C) show $\langle x(t )\rangle$ and (B,D) show $\langle x^{D}(t )\rangle$. League averages are calculated using all players (black) and a subset of players with sufficient season lengths as to avoid fluctuations due to those players with trivially short season lengths (orange). Horizontal dashed lines correspond to the average value of each curve over the entire period. Significant dips are due to prominent player strikes resulting in game cancellations, in 1981, 1994 and 1995 for MLB and 1995-96 for the NBA.}
\end{figure}
\begin{figure}
\centering{\includegraphics[width=0.75\textwidth]{Fig3_AnnualPDF_LowRes.pdf}}
\caption{ \label{figure:F3} {\bf Distribution of annual player performance -- comparing traditional and renormalized metrics.} Each curve corresponds to the distribution $P(x)$ for traditional (A,C) and renormalized metrics (B,D); season level data were separated into non-overlapping observation periods indicated in each legend. $P(x)$ estimated using a kernel density approximation, which facilitates identifying outlier values. The renormalized metrics in panels (B,D) show improved data collapse towards a common distribution for a larger range of $x$ values (but not including the extreme tails which correspond to outlier achievements), thereby confirming that our method facilitates the distillation of a universal distribution of seasonal player achievement. Regarding the case of HR in panels (A,B), our method facilitates highlighting outlier achievements that might otherwise be obscured by underlying shifts in prowess; such is the case for Babe Ruth's career years during the 1920's, which break the {\it all-time scales}, as shown in Appendix Table S1.}
\end{figure}
Figure \ref{figure:F2} compares the league averages for home runs in MLB and points scored in the NBA, calculated using all players in each year (black curves) and just the subset of players with $y_{i}(t) \geq y_{c}$ (orange curves) in order to demonstrate the robustness of the method with respect to the choice of $y_{c}$. More specifically, Fig. \ref{figure:F2}(A,C) shows the league average based upon the traditional ``nominal'' metrics, computed as $\langle x (t) \rangle \equiv N_{c}(t)^{-1}\sum_{i}x_{i}(t)$, while Fig. \ref{figure:F2}(B,D) show the league average based upon renormalized metrics, $\langle x(t)^{D} \rangle \equiv N_{c}(t)^{-1}\sum_{i}x^{D}_{i}(t)$; the sample size $N_{c}(t)$ counts the number of players per season satisfying the opportunity threshold $y_{c}$.
In order to demonstrate the utility of this method to address the non-stationarity in the nominal or ``raw'' player data, we applied the Dickey-Fuller test \cite{dickey1979distribution} to the historical time series for the per-opportunity success rates (measured by $\langle P(t) \rangle$) and the corresponding league averages ($\langle x(t )\rangle$ and $\langle x^{D}(t )\rangle$). More specifically, we applied the test using an autoregressive model with drift to each player metric, and report the test statistic and corresponding $p$-value used to test the null hypothesis that the data follow a non-stationary process. For example, in the case of Home Runs: for the time series $\langle P(t) \rangle$ (respectively $\langle HR(t)\rangle$) we obtain a test statistic = -3.7 (-6.5) and corresponding $p$-value = 0.57 (0.3), meaning that we fail to reject the null hypothesis, consistent with the prowess time series (league average time series) being non-stationary; contrariwise, for the renormalized league average time series $\langle HR^{D}(t)\rangle$ we obtain a test statistic = -31.5 and $p$-value = 0.0004, indicating that the data follow a stationary time series. Repeating the same procedure for Points: for $\langle P(t) \rangle$ (resp. $\langle PTS(t)\rangle$) we obtain a test statistic = -6.4 (-9.6) and corresponding $p$-value = 0.3 (0.13), again consistent with both time series being non-stationary; contrariwise, for the renormalized league average $\langle PTS^{D}(t)\rangle$ we obtain a test statistic = -22.2 and $p$-value = 0.003, indicating that the renormalized data follow a stationary time series. We observed this same pattern, in which the renormalization method transforms a non-stationary time series into a stationary one, for Strikeouts, Rebounds and Assists; whereas in the case of Wins and Hits, the Dickey-Fuller tests applied to $\langle P(t) \rangle$ and $\langle x(t )\rangle$ indicate that these time series are already stationary.
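The stationarity check can be reproduced in spirit with an augmented Dickey-Fuller test; the sketch below uses \texttt{statsmodels} with a constant (drift) term, though the lag selection and exact options of the original analysis are not specified here, so the numbers it returns need not match those quoted above.
\begin{verbatim}
from statsmodels.tsa.stattools import adfuller

def adf_report(series, name):
    """Augmented Dickey-Fuller test with drift.
    Null hypothesis: the series contains a unit root (is non-stationary)."""
    stat, pvalue = adfuller(series.dropna(), regression="c")[:2]
    verdict = "stationary" if pvalue < 0.05 else "fail to reject non-stationarity"
    print(f"{name}: ADF stat = {stat:.2f}, p = {pvalue:.3f} -> {verdict}")

# e.g. adf_report(P_t, "<P_HR(t)>")                 # prowess time series
#      adf_report(league_avg_renorm, "<HR^D(t)>")   # renormalized league average
\end{verbatim}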
Notably, as a result and consistent with a stationary data generation process, the league averages are more constant over time after renormalization, thereby demonstrating the utility of this renormalization method to standardize multi-era individual achievement metrics. Nevertheless, there remain deviations from a perfectly horizontal line following from phenomena not perfectly captured by our simple renormalization. Indeed, extremely short seasons and careers, and the phenomena underlying the prevalence of these short careers, can bias the league average estimates. For example, in 1973 the designated hitter rule was introduced into the American League, comprising half of all MLB teams, which skews the number of at-bats per player by position, since half of pitchers no longer tended to take plate appearances after this rule change. Consequently, there is a prominent increase in the average home runs per player in 1973 corresponding to this rule change, visible in Figure \ref{figure:F2}(B) in the curve calculated for all MLB players (black curve), because roughly 1 in 2 pitchers (who are not typically power hitters) did not enter into the analysis thereafter. For similar reasons, our method does not apply as well to pitcher metrics because of a compounding decreasing trend in the average number of innings pitched per game due to the increased role of relief pitchers in MLB over time; accounting for such strategic shifts in the role and use of individual player types could also be included within our framework, but is outside the scope of the present discourse and so we leave it for future work. See Fig. 5 in ref. \cite{petersen2011methods} for additional details on this point, in addition to a more detailed development of our renormalization method in the context of MLB data only.
Based upon the convergence of the seasonal player averages to a consistent value that is weakly dependent on year, the next question is to what degree the annual distributions for these player metrics collapse onto a common curve, both before and after application of the renormalization method.
To address this question, Figure \ref{figure:F3} shows the probability density function (PDF) $P(x)$ for the same two metrics, HR and points scored, measured at the season level.
For each case we separate the data into several non-overlapping periods. It is important to recall that $P_{\text{baseline}} \equiv \langle P(2009) \rangle$ for HR and $P_{\text{baseline}} \equiv \overline{P}$ for Pts. Consequently, there is a significant shift in the range of values for HR but not for Pts., which facilitates contrasting the benefits provided by these two options.
In the case of HR, the scale shifts from a maximum of 73 HR (corresponding to Barry Bonds' 2001 single-season record) to 214 renormalized HR (corresponding to Babe Ruth's 59 nominal HR in 1921). While this latter value may be unrealistic, it nevertheless highlights the degree to which Babe Ruth's slugging achievements were outliers relative to his contemporaneous peers, further emphasizing the degree to which such achievements are under-valued by comparisons based on nominal metrics. In the case of Pts., in which there is negligible rescaling due to the choice of $P_{\text{baseline}} \equiv \overline{P}$, we observe a compacting at the right tail rather than the divergence observed for HR. And in both cases, we observe a notable data collapse in the bulk of $P(x)$. For example, Fig. \ref{figure:F3}(B) collapses to a common curve for the majority of the data, up to the level of $x^{D}\approx 35$ renormalized HR. In the case of NBA points scored, the data collapse in Fig. \ref{figure:F3}(D) extends to the level of $x^{D}\approx 2500$ renormalized Pts., whereas for the traditional metrics in Fig. \ref{figure:F3}(C) the data collapse extends only to the level of $x\approx 1000$ Pts.
Figure \ref{figure:F4} shows the empirical distributions $P(X)$ and $P(X^{D})$ for career totals, addressing to what degree renormalization of season-level metrics impacts the achievement distributions at the career level. Also plotted along with each empirical PDF is the distribution model fit calculated using the Maximum Likelihood Estimation (MLE) method. In previous work \cite{petersen2011methods} we highlighted the continuous-variable Gamma distribution as a theoretical model, given by
\begin{eqnarray}
P_{\Gamma}(X \vert \alpha, X_{c}) &\propto& X^{-\alpha} \exp[-X/X_{c}] \ .
\label{PDFGamma}
\end{eqnarray}
This distribution is characterized by two parameters: the scaling parameter $\alpha$ (empirically observed sub-linear values range between 0.4 and 0.7) captures the power-law decay, while the location parameter $X_{c}$ represents the onset of extreme outlier achievement terminated by an exponential cutoff arising from finite size effects (finite season and career lengths); see ref. \cite{petersen2011methods} for estimation of the best-fit Gamma distribution parameters for MLB data.
We also highlight an alternative theoretical model given by the discrete-variable Log-Series distribution,
\begin{eqnarray}
P_{LS}(X \vert p) &\propto& p^{X}/X \ \approx X^{-1} \exp[-X/X_{c}] \ .
\label{PDFLS}
\end{eqnarray}
In particular, this model distribution is characterized by a single parameter $0 < p < 1$; for example, in the case of HR we estimate $p=0.996975$. In such a case where $p \approx 1$ (hence $1-p \ll 1$), the approximation in Eq. (\ref{PDFLS}) follows, giving rise to the exponential cutoff value $X_{c} = 1/(1-p)$. As a historical note, the Log-Series PDF was originally proposed in ecological studies \cite{FisherLogSeriesDist}.
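The two candidate models can be fit by MLE along the following lines; this is an illustrative sketch using \texttt{scipy}, with the Gamma location fixed at zero and career totals rounded to integers for the discrete Log-Series likelihood, neither of which is necessarily the exact estimation procedure behind the reported fits.
\begin{verbatim}
import numpy as np
from scipy import stats, optimize

def fit_gamma(X):
    """MLE fit of the continuous Gamma model; scipy's pdf ~ x^(shape-1) exp(-x/scale),
    so the exponent of Eq. (4) corresponds to alpha = 1 - shape and X_c = scale."""
    shape, loc, scale = stats.gamma.fit(X, floc=0)
    return 1.0 - shape, scale

def fit_log_series(X):
    """MLE fit of the Log-Series parameter p (Eq. 5) over integer totals k >= 1."""
    k = np.round(np.asarray(X)).astype(int)
    k = k[k >= 1]
    nll = lambda p: -np.sum(stats.logser.logpmf(k, p))
    res = optimize.minimize_scalar(nll, bounds=(1e-6, 1 - 1e-9), method="bounded")
    return res.x

# p_hat = fit_log_series(career_totals)   # e.g. close to 0.997 for MLB home runs
# X_c = 1.0 / (1.0 - p_hat)               # implied exponential cutoff
\end{verbatim}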
In this work we find $P_{LS}(X)$ to provide a better fit than $P_{\Gamma}(X)$ for the empirical career distributions for MLB data, but not for NBA data. As such, the fit curves for MLB in Fig. \ref{figure:F4}(A-D) correspond to $P_{LS}(X)$, whereas the fit curves for the NBA in Fig. \ref{figure:F4}(E-G) correspond to $ P_{\Gamma}(X)$. This subtle difference in the functional form of the $P(X)$ distributions may be the starting point for understanding variations in competition and career development between these two professional sports. We refer the detail-oriented reader to ref. \cite{petersen2011quantitative} for further discussion on the analytic properties of $ P_{\Gamma}(X \vert \alpha, X_{c})$, as derived from a theoretical model of career longevity, which provides an intuitive mechanistic understanding of $\alpha$ and $X_{c}$. While in previous work we have emphasized the estimation, significance and meaning of distribution parameters, here we are motivated to demonstrate the generalizability of the renormalization method, and so we leave the analysis of different $P(X)$ parameter estimations between leagues as a possible avenue for future research.
Notably, Figure \ref{figure:F4} shows that each pair of empirical data, captured by $P(X)$ and $P(X^{D})$, exhibit relatively small deviations from each other in distribution.
Interestingly, metrics representing achievements with relatively lower per-opportunity success rates (home runs, strikeouts and rebounds) are more sensitive to time-dependent success factors than those with relatively higher success rates (hits, wins and points). This pattern can also be explained in the context of the Dickey-Fuller test results, which indicated that the Wins and Hits metrics are sufficiently stationary to begin with.
In all, our results indicate that the extremely right-skewed (heavy-tailed) nature of player achievement distributions reflects intrinsic properties underlying achievement that are robust to the inflationary and deflationary factors that influence success rates, once these factors are accounted for. The stability of the $P(X)$ and $P(X^{D})$ distributions at the aggregate level is offset by the local reordering at the rank-order level -- see the Supplementary Material Appendix for ranked tables. In short, for the NBA we provide 6 extensive tables that list the top-50 all-time achievements comparing traditional and renormalized metrics -- at both the season and career level; and for MLB we provide a top-20 ranking for career home runs, and refer the curious reader to ref. \cite{petersen2011methods} for analog tables listing top-50 rankings.
\begin{figure}
\centering{\includegraphics[width=0.84\textwidth]{Fig4_CareerPDF_ProwessSubpanel_LowRes.pdf}}
\caption{ \label{figure:F4} {\bf Distribution of career achievement totals -- comparing traditional and renormalized metrics.} Data points represent the empirical PDFs calculated for traditional (gray) and renormalized (red) metrics. Vertical dashed lines indicate the location of the 95th percentile value ($P_{95}$) for each distribution, indicating the onset of {\it all-time greats} likely to be honored in each league's Hall of Fame. Each solid line corresponds to a distribution fit estimated using the MLE method; panels (A-D) are fit using the Log-Series distribution defined in Eq. (\ref{PDFLS}) and (E-G) are fit using the Gamma distribution defined in Eq. (\ref{PDFGamma}); see ref. \cite{petersen2011methods} for estimation of the best-fit Gamma distribution parameters for MLB data. The deviations between $P(X)$ and $P(X^{D})$ are less pronounced than the counterparts $P(x)$ and $P(x^{D})$ calculated at the seasonal level, indicating that the overall distribution of career achievement is less sensitive to shifts in player prowess -- however, this statement does not necessarily apply to the {\it ranking} of individuals, which can differ remarkably between the traditional and renormalized metrics. (Insets) Time series of average league prowess for each metric to facilitate cross-comparison and to highlight the remarkable statistical regularity in the career achievement distributions despite the variability in player prowess across time; Horizontal dashed lines correspond to the average value $\overline{P}$ calculated over the entire period shown. }
\end{figure}
\vspace{-0.2in}
\section*{Discussion}
\vspace{-0.2in}
The analysis of career achievement features many characteristics of generic multi-scale complex systems. For example, we document non-stationarity arising from the growth of the system along with sudden shifts in player prowess following rule changes (i.e. policy interventions). Other characteristics frequently encountered in complex systems are the entry and exit dynamics associated with finite life-course and variable career lengths, and memory with consequential path dependency associated with cumulative advantage mechanisms underlying individual pathways to success. To address these challenges, researchers have applied concepts and methods from statistical physics \cite{petersen2008distribution,petersen2011quantitative,petersen2012persistence,schaigorodsky2014memory} and network science \cite{saavedra2010mutually,radicchi2011best,mukherjee2012identifying,mukherjee2014quantifying} to professional athlete data, revealing statistical regularities that provide a better understanding of the underlying nature of competition. Notably, academia also exhibits analogous statistical patterns that likely emerge from the general principles of competitive systems, such as the extremely high barriers to entry which may explain the highly skewed career longevity distributions \cite{petersen2011quantitative} and first-mover advantage dynamics that amplify the long-term impact of uncertainty \cite{petersen2012persistence}. By analogy, renormalized scientometrics are needed in order to compare researcher achievements across broad time periods \cite{petersen_citationinflation_2018}, for example recent work leveraged renormalized citations to compare the effects of researcher mobility across a panel of individuals spanning several decades \cite{petersen2018mobility}.
Motivated by the application of complex systems science to the emerging domain of people analytics, we analyzed comprehensive player data from two prominent sports leagues in order to objectively address a timeless question -- who's the greatest of all time? To this end, we applied our renormalization method in order to obtain performance metrics that are more suitable for cross-era comparison, thereby addressing motivation (1) identified in the introduction section. From a practical perspective, our method renormalizes player achievement metrics with respect to player success rates, which facilitates removing time-dependent trends in performance ability relating to various physiological, technological, and economic factors.
In particular, our method accounts for various types of historical events that have increased or decreased the rates of success per player opportunity, e.g. modern training regimens, PEDs, changes in the physical construction of bats and balls and shoes, sizes of ballparks, talent dilution of players from expansion, etc. While in previous work we applied our renormalization method exclusively to MLB career data \cite{petersen2011methods}, here we demonstrate the generalizability of the method by applying it to an entirely different sport. Since renormalized metrics facilitate objective comparison of player achievements across distinct league eras, in principle an appropriate cross-normalization could also facilitate comparison across different sports.
The principal requirements of our renormalization method are: (a) individual-oriented metrics recording achievements as well as opportunities, even if the sport is team-oriented; and (b) data be comprehensively available for all player opportunities so that per-opportunity success rates can be consistently and robustly estimated.
We then use the prowess time-series $\langle P(t) \rangle$ as an `achievement deflator' to robustly capture time-dependent performance factors. Take for example assists in the NBA, for which the average player prowess $\langle P(t) \rangle$ peaked in 1984 during the era of point-guard dominance in the NBA, and then decreased 25\% by 2008 (see Fig. \ref{figure:F4}G). This decline captures a confluence of factors including shifts in team strategy and dynamics, as well as other individual-level factors (i.e. since an assist is contingent on another player scoring, assist frequencies depend also on scoring prowess). More generally, such performance factors may affect players differently depending on their team position or specialization, and so this is another reason why comprehensive player data is necessary to capture league-wide paradigm shifts.
The choice of renormalization baseline $P_{\text{baseline}}$ also affects the resulting renormalized metric range. Consequently, the arbitrary value selected for $P_{\text{baseline}}$ can be used to emphasize the occasional apparently super-human achievements of bygone greats when measured using contemporary metrics. For example, we highlight the ramification of this choice in the case of home runs, for which we used $P_{\text{baseline}} \equiv \langle P_{HR}(2009) \rangle$, such that Fig. \ref{figure:F3}(B) shows season home-run tallies measured in units of 2009 home-runs. As a result, the maximum value in the season home-run distribution corresponds to Babe Ruth's career year in 1921 (and in fact not 1927, when HR prowess was relatively higher) in which he hit the equivalent of 214 renormalized Home Runs (or 2009 HRs).
Alternatively, we also demonstrate how using the average prowess value as the baseline, $P_{\text{baseline}} \equiv \overline{P}$, yields a renormalized metric range that is more consistent with the range of traditional metrics, as illustrated by the distributions in Fig. \ref{figure:F3}(C,D). In such cases when the prowess time series is non-monotonic, there may not be a unique year corresponding to a given prowess value used as $P_{\text{baseline}}$. This is the case for assists, see Fig. \ref{figure:F4}(G), since assist prowess peaked in the mid-1980s. As a result, players whose careers fall significantly before or after this period, when prowess values were lower, will have relatively greater renormalized assist metrics.
To facilitate visual inspection of how the nominal values translate into renormalized values, we provide 6 tables in the Supplementary Material Appendix that rank NBA metrics at the season and career levels (see \cite{petersen2011methods} for analog tables ranking MLB player achievements).
All tables are split into left (traditional ranking) and right sides (renormalized ranking). For example, Table S6 starts with: \\
\begin{table}[h!]
\begin{tabular}{lccc||lcccc}
\multicolumn{4}{c||}{Traditional Rank}&\multicolumn{5}{c}{Renormalized Rank}\\
Rank & Name & Season ($Y\#$) & Season Metric & Rank$^{*} $(Rank) & \% Change & Name & Season ($Y\#$) & Season Metric \\
\hline
1 & Wilt Chamberlain & 1960 (2) &2149 & 1(28) & 96 & Dennis Rodman & 1991 (6) &1691 \\
\end{tabular}
\end{table}
\noindent This line indicates that in the 1960-61 season, Wilt Chamberlain obtained 2149 rebounds, the most for a single season, corresponding to his second career year (Y\#).
However, according to renormalized metrics, Dennis Rodman's 6th career year, the 1991-1992 season, emerges as the greatest achievement in terms of renormalized rebounds (1691), despite being ranked \#28 all-time according to the nominal value (1530 rebounds), a shift corresponding to a 96\% rank increase.
Not all metrics display such profound re-ranking among the all-time achievements. Such is the case for Wilt Chamberlain's single-season scoring record (see Table S5) and John Stockton's single-season assists record (see Table S7), which maintain their top ranking after renormalization.
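The percent rank changes quoted in these tables can be reproduced under the assumption that they are computed as the relative change in rank position, $(\mathrm{rank}_{\mathrm{old}} - \mathrm{rank}_{\mathrm{new}})/\mathrm{rank}_{\mathrm{old}}$; the following sketch regenerates such a rank-shift comparison from two metric dictionaries keyed by player-season (the keying scheme is illustrative).
\begin{verbatim}
def rank_shift(nominal, renormalized):
    """Compare rankings under nominal vs renormalized metrics.
    Returns rows of (entry, new_rank, old_rank, percent rank change)."""
    old_rank = {k: r for r, (k, _) in enumerate(
        sorted(nominal.items(), key=lambda kv: -kv[1]), start=1)}
    rows = []
    ranked_new = sorted(renormalized.items(), key=lambda kv: -kv[1])
    for new_rank, (entry, _) in enumerate(ranked_new, start=1):
        pct = 100.0 * (old_rank[entry] - new_rank) / old_rank[entry]
        rows.append((entry, new_rank, old_rank[entry], round(pct)))
    return rows

# Dennis Rodman's 1991-92 season moves from rank 28 (nominal) to rank 1
# (renormalized): 100 * (28 - 1) / 28 ~ 96, matching the quoted 96% change.
\end{verbatim}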
Also at the season level, another source of variation in addition to performance factors is the wide range of ability and achievement rates across individuals.
Consequently, renormalization based upon average league prowess, $\langle P(t) \rangle$, can be strongly influenced by outlier achievements at the player-season level. Fig. \ref{figure:F3} illustrates season-level performance distributions for HR and Points, comparing the distributions calculated for nominal metrics, $P(x)$, and renormalized metrics, $P(x^{D})$.
Because $\langle P(t) \rangle$ captures average performance levels, the data collapse across achievement distributions drawn from multiple eras in Fig. \ref{figure:F3} is weakest in the right tails that capture outlier player performance. Nevertheless, the data collapse observed in the bulk of the $P(x^{D})$ distributions indicates that the variation in player achievements, an appropriate proxy for league competitiveness, has been relatively stable over the history of each league.
At the career level, this comprehensive study of all player careers facilitates a better appreciation for the relatively high frequencies of {\it one-hit wonders} -- individuals with nearly minimal achievement metrics -- along with much smaller but statistically regular and theoretically predictable frequencies of superstar careers. By way of example, previous work reveals that roughly 3\% of non-pitchers (pitchers) have a career lasting only one at-bat (lasting an inning or less) and 5\% of non-pitchers complete their career with just a single hit. Yet the same profession also sustains careers that span more than 2,000 games, 10,000 at-bats and 4,000 innings pitched \cite{petersen2011methods}. Here we find that the same disparities hold for players in a different sport with different team dynamics, player-player interactions, and career development system (e.g. the NBA introduced a `minor league' system in 2001). In particular, 3\% of NBA careers end within the first 1-12 minutes played, and 2\% of careers last only 1 game! Yet the average career length is roughly 273 games (roughly 3 seasons), while the longest career belongs to Robert Parish with 1,611 games, almost six times the average. Another anomaly is Kareem Abdul-Jabbar's career, which spanned 57,446 minutes played, roughly 9 times the average career length measured in minutes. Similar results have also been observed for professional tennis careers \cite{radicchi2011best}. Such comparisons between extreme achievers and average player performance illustrate the difficulty in defining a `typical' player in light of such right-skewed achievement distributions.
This lack of characteristic scale is evident in the career achievement distributions shown in Fig. \ref{figure:F4}, which indicate a continuum of achievement totals across the entire range. In other words, these professional sport leagues breed one-hit wonders, superstars and all types of careers in between -- following a highly regular statistical pattern that bridges the gap between the extremes.
Remarkably, Fig. \ref{figure:F4} indicates little variation when comparing the career achievement distribution $P(X)$ calculated using traditional metrics against the corresponding $P(X^{D})$ calculated using renormalized career metrics.
This observation provides several insights and relevant policy implications. First, the invariance indicates that the extremely right-skewed distribution of career achievement is not merely the result of mixing era-specific distributions characterized by different parameters and possibly different functional forms. Instead, this stability points to a universal distribution of career achievement that likely follows from simple parsimonious system dynamics. Second, this invariance also indicates that the all-time greats were not {\it born on another planet}, but rather follow naturally from the statistical regularity observed in the player achievement distributions, which feature common lower and upper tail behavior representing the most common and most outstanding careers, respectively.
Third, considering benchmark achievements in various sports, such as the 500 HR and 3,000 K clubs in MLB, and the 20,000-point, 10,000-rebound and 5,000-assist clubs in the NBA, this invariance indicates that such thresholds remain stable with respect to time-dependent factors when renormalized metrics are used.
This latter point follows because, while the distribution may be stable, the ranking of individuals is not. Such local rank-instability provides additional fodder for casual argument and serious consideration among fans and statisticians alike.
And finally, regarding the preservation of cultural heritage, these considerations can be informative to both Baseball and Basketball Hall of Fame selection committees, in particular to address motivation (2) identified in the introduction section concerning standard and retroactive player induction.\\
\noindent{\bf Acknowledgements:} We are indebted to countless sports \& data fanatics who helped to compile the comprehensive player data, in particular Sean Lahman \cite{BaseballData}. We are also grateful to two reviewers for their insightful comments.
\section{Introduction}
A player contributes to team productivity in many ways. Besides obvious contributions such as goal scoring and assists, a player's presence, tendencies, and other more difficult to quantify dynamics also factor into the direction in which a game flows. While direct involvement in goal scoring remains a significant indicator of player performance, it does not encompass all of a player's contributions to the team.
In this paper we quantify a player's marginal contribution, decompose this into competitive and altruistic contributions, and use these measures to develop a playmaking metric. All of these measures can be used to assess an individual's contribution to his team. In particular, the playmaking metric quantifies a player's ability to improve the productivity of his teammates.
\subsection{Motivation}
The dynamics of a competitive sports team are governed by a complex underlying framework of player attributes and group dynamics. Generally speaking, a collection of great players is assumed to yield great results, and organizations contend with both the limited availability of such talent and the inherent costs of acquiring it as they try to build rosters with maximum potential. Identifying productive players is therefore critical to team management when drafting and trading players, and targeting free agents.
While prospective player value can be assessed using individual statistics of past performance, these measures can carry inherent biases, and arguably do not represent a player's total on-ice contributions. There is much more to a player's performance than the commonly referenced goals, assists, and plus-minus statistics. Flaws in these measures include that they account neither for shorthanded and power-play usage, nor for the strength of a player's team and teammates. Scouts can assess player performance well, but are unable to process every player in every game with the efficiency a computer has. Developing advanced statistics that better quantify player contributions is an aspiration for many, and it is our goal here with respect to a player's ability to improve his teammates' performance.
\subsection{Problem approach}
To assess a player's playmaking ability we first define the marginal contribution by quantifying a player's total productivity for his team. This contribution is decomposed into components of competitive and altruistic contributions -- akin to goal scoring ability and remaining or ``other'' contributions. Using a player's altruistic contribution, along with assists, we ultimately develop the playmaking metric.
The playmaking metric accounts for the strength of a player's linemates, and is based on $5$-on-$5$ statistics so that it is independent of how much power-play or shorthanded time a player receives. It uses both shots and goals, and is less subject to random fluctuations than metrics based only on goals. We demonstrate that our metric is more consistent than assists by showing that the year-to-year correlation of our metric is higher than that of assists. We also confirm our metric is better than assists at predicting future assists by showing that we obtain a lower mean-squared error between predicted assists and actual assists when using our metric instead of assists.
\subsection{Previous applications}
Several previous studies in cooperative game theory provide the inspiration for our playmaking metric. They have a common theme of identifying competitive and altruistic contributions of game participants, and explore both how to reasonably quantify these aspects, and the utility in doing so. We discuss them next to provide the context of existing cooperative game theory approaches from which our analysis is derived.
Publications by \cite{cooperation-arney-peterson}, \cite{coop-peterson}, and \cite{coop-space-arney-peterson} address cooperation in subset team games. In these games, team players pursue a common goal. Each player has some positive contribution toward the goal. Contributions are broken down into competitive and altruistic (selfish or unselfish, greedy or not greedy) components. Using this decomposition, cooperation within organizations and teams is assessed for players or subsets of players.
Determining competitive and altruistic player contributions has several useful applications such as assessing past and future performance and measuring chemistry among subsets of players. By categorizing players according to their relative competitive and altruistic components, we can assess individuals and groups and conjecture the kinds of team compositions that lead to good group dynamics.
\paragraph{Pursuit and Evasion Games}
In pursuit and evasion games, a team of pursuers targets a team of evaders. The pursuers attempt to catch the evaders before they reach a safe zone. Each player operates autonomously.
Using a na\"{\i}ve greedy search heuristic, a pursuer would chase its closest or most vulnerable target. This approach represents completely competitive-minded participants. Alternatively, the pursuer could communicate the location of the evader to his or her teammates, and develop a more holistic strategy towards team success. The pursuer may not get ``credit'' for catching that particular evader, but did contribute to the success of the team. In the latter circumstance, complementing competitive players with altruistic ones would intuitively lead to better results for the team.
\paragraph{Communications Networks}
Consider an information network as in Figure \ref{network-flow-example}.
\begin{figure}[h!]
\centering
\includegraphics[width=.3\textwidth]{network-flow-example}
\caption{An illustration of a communications network}
\label{network-flow-example}
\end{figure}
In this game, the players (nodes) on the left must transmit information through the network to the nodes on the right. Players have different amounts of information to transmit, and each channel has a unique capacity. The goal is to maximize the amount of information transmitted. Nodes can transmit straight across their channel, or use the channels of adjacent nodes. The nodes act autonomously, with only information about the nodes and channels next to them. An optimum solution for the system is therefore unknown to the individual nodes.
Each player contributes to the goal through behaviors that could be classified as selfish or unselfish. For example, a selfish player will transmit as much as possible in its own channel, while an unselfish player will let neighboring nodes that have more information to transmit use its channel. Different behaviors lead to different competitive and altruistic contributions for each player. One can determine the combinations of player types, in terms of competitive and altruistic contribution, that lead to the best results.
\section{Notation and Definitions}\label{defs}
We will use notation consistent with definitions and results by Arney and Peterson in \cite{cooperation-arney-peterson}, \cite{coop-peterson}, and \cite{coop-space-arney-peterson} regarding cooperation in subset team games, beginning with the sets:
\begin{eqnarray*}
&T:& \mbox{a set of all players on a given team,}\\
&A:& \mbox{a specific player or subset of players in $T$; in other words, $A \subseteq T$.}\\
&A^c:& \mbox{the complement of $A$; in other words, $A$'s teammates,}\\
& & \mbox{or $T \backslash A$, the players in $T$ which are not in $A$.}
\end{eqnarray*}
The function $u$ is a utility function, or value function, which assigns a real number to every outcome of the game. The quantity $u_X(Y)$ represents the value to $X$ when $Y$ participates. The quantities we are most interested in, corresponding to the subsets $T$, $A$, and $A^c$, are
\begin{eqnarray*}\label{utt}
&u_{T}(T):& \mbox{the value to the team, when everyone participates,}\\
&u_{A^c}(T):& \mbox{the value to everyone but $A$, when everyone participates, and}\\
&u_{A^c}(A^c):& \mbox{the value to everyone but $A$, when $A$ does not participate.}
\end{eqnarray*}
These definitions are revisited in greater detail when calculations are later completed.
\subsection{Defining and decomposing marginal contribution}
Our analysis begins with decomposing marginal contribution into its competitive and altruistic components, denoted as
\begin{eqnarray*}
&c(A):& \mbox{the competitive contribution of $A$, and}\\
&a(A):& \mbox{the altruistic contribution of $A$.}
\end{eqnarray*}
Marginal contribution is defined as the sum of these respective contributions,
\begin{align} \label{marg}
m(A) = c(A) + a(A).
\end{align}
The competitive component is determined from direct contributions by $A$. The term \textit{direct} refers to tallies (i.e. goals in hockey) towards team productivity attributed to $A$. This competitive component is the difference in the value to $T$ and the value to $A^c$, when everyone participates:
\begin{align}\label{defcomp}
c(A) = u_{T}(T) - u_{A^c}(T).
\end{align}
It may be helpful to think of the phrase ``value to'' as ``productivity of'', so that competitive contribution can be thought of as the difference in the productivity of $T$ and the productivity of $A^c$.
The altruistic contribution of $A$ is the difference in the value to $A$'s teammates when $A$ does and does not participate, and is defined as
\begin{align}\label{defalt}
a(A) = u_{A^c}(T) - u_{A^c}(A^c).
\end{align}
In other words, $a(A)$ is the difference in the productivity of $A$'s teammates when $A$ does and does not play. This measure is high when the contributions of $A$ are valuable to $A$'s teammates, or, in other words, when $A$ increases the productivity of $A$'s teammates.
Substituting equations \eqref{defcomp} and \eqref{defalt} into \eqref{marg}, we arrive at an equivalent expression for marginal contribution of $A$:
\begin{align}
m(A) &= c(A) + a(A) = \left[ u_{T}(T) - u_{A^c}(T) \right] + \left[ u_{A^c}(T) - u_{A^c}(A^c) \right] \nonumber
\end{align}
or
\begin{align}\label{defmarg}
m(A) &=u_T(T) - u_{A^c}(A^c).
\end{align}
This expression says that marginal contribution of $A$ is the difference in the productivity of the team when everyone plays and the productivity of $A$'s teammates when $A$ does not play.
\section{Assessing a Player's Contributions in Hockey}\label{goals}
Having established the necessary background, terms, and definitions, we extend this methodology to the sport of hockey. Specifically, we identify a hockey player's marginal contribution and decompose that contribution into competitive and altruistic components. Using a player's altruistic component, we subsequently develop a measure of his playmaking ability.
\subsection{Data}
Our analysis pulled data from \cite{nhlcom}. Working with any professional sports data set provides inherent advantages and disadvantages, and hockey is no different. Before proceeding with our analysis, it is worthwhile to highlight some of the pros and cons of working with this database.
In contrast to a model parameterized entirely by its designer, analyzing competitive sports presents challenges unique to working with prescribed model parameters and the data they produce. \cite{cooperation-arney-peterson}, \cite{coop-peterson}, and \cite{coop-space-arney-peterson} focused on developing a theory of cooperation and applying it to pursuit and evasion games and information networks. In such theoretical scenarios, the designer has full control over all parameters: player attributes can be adjusted to create circumstances of interest, and the rules of the game can likewise be altered to set the desired conditions.
Unless connected to team management, a hockey analyst has no control over which players play together and cannot try combinations of his or her choosing. There is, however, an abundance of real data available for analysis. Our focus is to identify useful data and choose appropriate, meaningful, and interpretable values and payoff functions.
Hockey data is conducive to analysis for several reasons. The data is relatively accurate, complete, and detailed. The NHL data used provided the players on the ice for every second of every game, and all corresponding events such as goals, shots, hits, giveaways, etc. Sports data also provides easily quantifiable natural objective outcome values, such as goals scored or wins, that are not a subjective assessment. In most other kinds of organizations, the ideas of value, outcomes, contributions, and teamwork are typically more subjective in terms of both measurement and definition.
\subsection{Defining contributions using goals} \label{gsa}
The next few sections walk through improving iterations of our analysis of player marginal contribution, and its decomposition into competitive and altruistic components. Detailing this evolution is useful to illustrate the pitfalls of other seemingly simpler or more intuitive approaches, and reinforce the validity of our final solution.
We start quantifying contributions in hockey with perhaps the simplest choice for value, goals scored. The value of a season to a subset of players is defined as goals scored by those players during that season. The quantity $u_X(Y)$ is defined as the goals scored by players in $X$ when the players in $Y$ participate. We consider the case when $A$ is a single player, and we have
\begin{align*}
u_{T}(T) &= \mbox{goals scored by the team when everyone participates,} \nonumber \\
u_{A^c}(T) &= \mbox{goals scored by $A$'s teammates when everyone participates, and} \nonumber \\
u_{A^c}(A^c) &= \mbox{goals scored by $A$'s teammates when $A$ does not play.}
\end{align*}
Note that when we say ``when everyone participates,'' we do not mean the whole team is participating at the same time. In hockey this never happens, since at most five players (plus a goalie) play at once for a given team. Therefore, we interpret the scenario for everyone participating as any subset of five players in $T$ on the ice at a given time. Similarly, when we say ``when $A$ does not play'' we mean when a subset of $A^c$ is on the ice.
Here, and throughout this paper, we consider only 5-on-5 situations in which both goalies are on the ice. We do not want our metrics to depend on if a player's coach happens to give him power play or short handed time, or happens to play him at the end of the game when one team has pulled their goalie.
Let $G$ be the goals scored by $T$, let $g$ be the goals scored by player $A$, and let $gf$ (goals for) be the goals scored by the team when $A$ is on the ice. Then we have
\begin{align*}
u_T(T) &= G \\
u_{A^c}(T) &= G - g \\
u_{A^c}(A^c) &= G - gf
\end{align*}
Recalling that $c(A) = u_T(T) - u_{A^c}(T)$ and $a(A) = u_{A^c}(T) - u_{A^c}(A^c)$, we have
\begin{align*}
c(A) &= G - (G - g) = g \\
a(A) &= (G-g) - (G - gf) = gf - g
\end{align*}
With these definitions, player $A$'s competitive contribution, $c(A)$, is simply the goals scored by player $A$, and his altruistic contribution is the goals that his teammates score when he is on the ice. Note that $a(A)$ is high when the team scores many goals when $A$ plays, but $A$ himself does not score many of the goals. The team does well when $A$ plays, but $A$ is not necessarily getting the credit for the goals.
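As a minimal sketch of this decomposition, the three quantities can be computed directly from season goal counts; the numbers in the usage comment are taken from Henrik Sedin's row of Table \ref{cooptop5} (note that the team total $G$ cancels out of all three quantities, so it is not needed as an input).
\begin{verbatim}
def decompose(g, gf):
    """Goal-based decomposition for a single player A (5-on-5 only).
    g  = goals scored by A; gf = team goals scored while A is on the ice.
    The team total G cancels: c(A) = g, a(A) = gf - g, m(A) = gf."""
    c = g
    a = gf - g
    m = c + a
    return c, a, m

# Henrik Sedin, 2010-11: u_T(T) = 162 and u_{A^c}(A^c) = 95 imply gf = 67;
# with g = 9 this gives c = 9, a = 58, m = 67, matching Table 1.
print(decompose(g=9, gf=67))   # -> (9, 58, 67)
\end{verbatim}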
\begin{table}[h!]
\begin{center}
\caption{Top five forwards in altruistic contribution}
\label{cooptop5}
{\small
\begin{tabular}{llrrrrrrrrr}
\addlinespace[.3em] \toprule
Player & Pos & Team & $u_T(T)$ & $u_{A^c}(A^c)$ & $m(A)$ & $c(A)$ & $a(A)$ & A & Pts & Mins \\
\midrule
H. Sedin & C & VAN & 162 & 95 & 67 & 9 & 58 & 44 & 53 & 1194 \\
J. Toews & C & CHI & 167 & 98 & 69 & 20 & 49 & 27 & 47 & 1187 \\
D. Sedin & LW & VAN & 162 & 93 & 69 & 22 & 47 & 35 & 57 & 1145 \\
R. Getzlaf & C & ANA & 134 & 79 & 55 & 8 & 47 & 32 & 40 & 1114 \\
B. Boyes & RW & STL & 162 & 106 & 56 & 9 & 47 & 30 & 39 & 1146 \\
\bottomrule
\end{tabular}
}
\end{center}
\end{table}
The top five forwards in altruistic contribution, $a(A)$, are given in Table \ref{cooptop5}. The last three columns denote assists (A), points (Pts), and minutes played (Mins) during the 2010-11 season. The results are what we might expect. Players who play on good offensive teams and get a lot of assists, and may or may not score many goals themselves, have high altruistic contributions.
The correlation between assists and altruistic contribution is fairly high (0.91), and a scatterplot of altruistic contribution versus assists is given in the left of Figure \ref{fig-goals}.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=.45\textwidth]{assists-alt}
\includegraphics[width=.45\textwidth]{fanddg}
\caption{(Left) Scatter plot of assists vs altruistic contribution. (Right) Scatter plot of competitive versus altruistic contribution for forwards (blue circles) and defensemen (red dots).}
\label{fig-goals}
\end{center}
\end{figure}
In the right of Figure $\ref{fig-goals}$, we see how the competitive and altruistic contributions of both forwards and defensemen are distributed. A distinct clustering of forwards and defensemen appears. This is an intuitive result considering the inherent goal scoring opportunities (or lack thereof) that accompany their positions. For example, defensemen typically have a low competitive contribution due to their relatively low goal scoring rate. Defensemen also typically play more minutes than forwards, yielding higher marginal contributions, and therefore higher altruistic contributions.
\subsection{Defining contributions using goals per 60 minutes}\label{gsa60}
Under the approach in Section \ref{gsa}, goals, assists, and altruistic contribution, for example, are highly influenced by playing time. We expect a player receiving significant playing time to amass more opportunities for goals and assists, both of which influence competitive and altruistic measures.
To eliminate this playing time bias, we could alternatively capture statistics as a rate per 60 minutes. This adjustment not only standardizes comparison between players, but mitigates the significant correlation between altruistic contribution and assists reflected in Figure \ref{fig-goals}. Although assists and altruistic contribution remain correlated using a rate statistic, the correlation is not as extreme.
The simplest way to do this is to replace ``goals'' with ``goals per 60 minutes'' in the previous definitions. Variables of interest are altered as follows:
\begin{description}
\item[$u_{T}(T)$] = goals per 60 minutes scored by the team during the times when $A$ does and does not play.
\item[$u_{A^c}(T)$] = goals per 60 minutes scored by everyone except $A$ during the times when $A$ does and does not play.
\item[$u_{A^c}(A^c)$] = goals per 60 minutes scored by everyone except $A$, during only the times when $A$ does not play.
\end{description}
\begin{eqnarray*}
m(A) &=& \mbox{the difference in the goals per 60 minutes scored by the team} \\
& &\mbox{during all times and during only the times when $A$ does not play.}\\
c(A) &=& \mbox{the number of goals per 60 minutes player $A$ scored.}\\
a(A) &=& \mbox{the difference in the goals per 60 minutes scored by $A$'s teammates } \\
& &\mbox{during all times and during only the times when $A$ does not play.}
\end{eqnarray*}
We can still decompose $m(A)$ into two components:
$$ m(A) = c(A) + a(A). $$
In other words, a player's marginal contributions are divided into two components, the goals per 60 minutes he scores, and the increase (or decrease) in the goals per 60 minutes his teammates score during all times and during only the times when $A$ is on the ice.
\subsection{A further adjustment to our definitions} \label{linemates}
Under the current definition, a high altruistic contribution indicates that a player's team scored a lot of goals when he was on the ice relative to when he was off the ice, but that he himself did not score many of those goals. Although it is tempting to associate a high altruistic rating with the innate playmaking qualities of the individual, this metric can prove misleading. Consider the hypothetical team of 4 forward lines and 3 defense pairings in Table \ref{faketeam}.
\begin{table}[h!]\centering
\caption{A hypothetical team of above average and below average players.}
\begin{tabular}{llllll}
\addlinespace[.5em]
\toprule
LW & C & RW & \quad & D & D \\
\midrule
\textbf{Player A} & Above & Above & \quad & Above & Above \\
Below & Below & Below & \quad & Below & Below \\
Below & Below & Below & \quad & Below & Below \\
Below & Below & Below & \quad & & \\
\bottomrule
\end{tabular}
\label{faketeam}
\end{table}
Player $A$, the below average player in the first line, typically plays with above average players on a team with mostly below average players. He will have a high $u_{A^c}(T)$ because his teammates score a lot of goals when he plays (goals he often did not contribute to because he is a below average player), and will have a low $u_{A^c}(A^c)$ because his team does not score that many goals when he does not play. He will have a high $a(A)$ and will be categorized as ``unselfish.'' Player $A$, however, may or may not have anything to do with the increase in goals when he plays, because the strength of his linemates is much greater than the strength of the rest of the team.
Player $A$'s altruistic contribution should perhaps not be so heavily dependent on the performance of players he never plays with. This observation motivates modified value and payoff functions that account for the strength of teammates a player experiences ice-time with. The $u_{A^c}(A^c)$ term is calculated as a weighted average based on playing time with $A$, instead of as an unweighted average of team goals per 60 minutes scored when $A$ is off the ice.
We note that this version of marginal contribution is similar to the With Or Without You (WOWY) and on-ice/off-ice statistics described in
\cite{fyffe-vollman},
\cite{boersmawowy},
\cite{seppa},
\cite{gabewowy},
\cite{deltasot},
\cite{tango-wowy},
\cite{wilson-wowy}, and
\cite{davidjohnson}, although some of those metrics use different data or are computed in slightly different ways. Instead of writing
$$ m(A) = u_T(T) - u_{A^c}(A^c), $$
we could use the notation $GF_{on}$ for $u_T(T)$, and $GF_{off}$ for $u_{A^c}(A^c)$, and using \eqref{defmarg} we can write marginal contribution in a notation that is closer to what the online hockey analyst community would use:
$$ m = GF_{on}- GF_{off}.$$
This notation is perhaps more intuitive and highlights that marginal contribution is measuring what happens when a player is on the ice versus off the ice. We will continue to use $GF_{on}$ and $GF_{off}$ in lieu of $u_T(T)$ and $u_{A^c}(A^c)$ going forward, especially since we have changed the meaning of these terms slightly.
The first term $GF_{on}$ is simply the goals per 60 minutes scored by the team when $A$ is on the ice. For the second term, we first let $GF_i$ be the goals for per 60 minutes for player $i$ when playing \textit{without} $A$, and let $w_i$ denote playing time \textit{with} player $A$. Then we define $GF_{off}$ to be the weighted average
$$ GF_{off} = \frac{\sum GF_i \, \, w_i}{\sum w_i}, $$
where the sums are taken over all $i$. Teammates frequently paired with $A$ have high $w_i$ and are more influential in this statistic, while those never playing with $A$ will have no effect on $GF_{off}$. This prevents undue influence from teammates that $A$ is seldom or never paired with, and likewise emphasizes data with greater supporting information.
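A minimal sketch of this weighted average, assuming the input is a list of (rate, shared ice time) pairs; the numbers in the example are made up.
\begin{verbatim}
def gf_off(teammates):
    # teammates: list of (GF_i, w_i) pairs, where GF_i is a teammate's
    # goals-for per 60 minutes when playing without A, and w_i is the
    # ice time that teammate shared with A.
    num = sum(gf_i * w_i for gf_i, w_i in teammates)
    den = sum(w_i for _, w_i in teammates)
    return num / den if den > 0 else 0.0

# The third teammate never plays with A (w_i = 0) and is ignored.
print(gf_off([(2.4, 600.0), (1.8, 450.0), (2.9, 0.0)]))  # about 2.14
\end{verbatim}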
We remark that we could have chosen to define marginal contribution using one of the regression-based metrics referenced in Section \ref{conclusions and future work}. We have chosen the method presented here because of its speed of computation, and because we would ultimately like to consider the case where $A$ is a subset of two or more players instead of a single player. In Table $\ref{M5}$, we see that our choice for marginal contribution quantifies performance well, as these players are generally regarded as being among the best offensive players in the league.
\begin{table}[h!]
\begin{center}
\caption{Top five forwards in marginal contribution using goals per 60 minutes} \label{M5} {\footnotesize \begin{tabular}{rrrrll}
\addlinespace[.3em] \toprule
Rk& Player & Pos & Team & m & Time \\
\midrule
1 & Sidney Crosby & C & PIT & 1.55 & 3614 \\
2 & Henrik Sedin & C & VAN & 1.28 & 4530 \\
3 & Pavel Datsyuk & C & DET & 1.27 & 4259 \\
4 & Daniel Sedin & LW & VAN & 1.22 & 4164 \\
5 & Alex Ovechkin & LW & WSH & 1.17 & 4896 \\
\bottomrule
\end{tabular}
}
\end{center}
\end{table}
This new definition varies a bit from those introduced in previous sections, but we can still decompose marginal contribution into competitive and altruistic components. Player $A$'s competitive contribution $c(A)$ is the goals per 60 minutes scored by $A$ himself, and his altruistic contribution is everything else:
$$ a(A) = m(A) - c(A) .$$
In the left of Figure \ref{assists-alt-g60w}, we see that competitive and altruistic contributions appear uncorrelated, especially for forwards (black circles), evidence that they are measuring different skills. Sidney Crosby is an outlier, but this is not terribly surprising, especially since he only played half the season in $2010$-$11$ and we have a relatively small sample size in his case.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=.45\textwidth]{fanddg60w}
\includegraphics[width=.45\textwidth]{assists-alt-g60w}
\caption{(Left) Competitive versus altruistic contribution for forwards (black circles) and defensemen (red dots) for 2010-11. (Right) Altruistic contribution versus assists per 60 minutes for forwards in $2010$-$11$.} \label{assists-alt-g60w}
\end{center}
\end{figure}
In the right of Figure $\ref{assists-alt-g60w}$, we see that for forwards altruistic contribution is fairly correlated with assists per 60 minutes (correlation $\approx 0.75$).
\subsection{Defining contributions using shots per 60 minutes}
Since scoring is relatively infrequent in hockey, goals can be an unreliable and inconsistent indicator of on-ice performance.
In \cite{possessioniseverything}, \cite{shots-fwick-corsi}, and \cite{spm}, the authors conclude that shots are both more consistent than goals and better than goals at predicting future goals. Since assists are based on goals, a player's assists are subject to the same randomness. Our previously developed altruistic contribution is based on goals and has the same problem. In fact, it actually has a slightly lower year-to-year correlation than assists.
Unfortunately, while the NHL records assists, or the number of a player's passes that immediately precede a teammate's goal, they do not record the number of a player's passes that immediately precede a teammate's shot. So there is no hope for developing a shot-based metric that is analogous to assists with data that is currently available to the public.
However, while the NHL's historical databases do not contain information about passes that led to shots, they do contain information about the players on the ice for every shot taken, as well as the player who took the shot. This data is exactly what is needed to develop a shot-based version of altruistic contribution that is analogous to the goal-based version described in Section \ref{linemates}.
We can define $m(A), c(A),$ and $a(A)$ in the same way we did previously, except using shots per 60 minutes instead of goals per 60 minutes. A player's marginal contribution is found using
\begin{align*}
m &= SF_{on} - SF_{off},
\end{align*}
where $SF_{on}$ and $SF_{off}$ are computed like $GF_{on}$ and $GF_{off}$ using shots instead of goals. A player's competitive contribution $c(A)$ is now defined as his shots per $60$ minutes. The altruistic component,
$$a(A) = m(A) - c(A),$$
can be thought of as the difference in \textit{shots} per $60$ minutes by the player's teammates when he is on the ice versus off the ice. This version of altruistic contribution using shots is what we use in the next section to develop our playmaking metric.
\section{The Playmaking Metric} \label{improving}
We now develop our playmaking metric, which combines both assists and our shot-based altruistic contribution metric to form a measure of a player's contributions towards his teammates' productivity. This metric has two benefits over assists for which we can provide statistical evidence. Our metric is (1) more consistent than assists, and (2) better than assists at predicting future assists.
Points (1) and (2) are essential. Up to this point, we have defined a shot-based version of altruistic contribution, which is a way to measure how a player affects the number of shots that his teammates take, in the same way that assists are a way to measure how a player affects the number of goals that his teammates score. While the definition makes intuitive sense, we have not yet given any statistical evidence that these measures are actually useful or better than any existing metrics. In this section we provide evidence that our metric is better than assists.
\subsection{Calculation and comparison}
We compare two linear regression models: one that uses only assists as a predictor, and one that uses both assists and our shot-based altruistic contribution metric.
More precisely, we compare
\begin{equation}\label{assists-model}
y = \beta_0 + \beta_1 A + \epsilon
\end{equation}
with
\begin{equation}\label{play-model}
y = \beta_0 + \beta_A A + \beta_{Alt} Alt + \epsilon,
\end{equation}
where $A$ and $Alt$ denote assists and altruistic contribution per $60$ minutes in one half of a season and $y$ denotes assists per $60$ minutes in the other half of a season.
The expected assists per $60$ minutes obtained from \eqref{play-model} are what we call our playmaking metric. Recall that we are only considering 5-on-5 situations in which both goalies are on the ice.
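Either model can be fit with any ordinary least squares routine. The sketch below uses statsmodels with hypothetical column names (\verb|A1| and \verb|Alt1| for first-half assists and altruistic contribution per 60 minutes, \verb|A2| for second-half assists per 60 minutes); the file name and data layout are our own assumptions, not the paper's.
\begin{verbatim}
import pandas as pd
import statsmodels.formula.api as smf

# One row per player: A1, Alt1 = assists and altruistic contribution per 60
# in the first half; A2 = assists per 60 in the second half (the response).
df = pd.read_csv("halves.csv")  # assumed file layout

assists_model = smf.ols("A2 ~ A1", data=df).fit()         # assists-only model
play_model    = smf.ols("A2 ~ A1 + Alt1", data=df).fit()  # assists + altruistic

# The fitted values of the second model are the playmaking metric
# (expected assists per 60 minutes).
df["PLAY"] = play_model.predict(df)

print(assists_model.rsquared_adj, play_model.rsquared_adj)
print(assists_model.aic, play_model.aic)
\end{verbatim}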
We built these models for forwards and defensemen separately, using both half and full seasons of data. In all cases, \eqref{play-model} outperformed \eqref{assists-model}. Figure $\ref{half-year-to-half-year-correlation}$ illustrates playmaking is a more consistent measure of performance than assists for both forwards and defensemen.
\begin{figure}[h!]
\centering
\includegraphics[width=.49\textwidth]{half-year-to-half-year-correlation-F-and-D}
\includegraphics[width=.49\textwidth]{y2y-cor-vs-minutes}
\caption{(Left) Half season to half season correlation of assists (gray) and our playmaking metric (red) for forwards and defensemen with a minimum of $300$ minutes of playing time in both halves of the season. (Right) Half season to half season correlations for forwards for different choices of minimum minutes cutoff.}
\label{half-year-to-half-year-correlation}
\end{figure}
It is important to note that rate statistics are vulnerable to high variability for small sample sizes. For example, a winger called up from the AHL to the NHL could potentially score one minute into his first NHL shift. His resulting scoring rate would be an impressive 60 goals per 60 minutes, far exceeding that of league superstars like 2012-13 scoring leader Alex Ovechkin.
For this reason, we chose a minimum playing time cut-off of 300 minutes when computing these correlations. This choice is somewhat arbitrary, so in the right of Figure \ref{half-year-to-half-year-correlation}, we show that our general conclusions do not change for different choices of cut-off.
The results for year-to-year correlations are similar. In particular, we get a correlation of $0.53$ for our playmaking metric for forwards. In fact, the half-season to half-season correlations for our playmaking metric are higher than the full season to full season correlations for assists.
Statistical measures for goodness of fit further support our playmaking metric from \eqref{play-model}. That model has a better adjusted $R^2$, Mallows' C$_p$, and AIC than \eqref{assists-model}, which all indicate our metric is better than assists at predicting future assists.
A $10$-fold cross-validation also showed the mean squared error of predicted assists versus actual assists is smaller for \eqref{play-model} than for \eqref{assists-model}.
The same is true whether we divide the data into half seasons or full seasons, or use defensemen instead of forwards.
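A sketch of that cross-validation comparison, continuing with the assumed column names from the previous sketch; scikit-learn's \verb|cross_val_score| with negated mean squared error is one straightforward way to run it.
\begin{verbatim}
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

y = df["A2"].values
X_assists = df[["A1"]].values          # assists-only predictors
X_play    = df[["A1", "Alt1"]].values  # assists + altruistic contribution

def cv_mse(X, y):
    scores = cross_val_score(LinearRegression(), X, y,
                             scoring="neg_mean_squared_error", cv=10)
    return -scores.mean()

print(cv_mse(X_assists, y), cv_mse(X_play, y))  # smaller is better
\end{verbatim}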
\subsection{Top playmakers}
In Table $\ref{playtable}$, we give the top five playmakers in $2010$-$11$ according to expected assists, using our full season to full season model for forwards. These expected assists are calculated from our playmaking metric, which is in the units of expected assists per 60 minutes, along with the player's playing time that year.
\begin{table}[h!]
\begin{center}
\caption{Top five forwards in playmaking ability in $2010$-$11$. }
\label{playtable}
{\small
\begin{tabular}{llrrrrrrr}
\addlinespace[.5em] & & & \multicolumn{2}{c}{$2009$-$10$} & \multicolumn{2}{c}{$2010$-$11$} & \multicolumn{2}{c}{Difference} \\
\toprule
Player & Pos & Team & A & PLAY & A & PLAY & A & PLAY \\
\midrule
Henrik Sedin & C & VAN & $53$ & $32$ & $44$ & $31$ & $9$ & $1$ \\
Anze Kopitar & C & L.A & $18$ & $22$ & $33$ & $25$ & $15$ & $3$ \\
Claude Giroux & RW & PHI & $15$ & $17$ & $33$ & $25$ & $18$ & $9$ \\
Daniel Sedin & LW & VAN & $36$ & $22$ & $35$ & $24$ & $1$ & $2$ \\
Bobby Ryan & RW & ANA & $17$ & $19$ & $28$ & $24$ & $11$ & $5$ \\
\bottomrule
\end{tabular}
}
\end{center}
\end{table}
The columns A and PLAY denote assists at even-strength and expected assists from our playmaking metric, respectively. The last two columns are the absolute difference between the $2009$-$10$ and $2010$-$11$ statistics. Note that for these players, the playmaking metric tended to be more consistent from year-to-year than assists. It is interesting that our playmaking metric had Claude Giroux as the third best playmaker in the league in $2010$-$11$, the season before he was a top three scorer.
\section{Conclusions and Future Work} \label{conclusions and future work}
In this paper, we have introduced a measure of NHL playmaking ability capable of better predicting future assists than assists themselves. Our metric (1) adjusts for the strength of a player's linemates, (2) is more consistent than assists, and (3) is better than assists at predicting future assists.
Identifying playmaking ability in this way can complement the expertise of coaches, general managers, and talent evaluators to differentiate relative value among a collection of talented players. Their decision making is assisted by better identifying and understanding the potential value of a prospective player joining their organization. Trade targets, free agent signings, and draft picks can be assessed by not only using traditional performance measures, but also through considering their fit into the chemistry of an existing organization. Specialization in terms of playmaking ability, competitive contributions, and altruistic contributions can be targeted in accordance with a team's needs.
Several possibilities exist for future study. Alternative measures of player marginal contributions can be explored using the player ratings
in
\cite{thomas-ventura},
\cite{schuckerscurro}, and
\cite{gramacy-jensen-taddy},
or the adjusted plus-minus ratings
in \cite{apm}, \cite{apm2}, and \cite{ridge}. These choices of marginal contribution may be preferred since they account for the strength of a player's opponents, and in some cases, the zone in which a player's shifts typically begin. Additionally, although we focus on contributions within a competitive sports team, similar analysis could benefit any team, organization, corporation, or military unit working towards a common goal, though quantifying such scenarios is difficult in the absence of the well defined value and payoff functions available in competitive sports.
Lastly, we note that our focus was on the case where $A$ denotes a single player, since we were most interested in developing a metric for an individual player's playmaking ability. However, all of the definitions of marginal, competitive, and altruistic contributions remain the same in the case where $A$ is a subset of two or more players. In this case, an assessment of chemistry between two or more teammates can be pursued in an attempt to reveal what player combinations yield higher on-ice productivity.
\bibliographystyle{DeGruyter}
| {
"attr-fineweb-edu": 1.917969,
"attr-cc_en_topic": 0,
"domain": "arxiv"
} |
BkiUa3o5qsNCPdQKsVRs | \section{Introduction}
ZJUNlict is a RoboCup Small Size League (SSL) team from Zhejiang University with a rich culture. We seek changes and upgrades every year from hardware to software, and try our best to fuse them together in order to form a better robot system. With a stable dribbler developed during 2017-2018, team ZJUNlict focused mostly on dynamic passing and shooting in 2018-2019, taking advantage of the stable dribbler. In fact, these algorithms helped us gain a ball possession rate of $68.8\%$ over $7$ matches at RoboCup 2019.\\
To achieve the high possession rate and safe, accurate passing and shooting, our newly developed algorithms are organized into four parts:
\begin{enumerate}
\item The passing point module calculates the feasibility of all passing points, filters out feasible points, and uses the evaluation function to find the best passing point.
\item The running point module calculates the points where the offensive threat is high if our robots move there, to make our offense more aggressive.
\item The decision module decides when to pass and when to shoot based on the current situation to guarantee the success rate of the passing and shooting when the situation changes.
\item The skill module helps our robots perform passing and shooting accurately.
\end{enumerate}
This paper focuses on how to achieve multi-robot cooperation. In Sects.2 and 3, we discuss our main optimization on hardware. In Sects.4 and 5, we discuss the passing strategy and the running point module respectively. In Sect.6, we analyze the performance of our algorithms at RoboCup 2019 with the log files recorded during the matches.
\section{Modification of Mechanical Structure of ZJUNlict}
\subsection{The position of two capacitors}
During a match of the Small Size League, robots can move as fast as $3.25 m/s$. In this case, the stability of the robot becomes very important, so this year we focused on the center of gravity with the goal of lowering it. In fact, many teams have already worked on lowering the center of gravity, e.g., team KIKS and team RoboDragon have compacted their robots to $135 mm$, and team TIGERs have moved their capacitor sideways instead of laying it upon the solenoid \cite{tiger}.
Thanks to the open source of team TIGERs \cite{tiger}, in this year's mechanical structure design we moved the capacitor from the circuit board to the chassis. On the one hand, this lowers the center of gravity of the robot and makes the mechanical structure more compact; on the other hand, it gives the upper board a larger space for future upgrades. The capacitor is fixed on the chassis via the 3D printed capacitor holder shown in figure \ref{FXZ-1}, and in order to protect the capacitor from impacts it may suffer on the field, we have added a metal protection board, made of high-strength 40Cr alloy steel, on the outside of the capacitor.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{FXZ-1}
\caption{The new design of the capacitors}
\label{FXZ-1}
\end{figure}
\subsection{The structure of the dribbling system}
The handling of the dribbling part has always been a part we are proud of, and it is also the key to our strong ball control ability. In last year's champion paper, we have completely described our design concept, that is, using a one-degree-of-freedom mouth structure, placing appropriate sponge pads on the rear and the lower part to form a nonlinear spring damping system. When a ball with certain speed hits the dribbler, the spring damping system can absorb the rebound force of the ball, and the dribbler uses a silica gel with a large friction force so that the ball can not be easily detached from the mouth.
The state of the sponge behind the mouth is critical to the performance of the dribbling system. In RoboCup 2018, there was a situation in which the sponge fell off, which had a great impact on the play of our game. In last year's design, as shown in figure \ref{FXZ-2}, we directly inserted a sponge between the carbon plate at the mouth and the rear carbon plate. Under frequent and severe vibration, the sponge could easily fall off\cite{champion2018}. In this case, we made some changes: a baffle is added between the dribbler and the rear carbon fiberboard, as shown in figure \ref{FXZ-3}, and the sponge is glued to the baffle plate, which makes it hard for the sponge to fall off and therefore greatly reduces the vibration.
\begin{figure}[htbp]
\centering
\begin{minipage}{6cm}
\includegraphics[scale=0.22]{FXZ-2.png}
\caption{ZJUNlict 2018 mouth design}
\label{FXZ-2}
\end{minipage}%
\begin{minipage}{6cm}
\includegraphics[scale=0.22]{FXZ-3.png}
\centering\caption{ZJUNlict 2019 mouth design}
\label{FXZ-3}
\end{minipage}%
\end{figure}
\section{Modification of Electronic Board}
In the past circuit design, we always thought that the board should be designed into multiple independent boards according to the function module so that if there is a problem, the whole board can be replaced. But then we gradually realized that instead of giving us convenience, it is unexpectedly complicated, on the one hand, we had to carry more spare boards, and on the other hand, it was not conducive to our maintenance.
For the new design, we only kept one motherboard and one booster board, which reduced the number of boards, making the circuit structure more compact and more convenient for maintenance. We also fully adopted ST's STM32H743ZI master chip, which has a clock speed of up to 480MHz and has a wealth of peripherals. The chip is responsible for signal processing, packet unpacking and packaging, and motor control.
Thanks to the open source of TIGERs again, we use Allegro's A3930 three-phase brushless motor control chip, simplifying the circuit design of the motor drive module on the motherboard. The biggest advancement in electronics this year was the completion of the stability test of the H743 version of the robot. With all robots using the H743 chip, there was no robot failure caused by board damage during the games.
In addition, we upgraded the motor encoder from the original 360 lines to the current 1000 lines, and changed the reading mode from direct reading to differential reading.
\section{Passing and Shooting Strategy Based on Ball Model}
\subsection{Real-time Passing Power Calculation}
Passing power plays a key role in the passing process. For example, suppose robot A wants to pass the ball to robot B. If the passing power is too small, the opponent will have plenty of time to intercept the ball. If the passing power is too large, robot B may fail to receive the ball in the limited time. Therefore, it is important to calculate an appropriate passing power.
Suppose we know the location of robot A that holds the ball, its passing target point, and the position and speed information of robot B that is ready to receive the ball. We can accurately calculate the appropriate passing power based on the ball model shown in figure \ref{DDQ}. In the ideal ball model, after the ball is kicked out at a certain speed, the ball will first decelerate to $5/7$ of the initial speed with a large sliding acceleration, and then decelerate to $0$ with a small rolling acceleration. Based on this, we can use the passing time and the passing distance to calculate the passing power. Obviously, the passing distance is the distance between robot A and its passing target point. It's very easy to calculate the Euclidean distance between these two points. Passing time consists of two parts: robot B's arrival time and buffer time for adjustment after arrival. We calculate robot B's arrival time using last year's robot arrival time prediction algorithm. The buffer time is usually a constant (such as $0.3$ second).
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{ballmodel.pdf}
\caption{Ideal ball model}
\label{DDQ}
\end{figure}
Since the acceleration in the first deceleration process is very large and the deceleration time is very short, we ignore the moving distance of the first deceleration process and simplify the calculation. Let $d$, $t$ and $a$ be the passing distance, the passing time and the magnitude of the rolling deceleration. Then, the velocity of the ball after the first deceleration and the passing power are given by the following:
\begin{equation}
v_1=(d+\frac{1}{2}at^2)/t
\end{equation}
\begin{equation}
v_0=v_1/\frac{5}{7}
\end{equation}
According to the capabilities of the robots, we can limit the threshold of passing power and apply it to the calculated result.
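Putting the pieces together, a minimal sketch of the passing-power computation might look as follows; the rolling deceleration, the buffer time and the power limits are assumed placeholder values, and the receiver's arrival-time prediction is taken as given.
\begin{verbatim}
def passing_power(distance, arrival_time, buffer_time=0.3,
                  rolling_decel=0.5, v_min=1.0, v_max=6.5):
    # distance: passing distance d in meters.
    # arrival_time: predicted time for the receiver to reach the target point.
    t = arrival_time + buffer_time
    v1 = (distance + 0.5 * rolling_decel * t * t) / t  # speed after the sliding phase
    v0 = v1 * 7.0 / 5.0                                # initial kick speed (v1 = 5/7 v0)
    return max(v_min, min(v_max, v0))                  # clamp to the robot's capability
\end{verbatim}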
\subsection{\textit{SBIP}-Based Dynamic Passing Points Searching (DPPS) Algorithm}
Passing is an important skill both offensively and defensively, and the basic requirement for a successful pass is that the ball cannot be intercepted by opponents. Theoretically, we can get all feasible passing points based on \textit{SBIP (Search-Based Interception Prediction)} \cite{champion2018}\cite{etdp2019}. Assuming that one of our robots wants to pass the ball to another robot, it needs to ensure that the ball cannot be intercepted by opposing robots, so we use the SBIP algorithm to calculate the interception time of all robots on the field and return only the feasible passing points.
In order to improve the execution efficiency of the passing robot, we apply the searching process from the perspective of passing robot.
As is shown in Figure \ref{JSH-1}, we traverse all the shooting power in all directions to apply the SBIP algorithm for all robots on the field. According to the interception time of both teammates and opponents under a specific passing power and direction, we can keep only the feasible passing directions and the corresponding passing power.
\begin{figure}[h]
\centering
\includegraphics[width=0.35\textwidth]{JSH-1.jpg}
\caption{Dynamic passing points searching process}
\label{JSH-1}
\end{figure}
Considering that there is about a $3$-degree error between the accurate orientation of the robot and the one obtained from vision, we set the traversal interval of direction to $360/128$ (about $2.8$) degrees. The shooting power, which can be considered as the speed of the ball when it is just kicked out, is divided equally into $64$ samples between $1 m/s$ and $6.5 m/s$, which means the shooting accuracy is about $0.34 m/s$. Because all combinations of passing directions and passing power should be considered, we need to apply the SBIP algorithm $262144$ times (we assume there are $16$ robots in each team, i.e. $32$ on the field), which is impossible to finish within about $13 ms$ by serial computing alone. Fortunately, all of the $262144$ SBIPs are decoupled, so we can accelerate this process with GPU-based parallel computing techniques\cite{CUDA-1}\cite{CUDA-2}\cite{CUDA-3}, and that is why the numbers mentioned above are $128$, $64$ and $32$.
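A serial sketch of the search over this grid is shown below, with the SBIP interception predictor left as a stub; in practice the $128\times64\times32$ evaluations are flattened and dispatched to the GPU, which is not shown here.
\begin{verbatim}
import numpy as np

N_DIR, N_POW = 128, 64
directions = np.linspace(0.0, 2.0 * np.pi, N_DIR, endpoint=False)
powers = np.linspace(1.0, 6.5, N_POW)

def interception_time(robot, direction, power):
    # Stub for the SBIP interception-time prediction (not reproduced here).
    raise NotImplementedError

def feasible_passes(teammates, opponents):
    # Keep (direction, power) pairs where some teammate can reach the ball
    # strictly earlier than every opponent.
    feasible = []
    for d in directions:
        for p in powers:
            t_mate = min(interception_time(r, d, p) for r in teammates)
            t_opp = min(interception_time(r, d, p) for r in opponents)
            if t_mate < t_opp:
                feasible.append((d, p, t_mate, t_opp))
    return feasible
\end{verbatim}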
\subsection{Value-based best pass strategy}\label{4.3}
After applying the DPPS algorithm, we can get all optional pass strategies. To evaluate them and choose the best pass strategy, we extract some important features $x_i\ (i = 1, 2, 3...n)$ and their weights $\omega_i\ (i = 1, 2, 3...n)$; finally, we get the score of each pass strategy by calculating the weighted average of the selected features as in equation \ref{sigema} \cite{soccer-1}\cite{soccer-2}
\begin{equation}
\label{sigema}
\sum_{i=1}^{n}\omega _i\cdot x_i
\end{equation}
For example, we chose the following features to evaluate pass strategies in RoboCup2019 Small Size League:
\begin{itemize}
\item Interception time of teammates: a close pass reduces the risk of the ball being intercepted by opponents, according to the ideal ball model.
\item Shoot angle at the receiver's position: a larger shoot angle makes it easier for the receiving teammate to shoot.
\item Distance between passing point and the goal: if the receiver decides to shoot,
short distance results in high speed when the ball is in the opponent's penalty
area, which can improve the success rate of shooting.
\item Refraction angle of shooting: the receiver can shoot as soon as it gets the ball if the refraction angle is small. The offensive tactics would be executed smoother when this feature is added.
\item The time interval between the first teammate's interception and the first opponent's interception: if this number is very small, the passing strategy would be very likely to fail. So only when the delta-time is bigger than a threshold, the safety is guaranteed.
\end{itemize}
In order to facilitate adjustment of parameters, we normalize length and angle values by dividing their upper bound, while keeping the time values unchanged.
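A sketch of the weighted scoring in equation \ref{sigema} is given below; the feature names, normalizing constants and weights are illustrative placeholders (negative weights can be used for features where smaller values are better).
\begin{verbatim}
FIELD_LENGTH = 12.0  # m, assumed normalizer for distances
MAX_ANGLE = 3.14159  # rad, assumed normalizer for angles

def score_pass(c, weights):
    # c: dict of raw feature values for one candidate pass strategy.
    features = {
        "teammate_intercept_time": c["t_mate"],        # seconds, unnormalized
        "shoot_angle": c["shoot_angle"] / MAX_ANGLE,
        "dist_to_goal": c["dist_to_goal"] / FIELD_LENGTH,
        "refraction_angle": c["refraction_angle"] / MAX_ANGLE,
        "intercept_margin": c["t_opp"] - c["t_mate"],  # safety margin
    }
    return sum(weights[k] * v for k, v in features.items())
\end{verbatim}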
After applying the DPPS algorithm, evaluating the passing points and choosing the best pass strategy, the results will be shown on the visualization software. In figure \ref{JSH}, the orange $\times$ is the feasible passing points by chipping and the cyan $\times$ is the feasible passing points by flat shot. The yellow line is the best chipping passing line, and the green line is the best flat shot passing line.
According to \textit{\textbf{a}} in figure \ref{JSH}, there are few feasible passing points when teammates are surrounded by opponents, and when the passing line is blocked by an opponent, only chipping passing points remain. According to \textbf{\textit{b}} in figure \ref{JSH}, the feasible passing points are dense when there is no opponent marking any teammate.
\begin{figure}[htbp]
\centering
\begin{minipage}{6.5cm}
\includegraphics[scale=0.3]{JSH-2.png}
\label{JSH-2}\\
\centering{a}
\end{minipage}%
\begin{minipage}{6.5cm}
\includegraphics[scale=0.27]{JSH-3.png}
\label{JSH-3}\\
\centering{b}
\end{minipage}%
\caption{Feasible pass points and best pass strategy}
\label{JSH}
\end{figure}
\subsection{Shooting Decision Making}\label{shooting}
In the game of RoboCup SSL, deciding when to shoot is one of the most important decisions to make. Casual shots may lead to loss of possession, while too strict conditions will result in no shots and low offensive efficiency. Therefore, it is necessary to figure out the right way to decide when to shoot. We developed a fusion algorithm that combines the advantages of shot angle and interception prediction.
In order to ensure that there is enough space when shooting, we calculate the valid angle of the ball to the goal based on the positions of the opponent's robots. If the angle is too small, the ball is likely to be blocked by the opponent's robots, so we must ensure that the shot angle is greater than a certain threshold. However, there are certain shortcomings in a judgment based only on the shot angle. For example, when our robot is far from the goal but the shot angle exceeds the threshold, our robot may decide to shoot; because the distance to the goal is very large, the opponent's robots will have enough time to intercept the ball, and such a shot is meaningless. In order to solve this problem, a shot decision combined with interception prediction is proposed. Similar to the evaluation when passing the ball, we calculate whether the ball will be intercepted on its way to the goal. If it is not intercepted, this shot is very likely to have a higher success rate. We use this fusion algorithm to avoid useless shots as much as possible and ensure that our shots have a higher success rate.
\subsection{Effective free kick strategy}
We generate an effective free kick strategy based on ball model catering to the new rules in 2019\cite{rules}. According to the new rules, the team awarded a free kick needs to place the ball and then starts the game in 5 seconds rather than 10 seconds before, which means we have less time to make decisions. This year we follow our one-step pass-and-shoot strategy, whereas we put the computation for best passing point into the process of ball placement. Based on the ball model and path planning, we can obtain the ball travel time $t_{p-ball}$ and the robot travel time $t_{p-robot}$ to reach the best passing point. Then we make a decision whether to make the robot reach the point or to kick the ball firstly so that the robot and the ball can reach the point simultaneously.
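One way to read this timing decision is as a comparison of the two predicted travel times; a minimal sketch, assuming both predictions are already available from the ball model and the path planner, is:
\begin{verbatim}
def free_kick_plan(t_p_ball, t_p_robot):
    # Decide how long to delay the kick so that the ball and the receiving
    # robot arrive at the best passing point at the same time.
    if t_p_robot > t_p_ball:
        # Receiver is slower: start the robot now and delay the kick.
        return t_p_robot - t_p_ball
    # Ball is slower: kick immediately.
    return 0.0
\end{verbatim}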
Results in section \ref{result} show that this easy-executed strategy is the most effective strategy during the 2019 RoboCup Small Size League Competition.
\section{Off-the-ball Running}
\subsection{Formation}
As described in the previous section, we can always get the best passing point in any situation, which means the more aggressiveness our robots show, the more aggressive the best passing point will be. There are two robots executing the ``pass-and-shoot'' task and the other robots supporting them\cite{robust}. We learned the strategy from formations in traditional human soccer, such as the ``4-3-3'' formation, and from coordination via zones\cite{robot-soccer}. Since each team consists of at most $8$ robots in division A in the 2019 season\cite{rules}, a similar approach is to divide the front field into four zones and place at most one robot in each part (figure \ref{WZ-1}). These zones change dynamically according to the position of the ball (figure \ref{WZ-2}) to improve the rate at which a robot in a zone can receive the ball. Furthermore, we rasterize each zone with a fixed step (e.g. $0.1 m$) and evaluate each vertex of the small grids with our value-based criteria (described next). Then, in each zone, we can obtain the best running point $x_R$ in a similar way to that described in section \ref{shooting}.
There are two special cases. First, we cannot guarantee that there are always $8$ robots on the field for us, due to yellow cards and mechanical failures, which means that at such times we cannot fill up every zone. Considering that points in zones III and IV are more aggressive than those in zones I and II, in this situation we prefer the best points in zones III and IV. Secondly, the best passing point may be located in one of these zones. While trying to approach such a point, the robot might be interrupted by the robot already assigned to this zone, so in this case we avoid choosing this zone.
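The per-zone search described above reduces to a brute-force evaluation over the rasterized grid. The sketch below assumes a rectangular zone for simplicity (the actual zones deform with the ball position) and takes the weighted scoring function of the next subsection as given.
\begin{verbatim}
import numpy as np

def best_running_point(x_min, x_max, y_min, y_max, score_fn, step=0.1):
    # Evaluate every grid vertex inside a (simplified) rectangular zone and
    # return the highest-scoring candidate running point.
    best, best_score = None, -np.inf
    for x in np.arange(x_min, x_max + 1e-9, step):
        for y in np.arange(y_min, y_max + 1e-9, step):
            s = score_fn(x, y)
            if s > best_score:
                best, best_score = (x, y), s
    return best, best_score
\end{verbatim}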
\begin{figure}[htbp]
\centering
\begin{minipage}{5cm}
\includegraphics[scale=0.22]{WZ-1.png}
\caption{Four zones divided \protect\\by front field}
\label{WZ-1}
\end{minipage}%
\begin{minipage}{5cm}
\includegraphics[scale=0.65]{WZ-2.png}
\caption{Dynamically changed zone\protect\\ according to the position of the ball}
\label{WZ-2}
\end{minipage}%
\end{figure}
\subsection{Value-based running point criteria}
We adopt approaches similar to those described in \ref{4.3} to evaluate and choose the best running point. There are five evaluation criteria $x_i\ (i=1,2,3...n)$, listed below. Figure \ref{WZ-4} shows how they work in common cases, in order, and with their weights $\omega_i\ (i=1,2,3...n)$ we get the final result by equation \ref{equa}, shown in (f) of figure \ref{WZ-4} (red areas mean higher scores while blue areas mean lower scores).
\begin{equation}
\label{equa}
\sum_{i=1}^{n}\omega _i\cdot x_i
\end{equation}
\begin{itemize}
\item \textbf{Distance to the opponent's goal.} It is obvious that the closer robots are to the opponent's goal, the more likely robots are to score.
\item \textbf{Distance to the ball.} We find that when robots are too close to the ball, it is difficult to pass or break through opponent's defense.
\item \textbf{Angle to the opponent's goal.} This does not mean the robot has the greatest chance when facing the goal at exactly $0$ degrees; rather, the chance is highest within a certain angle range.
\item \textbf{Opponent's guard time.} Guards play an important role in the SSL game by preventing opponents from scoring around the penalty area, and each team has at least one guard on the field. Connect the point to be evaluated to the two sides of the opponent's goal; these lines intersect the defense area at $P$ and $Q$ (see figure \ref{WZ-3}). Then we predict the total time the opponent's guard(s) spend arriving at $P$ and $Q$. The point score is proportional to this time.
\item \textbf{Avoid the opponent's defense.} When our robot is further away from the ball than the opponent's robot, we can conclude that the opponent's robot will reach the ball before ours, and therefore we should prevent our robots from being caught in this situation.
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{WZ-4.png}
\caption{How Individual evaluation criterion affects the overall}
\label{WZ-4}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.3\textwidth]{WZ-3.png}
\caption{Method to get location P and Q}
\label{WZ-3}
\end{figure}
\subsection{Drag skill}
There is a common case in which, when our robot arrives at its destination and stops, it is easy to be marked by an opponent's robot in the following time. We call this opponent's robot the ``defender''. To solve this problem, we developed a new ``Drag'' skill. First of all, the robot judges whether it is being marked, using the reversed strategy in \cite{etdp2019}. According to the coordinate information and equation (\ref{wz}), we can determine the geometric relationship among our robot, the defender and the ball: they are arranged clockwise when $Judge>0$ and counterclockwise when $Judge<0$. Then our robot accelerates in the direction perpendicular to its line to the ball. At this time, the defender speeds up together with our robot. Once the defender's speed is greater than a certain value $v_{min}$, our robot accelerates in the opposite direction. Thus there will be a large speed difference between our robot and the defender, which helps our robot distance itself from the defender and receive the ball safely.
The application of this skill allows our robots to move off the opponent's defense without losing its purpose, thus greatly improves our ball possession rate.
\begin{equation}
\label{wz} Judge=(x_{ball}-x_{me})(y_{opponent}-y_{me})-(x_{opponent}-x_{me})(y_{ball}-y_{me})
\end{equation}
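The quantity $Judge$ in equation (\ref{wz}) is the z-component of a 2D cross product. A minimal sketch of the test and of the direction flip follows; the threshold $v_{min}$ and the choice of the initial perpendicular side are tunable assumptions.
\begin{verbatim}
import math

def judge(ball, me, opponent):
    # z-component of the cross product of (ball - me) and (opponent - me);
    # its sign distinguishes the clockwise and counterclockwise arrangements
    # used in the marking test (Judge > 0 is clockwise in the paper's convention).
    return ((ball[0] - me[0]) * (opponent[1] - me[1])
            - (opponent[0] - me[0]) * (ball[1] - me[1]))

def drag_direction(ball, me, defender_speed, v_min=1.0):
    # Accelerate perpendicular to the line from our robot to the ball;
    # reverse the direction once the defender has been pulled above v_min.
    dx, dy = ball[0] - me[0], ball[1] - me[1]
    norm = math.hypot(dx, dy)
    perp = (-dy / norm, dx / norm)   # one of the two perpendicular unit vectors
    if defender_speed > v_min:
        perp = (-perp[0], -perp[1])  # flip to create a speed difference
    return perp
\end{verbatim}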
\section{Result}\label{result}
Our newly developed algorithms give us a huge advantage in the game. We won the championship with a record of six wins and one draw. Table 1 shows the offensive statistics during each game extracted from the official log.
The possession rate is calculated by comparing the interception time of both sides. If the interception time of one team is shorter, the ball is considered to be possessed by this team.
\begin{table}
\caption{Statistics for each ZJUNlict game in RoboCup 2019}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\textbf{Game} & \textbf{\makecell[c]{Possession\\Rate(\%)}} & \textbf{\makecell[c]{Goals by\\Regular Gameplay}} & \textbf{\makecell[c]{Goals by\\Free Kick}} & \textbf{\makecell[c]{Goals by\\Penalty Kick}} & \textbf{Total Goals}\\
\hline
RR1 & 66.4 & 2 & 2 & 0 & 4\\
\hline
RR2 & 71.6 & 3 & 2 & 1 & 6\\
\hline
RR3 & 65.9 & 0 & 0 & 0 & 0\\
\hline
UR1 & -- & 2 & 1 & 1 & 4\\
\hline
UR2 & 68.2 & 1 & 0 & 1 & 2\\
\hline
UF & 69.2 & 1 & 1 & 0 & 2\\
\hline
GF & 71.4 & 1 & 0 & 0 & 1\\
\hline
Total & -- & 10 & 6 & 3 & 19\\
\hline
Average & 68.8 & 1.4 & 0.9 & 0.4 & 2.7\\
\hline
\end{tabular}
\end{table}
\subsection{Passing and Shooting Strategy Performance}
Our passing and shooting strategy has greatly improved our offensive efficiency, resulting in 1.4 goals from regular gameplay per game; 52.6\% of our goals were scored from regular gameplay. Furthermore, our algorithms helped us achieve a 68.8\% possession rate per game.
\subsection{Free-kick Performance}
According to the game statistics, we scored an average of 0.9 free-kick goals per game over seven games, while other teams averaged 0.4 over nineteen games. Goals we scored from free kicks accounted for 32\% of our total goals (6 of 19), compared with 10\% for other teams (8 of 78). These statistics show that we adapted to the new rules faster than other teams and that we have various ways to score.
\section{Conclusion}
In this paper, we have presented our main improvements on both hardware and software which played a key role in winning the championship. Our future work is to predict our opponent's actions on the field and adjust our strategy automatically. Improving our motion control to make our robots move faster, more stably and more accurately is also the main target next year.
\input{ref.bbl}
\end{document}
\section{Introduction}
ZJUNlict is a RoboCup Small Size League(SSL) team from Zhejiang University with a rich culture. We seek changes and upgrades every year from hardware to software, and try our best to fuse them together in order to form a better robot system. With a stable dribbler developed during 2017-2018, team ZJUNlict focused mostly on dynamic passing and shoot with the advantage of the stable dribbler in 2018-2019. In fact the algorithm helped us gain a ball possession rate of $68.8\%$ during $7$ matches in RoboCup 2019.\\
To achieve the great possession rate, safe and accurate passing and shooting, our newly developed algorithm are developed into four parts:
\begin{enumerate}
\item The passing point module calculates the feasibility of all passing points, filters out feasible points, and uses the evaluation function to find the best passing point.
\item The running point module calculates the points where the offensive threat is high if our robots move there, to make our offense more aggressive.
\item The decision module decides when to pass and when to shoot based on the current situation to guarantee the success rate of the passing and shooting when the situation changes.
\item The skill module helps our robots perform passing and shooting accurately.
\end{enumerate}
This paper focuses on how to achieve multi-robot cooperation. In Sects.2 and 3, we discuss our main optimization on hardware. In Sects.4 and 5, we discuss the passing strategy and the running point module respectively. In Sect.6, we analyze the performance of our algorithms at RoboCup 2019 with the log files recorded during the matches.
\section{Modification of Mechanical Structure of ZJUNlict}
\subsection{The position of two capacitors}
During a match of the Small Size League, robots could move as fast as $3.25 m/s$. In this case, the stability of the robot became very important, and this year, we focused on the center of the gravity with a goal of lower it. In fact, there are already many teams got there hands busy with lowering the center of the gravity, eg, team KIKS and team RoboDragon have their robot compacted to $135 mm$, and team TIGERs have their capacitor moved sideways instead of regularly laying upon the solenoid \cite{tiger}.
Thanks to the open source of team TIGERs \cite{tiger}, in this year's mechanical structure design, we moved the capacitor from the circuit board to the chassis. On the one hand, this lowers the center of gravity of the robot and makes the mechanical structure of the robot more compact, On the other hand, to give the upper board a larger space for future upgrades. The capacitor is fixed on the chassis via the 3D printed capacitor holder as shown in figure \ref{FXZ-1}, and in order to protect the capacitor from the impact that may be suffered on the field, we have added a metal protection board on the outside of the capacitor which made of 40Cr alloy steel with high strength.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{FXZ-1}
\caption{The new design of the capacitors}
\label{FXZ-1}
\end{figure}
\subsection{The structure of the dribbling system}
The handling of the dribbling part has always been a part we are proud of, and it is also the key to our strong ball control ability. In last year's champion paper, we have completely described our design concept, that is, using a one-degree-of-freedom mouth structure, placing appropriate sponge pads on the rear and the lower part to form a nonlinear spring damping system. When a ball with certain speed hits the dribbler, the spring damping system can absorb the rebound force of the ball, and the dribbler uses a silica gel with a large friction force so that the ball can not be easily detached from the mouth.
The state of the sponge behind the mouth is critical to the performance of the dribbling system. In RoboCup 2018, there was a situation in which the sponge fell off, which had a great impact on the play of our game. In last year's design, as shown in figure \ref{FXZ-2}, we directly insert a sponge between the carbon plate at the mouth and the rear carbon plate. Under frequent and severe vibration, the sponge could easily to fall off\cite{champion2018}. In this case, we made some changes, a baffle is added between the dibbler and the rear carbon fiberboard, as shown in figure \ref{FXZ-3}, and the sponge is glued to the baffle plate, which made it hard for the sponge to fall off, therefore greatly reduce the vibration.
\begin{figure}[htbp]
\centering
\begin{minipage}{6cm}
\includegraphics[scale=0.22]{FXZ-2.png}
\caption{ZJUNlict 2018 mouth design}
\label{FXZ-2}
\end{minipage}%
\begin{minipage}{6cm}
\includegraphics[scale=0.22]{FXZ-3.png}
\centering\caption{ZJUNlict 2019 mouth design}
\label{FXZ-3}
\end{minipage}%
\end{figure}
\section{Modification of Electronic Board}
In the past circuit design, we always thought that the board should be designed into multiple independent boards according to the function module so that if there is a problem, the whole board can be replaced. But then we gradually realized that instead of giving us convenience, it is unexpectedly complicated, on the one hand, we had to carry more spare boards, and on the other hand, it was not conducive to our maintenance.
For the new design, we only kept one motherboard and one booster board, which reduced the number of boards, making the circuit structure more compact and more convenient for maintenance. We also fully adopted ST's STM32H743ZI master chip, which has a clock speed of up to 480MHz and has a wealth of peripherals. The chip is responsible for signal processing, packet unpacking and packaging, and motor control.
Thanks to the open source of TIGERs again, we use Allergo's A3930 three-phase brushless motor control chip, simplifying the circuit design of the motor drive module on the motherboard. The biggest advancement in electronic this year was the completion of the stability test of the H743 version of the robot. In the case of all robots using the H743 chip, there was no robot failure caused by board damage during the game.
In addition, we replaced the motor encoder from the original 360 lines to the current 1000 lines. The reading mode has been changed from the original direct reading to the current differential mode reading.
\section{Passing and Shooting Strategy Based on Ball Model}
\subsection{Real-time Passing Power Calculation}
Passing power plays a key role in the passing process. For example, robot A wants to pass the ball to robot B. If the passing power is too small, the opponent will have plenty of time to intercept the ball. If the passing power is too large, robot B may fail to receive the ball in limited time. Therefore, it's significant to calculate appropriate passing power.
Suppose we know the location of robot A that holds the ball, its passing target point, and the position and speed information of robot B that is ready to receive the ball. We can accurately calculate the appropriate passing power based on the ball model shown in figure \ref{DDQ}. In the ideal ball model, after the ball is kicked out at a certain speed, the ball will first decelerate to $5/7$ of the initial speed with a large sliding acceleration, and then decelerate to $0$ with a small rolling acceleration. Based on this, we can use the passing time and the passing distance to calculate the passing power. Obviously, the passing distance is the distance between robot A and its passing target point. It's very easy to calculate the Euclidean distance between these two points. Passing time consists of two parts: robot B's arrival time and buffer time for adjustment after arrival. We calculate robot B's arrival time using last year's robot arrival time prediction algorithm. The buffer time is usually a constant (such as $0.3$ second).
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{ballmodel.pdf}
\caption{Ideal ball model}
\label{DDQ}
\end{figure}
Since the acceleration in the first deceleration process is very large and the deceleration time is very short, we ignore the moving distance of the first deceleration process and simplify the calculation. Let d, t and a be the passing distance, time and rolling acceleration. Then, the velocity of the ball after the first deceleration and the passing power are given by the following:
\begin{equation}
v_1=((d+\frac{1}{2})at^2)/t
\end{equation}
\begin{equation}
v_0=v_1/\frac{5}{7}
\end{equation}
According to the capabilities of the robots, we can limit the threshold of passing power and apply it to the calculated result.
\subsection{\textit{SBIP}-Based Dynamic Passing Points Searching (DPPS) Algorithm}
Passing is an important skill both offensively and defensively and the basic requirement for a successful passing process is that the ball can't be intercepted by opponents. Theoretically, we can get all feasible passing points based on the \textit{SBIP(Search-Based Interception Prediction)} \cite{champion2018}\cite{etdp2019}. Assuming that one of our robots would pass the ball to another robot, it needs to ensure that the ball can't be intercepted by opposite robots, so we need the SBIP algorithm to calculate interception time of all robots on the field and return only the feasible passing points.
In order to improve the execution efficiency of the passing robot, we apply the searching process from the perspective of passing robot.
As is shown in Figure \ref{JSH-1}, we traverse all the shooting power in all directions to apply the SBIP algorithm for all robots on the field. According to the interception time of both teammates and opponents under a specific passing power and direction, we can keep only the feasible passing directions and the corresponding passing power.
\begin{figure}[h]
\centering
\includegraphics[width=0.35\textwidth]{JSH-1.jpg}
\caption{Dynamic passing points searching process}
\label{JSH-1}
\end{figure}
When considering that there is about $3$ degree's error between the accurate orientation of the robot and the one obtained from the vision, we set the traversal interval of direction as $360/128$(about $2.8$) $degree$. And the shooting power, which can be considered as the speed of ball when the ball just kicked out, is divided equally into $64$ samples between $1 m/s$ and $6.5 m/s$, which means the shooting accuracy is about $0.34 m/s$. Because all combinations of passing directions and passing power should be considered, we need to apply SBIP algorithm for $262144$ times(we assume there are $16$ robots in each team, $32$ in the field), which is impossible to finish within about $13 ms$ by only serial computing. Fortunately, all of the $262144$ SBIPs are decoupled, so we can accelerate this process by GPU-based parallel computing technique\cite{CUDA-1}\cite{CUDA-2}\cite{CUDA-3}, and that's why the numbers mentioned above are $128$, $64$ and $32$.
\subsection{Value-based best pass strategy}\label{4.3}
After applying the DPPS algorithm, we can get all optional pass strategies. To evaluate them and choose the best pass strategy, we extract some important features $x_i (i = 1, 2, 3...n)$ and their weights $ (i = 1, 2, 3...n) $, at last, we get the scores of each pass strategy by calculating the weighted average of features selected equation \ref{sigema} \cite{soccer-1}\cite{soccer-2}
\begin{equation}
\label{sigema}
\sum_{i=1}^{n}\omega _i\cdot x_i
\end{equation}
For example, we chose the following features to evaluate pass strategies in RoboCup2019 Small Size League:
\begin{itemize}
\item Interception time of teammates: close pass would reduce the risk of the ball being intercepted by opposite because of the ideal model.
\item Shoot angle of the receiver's position: this would make the teammate ready to receive the ball easier to shoot.
\item Distance between passing point and the goal: if the receiver decides to shoot,
short distance results in high speed when the ball is in the opponent's penalty
area, which can improve the success rate of shooting.
\item Refraction angle of shooting: the receiver can shoot as soon as it gets the ball if the refraction angle is small. The offensive tactics would be executed smoother when this feature is added.
\item The time interval between the first teammate's interception and the first opponent's interception: if this number is very small, the passing strategy would be very likely to fail. So only when the delta-time is bigger than a threshold, the safety is guaranteed.
\end{itemize}
In order to facilitate adjustment of parameters, we normalize length and angle values by dividing their upper bound, while keeping the time values unchanged.
After applying the DPPS algorithm, evaluating the passing points and choosing the best pass strategy, the results will be shown on the visualization software. In figure \ref{JSH}, the orange $\times$ is the feasible passing points by chipping and the cyan $\times$ is the feasible passing points by flat shot. The yellow line is the best chipping passing line, and the green line is the best flat shot passing line.
According to \textit{\textbf{a}} in figure \ref{JSH}, there are few feasible passing points when teammates are surrounded by opponents. And when the passing line is blocked by an opponent, there are only chipping passing points. According to \textbf{\textit{b}} in figure \ref{JSH}, the feasible passing points are intensive when there is no opponent marking any teammate.
\begin{figure}[htbp]
\centering
\begin{minipage}{6.5cm}
\includegraphics[scale=0.3]{JSH-2.png}
\label{JSH-2}\\
\centering{a}
\end{minipage}%
\begin{minipage}{6.5cm}
\includegraphics[scale=0.27]{JSH-3.png}
\label{JSH-3}\\
\centering{b}
\end{minipage}%
\caption{Feasible pass points and best pass strategy}
\label{JSH}
\end{figure}
\subsection{Shooting Decision Making}\label{shooting}
In the game of RoboCup SSL, deciding when to shoot is one of the most important decisions to make. Casual shots may lead to loss of possession, while too strict conditions will result in no shots and low offensive efficiency. Therefore, it is necessary to figure out the right way to decide when to shoot. We developed a fusion algorithm that combines the advantages of shot angle and interception prediction.
In order to ensure that there is enough space when shooting, we calculate the valid angle of the ball to the goal based on the position of the opponent's robots. If the angle is too small, the ball is likely to be blocked by the opponent's robots. So, we must ensure that the shot angle is greater than a certain threshold. However, there are certain shortcomings in the judgment based on the shot angle. For example, when our robot is far from the goal but the shot angle exceeds the threshold, our robot may decide to shoot. Because the distance from the goal is very far, the opponent's robots will have enough time to intercept the ball. Such a shot is meaningless. In order to solve this problem, the shot decision combined with interception prediction is proposed. Similar to the evaluation when passing the ball, We calculate whether it will be intercepted during the process of shooting the ball to the goal. If it is not intercepted, it means that this shot is very likely to have a higher success rate. We use this fusion algorithm to avoid useless shots as much as possible and ensure that our shots have a higher success rate.
\subsection{Effective free kick strategy}
We generate an effective free kick strategy based on ball model catering to the new rules in 2019\cite{rules}. According to the new rules, the team awarded a free kick needs to place the ball and then starts the game in 5 seconds rather than 10 seconds before, which means we have less time to make decisions. This year we follow our one-step pass-and-shoot strategy, whereas we put the computation for best passing point into the process of ball placement. Based on the ball model and path planning, we can obtain the ball travel time $t_{p-ball}$ and the robot travel time $t_{p-robot}$ to reach the best passing point. Then we make a decision whether to make the robot reach the point or to kick the ball firstly so that the robot and the ball can reach the point simultaneously.
Results in section \ref{result} show that this easy-executed strategy is the most effective strategy during the 2019 RoboCup Small Size League Competition.
\section{Off-the-ball Running}
\subsection{Formation}
As described in the past section, we can always get the best passing point in any situation, which means the more aggressiveness our robots show, the more aggressive the best passing point would be. There are two robots executing "pass-and-shot" task and the other robots supporting them\cite{robust}. We learned the strategy from the formation in traditional human soccer like "4-3-3 formation"and coordination via zones\cite{robot-soccer}. Since each team consists of at most $8$ robots in division A in 2019 season\cite{rules}, a similar way is dividing the front field into four zones and placing at most one robot in every part(figure \ref{WZ-1}). These zones will dynamically change according to the position of the ball(figure \ref{WZ-2}) to improve the rate of robot receiving the ball in it. Furthermore, we rasterize each zone with a fixed length (e.g. $0.1 m$) and evaluate each vertex of the small grids with our value-based criteria (to be described next). Then in each zone, we can obtain the best running point $x_R$ in a similar way described in section \ref{shooting}.
Beyond this evaluation, there are two special cases. First, we cannot guarantee that all $8$ robots are always on the field, owing to yellow cards or mechanical failures, so we cannot always fill every zone. Since points in zones III and IV are more aggressive than those in zones I and II, in this case we prefer the best points in zones III and IV. Second, the best passing point may lie inside one of these zones; while approaching such a point, the passing robot might be obstructed by the robot assigned to that zone, so in this case we avoid choosing that zone.
\begin{figure}[htbp]
\centering
\begin{minipage}{5cm}
\includegraphics[scale=0.22]{WZ-1.png}
\caption{Four zones dividing \protect\\the front field}
\label{WZ-1}
\end{minipage}%
\begin{minipage}{5cm}
\includegraphics[scale=0.65]{WZ-2.png}
\caption{Zones dynamically changing\protect\\ according to the position of the ball}
\label{WZ-2}
\end{minipage}%
\end{figure}
\subsection{Value-based running point criteria}
We adopt an approach similar to that described in section \ref{4.3} to evaluate and choose the best running point. There are five evaluation criteria $x_i\ (i=1,2,\dots,n)$, listed below. Figure \ref{WZ-4} shows how they act in common cases, in order; with their weights $\omega_i\ (i=1,2,\dots,n)$ we obtain the final score by equation \ref{equa}, shown in panel (f) of figure \ref{WZ-4} (red areas mean higher scores, blue areas lower scores). A minimal sketch of this weighted evaluation is given after the list.
\begin{equation}
\label{equa}
\sum_{i=1}^{n}\omega _i\cdot x_i
\end{equation}
\begin{itemize}
\item \textbf{Distance to the opponent's goal.} It is obvious that the closer robots are to the opponent's goal, the more likely robots are to score.
\item \textbf{Distance to the ball.} We find that when robots are too close to the ball, it is difficult to pass or to break through the opponent's defense.
\item \textbf{Angle to the opponent's goal.} Facing the goal head-on (at $0$ degrees) does not necessarily give the robot the best chance; instead, certain angle ranges are preferable.
\item \textbf{Opponent's guard time.} Guards play an important role in the SSL game, preventing opponents from scoring around the penalty area, and each team keeps at least one guard on the field. We connect the point to be evaluated to both sides of the opponent's goal; these lines intersect the defense area at $P$ and $Q$ (see figure \ref{WZ-3}). We then predict the total time the opponent's guard(s) need to reach $P$ and $Q$; the point score is proportional to this time.
\item \textbf{Avoid the opponent's defense.} When our robot is farther from the ball than the opponent's robot, we can conclude that the opponent's robot will reach the ball before ours, and therefore we should prevent our robots from being placed in this situation.
\end{itemize}
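The sketch below illustrates the weighted sum of equation \ref{equa} with crude stand-ins for the five criteria; the normalisations, the guard-time and interception-time inputs, and the weights are illustrative assumptions, not the tuned values used in the real system.

\begin{verbatim}
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def score_point(p, ball, opp_goal, guard_time_to_pq, our_time_to_ball,
                opp_time_to_ball, weights=(0.3, 0.15, 0.2, 0.25, 0.1)):
    """Weighted sum of the five criteria (sum over i of omega_i * x_i).
    Each x_i is a crude score in [0, 1]; both normalisations and weights
    here are illustrative only."""
    # 1. Distance to the opponent's goal: closer is better.
    x1 = 1.0 / (1.0 + dist(p, opp_goal))
    # 2. Distance to the ball: penalise points that are too close to the ball.
    x2 = min(1.0, dist(p, ball) / 3.0)
    # 3. Angle to the opponent's goal: prefer a moderate angle rather than 0 degrees.
    angle = abs(math.atan2(opp_goal[1] - p[1], opp_goal[0] - p[0]))
    x3 = math.exp(-((angle - math.radians(30)) ** 2) / (2 * math.radians(15) ** 2))
    # 4. Opponent's guard time: longer time to cover P and Q is better.
    x4 = min(1.0, guard_time_to_pq / 3.0)
    # 5. Avoid the opponent's defense: bad if the opponent reaches the ball first.
    x5 = 1.0 if our_time_to_ball <= opp_time_to_ball else 0.0
    return sum(w * x for w, x in zip(weights, (x1, x2, x3, x4, x5)))
\end{verbatim}

A callable like this can be passed directly to the zone-rasterization sketch of the previous subsection to pick the best running point in each zone.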
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{WZ-4.png}
\caption{How each individual evaluation criterion affects the overall score}
\label{WZ-4}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.3\textwidth]{WZ-3.png}
\caption{Method to get location P and Q}
\label{WZ-3}
\end{figure}
\subsection{Drag skill}
There is a common case in which, once our robot reaches its destination and stops, it is easily marked by an opponent robot (which we call the "defender") in the following time. To solve this problem, we developed a new "Drag" skill. First, the robot judges whether it is being marked, using the reversed form of the strategy in \cite{etdp2019}. From the coordinate information and equation (\ref{wz}) we can determine the geometric relationship among our robot, the defender and the ball: they are arranged clockwise when $Judge>0$ and counterclockwise when $Judge<0$. Our robot then accelerates in the direction perpendicular to the line connecting it to the ball, and the defender speeds up with it. Once the defender's speed exceeds a certain value $v_{min}$, our robot accelerates in the opposite direction. The resulting large speed difference between our robot and the defender allows our robot to pull away from the defender and receive the ball safely. A sketch of the $Judge$ computation follows equation (\ref{wz}).
The application of this skill allows our robots to shake off the opponent's defense without losing their purpose, which greatly improves our ball possession rate.
\begin{equation}
\label{wz} Judge=(x_{ball}-x_{me})(y_{opponent}-y_{me})-(x_{opponent}-x_{me})(y_{ball}-y_{me})
\end{equation}
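A short sketch of the $Judge$ test and the initial acceleration direction is given below; selecting the perpendicular side from the sign of $Judge$, and omitting the reversal once the defender exceeds $v_{min}$, are simplifications of this sketch.

\begin{verbatim}
import math

def judge(ball, me, opponent):
    """Cross product from the equation above: per the convention in the text,
    Judge > 0 means ball, our robot and the defender are arranged clockwise,
    Judge < 0 means counterclockwise."""
    return ((ball[0] - me[0]) * (opponent[1] - me[1])
            - (opponent[0] - me[0]) * (ball[1] - me[1]))

def drag_direction(ball, me, opponent):
    """Unit vector perpendicular to the robot-ball line along which the robot
    first accelerates. Which perpendicular side to take (here chosen from the
    sign of Judge, so as to move away from the defender) is an assumption."""
    dx, dy = ball[0] - me[0], ball[1] - me[1]
    norm = math.hypot(dx, dy) or 1.0
    if judge(ball, me, opponent) > 0:
        return (dy / norm, -dx / norm)
    return (-dy / norm, dx / norm)
\end{verbatim}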
\section{Result}\label{result}
Our newly developed algorithms gave us a huge advantage in the games. We won the championship with a record of six wins and one draw. Table 1 shows the offensive statistics for each game, extracted from the official logs.
The possession rate is calculated by comparing the interception times of both sides: if one team's interception time is shorter, the ball is considered to be possessed by that team.
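This calculation can be sketched as follows, assuming per-frame interception times for both teams are available from the log; the function name and frame pairing are illustrative.

\begin{verbatim}
def possession_rate(our_intercept_times, their_intercept_times):
    """Possession rate as described above: for every logged frame, the team
    with the shorter predicted interception time is considered to possess
    the ball. Both arguments are per-frame interception times (seconds);
    tie handling is a simplifying assumption of this sketch."""
    frames = list(zip(our_intercept_times, their_intercept_times))
    ours = sum(1 for a, b in frames if a < b)
    return 100.0 * ours / len(frames)
\end{verbatim}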
\begin{table}
\caption{Statistics for each ZJUNlict game in RoboCup 2019}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\textbf{Game} & \textbf{\makecell[c]{Possession\\Rate (\%)}} & \textbf{\makecell[c]{Goals by\\Regular Gameplay}} & \textbf{\makecell[c]{Goals by\\Free Kick}} & \textbf{\makecell[c]{Goals by\\Penalty Kick}} & \textbf{Total Goals}\\
\hline
RR1 & 66.4 & 2 & 2 & 0 & 4\\
\hline
RR2 & 71.6 & 3 & 2 & 1 & 6\\
\hline
RR3 & 65.9 & 0 & 0 & 0 & 0\\
\hline
UR1 & -- & 2 & 1 & 1 & 4\\
\hline
UR2 & 68.2 & 1 & 0 & 1 & 2\\
\hline
UF & 69.2 & 1 & 1 & 0 & 2\\
\hline
GF & 71.4 & 1 & 0 & 0 & 1\\
\hline
Total & -- & 10 & 6 & 3 & 19\\
\hline
Average & 68.8 & 1.4 & 0.9 & 0.4 & 2.7\\
\hline
\end{tabular}
\end{table}
\subsection{Passing and Shooting Strategy Performance}
Our passing and shooting strategy greatly improved our offensive efficiency, resulting in an average of 1.4 goals from regular gameplay per game; 52.6\% of our goals were scored in regular gameplay. Furthermore, our algorithms helped us achieve an average possession rate of 68.8\% per game.
\subsection{Free-kick Performance}
According to the game statistics, we scored an average of 0.9 free-kick goals per game over seven games, while other teams scored an average of 0.4 free-kick goals per game over nineteen games. Goals from free kicks accounted for 32\% of our total goals (6 of 19), compared with 10\% for the other teams (8 of 78). These statistics show that we adapted to the new rules faster than other teams and that we have various approaches to scoring.
\section{Conclusion}
In this paper, we have presented our main improvements in both hardware and software, which played a key role in winning the championship. Our future work is to predict our opponents' actions on the field and adjust our strategy automatically. Improving our motion control so that our robots move faster, more stably and more accurately is also a main target for next year.
\input{ref.bbl}
\end{document}